Off the Top: Information Architecture Entries



January 19, 2005

Technorati Opens Spam Tagging - Updated

The talk this past week was all about Technorati and their tagging tool, but the tool offers very little value and may be more an incubator for spam than a folksonomy tool.

Where del.icio.us gets folksonomy right (I know this is reflexive) by having many people tag online objects, Technorati gets folksonomy backwards with one user spitting tags into an aggregator. The only link I would trust in Technorati's tool is one that I also found on del.icio.us.

Why so harsh? Technorati has created a tool that is not built from social interaction, nor does it use the internet to build value through the network effect (Technorati made the power curve popular, which is a visualization of that network effect). Technorati has no moderation of the content that can be dumped in by any slimy spammer who now has a ripe new target. Lacking moderation and any socially derived checks in the system, I am quite disappointed with Technorati and this effort.

I use Technorati keywords to track things I have an interest in, and their tool does a great job pulling in information (I also use Feedster for the same purpose). I find it to be the top of its class in this effort.

Updated

Eric Scheid provides an excellent suggestion, which made me realize it is easy for Technorati to get it right and much of my problem was the links went in the wrong direction. Eric states...

I have a suggestion for another link format for "technorati" tags which would turn things around ... it would look like this:

<a href="http://whatever.bloghost.com/page/etc" rel="tag.TAGNAME1 tag.TAGNAME2">descriptive text for the link</a>

This way I can tag the pages I *link* to, and not just the pages I publish.

I'm also able to assign multiple tags to the linked page, and of course since other people could well be linking to that same page they can apply their own tags too. Think of the social tagging nature of del.icio.us without the intermediary of del.icio.us.

All we need is the "tag." prefix to identify the tagging relationship, as distinct from other relationship types (eg. vote-for, XFN, the usual W3C things, etc).

Yes, this modification would make Technorati tags a true folksonomy. Will they fix it to get it right?
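As a rough illustration (and not anything Technorati actually ships), Eric's proposed rel="tag.TAGNAME" format is simple enough for an aggregator to parse with standard tools. The class and variable names below are invented for the sketch:

```python
# Hypothetical sketch: extract tags applied to *linked* pages from
# Eric's proposed rel="tag.TAGNAME" format. Illustrative only.
from html.parser import HTMLParser

class TagLinkParser(HTMLParser):
    def __init__(self):
        super().__init__()
        self.tags_by_url = {}  # linked URL -> set of tags applied to it

    def handle_starttag(self, tag, attrs):
        if tag != "a":
            return
        attrs = dict(attrs)
        href, rel = attrs.get("href"), attrs.get("rel", "")
        # rel may hold several space-separated values; keep the tag.* ones
        tags = {r.split(".", 1)[1] for r in rel.split() if r.startswith("tag.")}
        if href and tags:
            self.tags_by_url.setdefault(href, set()).update(tags)

parser = TagLinkParser()
parser.feed('<a href="http://whatever.bloghost.com/page/etc" '
            'rel="tag.TAGNAME1 tag.TAGNAME2">descriptive text</a>')
print(parser.tags_by_url)
```

Because many people can link to (and thereby tag) the same page, aggregating these parsed tags across sites is what would make the scheme a broad folksonomy.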



January 18, 2005

Folksonomy Explanations

The past few weeks have seen my inbox flooded with folksonomy questions. I am going to make things easier on my inbox by posting some common discussions here. Many of the items I am posting I have posted elsewhere, but this will also be a great help for me.

There have been many people who have correctly discerned a difference between the two prime folksonomy examples, Flickr and del.icio.us. As I first stated in a comment to Clay Shirky's first article on Folksonomy, there are two derivations of folksonomy. There is a narrow folksonomy and a broad folksonomy. On August 26th I stated...

Clay, you bring in some very good points, particularly with the semantic differences of the terms film, movie, and cinema, which defy normalization. A broad folksonomy, like del.icio.us, allows for many layers of tagging. These many layers develop patterns of consistency (whether they are right or wrong in a professional's view is another matter, but that is what "the people" are calling things). These patterns eventually develop a quasi power law around the folk understanding of the terms as they relate to items.

Combining the power tags of "skateboarding, tricks, movie" (as you point out) will get to the desired information. The hard work of building a hierarchy is not truly essential, but a good tool that provides ease of use in tying the semantic tags together is increasingly essential. This is a nascent example of a semantic web. What is really nice is the ability to use not only the power tags, but also the meta-noise (the tags that are not dominant, but add semantic understanding within a community). In the skateboarding example a meta-noise tag could be "gnarly", which has resonance in the skate community and adds another layer of refinement for them.

The narrow folksonomy, where one or a few users supply the tags for information, such as Flickr, does not supply power tags as easily. With one or a few people tagging one relatively narrowly distributed item, it is harder for a tool that aggregates terms to normalize them. This situation seems to require a tool up front that prompts the individuals creating the tags to add other, possibly related, tags to enhance the findability of the item. This could be a tool that pops up as the user is entering their tags and asks, "I see you entered mac; do you want to add fruit, computer, artist, raincoat, macintosh, apple, friend, designer, hamburger, cosmetics, retail, daddy tag(s)?"
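One plausible way such a prompting tool could work (this is my own sketch, not any shipping product) is to suggest tags that co-occur with the entered tag across an existing corpus of tagged items. The corpus below is invented:

```python
# Illustrative sketch of the tag-prompting idea: suggest tags that
# co-occur with the one just entered. The tagged corpus is invented.
from collections import Counter

tagged_items = [
    {"mac", "computer", "apple"},
    {"mac", "apple", "fruit"},
    {"mac", "macintosh", "raincoat"},
    {"mac", "computer", "designer"},
]

def suggest(tag, items, limit=3):
    """Rank tags that co-occur with `tag` across the tagged items."""
    counts = Counter()
    for tags in items:
        if tag in tags:
            counts.update(tags - {tag})  # count everything tagged alongside
    return [t for t, _ in counts.most_common(limit)]

print(suggest("mac", tagged_items))
```

Here "computer" and "apple" surface first because they appear with "mac" most often; the long tail of one-off co-occurrences is exactly the meta-noise described above.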

This same distinction is brought up in IAWiki's Folksonomy entry.

Since this time Flickr has added the ability for friends and family (and possibly contacts) to add tags, which gives Flickr a broader folksonomy. But the focal point is still one object being tagged by one person or a few, whereas del.icio.us has many people tagging one object. The broad folksonomy is where much of the social benefit can be derived, as synonyms and cross-discipline and cross-cultural vocabularies can be discovered. Flickr has an advantage in providing the individual the means to tag objects, which makes it easier for the object to get found.

This brings to the forefront the questions about Google's Gmail, which allows one person to freely tag their e-mail entries. Is Gmail using a folksonomy? Since Gmail was included (along with Flickr and del.icio.us) in the grouping of online tools under discussion when folksonomy was coined, I say yes. But my belief that Gmail uses a folksonomy (regular people's categorization through tagging) relates to its using the same means: one person adding tags so the object can be found by them. This is identical to how people tag in Flickr (as proven by the self-referential "me" tag that is ever prevalent) and del.icio.us. People tag in their own vocabulary for their own retrieval, but they will also tag for social context, such as Flickr's "MacWorld" tags. In this case Wikipedia is a little wrong and needs improving.

I suppose Gmail would be a personal folksonomy, next to the Flickr narrow folksonomy and the del.icio.us broad folksonomy. There are distinct futures for all three folksonomies to grow into. Gmail is just the beginning of personal tagging of digital objects (and physical objects tagged with digital information). Lou Rosenfeld hit the nail on the head when he stated, "I'm not certain that the product of folksonomy development will have much long term value on their own, I'll bet dollars to donuts that the process of introducing a broader public to the act of developing and applying metadata will be incredibly invaluable." These tools, including Gmail, are training for understanding metadata. People will learn new skills if they perceive a greater value (this is why millions of people learned Palm's Graffiti: they found a benefit in learning the script).

Everybody has immense trouble finding information in the hierarchical folders on their hard drive. Documents and digital objects have more meanings than the one implied by the single folder/directory in which they reside. Sure, there are shortcuts, but tracking down and maintaining shortcuts is insanely awkward. Tags will be the step to the next generation of personal information management.



January 8, 2005

From Tags to the Future

Merlin hit on something in his I Want a Pony: Snapshots of a Dream Productivity App where he discusses:

Tags - People have strong feelings about metadata and the smart money is usually against letting The User apply his or her own tags and titles for important shared data ("They do it wrong or not at all," the burghers moan). But things are changing for personal users. Two examples? iTunes and del.icio.us. Nobody cares what "metadata" means, but they for damn sure know they want their mp3s tagged correctly. Ditto for del.icio.us, where Master Joshua has shown the world that people will tag stuff that's important in their world. Don't like someone else's homebrewed taxonomy? Doesn't matter, because you don't need to like it. If I have a repeatable system for tagging the information on just my Mac and it's working for me, that's really all that matters. I would definitely love that tagging ability for the most atomic piece of any work and personal information I touch.

This crossed my radar at the same time as I read Jeff Hawkins' discussion of how he came up with Graffiti for Palm devices. He noticed people did not find touch typing intuitive, but they saw the benefit of it and it worked. Conversely, in the early 90s people were interacting with handwriting interpreters that often did not understand one's own handwriting. Jeff came up with something that would give good results with a little bit of effort put in. Palm and Graffiti took off. (Personally, I was lucky when I got my first Palm, in that I was on the west coast and waking on east coast time, which gave me two or three hours each day to learn Graffiti before anybody else was awake. It only took two or three days to have it down perfectly.)

Merlin's observation fits within these parameters. Where people have not cared at all about metadata, they have learned to understand the value of good tags, and often do so in a short period of time. iTunes really drives the value of proper tagging home for many (Napster and other shared-music environments brought tagging to light for large segments of the population). In a sense the folksonomies of del.icio.us and Flickr are descendants of the shared-music environments. People could see that tagged objects, whose tags could be edited and leveraged, had value in one's ability to find what one is looking for based on those tags.

The web grew up on deep linking and open environments for finding and sharing information. So too will tagging become a mantra for the masses. All objects, both digital and physical, will be tagged to provide immediacy of information access and so to gain knowledge. People will learn to search, parse, filter, and leverage predictive tools, ones that understand a person's desires, context, situation, and frame of reference so as to quickly (if not instantly) gather, interpret, and surface the information around that person. Should the person be late for a meeting, their predictive filters will screen out all but the required information, possibly a traffic jam on their normal route as well as on their option A route. A person with some free time may turn up the serendipity and get exposed to information they would normally have filtered out of their attention. The key will be understanding that tags have value, and that, just as metadata for other objects (like e-mail subject lines) can be erroneous and an indicator of spam, our life filters will need the same or similar checks. We will want to attract the information we desire, and we will need to make smart and informed choices; tags are just one means to this end.



December 28, 2004

Information Waste is Rampant

Fast Company published the costs facing business. The top four relate to poor design and information use: poor knowledge harnessing ($1.4 trillion); digital publishing inefficiencies ($750 billion); data quality problems ($600 billion); and paper-based trade processes ($400 billion). That is 3.15 trillion U.S. dollars down the tubes with no benefit.

The solutions are not that difficult, but everybody seems happy to use the rear view mirror to view the future.

Christina stated, "What, me worry?" about design and business. The whole CIO role is a sham, as the CIO is a technology-driven person, which is only tangentially related to information, and technology still hinders information flow if it is not planned for properly (more on this is coming in the near future here on this site). There needs to be a chief-level position that cares about the information, the people using it, and the people who create it. To Christina's post I responded with the following on her site (posted here so I can better keep track of it):

It seems like the 80s all over again. The focus on design in the late 80s was mostly on unified branding, with creative practices formally brought in-house. There was a lot of push around design, mostly labelled branding (nearly the exact same discussions, but in slightly different terms). Much of this centered around brandhouses like Landor. The business community embraced the results and tried to incorporate the creative culture as part of their own.
What happened? The innovators were bought by large advertising or public relations firms, and the firms changed their industry term to communication companies. Companies created corporate communication divisions (comprised of advertising, PR, branding, and other creative endeavors) that had high-level management visibility.
By the early 90s the corporate environment had largely subsumed communication into marketing, and business schools that had embraced the creative mindset followed suit. Today marketing is often what trumps design, and there is no creative in marketing. By the late 90s the creative departments had been gutted by the web craze. This left business types with little creative-craft understanding driving what was once good.
It is not surprising that what is currently named "design" is taking off, as what was good about the creative was gutted and most companies lack central design plans. There is tremendous waste in cross-medium design, as few sites are built with an understanding of the digital medium, let alone cross-platform design or true cross-media design. Part of the problem is that far too few designers actually understand cross-platform and/or cross-media design. Millions are wasted in bandwidth on poor web design that uses best practices from the late 90s rather than those of today. There is no integration of mobile, with a few exceptions in the travel industry. There is still a heavy focus on print, but very little smart integration of design in the digital medium. This even applies to AIGA, which is a great offender in applying print design techniques on the web. How can we expect business design to get better if one of the pillars of the design profession has not seemed to catch on?

There are large problems today, and we need to break away from some of the solutions we have been trying and get to solutions that work. Not only do today's solutions not work today, they will not work tomorrow, as they are only stop-gaps. Cross-platform, cross-device, and cross-medium design solutions are needed, but the technology is not here to deliver them, and few that I have run across in the design world are ready for that change, as they have not yet made the change to today's world.

Today's designer focuses on getting the information in front of the user and stops there. They do not consider how a person or machine may reuse the information. There is so much yet to improve, and yet the world is progressing much faster than people can, or want to, change to keep up. There are designers and developers who will not build for mobile (it is not that hard to do) because they do not see mobile users in their logs. They fail to see the correlation: their sites suck for mobile, so mobile users may test once and go somewhere else for their information. The people who do see mobile users in their logs are the ones who have figured out how to design and develop for them properly (most have found it relatively inexpensive to do). This is not rocket science; it is using something other than the rear-view mirror to design for now and the future.



December 17, 2004

Would We Create Hierarchies in a Computing Age?

Lou has posted my question:

Is hierarchy a means to classify and structure based on the tools available at the time (our minds)? Would we have structured things differently if we had computers from the beginning?

Hierarchy is a relatively easy means of classifying information, but only if people are familiar with the culture and topic of the item. We know there are problems with hierarchy and classification across disciplines and cultures, and we know that items have many more attributes than the one that provides a means of classification. Think of the classification of animals: is it fish, mammal, reptile, etc.? It is a dolphin. Well, what type of dolphin, as there are some that are mammals and some that are fish? Knowing that the dolphin swims in water does not help the matter at all in this case. It all depends on the context and the purpose.

Hierarchy and classification work well in limited domains. In the wild things are more difficult. On the web, when we are building a site, we often try to set hierarchies based on the intended or expected users of the information. But the web is open to anybody, and outside the site anybody can link to anything they wish that is on the web and addressable. The naming for the hyperlink can be whatever helps the person creating the link understand what that link is pointing to. This is the initial folksonomy: hyperlinks. Google was smart in using the link names in their algorithm for helping people find the information they are seeking. Yes, people can disrupt the system with Googlebombing, but it just takes a slightly smarter tool to get around these problems.
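The idea that link names are a folksonomy can be reduced to a toy example: index pages by the words other people use when linking to them, rather than by any hierarchy the site owner chose. The URLs and anchor texts below are made up, and this is a drastic simplification of what any real search engine does:

```python
# Toy sketch: treat anchor text as folk tags and build a word -> pages
# index from it. URLs and anchor texts are invented for illustration.
from collections import defaultdict

links = [
    ("http://example.com/dolphins", "dolphin photos"),
    ("http://example.com/dolphins", "marine mammals"),
    ("http://example.com/skate", "gnarly skateboarding tricks"),
]

index = defaultdict(set)  # word -> pages other people describe with it
for url, anchor_text in links:
    for word in anchor_text.lower().split():
        index[word].add(url)

print(sorted(index["dolphin"]))
```

Note that the dolphin page becomes findable under both "dolphin" and "mammals" because two different linkers named it differently, which is exactly the cross-vocabulary benefit hierarchies struggle to provide.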

You see, hierarchies are a simple means of structuring information, but the world is neither as neat nor as simple. Things are far more complex, and each person has their own derived means of structuring information in their memory that works for them. Some have been enculturated with scientific naming conventions, while others have not.

I have spent the last few years watching users of a site fail to understand some of the hierarchies developed, as there are more than the one or two user types the structure was built for who have found use in the information being provided. They can get to the information from search, but they are lost in the hierarchies, as the structure is foreign to them.

It is from this context that I asked the question. We are seeing new tools that allow for regular people to tag information objects with terms that these people would use to describe the object. We see tools that can help make sense of these tags in a manner that gets other people to information that is helpful to them. These folksonomy tools, like Flickr, del.icio.us, and Google (search and Gmail) provide the means to tame the whole in a manner that is addressable across cultures (including nationalities and language) and disciplines. This breadth is not easily achievable by hierarchies.

So looking back, would we build hierarchies given today's tools? Knowing the world is very complex and diverse do simple hierarchies make sense?



November 30, 2004

Flexibility in Folksonomies

Nick Mote posts his The New School of Ontologies essay, which is a nice overview of formal classification and folksonomies. The folksonomy is a good bottom-up approach to information finding.

In Nick's paper I get quoted. I have cleaned up the quote, which came out of an e-mail conversation. This quote pretty much summarizes the many discussions I have had over the past couple of months regarding folksonomies. Am I a great fan of the term? Not as much as I am a fan of what these tools are doing.

The problem of interest to me that folksonomies are solving is cross-discipline and cross-cultural access to information, as well as non-hierarchical information structures. People call items different things depending on culture, discipline, and/or language. The folksonomy seems to be a way to find information based on what a person calls it. The network effect provides for more tagging of the information, which can be leveraged by those whose naming conventions diverge from the norm. The power law curve benefits the enculturated, but the tail of the curve also works for those outside the norm.



November 23, 2004

Cranky Interface to Bits and Bytes

Been a little cranky around these parts the past week or so. Much of it has to do with having personal observations of the web and design world fortified by my trip to Europe. The market I work in is somewhat behind what is going on in the U.S. where design and information development are concerned. But some of the problems I have been seeing as I work on the Model of Attraction and Personal InfoCloud projects reflect a severe lack of understanding of the cross-device problems that users are running into.

My trip to Europe solidified my hunch that others outside the U.S. are actually working to solve some of the cross-device problems users encounter. It seems the European market is at least thinking about the problems users face when going from a work desktop machine, to a laptop, to a mobile device and trying to access information. The U.S. is so desktop- and laptop-centered that it is seemingly blind to the issues. Some of the problems everybody faces are caused by the makers of the operating systems, as the problems with syncing often begin with the operating system. Apple is definitely ahead of others with iSync, but it still has a ways to go.

It is painful to see many sites for mobile products in the U.S. that can't work on mobile devices because they are poorly designed; some even use FrontPage to throw their crud up. Over the past year I have been finding many mobile users, across locations in the U.S., who find the lack of sites that will work on a mobile device appalling.

On the other side of the market I hear developers stating they do not develop for mobile users because they do not see them in their access logs. How many times do you think a user will come back and fill your user logs if your site does not work for them? Additionally, we are talking about the internet here, not U.S.-only information access points, and the rest of the world is mobile; they are living in the present and not in the past like the U.S. Am I being a little over the top? Not by much, if any.

Part of the problem is that only those in urban areas of the U.S., with usable public transit, have the opportunity to use mobile devices the way much of the rest of the world does. Although mobile media streamed over a mobile device is a killer application for those stuck in the commute drive (Fabio Sergio's From Collision to Convergence presentation at Design Engaged really woke me up to this option).

Getting back to information following the user: providing mobile access to information is one solution, yet designers and developers have been making the web harder to use by not sticking to the easiest means of presenting information across all devices, XHTML. Information is posted in PDF without notification that the information on the other side of the link is a PDF. After a lengthy download the mobile user gets nothing at best, or their device locks up because it is out of memory or cannot process the PDF. This practice is getting to be just plain ignorant and inexcusable (ironically, the U.S. Federal Communications Commission follows this practice for most of its destination pages, which only shows how far behind the U.S. truly is).
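The courtesy being asked for here is tiny: tell the reader a link is a PDF before they click it. A minimal sketch of that rule (the function name and types are my own invention, and a real site would apply this in its templating layer):

```python
# Minimal sketch: append a format warning to link text when the target
# is not HTML, so mobile users are not surprised by a heavy download.
def label_link(href, content_type, text):
    """Return link text, flagging PDF targets explicitly."""
    if content_type == "application/pdf":
        return f"{text} (PDF)"
    return text

print(label_link("/report.pdf", "application/pdf", "Annual report"))
# A plain HTML page needs no warning:
print(label_link("/page.html", "text/html", "Page"))
```

The point is not the code but the habit: the target's format is known to the publisher at link time, so withholding it from the reader is a choice, not a limitation.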

Another solution is to make it easier to sync devices across distance (not on the same network), or even to have one's own information accessible to oneself across the internet. Solving these problems should be around the corner, but with so many things that seem so simple to grasp still not grasped, my hope is dented and my frustration grows.



November 6, 2004

Model-T is User Experience Defined

Peter Boersma lays out Model T: Big IA is UX. I completely agree with this assessment and view. The field of Information Architecture is very muddled in the eyes of clients and managers, as those pitching the services mean different things by it. Personally, I think Richard Saul Wurman's incredible book on information design labeled "Information Architecture" caused a whole lot of the problem. Little IA was evident in the Wurman book, and many concepts were delivered to the IA profession from that book, but it was largely about information design.

Getting back to Peter Boersma's wonderful piece, the Model-T hits the correlated professions and roles dead on. This is essentially how things are organized. There are some of us who go deep in more than one area and others who are shallow in most areas, but who also tend to provide great value.



October 31, 2004

Ninth Anniversary for My Personal Site

At some point nine years ago I began my first personal site. It was November 1995, and CompuServ opened up space for their users to publish their own sites. This trek began with creating a page using a text browser and some prefab components from CompuServ. The computer this adventure began with is long gone, but remnants of the site remain, mostly in the links page, which became the bookmarks I could access from anywhere. I never really went back to using browser-based bookmarks after this point.

My personal site has changed over the years from a site named the "Growing Place" that housed poetry, links, a snippet about consulting work I was doing, and a homepage. Version 2, a move off CompuServ to Clark.net hosting (which became Verio and was never the same after), came with frames and FrontPage buttons (the buttons never worked right after they were edited); the links page grew and the consulting page moved from active to "under construction". V.2 also had some CGI form pages, mostly for mail, and a guestbook that was not linked.

Version 3 (about 1998) was a move to vanderwal.net and had a black background with electric green and electric blue text. V.3 provided more links and had a small page of annotated links that was updated infrequently; it was mostly short notes to myself and was not linked to by anything but referrer logs. V.3 began using ColdFusion and then ASP, as that was what I was playing with at the time. This version was hosted at Interland, which was not a favorite ISP, as I ended up doing bug fixing for them and their poor system administration.

Version 4 (November 2000) was a move to an ISP with PHP. This was just after our wedding, and a photo gallery was born. The site stayed in black with blue and green for a short while, until it moved to a blue and orange theme (April 2001) inspired by the honeymoon trip to the mother country, Holland. The annotated links were still being kept by hand, but were finally linked to. In December 2000 I started using Blogger, which made the annotated links easier and provided a spark to post other information.

We are still in Version 4, or possibly Version 5, as the graphic design morphed in November 2003 to its current state. This design validates as XHTML and made maintenance much easier. The Off the Top weblog was converted to PHP in October 2001, after leaving Blogger and hand-maintaining this section for months. The hosting has remained the same and has been steady.

There are many things in the works, but other outside commitments have been putting things on hold. The markup and CSS need to be cleaned up for greater ease. There are some hosting modifications coming, which could trigger some more changes on the back-end programming side. There are some design and presentational structure changes being played with, as there are a few things that really bug me. I really want the comments back online, and I have a plan for this, but it needs some time to work out the details. There are some changes external to this site that could be coming also, which will make things much easier in the long run. Maybe these revisions will be done by the 10th anniversary.



October 6, 2004

Personal Information Aggregation Nodes

Agnostic aggregators are the focal point of information aggregation. The tools growing increasingly popular for aggregating information from internet sources are those that permit the incorporation of info from any valid source. The person in control of the aggregator is the one who chooses what they want to draw into their aggregator.

People desiring info aggregation seemingly want control over all sources of info. They want one place, a central resource node, to follow and to use as a starting point.

The syndication/pull model not only adds value to the central node for the user, but also to the points that provide information. This personal node is similar (but converse) to network nodes: the central info aggregation node gains value for the individual the more information is centralized there. (Network nodes gain value the more people use them; e.g., the more people that use del.icio.us, the more valuable the resource is for finding information.) This personal aggregation becomes a usable component of the person's Personal InfoCloud.

What drives the usefulness? Portability of information is the driver behind usefulness and value. The originating information source enables value by making the information usable and reusable through syndication. Portability is also important for the aggregators, so that information can move easily between devices and formats.

Looking at del.icio.us we see an aggregator that leverages a social network of people as aggregators and filters. Del.icio.us allows users to build their own bookmarks and provides an RSS feed for those bookmarks (actually, most everything in del.icio.us provides feeds) and an API to access the feeds and use them as the user wishes. This even applies to using the feed in another aggregator.

The world of syndication leads to redundant information. This is where developments like attention.xml will be extremely important. Attention.xml will parse out redundant info so that you only have one copy of each resource. This work could also help provide an Amazon-like recommendation system for feeds and information.
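The de-duplication job being assigned to attention.xml here can be sketched very simply: when the same story arrives from several subscribed feeds, keep only the first copy, keyed on its link. This is my own illustrative reduction, not the attention.xml specification, and the feed items are invented:

```python
# Hedged sketch of feed de-duplication: collapse the same story arriving
# from multiple feeds, keyed on the item's link. Data is invented.
def dedupe(items):
    """Keep the first occurrence of each link across aggregated feeds."""
    seen, unique = set(), []
    for item in items:
        if item["link"] not in seen:
            seen.add(item["link"])
            unique.append(item)
    return unique

feeds = [
    {"link": "http://example.org/story1", "source": "feed-a"},
    {"link": "http://example.org/story1", "source": "feed-b"},  # redundant
    {"link": "http://example.org/story2", "source": "feed-b"},
]
print([i["link"] for i in dedupe(feeds)])
```

A recommendation layer of the kind mentioned would then work on this cleaned stream, for instance by noticing which de-duplicated links the person actually reads.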

The personal aggregation node also provides the user the means to categorize information as they wish and as makes most sense to them. Information is often not found, and lost, because it is not categorized in a way that is meaningful to the person seeking it (either for the first time or to access the information again). A tool like del.icio.us, as well as Flickr, allows the individual person to add tags (metadata) that let them find the information again, hopefully easily. The tool also allows the multiple tagging of information. Information (be it text, photo, audio file, etc.) does not always lend itself to easy, narrow classification. Pushing a person to use distinct classifications can be problematic. On this site I built my category tool to provide broad structure rather than hierarchical structure, because it allows for more flexibility and can provide hooks to get back to information that is tangential or a minor topic in a larger piece. For me this works well, and it seems the folksonomy systems in del.icio.us and Flickr are finding similar acceptance.
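The contrast drawn here, one folder per document versus many tags per document, fits in a few lines. The filenames and tags below are invented for the sketch:

```python
# Sketch of single-folder filing versus multi-tag filing. A document
# about a trip that touches on mobile design is findable under any of
# its tags, but under only one folder. Data is invented.
folders = {"trip-report.txt": "travel"}                      # one place only
tags = {"trip-report.txt": {"travel", "mobile", "design"}}   # many hooks

def find_by_tag(tag, tagged):
    """Return documents carrying the given tag."""
    return [doc for doc, t in tagged.items() if tag in t]

print(find_by_tag("design", tags))    # found via a tangential topic
print(find_by_tag("design", {d: {f} for d, f in folders.items()}))  # lost
```

The second lookup comes back empty because the folder scheme forced a single classification; the tag scheme preserves the minor topics as extra retrieval hooks.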



October 3, 2004

Feed On This

The "My" portal hype died for all but a few central "MyX" portals, like my.yahoo. Two to three years ago "My" was hot and everybody and their brother spent a ton of money building a personal portal to their site. Many newspapers had their own news portals, such as the my.washingtonpost.com and others. Building this personalization was expensive and there were very few takers. Companies fell down this same rabbit hole offering a personalized view to their sites and so some degree this made sense and to a for a few companies this works well for their paying customers. Many large organizations have moved in this direction with their corporate intranets, which does work rather well.

Where Do Personalization Portals Work Well?

Personalization works at points where information aggregation makes sense. The my.yahoo portals work because they are the one place for a person to do their one-stop information aggregation. People who use personalized portals often have one for work and one for personal life. People use personalized portals because they provide one place to look for the information they need.

The corporate intranet is one place where having one centralized portal works well. An interface to a centralized resource that holds the information each person wants, organized according to their needs and desires, can be very helpful. Having more than one portal often leads to quick failure, as there is no centralized point that is easy to work from to get to what is desired. The user treats these tools as part of their Personal InfoCloud, which has information aggregated as they need it, categorized and labeled in the manner that is easiest for them to understand (some organizations use portals as a means of enculturating users to the common vocabulary the organization wants used - this top-down approach can work over time, but it also leads to users not finding what they need). People in organizations often want information about the organization's changes, employee information, calendars, discussion areas, etc. to be easily found.

Think of personalized portals as very large umbrellas. If you can think of logical umbrellas above your organization, then you are probably in the wrong place to build a personalized portal, and your time and effort will be far better spent providing information in a format that can be easily used in a portal or information aggregator. Sites like the Washington Post's personalized portal did not last because of the costs of keeping the software running and the relatively small group of users that wanted or used the service. Was the Post wrong to move in this direction? No, not at the time, but now that there is an abundance of lessons learned in this area it would be extremely foolish to move in this direction.

You ask about Amazon? Amazon does an incredible job at providing personalization, but like your local stores that is part of their customer service. In San Francisco I used to frequent a video store near my house on Arguello. I loved that neighborhood video store because the owner knew me and my preferences and off the top of his head he remembered what I had rented and what would be a great suggestion for me. The store was still set up for me to use just like it was for those that were not regulars, but he provided a wonderful service for me, which kept me from going to the large chains that recorded everything about me, but offered no service that helped me enjoy their offerings. Amazon does a similar thing and it does it behind the scenes as part of what it does.

How does Amazon differ from a personalized portal? Aggregation of the information. A personalized portal aggregates what you want, and that is its main purpose. Amazon allows its information to be aggregated using its API. Amazon's goal is to help you buy from Amazon; a personalized portal's goal is to provide one-stop information access. Yes, my.yahoo does have advertising, but its goal is to aggregate information in an interface that helps users find the information they want easily.

Should government agencies provide personalized portals? It makes the most sense to provide this at the government-wide level. Similar to First.gov, a portal that allows tracking of government information would be very helpful. Why not at the agency level? Cost and effort. If you believe in government running efficiently, it makes sense to centralize a service such as a personalized portal. The U.S. Federal Government has very strong restrictions on privacy, which greatly limit the login for a personalized service. The U.S. Government's e-gov initiatives could be other places to provide these services, as there is information aggregation at those points also. The downside is having many login names and passwords to remember to get to the various aggregation points, which was one of the large downfalls of the MyX players of the past few years.

What Should We Provide?

The best solution for many is to provide information that can be aggregated. The centralized personalized portals have been moving toward allowing the inclusion of any syndicated information feed. Yahoo has been moving in this direction for some time, and the new beta version of my.yahoo released in the past week lets users select the feeds they would like in their portal, even from non-Yahoo resources. In the new my.yahoo any information that has a feed can be pulled into that information aggregator. Many of us have been doing this for some time with RSS feeds, and it has greatly changed the way we consume information by making information consumption more efficient.

There are at least three layers in this syndication model. The first is the information syndication layer, where information (or its abstraction and related metadata) is put into a feed. The second is the aggregation of feeds with other feeds, similar to what del.icio.us provides (del.icio.us also offers a social software and sharing tool that helps share out personally tagged information and aggregations built on this bottom-up categorization, or folksonomy). The final layer is the information aggregator or personalized portal, which is where people consume the information and choose whether they want to follow the links in the syndication to get more information.
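The middle layer above is mostly mechanical, and a rough sketch makes it concrete. Assuming plain RSS 2.0 documents as the syndication layer (this is an illustration, not how my.yahoo or del.icio.us is actually built), an aggregator just pulls the items out of each feed and merges them into one reverse-chronological reading list:

```python
from email.utils import parsedate_to_datetime
from xml.etree import ElementTree

# A rough sketch of the aggregation layer: take any number of RSS 2.0
# documents (the syndication layer), pull out their items, and merge
# them into one newest-first list -- the reading view a personalized
# portal then presents to the person consuming the information.
def aggregate(*rss_documents):
    merged = []
    for doc in rss_documents:
        channel = ElementTree.fromstring(doc).find("channel")
        for item in channel.findall("item"):
            merged.append({
                "title": item.findtext("title"),
                "link": item.findtext("link"),
                "date": parsedate_to_datetime(item.findtext("pubDate")),
            })
    # Newest entries first, regardless of which feed they came from.
    return sorted(merged, key=lambda e: e["date"], reverse=True)

feed = """<rss version="2.0"><channel><title>Example</title>
<item><title>Post one</title><link>http://example.com/1</link>
<pubDate>Sun, 03 Oct 2004 09:00:00 GMT</pubDate></item>
</channel></rss>"""

for entry in aggregate(feed):
    print(entry["date"].date(), entry["title"])
```

The point of the sketch: once information is syndicated in a structured feed, aggregation is trivial, which is why providing the feed matters more than providing yet another portal.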

There is little need to provide another personalized portal, but there is great need for information syndication. Just as people have learned with internet search, the information has to be structured properly. The model of information consumption relies on the information being found. Today information is often found through search and information aggregators and these trends seem to be the foundation of information use of tomorrow.



September 1, 2004

Gordon Rugg and the Verifier Method

In the current Wired Magazine there is an article on Gordon Rugg, Scientific Method Man (yes, the same Gordon Rugg of card sorting notoriety). The article focuses on his solving of the Voynich manuscript, actually deciphering it as a hoax. How he went about solving the manuscript is what really has me intrigued.

Rugg uses a method he has been developing, called the verifier approach, which provides a means of critical examination:

The verifier method boils down to seven steps: 1) amass knowledge of a discipline through interviews and reading; 2) determine whether critical expertise has yet to be applied in the field; 3) look for bias and mistakenly held assumptions in the research; 4) analyze jargon to uncover differing definitions of key terms; 5) check for classic mistakes using human-error tools; 6) follow the errors as they ripple through underlying assumptions; 7) suggest new avenues for research that emerge from steps one through six.

One area where Rugg has used this has been solving cross-discipline terminology problems that lead to communication difficulties. He also found that pattern-matching is often used to solve problems or diagnose illness, when a more thorough inquiry may have found a more exact cause, which leads to a better solution and a better cure.

Can the verifier method be applied to web development? Information Architecture? Maybe, but the depth of knowledge and experience is still rather shallow, though getting better every day. Much of what confounds getting to optimal solutions is the cross-discipline backgrounds, as well as the splintered communities that "focus" on claimed distinct areas that have no definite boundaries and even have extensive crossover. Where does HCI end and Usability Engineering begin? Where do Information Architecture, Information Design, Interaction Design, etc. begin and end? There is a lot of "big umbrella" talk from all the groups, as well as from those that desire smaller distinct roles for their niche. There is a lot of cross-pollination across these roles and fields, as all of them are needed in part to get to a good solution for the products they work on.

One thing seems sure, I want to know much more about the verifier method. It seems like understanding the criteria better for the verifier method will help frame a language of criticism and cross-boundary peer review for development and design.



August 26, 2004

Quick Links in the Side Bar is not Optimal

Paul wants to "set up one of those link-sidebar thingies again" for his quick link list. Actually, I am finding that side link lists, like mine, cause problems for folks tracking referrer links back and for search engines. Context for the links is helpful, but so is being able to find the date and page the links came from. The way Paul is doing his quick links now works well. I was able to point directly to these links, and the links he makes have context, even if it is only a list of links.

Quite similar to the Fixing Permalink to Mean Something post the other day, the links in the side bar are temporary. I find links from Technorati back to my site from some poor soul looking for the comment and link vanderwal.net had placed. These links do not have a permalink, as they are ever rotating. I have received a few e-mails asking where the link was from and whether I was spamming in some way.

Why do I have the quick links? I don't have the time to do a full or even short write-up. I clear my tabbed browser windows and put the items I have not read in full in the Quick Links. Some things I want access to from my mobile device or from work, to read the info in full or make use of the information. Other things I want to keep track of and include in a write-up.

The other advantage of moving the quick links into the main content area is they would be easier to include in one aggregated feed. I know I can join my current feeds, but I like sites that provide their feeds in the same context as they appear on the site, as it eases the ability to find the information. This change will take more than a five or ten minute fix for my site, but it is on my to-do list.



August 25, 2004

Chevy Redesigns with Standards

Chevrolet has redesigned with fully valid (one minor issue in the style sheet) XHTML (strict) and CSS. It is beautiful and wonderfully functional. All the information can be easily copied and pasted to help the discerning car buyer build their own crib sheet. The left navigation (browsing structure) is wonderful and not a silly image, but a definition list that is expandable. The style layer is semantic, which is a great help also (for those IAs who understand). Those of you so inclined, take a look under the hood as there are many good things there.



August 20, 2004

Fixing Permalink to Mean Something

This has been a very busy week and this weekend it continues with the same. But I took two minutes to see if I could solve a tiny problem bugging me. I get links to the main blog, Off the Top, from outside search engines and aggregators (Technorati, etc.) that are referencing content in specific entries, but not all of those entries live on the ever-changing blog home page. All of the entries had the same link text pointing to their permanent location. The dumb thing was every link to their permanent home was named the same damn thing, "permalink". Google and other search engines use the information in the link name to give value to the page being linked to. Did I help the cause? No.

So now every permanent link states "permalink for: insert entry title". I am hoping this will help solve the problem. I will most likely modify the other pages sometime next week (it is only a two minute fix), as I am toast.
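The template change is a one-liner in spirit. Here is a sketch of the idea (the entry fields here are hypothetical, not the actual variables this site's templates use): fold the entry title into the anchor text so search engines have something meaningful to weigh, instead of three hundred links all named "permalink".

```python
from html import escape

# Sketch: build the permanent-link anchor with the entry title in the
# link text, HTML-escaping both pieces. The entry dict is a
# hypothetical stand-in for whatever the blog template exposes.
def permalink_html(entry):
    return '<a href="{url}">permalink for: {title}</a>'.format(
        url=escape(entry["url"], quote=True),
        title=escape(entry["title"]),
    )

print(permalink_html({
    "url": "http://www.vanderwal.net/random/",
    "title": "Fixing Permalink to Mean Something",
}))
```

Descriptive link text also helps people scanning a list of referrer links or search results, not just the ranking algorithms.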



This work is licensed under the Creative Commons Attribution-NonCommercial-ShareAlike License.