Off the Top: Web design Entries



December 8, 2003

WaSP interview with Todd Dominey

The Web Standards Project interviews Todd Dominey, who was behind the standards-based PGA redesign. The interview raises the problems Content Management Systems cause for valid markup. Todd also highlights that it is much easier to move toward standards when working from scratch than when cleaning up previously marked-up content.



December 4, 2003

CSS Cribsheet

I nearly forgot, Dave Shea has come up with a CSS cribsheet that flat out rocks. Dabbling or living in CSS? Use it.



December 3, 2003

Testing the Three Click Rule

Josh Porter of UIE tests the myth of the Three-Click Rule. Josh finds that users will continue seeking what they want to find beyond three clicks, as long as they feel they are on the right track and getting closer. Most users will not abandon their quest after three clicks, as had been suggested.

Oddly, I remember this three-click rule from four or five years ago; when we tested it, our users did not give up either. Other studies at the time backed up what we were finding. Now, in the last couple of years, folks who are new to the Web have been pontificating about the three-click rule again.

As always, it is best to test and not just follow blindly.



December 2, 2003

Harper's redesigned

Harper's Magazine has been redesigned by Paul Ford. Paul discusses the Harper's redesign on his own site, Ftrain.

The site is filled with all the good stuff we love: valid XHTML, CSS, and accessible (meaning well-structured) content. The site is clean and highlights the content, which is what Harper's is all about - great content. The site is not overfilled with images and items crying out for your attention; it is simply straightforward.

We bow down before Paul and congratulate him on a job very well done.



November 1, 2003

QuirksMode launched

I nearly forgot, Peter-Paul Koch has delivered QuirksMode filled with the good stuff for JavaScript, CSS, etc.



Why page numbers fail us

I keep running into a deeply ingrained information habit that has never worked well for its intended purpose: the page number has been an information curse. Printed documents use page numbers, which are intended as a reference point (not as the bragging rights often heard from readers of Harry Potter and Neal Stephenson books - I am on page 674 and you are on page 233). All of us are familiar with the problem from high school and college, if we happened to have different printed copies of a classic text. Page 75 of Hemingway's The Old Man and the Sea was not the same in everybody's copy.

Even modern books fail when trying to reference pages; just look at the mass market edition of Cryptonomicon with 1168 pages and the hardcover version of Cryptonomicon with 928 pages of the same text. Trying to use a page number as a reference does absolutely no good.

Now we try to reference information on the Web, which should not be chunked up by page count but by logical information breaks. These breaks are often made at chapters or headings, and rightly so, as they most often help the reader with context. Documents are often placed on the Internet for two purposes - the ability to print and to keep the page numbers. Having information broken logically for a print presentation makes some sense if it is going to be printed and read in that manner, but more and more electronic information is being read on electronic devices and not printed. Adobe Reader does not easily flow from page to page, which is a complaint I often hear from readers trying to read page-delimited PDF files.

So if page numbers fail us in the printed world and are even more abysmal in the electronic medium, what do we use? One option is to use natural information breaks: chapters, headings, and paragraphs. These breaks occur in every medium, and their absence would cause problems for readers and for the information's structure.

If we remove page numbers, essentially going native, as books and documents did not have page numbers originally (Gutenberg's Bible did not rely on page numbers; in fact, page numbers are almost never used for Biblical references), then we can easily place small paragraph numbers in the left and right margins. In books, journals, and periodicals with tables of contents or article jumps, the page numbers can remain as the document's self-reference. External references would then have a solid means of pointing at content that actually works.

Electronic media do not necessarily need page numbers for self-references within a document, as the medium uses hyperlinking to perform the same task appropriately. To reference into a document from outside, one would use the chapter, heading, and paragraph to point the reader to the exact location of the text or microcontent. In (X)HTML each paragraph tag could use an incremented "id" attribute. This could be scripted to display in the presentation as well as be used as a hyperlink directly to the content, using the "id" as an anchor.
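A rough sketch of what that could look like (the id scheme, URL, and CSS rule here are just my own illustration, not a settled convention):

    <h2 id="chapter2">Chapter 2: Page numbers</h2>
    <p id="c2p1">First paragraph of the chapter.</p>
    <p id="c2p2">Second paragraph, the one worth citing.</p>

    <!-- an external reference can then point straight at the paragraph -->
    <a href="http://example.com/essay#c2p2">Chapter 2, paragraph 2</a>

    /* one way to surface the numbers in the presentation without a script
       (CSS generated content; browser support for this varies) */
    p[id]:before { content: attr(id) " "; color: #999; }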

I guess the next question is what to do about "blockquote" and "table" tags, etc., which are also block-level elements. One option is to not use id attributes on these tags, as they are not paragraphs and may be placed in different locations in the various presentation mediums the document is published in. The other option is to include the id attribute, but then the ease of creating the reference information for each document type is lost.

We need references in our documents that are not failures from the beginning.

Other ideas?



October 30, 2003

CSS Tabs part 2

Doug Bowman provides Sliding Doors 2 for ALA. Sliding Doors are rounded tabs done with CSS, meaning the text is not in a graphic and the tabs have rollover effects without having to build rollover images or deal with JavaScript. Doug's version 2 of Sliding Doors gives those with pages in a CMS, or other non-hand-built pages, a way to mark the current tab. This beats sniffing the URL with JavaScript to set the local tab state.
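For those who have not read the ALA pieces, here is a very stripped-down sketch of the idea (this is not Doug's actual code, and the image names are made up): the two halves of the tab graphic sit on two nested elements so the tab stretches with its text, and a body id matched against a tab id marks the current page with no scripting at all.

    <body id="home">
      <ul id="nav">
        <li id="tab-home"><a href="/">Home</a></li>
        <li id="tab-about"><a href="/about/">About</a></li>
      </ul>
    </body>

    #nav li { float: left; background: url("tab-left.gif") no-repeat left top; }
    #nav li a { display: block; padding: 5px 15px;
                background: url("tab-right.gif") no-repeat right top; }
    #nav li a:hover { color: #036; }  /* rollover effect without swapping images */
    /* a CMS template only has to set the body id; the stylesheet highlights the tab */
    #home #tab-home a, #about #tab-about a { font-weight: bold; }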



October 19, 2003

RSS on PDAs and information reuse

Three times in the past week I have run across folks mentioning Hand/RSS for Palm. This seems to fill the hole that AvantGo does not completely fill. Many of the information resources I find helpful and insightful have RSS feeds but no "mobile" version (more importantly, the content is not marked up with standard, validating (X)HTML and a malleable page layout that will work on desktop/laptop Web browsers as well as smaller mobile screens).

I currently pull content from 125 RSS feeds to scan and then read. Having some of these feeds pulled and stored on my PDA would be a great help.

Another idea I have been playing with is to pull and convert RSS feeds for mobile browser access and use. This can be readily done with PHP. It seems that MobileRSS already does something like this.
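A rough sketch of the PHP route (the feed URL is a placeholder, this assumes a PHP build with SimpleXML available, and it is nowhere near what MobileRSS actually does): pull the feed and emit a bare-bones page light enough for a PDA browser.

    <?php
    // fetch and parse the feed
    $feed = simplexml_load_file('http://example.com/feed.rss');
    if ($feed === false) {
        die('Could not fetch or parse the feed.');
    }

    header('Content-Type: text/html; charset=utf-8');
    echo '<html><head><title>' . htmlspecialchars($feed->channel->title) . '</title></head><body>';

    // one heading and one short paragraph per item keeps the page small
    foreach ($feed->channel->item as $item) {
        echo '<h3><a href="' . htmlspecialchars($item->link) . '">'
           . htmlspecialchars($item->title) . '</a></h3>';
        echo '<p>' . htmlspecialchars($item->description) . '</p>';
    }

    echo '</body></html>';
    ?>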

Content - make that information in general - stored and presented in a format that is only usable on one device type or in one application is very short-sighted. Information should be reusable to be more useful. Users copy and paste information into documents, to-do lists, calendars, PDAs, e-mail, weblogs, text-searchable data stores (databases, XML repositories, etc.), and so on. Digital information has been about reuse since its early days. Putting text only in a graphic is foolish (the AIGA websites need to learn this lesson), as is locking the information in a proprietary application or format.

The whole of the Personal Information Cloud - the rough cloud of information the user has chosen to have follow them so that it is available when they need it - is only usable if the information is in an open format.



October 15, 2003

Personalization is not a preference

News.com writes that a new Forrester report states personalization is overrated. The push for personalization comes from many of the portal tool developers who are trying to sell the technology. In discussions with users, many like their one personalized site (my.yahoo.com or my.washingtonpost.com) and prefer to pull content into that single broad portal.

It seems the folks selling the tools should be focusing on syndication rather than personalization of everything. Syndicated content, such as RSS, can be pulled into whatever personalized interface the user desires. Anecdotally, I have found users also like opt-in e-mail as a means of finding out when information is updated.

The user does not really have control of the personalized information, as it is maintained on an external resource and not one that is truly close to the user. Users often prefer to have the information come to them, where they can control it and sort it into a system that works for them. The one personalized site may fit into a user's personal info cloud because it gives them a central place to find information, but if every site is personalized the disruptive factor remains, which keeps the user from having the information they need when they need it.



October 7, 2003

Building Web pages for the crippled IE browser

Microsoft and others are posting the workarounds needed for Web pages you build that require plug-ins. Java and Active Script seem to be the focus at this point. Here we go: Microsoft guide for building to the new neutered IE browser, Apple developer guide for post-Eolas development, Real Networks guide for embedded content, and Macromedia guide. [hat tip Craig Salia]



October 2, 2003

Compassion and the crafting of user experience

Adam provides a good form versus function essay in his Compassion and the crafting of user experience post. Make the time to read it. Once again, design without function is an unusable product, but function with good design is very enjoyable. Top designers understand the balance of form and function and make decisions about how the design will impact use. Those who are not at this point yet do not have command of their craft, which should be a goal.



September 22, 2003


September 9, 2003

Getting Site maps and Site indexes right

Chiara Fox provides an excellent overview of site maps and site indexes in her Sitemaps and Site Indexes: what they are and why you should have them. The overview is very insightful. Many experienced users find well-developed site maps very helpful.

The odd thing is that for all the assistance site maps and site indexes provide, new users and even general users rarely turn to these assistive tools. In the past five years I have only seen one or two users click on a site map or index in user testing sessions. When questioned why, users often state that they do not find the tools helpful (read Chiara's article to build better tools) or that they did not know to look for the links.



Jess offers Searching for the Center of Design

Jess provides an excellent take on Searching for the center of design in Boxes and Arrows this month. Whether you develop "top-down" or "bottom-up", this is a great read and shows great understanding. He really hits the nail on the head: there is usually one person who chooses which direction to go, and that is usually not a user group but a powerful stakeholder.

The best we can do is be well educated, bring a lot of experience, and educate the stakeholder, if that is permitted. Add to your own education by taking in Jess' article.



August 27, 2003

Kottke and others on standards and semantics

Kottke provides a good overview of Web standards and semantically correct site development. Jason points out, as many have, that just because a site validates against the W3C specs does not mean it is semantically correct. Actually, there are those who take umbrage at the use of the term semantic for (X)HTML, considering it structural tagging of the content instead, but I digress. A "valid" site could use a div tag where it should not have, for example where a paragraph tag belonged instead. Proper structural markup is just as important as valid markup. The two are not mutually exclusive; in fact, they are very good partners.
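A quick, made-up illustration of Jason's point: both of these fragments validate, but only the second tells a browser, a screen reader, or a scraper what the content actually is.

    <!-- valid, but says nothing about what the content is -->
    <div class="para">Structure matters as much as validation.</div>

    <!-- valid and structural -->
    <p>Structure matters as much as validation.</p>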

One way to mark up a page is to begin with NO tags on the page in a text editor, then mark up the content items based on what type of content they are. A paragraph gets a "p" tag, tabular data is placed in a table, a long quote is put in a "blockquote" tag, an ordered list gets "ol" tags around it with each item in the list wrapped in "li" tags, and so forth. Using list tags to indent content can be avoided with this method. Once the structure has been properly added to the document, it is time to work with the CSS to add presentation flair. This is not rocket science, and the benefits are very helpful when transitioning the content to handheld devices and other uses. The information can also be more easily scraped for automated purposes if needed.
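A small, contrived example of working in that order - structure first, presentation second (the content and the styles here are only placeholders):

    <h1>Trip report</h1>
    <p>We spent three days talking with users in the field.</p>
    <blockquote>
      <p>The long quotation lifted from the source document goes here.</p>
    </blockquote>
    <ol>
      <li>First finding</li>
      <li>Second finding</li>
    </ol>

    /* presentation is layered on afterward with CSS, not baked into the markup */
    blockquote { margin-left: 2em; font-style: italic; }
    ol li { line-height: 1.4; }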

It is unfortunate that many manufacturers of information tools do not follow this framework when transforming information into HTML from their proprietary formats. An MS Word document saved as HTML creates horrible garbage that is both non-structural and invalid. The Web is a wonderful means to share content, but mangled markup and no structure can render information inconsistent at best, if not useless.

While proper development is not rocket science, it does take somebody who knows what they are doing, and is not guessing, to get it right.

Others are posting on Jason's post, like Doug Bowman and Dave Shea, and have opened up comments. The feedback in Doug's comments is pretty good.



This work is licensed under the Creative Commons Attribution-NonCommercial-ShareAlike License.