Off the Top: XML Entries
2025 Vanderwal.net Backend Modernization is Done
A couple of years ago I set out to update the backend code from PHP 5.6 to PHP 7, but initial progress was hindered by the time I had available.
Planning the Modernization Work
A few weeks back I started looking at it again and mapped it out properly, like a project. I realized PHP 7 had reached end of life and I should really head to PHP 8, so that target was set. I was planning on keeping things relatively simple, using a database connection quite similar to what I had been using, but digging through PHP 8 books and resources on the O’Reilly Learning Platform, everything was using a newer, more flexible method. After digging further I took the route that would take a bit more work modifying existing code (some of it going back to 2000 and 2001). But as I dug into the work I realized I only needed to modify and modernize about 20% to 30% of the code on the pages and templates.
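For illustration only (the post does not show the actual code, and the connection details, table, and column names below are invented), the newer, more flexible method that most PHP 8 resources point to is PDO with prepared statements, which looks roughly like this:

<?php
// Illustrative sketch only: the actual wrapper and schema are not shown in the post,
// so the DSN, credentials, and table/column names here are invented.
$pdo = new PDO(
    'mysql:host=localhost;dbname=vanderwal;charset=utf8mb4',
    'db_user',
    'db_password',
    [
        PDO::ATTR_ERRMODE            => PDO::ERRMODE_EXCEPTION,
        PDO::ATTR_DEFAULT_FETCH_MODE => PDO::FETCH_ASSOC,
    ]
);

// Prepared statement with a bound parameter instead of SQL built by string concatenation.
$stmt = $pdo->prepare('SELECT entry_id, title, body FROM entries WHERE category = :category');
$stmt->execute([':category' => 'Folksonomy']);

foreach ($stmt->fetchAll() as $entry) {
    echo htmlspecialchars($entry['title']), "\n";
}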
In doing this I also realized my old method of security around the system management backend was no longer working, so it had to be rewritten as well. That meant rebuilding the backend screens. Those updates went live two days ago on the 19th.
With that done it was back to the last third or so of the pages and templates that are public facing. I had already reworked the category output pages and added pagination to them. No longer will all 121 Folksonomy-categorized posts show up on one screen; only 15 at a time will. The “Personal” category has 369 posts (it is a blog so it is about me, you see, but just not all of it).
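As a sketch of how that 15-per-page pagination typically works in PHP (not the site's actual code; the table and column names are invented, and $pdo is the connection from the sketch above):

<?php
// Illustrative pagination sketch: 15 posts per screen, as described above.
$perPage  = 15;
$page     = max(1, (int) ($_GET['page'] ?? 1));
$offset   = ($page - 1) * $perPage;
$category = $_GET['cat'] ?? 'Folksonomy';

// Count the posts in the category to know how many page links to draw.
$count = $pdo->prepare('SELECT COUNT(*) FROM entries WHERE category = :category');
$count->execute([':category' => $category]);
$totalPages = (int) ceil($count->fetchColumn() / $perPage);

// Fetch only the 15 posts for the requested page.
$stmt = $pdo->prepare(
    'SELECT entry_id, title, posted_on FROM entries
     WHERE category = :category
     ORDER BY posted_on DESC
     LIMIT :limit OFFSET :offset'
);
$stmt->bindValue(':category', $category);
$stmt->bindValue(':limit', $perPage, PDO::PARAM_INT);
$stmt->bindValue(':offset', $offset, PDO::PARAM_INT);
$stmt->execute();
$posts = $stmt->fetchAll();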
The RSS feed received a very minor update to RSS 0.92 to keep in line with many of the OG methods that remain.
The Actual Homepage has been Restructured
The homepage for vanderwal.net has been restructured to make it easier to find information that isn’t directly in the blog but that I get emails and DMs about somewhat regularly. Moving it to two columns helped with this. I do need to move this to a flex or grid CSS model, as tweaking the layout was rather tedious.
This Modernization was like Changing the Plumbing and Wiring in a Building
This modernization was like bringing the plumbing and wiring of a building up to new building code. The walls and structure are all pretty much the same. The top layer stays the same for now.
This modernization does allow me to hopefully finish setting up webmentions, which I’ve had partly wired since around 2021 or so. I just need the last piece of that to work. There are also other IndieWeb related updates I’m planning on making and have been waiting to get this code updated before modifying and adding them into place. By the way, if you are running your own site and/or blog, the IndieWeb community has a gem. There are a lot of resources in their wiki and pages helping anybody with their own site.
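For anyone curious what receiving webmentions roughly entails, here is a minimal sketch of an endpoint following the W3C Webmention spec (this is not the code on this site; the storage step is left as a hypothetical function):

<?php
// Minimal Webmention receiving sketch: the sender POSTs form-encoded "source" and "target" URLs.
$source = $_POST['source'] ?? '';
$target = $_POST['target'] ?? '';

// Both must be valid URLs, and the target must be a page on this site.
if (!filter_var($source, FILTER_VALIDATE_URL) ||
    !filter_var($target, FILTER_VALIDATE_URL) ||
    parse_url($target, PHP_URL_HOST) !== 'vanderwal.net') {
    http_response_code(400);
    exit('Invalid source or target');
}

// Verify the source document actually links to the target before accepting it.
$sourceHtml = file_get_contents($source);
if ($sourceHtml === false || strpos($sourceHtml, $target) === false) {
    http_response_code(400);
    exit('Source does not mention target');
}

// Queue the mention for moderation/display (storage function is a hypothetical stand-in).
// store_webmention($source, $target);
http_response_code(202); // Accepted for processing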
The pagination for the blog is likely going to change from date-based, month-focussed pagination to a page model with the oldest selection being page 1. The archive page will get a long overdue update so it doesn’t stop at 2003 (looks at calendar, yep, it is out of date). I’m hoping to have an archive page that shows activity, but also addresses the different post types (essay, journal, and weblog) that only lasted the first few years; around the 2014 code update and site move the entry type template went missing.
The category listing pages will also likely get an update, and the category pages may get some easier means of moving through the posts over time, beyond general pagination.
Assistance with the Update
This being 2025, the question pops up if and how I was using generative AI as part of this. I was using Claude.ai from Anthropic for some initial questions, then I’d head to O’Reilly’s resources to validate the answers and learn what I needed to know (it had been about 10 years since I was knee deep in PHP). When coding and modernizing the pages and templates and I’d hit defects, I’d run those past Claude to sort out what the issue might be (sometimes a missing “;”, other times the new query wrapper and parsing method caused me to miss something, or I had deprecated code I hadn’t converted). Claude would point out my errors and instruct me how to correct them. Sometimes it would offer a few options for approaches (some were not quite right and others were good, and I needed to select a path after verifying and learning about them further). It also would crank out code. I gave Claude instructions not to bother with large chunks of my pages and code, which it left alone.
I use Claude stand alone and used its Projects function to keep things focussed. I fed it the outlines and high level task areas I have in GitHub and Obsidian, and it kept track of what was accomplished and how the work met the goals. The most impressive thing, compared to other generative AI options, is it was very strict with identifying things not viable in PHP 8 (and its iterative versions); nothing else did this well. Claude also had the code of pages and templates I had worked on, and would point out I was using a structure and method in other pages and ask if I shouldn’t use that practice on the page I had just fed it to sort out some defect I was working through. My code has had four or more iterations over the 25 years, and my early coding wasn’t so hot and still remained. Claude helped my code get more consistent, not by fixing it, but by pointing out I had something good and modern and I should keep consistent with that. By the last couple of templates I didn’t need to have Claude check them, as they worked with my own editing, but I still fed them in as it seems to help improve suggestions and catch lack of consistency of my own doing.
A year ago I tried this with OpenAI and its ChatGPT and it was a hot mess. It couldn’t keep PHP versions correct. I try it with every update and I still find it really problematic; what it outputs (code and other attempts) is nothing better than mediocre and often not correct.
IDE Use
In the last 10 to 15 years the IDE I’ve used to code and work on vanderwal.net has been from Panic, either Coda or now Nova, which have worked well. I have kept a good firewall between AI assistance and the IDE. I don’t mind type ahead suggestions. But finding deprecated code to address was a capability I was going to need. Some friends suggested I try PhpStorm by JetBrains, which seemed good as I’ve used PyCharm a few times in the past and really enjoyed it. I knew I didn’t want VS Code near this, as I’ve pretty much had it with VS Code (I mostly use it with Python for data analytics) due to plug-in issues and lack of ease keeping projects separated.
I picked up a trial of PhpStorm and after a day or so I had the hang of a good portion of what I needed to do. My favorite part is setting the exact version of PHP you are working with. It highlights where there are errors and problems. In the last couple of days, as I finally was getting the hang of PHP 8 and the methods I was regularly using, PhpStorm was helping with type ahead suggestions (there were a few times where I accidentally triggered them when I didn’t want them and nearly turned off that functionality - Control-Z is your friend). PhpStorm also can make use of GitHub Copilot, which I don’t find helpful with OpenAI connected to it, but it is better with Claude Sonnet. The downside with Copilot is it doesn’t have access to the Project space in Claude I’ve been working with, and therefore its suggestions are less on target (Copilot with Claude is light years better for PHP than OpenAI offerings). Essentially I didn’t use the incorporated genAI functionality and I was very happy with that setup.
Posting Ease
One of the things I’m looking forward to is slightly better methods for posting to this site and managing posts. Many of the steps beyond creating and posting are manual, like kicking off creation of the RSS feed (I do that after a quick review of the created post once it is live). Alerting the media, or the alerts beyond basic RSS, is also a manual step done after that review. I may automate the combination of those two kicks after a review.
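As a sketch of what that combined kick could look like (this is not the site's actual workflow; the file path, feed URL, WebSub hub, and the build_rss_feed() helper standing in for the existing feed generator are all hypothetical):

<?php
// Hypothetical sketch: fold the two post-review "kicks" into one script.
$feedPath = '/var/www/vanderwal/rss.xml';
$feedUrl  = 'https://vanderwal.net/rss.xml';

// Step 1: regenerate the feed (build_rss_feed() stands in for the existing generator).
file_put_contents($feedPath, build_rss_feed());

// Step 2: notify a WebSub hub that the feed has new content, so subscribers are alerted.
$ping = curl_init('https://pubsubhubbub.appspot.com/');
curl_setopt_array($ping, [
    CURLOPT_POST           => true,
    CURLOPT_RETURNTRANSFER => true,
    CURLOPT_POSTFIELDS     => http_build_query([
        'hub.mode' => 'publish',
        'hub.url'  => $feedUrl,
    ]),
]);
curl_exec($ping);
curl_close($ping);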
The Data Journalism Handbook is Available
The Data Journalism Handbook is finally available online and soon as a book (Data Journalism Handbook from Amazon or The Data Journalism Handbook from O’Reilly), which is quite exciting. Why, you ask?
In October of 2010 the Guardian in the UK posted a Data Journalism How To Guide that was fantastic. This was a great resource not only for data journalists, but for anybody who has an interest in finding, gathering, assessing, and doing something with the data that is shared and found in the world around us. These skills are not the sort of thing that many of us grew up with or learned in school, nor are they taught in most schools today (that is another giant problem). This tutorial taught me a few things that have been of great benefit and filled in gaps in my tool bag, which was still mostly rusty and built using the tool set I picked up in the mid-90s in grad school in public policy analysis.
In the Fall of 2011 at MozFest in London many data journalists and others of like mind got together to share their knowledge. Out of this gathering came the realization and starting point for the handbook. Journalists are not typically those who have deep data skills, but if they can learn (not a huge mound to climb) and have it made sensible and relatively easy in bite-sized chunks, the journalists will be better off.
All of us can benefit from this book in our own hands. Getting to the basics of how to gather and think through data and the questions around it, all the way through how to graphically display that data, is incredibly beneficial. I know many people who have contributed to this Handbook and think the world of their contributions. Skimming through the version that is on the web, I can quickly see this is going to be an essential reference for all, not just journalists or bloggers, but everybody. This could and likely should be the book used in high school classes and in university information essentials courses taught the first semester of the first year.
Trip and d.construct Wrap-up
I am back home from the d.construct trip, which included London and Brighton. The trip was very enjoyable, the d.construct conference is a pure winner, and I met fantastic people that keep my passion for the web alive.
d.construct
The d.construct conference had Jeff Barr from Amazon talking about Amazon Web Services, Paul Hammond and Simon Willison discussing Yahoo and its creation and use of web services for internal and external uses, Jeremy Keith discussing the Joy of the API, Aral Balkan presenting the use of Adobe Flex for web services, Derek Featherstone discussing accessibility for JavaScript and Ajax and how they can hurt and help the web for those with disabilities, myself (Thomas) discussing tagging that works, and Jeff Veen pulling the day together with designing the complete user experience.
Jeff Barr provided not only a good overview of the Amazon offerings for developers, but his presentation kept me interested (the previous two times my mind wandered) and I got some new things out of it (like the S3 Organizer extension for Firefox).
Jeremy was in his usual great presenting form (unfortunately a call from home caused me to miss some of the middle, but he kept things going well). I heard afterwards that many people learned something from the session, including those who thought they knew it all already.
Paul and Simon did a wonderful tag team approach on what Yahoo is up to, how they "eat their own dog food", and how Yahoo Local uses microformats (Wahoo!).
Aral was somebody I did not know before d.construct, but I really enjoyed getting to know him as well as his high energy presentation style and mastery of the content that showed Flash/Flex 2.0 are fluent in Web 2.0 rich interfaces for web services.
Derek was fantastic as he took a dry subject (accessibility) and brought it to life. He also made me miss the world of accessibility by talking about how JavaScript and Ajax can actually improve the accessibility of a site (if the developer knows what they are doing - this is not an easy area to tread) and made it logical and relatively easy to grasp.
I can not comment on my own presentation, other than the many people who sought me out to express appreciation and to ask questions (many questions about spamming, which is difficult if the tagging system is built well). I was also asked if I had had somebody explain the term dogging (I had forgotten there was a rather bawdy use of the term in British culture and used the term for people who are dog lovers - this led to very heavy laughter). Given the odd technical problems at the beginning of the presentation (mouse not clicking), things went alright from about 5 minutes or so in.
Lastly, the man I never want to follow when giving a presentation, Jeff Veen rocked the house with his easy style and lively interaction with his slides.
I am really wanting to hear much more from Aral and Derek now that I have heard them speak. I am looking forward to seeing their slides up and their podcasts, both should be posted on the d.construct schedule page.
London Stays
The trip also included an overnight stay in London on the front and back end of the conference. Through an on-line resource I had two last minute rooms booked at Best Western Premiers that were great rooms in well appointed hotels. The hotels even had free WiFi (yes, free in Europe is a huge savings), which was my main reason for staying at these hotels I knew nothing about. I really liked both locations, one near Earls Court Tube Station and the other near Charing Cross Road and Soho. The rooms were well under 200 U.S. dollars, which is a rarity in central London. I think I have a new place to track down the next time I visit London.
London People & Places
I had a few impromptu meetings in London and an accidental chat. When I first got in I was able to clean up and go meet friends Tom and Simon for lunch at China Experience. We had good conversations about the state of many things web. Then Tom showed me Cyber Candy, which I have been following online. I was then off to Neal's Yard Dairy to pick up some Stinking Bishop (quite excellent), Ogleshield, and Berkswell. I then did a pilgrimage to Muji to stock up on pens, all the while using Yahoo Messenger in a mobile browser (a very painful way to communicate, as there is no alert for return messages and, when moving, the web connection seems to need resetting often).
That evening I met up with Eric Miraglia for a great chat and dinner, then included Christian Heilmann (who has a great new book, from my initial read, Beginning JavaScript with DOM Scripting and Ajax) in our evening. The discussions were wonderful and it was a really good way to find people of similar minds and interests.
On my last day in London I ended up running into Ben Hammersley as he was waiting for a dinner meeting. It was great to meet Ben in person and have a good brief chat. Somehow, when walking down the street and seeing a man in a black utilikilt, with short hair, intently using his mobile, there is a short list of possibilities as to who it may be.
Food
On my trip I had a few full English breakfasts, including one in Brighton at 3:30am (a gut buster, to use the term), which was my first full meal of the day. The breakfast at the Blanche House (the name of the hotel never stuck in my head and the keys just had their logo on them, so getting back to the hotel was a wee bit more challenging than normal) was quite good, particularly the scrambled eggs wrapped in smoked Scottish salmon. The food the first night in Brighton at the Seven Dials was fantastic and a great treat. Sunday brunch at SOHo Social in Brighton was quite good and was needed to bring me back from another late night chatting, and the fish cakes were outstanding. The last evening in London I stopped in at Hamburger Union for a really good burger with rashers of bacon. The burgers are made only with naturally fed, grass-reared, additive-free beef. This is not only eco-friendly, but really tasty. I wish there were a Hamburger Union near where I work, as I would make use of it regularly.
Too Short a Visit
As it is with nearly every trip this year, the time was too short and the people I met were fantastic. I really met some interesting and bright people while in Brighton and I really look forward to keeping in touch as well as seeing them again.
Cultures of Simplicity and Information Structures
Two Conferences Draw Focus
I am now getting back to responding to e-mail sent in the last two or three weeks and digging through my to do list. As time wears on I am still rather impressed with both the XTech and Microlearning conferences. Both have a focus on information and data that mirrors my approaches from years ago and are the foundation for how I view all information and services. Both rely on well structured data. This is why I pay attention and keep involved in the information architecture community. Well structured data is the foundation of what falls into the description of web 2.0. All of our tools for open data reuse demand that the underlying data is structured well.
Simplicity of the Complex
One theme that continually bubbled up at Microlearning was simplicity. Peter A. Bruck, in his opening remarks at Microlearning, focussed on simplicity being the means to take the complex and make it understandable. There are many things in the world that are complex and seemingly difficult to understand, but many complex systems are made up of simple steps and simple to understand concepts that are strung together to build complex systems and complex ideas. Every time I think of breaking down the complex into simple components I think of Instructables, which allows people to build step-by-step instructions for anything, and makes each of the steps a reusable object for other instructions. The Instructables approach is utterly brilliant and dead in line with the microlearning approach of breaking down learning components into simple lessons that can be used and reused across devices, based on the person wanting or needing the instruction, and providing it in the delivery media that matches their context (mobile, desktop, laptop, tv, etc.).
Simple Clear Structures
This structuring of information ties back into the frameworks for syndication of content and well structured data and information. People have various uses and reuses for information, data, and media in their lives. This is the focus of the Personal InfoCloud. This is the foundation for information architecture: addressable information that can be easily found. But in our world of information floods and information pollution, due to there being too much information to sort through, findability of information is as important as refindability (which is rarely addressed). And along with refindability is the means to aggregate the information in interfaces that make sense of the information, data, and media so as to provide clarity and simplicity of understanding.
Europe Thing Again
Another perspective on the two conferences was that they were both in Europe. This is not a trivial variable. At XTech there were a few other Americans, but at Microlearning I was the only one from the United States, and there were a couple of Canadians. The European approach to understanding and building is slightly different from the approach in the USA. In the USA there is a lot of building and then learning and understanding, whereas in Europe there seems to be much more effort in understanding and then building. The results are somewhat different, and the professional polish of European products out of the gate, where things work, is different than in the USA. This was really apparent with System One, which is an incredible product. System One has all the web 2.0 buzzwords under the hood, but they focus on a simple to use tool that pulls together the best of the new components, but only where it makes sense, to create a simple tool that addresses complex problems.
Culture of Understanding Complex to Make Simple
It seems the European approach is to understand and embrace the complex and make it simple through deep understanding of how things are built. It is very similar to Instructables as a culture. The approach in the USA seems to include the tools, but have lacked the understanding of the underlying components and in turn have left out elements that really embrace simplicity. Google is a perfect example of this approach. They talk simplicity, but nearly every tool is missing elements that make it fully usable (calendar not having sync, not being able to only have one or two Google tools on rather than everything on). This simplicity is well understood by the designers and they have wonderful solutions to the problems, but the corporate culture of churning things out gets in the way.
Breaking It Down for Use and Reuse
Information in simple forms that can be aggregated and viewed as people need in their lives is essential to moving forward and taking the pain out of technology that most regular people experience on a daily basis. It is our job to understand the underlying complexity, create simple usable and reusable structures for that data and information, and allow simple solutions that are robust to be built around that simplicity.
More XTech 2006
I have had a little time to sit back and think about XTech, and I am quite impressed with the conference. The caliber of the presenters and the quality of their presentations were some of the best of any conference I have been to in a while. The presentations got beneath the surface level of the subjects and provided insight that I had not run across elsewhere.
The conference focus on browsers, open data (XML), and high level presentations was a great mix. There was much cross-over in the presentations, and I eventually got the hang that this was not a conference of stuff I already knew (or presented at a more introductory level), but of things I wanted to dig deeper into. I began to realize late into the conference (or after, in many cases) that the people presenting were people whose writing and contributions I had followed regularly when I was doing deep development (not managing web development) of web applications. I changed my focus last Fall to get back to developing innovative applications, working on projects that are built around open data and that fill some of the many gaps in the Personal InfoCloud (I also left to write, but that got sidetracked).
As I mentioned before, XTech had the right amount of geek mindset in the presentations. The one that really brought this to the forefront of my mind was "XForms, an Alternative to Ajax" by Erik Bruchez. It focussed on using XForms as a means to interact with structured data, as one does with Ajax.
Once it dawned on me that this conference was rather killer and I should be paying attention to the content and not just those in the floating island of friends, the event was nearly two-thirds of the way through. This huge mistake on my part was due to the busy nature of things leading up to XTech, as well as not getting there a day or two earlier to adjust to the time and attend the pre-conference sessions and tutorials on Ajax.
I was thrilled to see the Platial presentation and meet the makers of the service. When I went to attend Simon Willison's presentation rather than the GeoRSS session, I realized there was much good content at XTech, and it is now on my must-attend list.
As the conference was progressing I was thinking of all of the people who would have really benefitted from and enjoyed XTech as well. A conference about open data, and about systems to build applications with that data that meet real people's needs, is essential for most developers working out on the live web these days.
If XTech sounded good this year in Amsterdam, you may want to note that it will be in Paris next year.
Light Overview of XTech and Amsterdam (including BarCamp Amsterdam)
This trip to Amsterdam for XTech 2006 (and now bits of BarCamp Amsterdam II) has been quite different from previous trips, in that Amsterdam is now getting to be very familiar. I also did not spend a day on the front end of the trip walking around adjusting to the time change; I spent it inside at XTech, where I saw many friends, which really made it feel more like a floating island composed of geographically distributed friends that I see when I travel.
It has been great seeing good friends that I really wish I could see more and/or work with on projects as I believe some killer things could get done. I also met people and got to hang out with many new people, which is always great. I was pleased to spend time with people I have only partially spent time with in the past.
I quite enjoyed XTech, as it had a good amount of geekery, which provided sparks of inspiration and very good feedback on the "Come to Me Web" and "Personal InfoCloud" stuff I presented. The session had Paul Hammond, Tom Coates, and then myself presenting ideas that focussed on open data, using open data, and building for personal use and reuse of information in our three presentations. It was a fantastic setup.
There were many Mozilla folks around, and it was fantastic to hear where Mozilla/Firefox development is going. This was a very good cross pollination of people, ideas, and interests.
I also realized I need to throw out my presentation on the Personal InfoCloud and Come to Me Web and rebuild it from scratch. I was finding that the presentation I have been iterating on for the past year or so needs restructuring and refocussing. I get very positive comments on the presentation, but in delivering it I have made many minor tweaks that have disrupted my flow of delivery. I believe that starting from scratch will help me focus on what gets delivered when. I really do not want to write out the presentation in long form, as I think that would make it stale for me.
I am heading home tomorrow, but I have not quite felt like I was in Amsterdam as it is really no longer a foreign place. It is still one of my favorite places to be. I spent much time exploring thoughts, spending time with people, playing with digital things, but not deeply finding the new bits of Amsterdam (outside of a few hours this morning). Ah well, I am back in a few short weeks.
Live Data Could Solve the Social Bookmarking Problem with Information Volatility
Alex brings up something in his Go and microformat stuff! post covering what is in the works with Microformats at Microsoft. Scroll down to where Alex talks about "mRc = Live data wiring"; this live data access is incredibly important.
One of the elements that has been bugging me with social bookmarking is that the volatility of the information is not taken into account when the bookmark is made. No, I am not talking about the information blowing up, but the blood pressure of the person bookmarking may rise if the data changes in some way. I look at social bookmarking, or bookmarking in general, as a means to mark a place, but it fails as an indicator of state or status change of the information we are pointing to. The act of bookmarking and/or tagging is an expression of our explicit interest in the object we bookmarked and/or tagged. The problem is our systems so far are saying, "yes, you have interest, but so what".
What the live data approach does is make our Personal InfoCloud active. If we could bookmark information and/or tag chunks of information as important, we should be able to find out when that information changes, or better, get an alert before the information changes. One area where this is essential and will add huge value is shopping. What happens with products in the real world? The prices change, they go out of stock, the product is modified, production of the product is stopped, etc. The permutations are many, but those expressing interest should be alerted and have their information updated.
One of the things I have been including in my "Come to Me Web" presentations is the ability to think about what a person needs when they use and want to reuse information. We read about a product we desire, we read the price, but we may think about the product or put it on a wish list that is related to an event in the future. When we go to act on the purchase the information we have gathered and bookmarked may be out of date.
One solution I have been talking about in my presentations is providing an RSS/Atom feed for the page as it is bookmarked, so the person gets the ability to get updated information. I built similar functionality into products years ago that let people using the data know when the data changed (via e-mail), and also provided the means to show what the data was prior and what it had changed to. It was functionality that was deeply helpful to the users of the system. Live data seems a more elegant solution, if it provides the means to see what information has changed, should the person relying on or desiring the information want it.
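As a rough sketch of that change-tracking idea (not any product's actual code; the flat-file storage and function name are for illustration only), bookmarking a page could store a copy, and a later check could surface both the prior and the changed versions:

<?php
// Illustrative sketch: fingerprint a bookmarked page and report when it changes.
function check_bookmark_for_change(string $url, string $storeDir): ?array
{
    $key     = md5($url);
    $current = file_get_contents($url);
    if ($current === false) {
        return null; // could not fetch; nothing to compare
    }

    $previousFile = "{$storeDir}/{$key}.html";
    $previous     = is_file($previousFile) ? file_get_contents($previousFile) : null;

    // Save the latest copy so the next check has something to compare against.
    file_put_contents($previousFile, $current);

    if ($previous !== null && md5($previous) !== md5($current)) {
        // The bookmarked information changed: return both versions so an alert
        // (e-mail, feed entry, etc.) can show what it was and what it became.
        return ['was' => $previous, 'now' => $current];
    }
    return null;
}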
For Many AJAX is Not Degrading, But it Must
A little over two months ago Chad Dickerson posted one of the most insightful things on his site, Web 0.1 head-to-head: 37Signals' Backpackit vs. Gmail in Lynx. You are saying Lynx? Yes! The point is that what 37Signals turns out degrades wonderfully and is still usable. It could work on your mobile device or on a six-year-old low-end computer in a coffee house or internet cafe in Eritrea (I have known two people who have done just that in the last year and found Gmail did not work, nor did MSN, but Yahoo worked beautifully).
Degrading is a Good Thing
Part of my problem with much of the push towards AJAX (it is a good, no, great thing that XMLHttpRequest is finally catching on) is that it must degrade well. It must still be accessible. It must be usable. If not, it is a cool, useless piece of rubbish for some or many people. I have been living through this with airline sites (Continental), commerce sites (Amazon - now slightly improved); actually, you name it and they adopted it somewhere in this past year. In most cases it did not work in all browsers (many times only in my browser of last resort, by which time I am completely peeved).
When Amazon had its wish list break on my mobile device I went nuts. I (and, I have found, a relatively large number of others these past couple of years) use it to remember what books I want when I am in brick bookstores, where I will check book prices as well as often add books to my wish list directly. The page had a ghastly sized JavaScript payload, which did some nice things on desktops and laptops but made the page far too large to download on a mobile device (well over 250 kb). In the past few weeks things seem to have reversed themselves, as the page degrades much better.
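One way to keep such a feature degradable on the server side (a sketch under assumptions, not Amazon's or anyone's actual implementation; the X-Requested-With header is a common JavaScript-library convention, not a standard) is to have the same URL answer both plain requests and scripted ones, with the plain HTML page as the default:

<?php
// Illustrative sketch: one URL, two responses; the full HTML page needs no JavaScript.
$items = ['DOM Scripting', 'Designing with Web Standards']; // placeholder wish-list data

$isAjax = ($_SERVER['HTTP_X_REQUESTED_WITH'] ?? '') === 'XMLHttpRequest';

if ($isAjax) {
    // Lightweight response for scripted requests.
    header('Content-Type: application/json; charset=utf-8');
    echo json_encode($items);
    exit;
}

// Default: a complete, small HTML page that works in Lynx or on a mobile browser.
header('Content-Type: text/html; charset=utf-8');
echo "<title>Wish list</title>\n<ul>\n";
foreach ($items as $item) {
    echo '  <li>' . htmlspecialchars($item) . "</li>\n";
}
echo "</ul>\n";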
Is There Hope?
Chad's write-up was a nice place to start pointing, as well as pointing out the millions of dollars lost over the course of time (Continental admitted they had a problem and had waived the additional phone booking fee as well as said their calls were up considerably since the web redesign that broke things for many). Besides Chad and 37Signals I have found Donna Mauer's Designing usable rich internet applications as a starting point. I also finally picked up DOM Scripting: Web Design with JavaScript and the Document Object Model by Jeremy Keith, which focusses on getting JavaScript (and that means AJAX too) to degrade. It is a great book for designers, developers, and those managing these people.
I have an awful lot of hope, but it pains me as most of us learned these lessons five to seven years ago. Things are much better now with web standards in browsers, but one last hurdle is DOM standardization and that deeply impacts JavaScript/DOMScripting.
Structured Blogging has (Re)Launched
Structured Blogging has launched and it may be one of the brightest ideas of 2005. This has the capability to pull web services into nearly every page and to aggregate information more seamlessly across the web. The semantic components help pull all of this together so services can be built around them.
This fits wonderfully in the Model of Attraction framework by allowing people and tools to attract the information they want, in this case from all around the web far more easily than ever before.
[Update] A heads-up from Ryan pointed out this is a relaunch. Indeed, Structured Blogging is pointing out all of the groups that are supporting and integrating the effort. The newest version of Structured Blogging is now microformat friendly (insanely important).
Microformats hCard and hCalendar Used for Web 2.0 Conference Speakers
Tantek has posted new microformat favelets (bookmarklets you put in your browser's toolbar). The microformat favelets available are: Copy hCards; Copy hCalendars; Subscribe to hCalendars feed; Copy hCalendars (beta); Subscribe to hCalendars feed (beta). Look at Tantek's Web 2.0 Speakers hCard and hCalendar blog post to understand the power behind this.
Microformats are one of the ways that sites can make their information more usable and reusable to people who have an interest. If you have a store and are providing the address you have a few options to make it easy for people, but a simple option seems to be using the microformat hCard (other options include vCard and links to the common mapping programs with "driving directions").
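To illustrate the hCard option for a store address (a sketch with made-up store details, echoed from PHP rather than taken from any actual site), the markup simply wraps the existing address in hCard class names:

<?php
// Illustrative hCard sketch: the store details below are invented.
$store = [
    'name'     => 'Example Books',
    'street'   => '123 Main Street',
    'locality' => 'Bethesda',
    'region'   => 'MD',
    'postcode' => '20814',
];

echo '<div class="vcard">'
   . '<span class="fn org">' . htmlspecialchars($store['name']) . '</span>'
   . '<div class="adr">'
   . '<span class="street-address">' . htmlspecialchars($store['street']) . '</span>, '
   . '<span class="locality">' . htmlspecialchars($store['locality']) . '</span>, '
   . '<span class="region">' . htmlspecialchars($store['region']) . '</span> '
   . '<span class="postal-code">' . htmlspecialchars($store['postcode']) . '</span>'
   . '</div></div>';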
There will be more to come on microformats in the near future here.
Minor Changes in Off the Top
Last night I was able to add back the Quick Links (my current bookmarks from del.icio.us). This was due in great part to the folks at del.icio.us, who now have a JavaScript that makes the process easy on you and easy on them (I am not sure how accessible this is as I have not tested it, but normally they are not accessible).
I also brought back the link to just the Off the Top RSS feed, which has nothing but the last 10 entries in the archaic RSS 0.91 format. I am still offering the wonderful Feedburner for Off the Top option, which has Off the Top entries, my del.icio.us entries, and my Flickr photo feed all bundled in one. I have quite a few people reading this in RSS on mobile devices at the moment, and I thought I would make it easier for others going that route to get just the content of Off the Top.
Replacement RSS and XML Button
Mike just posted a killer international and language-free RSS logo button on his site. I really like it. Mainly, it works for those of us who understand the RSS text version, but for those who are not as technically forward, or who read non-English/Western languages, this could still work. The RSS and XML text on the buttons always needs explanation for those not familiar with the terms. The end of many of the tutorials is often, "just click it, you do not really need to know what it means, just click". Something tells me Mike is on to something profound yet wonderfully simple.
Response to Usability of Feeds
Jeffrey Veen has a wonderful post about the usability of RSS/Atom/feeds on his site. I posted a response that I really want to keep track of here, so it follows...
I think Tom's pointer to the BBC is a fairly good transition to where we are heading. It will take the desktop OS or browser to make it easier. Neither of these are very innovative or quickly adaptive on the Windows side of the world.
Firefox was the first browser (at least that I know of) to handle RSS outside the browser window, but it was still handled in a side-window of the browser. Safari has taken this to the next step, which is to use a mime-type to connect the RSS feed to the desktop device of preference. But we are still not where we should be, which is to click on the RSS button on a web page and dump that link into one's preferred reader, which may be an application on the desktop or a web/internet based solution such as Bloglines.
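On the publisher side, two small pieces help browsers and readers make that connection (a sketch only; the feed path and title below are placeholders, not a particular site's setup): an autodiscovery link in the page head, and serving the feed with its proper media type.

<?php
// Illustrative sketch: advertise the feed from every page, and serve it with the RSS media type.
function feed_autodiscovery_link(string $feedUrl, string $title): string
{
    return '<link rel="alternate" type="application/rss+xml" title="'
        . htmlspecialchars($title) . '" href="' . htmlspecialchars($feedUrl) . '" />';
}

// In the page template <head>:
echo feed_autodiscovery_link('/rss.xml', 'Off the Top');

// In the feed script itself, before output, so the browser can hand the feed to a reader:
// header('Content-Type: application/rss+xml; charset=utf-8');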
All of this depends on who we test as users. Many times as developers we test in the communities that surround us, which is a skewed sample of the population. If one is in the Bay Area it may be best to go out to Stockton, Modesto, Fresno, or up to the foothills to get a sample of the population that is representative of those less technically adept, who will have very different usage patterns from those we normally test.
When we test with these lesser adept populations it is the one-click solutions that make the most sense. Reading a pop-up takes them beyond their comfort zone or capability. Many have really borked things on their devices/machines by trying to follow directions (be they well or poorly written). Most only trust easy solutions. Many do not update their OS as it is beyond their trust or understanding.
When trends start happening out in the suburbs, exurbs, and beyond the centers of technical adeptness (often major cities) that is when they have tipped. Most often they tip because the solutions are easy and integrated to their technical environment. Take the Apple iPod, it tipped because it is so easy to set up and use. Granted the lack of reading is, at least, an American problem (Japanese are known to sit down with their manuals and read them cover to cover before using their device).
We will get to the point of ease of use for RSS and other feeds in America, but it will take more than just a text pop-up to get us there.
State is the Web
The use and apparent mis-use of state on the web has bugged me for some time, and now AJAX, or whatever one wants to call "XMLHttpRequest", is opening the door for non-Flash developers to ignore state. The latest Adaptive Path essay, It's A Whole New Internet, quotes Michael Buffington, "The idea of the webpage itself is nearing its useful end. With the way Ajax allows you to build nearly stateless applications that happen to be web accessible, everything changes." And it asks, "Where will our bookmarks go when the idea of the 'webpage' becomes obsolete?"
I agree with much of the article, but these statements are wholly naive from my perspective. Not only are they naive, they hold up examples of the web going in the wrong direction. Yes, the web has the ability to build applications that are more seamless thanks to the vast majority of people using web browsers that can support these dynamic HTML techniques (the techniques are nothing new; in fact, on intranets many of us were employing them four or five years ago in single browser environments).
That is not the web for many, as the web has been moving toward adding more granular information chunks that can be served up and are addressable. RESTful interfaces and "share this page" links are solutions. The better developers in the Flash community have been working to build state into their Flash presentations so people can link to information that is important, rather than instructing others to click through a series of buttons or wait through a few movies to get to desired/needed information. The day of one stateless interface for all information should be behind us; I hope to hell this is not enticing a whole new generation of web developers into a lack of understanding of state.
Who is providing the best examples? Flickr and Google Maps are two that jump to mind. Flickr does one of the best jobs with fluid interfaces while keeping links to state that is important (the object that the information surrounds, in this case a photograph). Google Maps is stunning in its fluidity, but during the whole of one's zooming and scrolling to new locations the URL remains the same. Google Maps' solution is to provide a "Link to this page" hyperlink (which, in my opinion, needs to be brought to the visual forefront a little better, as I have problems getting people to recognize the link when they have sent me a link to maps.google.com rather than their intended page).
A current example of a poor grasp of state is found on the DUX 2005 conference site. Every page has the same URL, from the home page, to the submission page, to the about page. You can not bookmark the information that is important to you, nor can you send a link to the page your friend is having problems locating. The site is stateless in all of its failing glory. The designer is most likely not clueless, just thoughtless. They have left out the person using the site (not users, as I am sure their friends who looked at the design thought it was cool and brilliant). We have to design with people using and reusing our site's information in mind. This requires state.
When is State Helpful?
If you have important information that the people using your site may want to directly link to, state is important, as these people will need a URL. If you have large datasets that change over time and you have people using the data for research and reports, the data must have state (in this case it is the state of the data at some point in time). Data that changes and does not have state will only be of use to people who enjoy being played for a fool. Results over time will change, and all good academic and professional researchers note the state of the data with time and date. All recommendations made on the data are only wholly relevant to that state of the data.
Nearly all blogging tools have "permalinks", or links that point directly to an unchanging URL for distinct articles or postings, built into the default settings. These permalinks are the state function, as the main page of a blog is fluid and ever changing. The individual posts are the usual granular elements that have value to those linking to them (some sites provide links down to the paragraph level, which is even more helpful for holding a conversation with one's readers).
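Paragraph-level permalinks are easy to sketch (this is illustrative only; the entry id and markup conventions are made up, not any blogging tool's actual approach): give each paragraph of a post its own addressable anchor.

<?php
// Illustrative sketch: add an id anchor to every paragraph of a post.
function add_paragraph_anchors(string $postHtml, string $entryId): string
{
    $count = 0;
    return preg_replace_callback('/<p>/', function (array $m) use (&$count, $entryId) {
        $count++;
        return sprintf('<p id="%s-p%d">', $entryId, $count);
    }, $postHtml);
}

// Each paragraph can now be linked directly, e.g. .../entry/1234#1234-p2
echo add_paragraph_anchors('<p>First thought.</p><p>Second thought.</p>', '1234');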
State is important for distinct chunks of information found on a site. Actions do not seem state-worthy: things like uploading files, "loading" screens, select-your-location screens (the pages prior to and following those should have state relative to the locations being shown), etc.
The back button should be a guide to state. If the back button takes the person to the same page they left, that page should be addressable. If the back button does not provide the same information, it most likely should, particularly if the person using the site is clicking on "next" or "previous". When filling out an application one should be able to save the state of the application progress and get a means to come back to that state of progress, as people are often extremely aggravated when filling out long forms and having to get information that is not in reach, only to find the application has timed out while they were gone and they have to start at step one after being many steps into the process.
State requires a lot of thought and consideration. If we are going to build the web for amateurization or personal information architectures that ease how people build and structure their use of the web, we must provide state.
Removing the Stench from Mobile Information
Standing in Amsterdam in front of the Dam, I was taking in the remnants of a memorial to Theodore van Gogh (including poetry to Theo). While absorbing what was in front of me, I had a couple people ask me what the flowers and sayings were about. I roughly explained the street murder of Theo van Gogh.
While I was at the Design Engaged conference listening to presentations about mobile information and location-based information I thought a lot about the moment at the Dam. I thought about adding information to the Dam in an electronic means. If one were standing at the Dam you could get a history of the Dam placed by the City of Amsterdam or a historical society. You could get a timeline of memorials and major events at the Dam. You could also get every human annotation.
Would we want every annotation? That question kept recurring and still does. How would one dig through all the digital markings? The scent of information could become the "stench of information" very quickly. Would all messages even be friendly, or would they contain viruses? Locations would need their own Google search to find the relevant pieces of information. This would all be done on a mobile phone, those lovely creatures with their still developing processors.
As we move to a world where we can access information by location, and in some cases by short range radio signals or by touching our devices, there needs to be an easy way to accept these messages. The messaging needs some predictive understanding on our mobiles, or some preparsing of content and messaging done remotely (more on remote access farther down).
If we are going to have some patterning tools built into our mobiles, what information would they need to base predictions on? It seems the pieces that could make it work are based on trust, value, context, where, time, action, and message pattern. Some of this predictive nature will need processing power on the mobile, or a connection to a service that can provide the muscle to predict, based on the following metadata assets of the message.
Trust is based on who left the message and whether you know this person or not. If the person is known do you trust them? This could need an ensured name identification, which could be mobile number, their tagging name crossed with some sort of key that proves the identity, or some combination of known and secure metadata items. It would also be good to have a means to identify the contributor as the (or an) official maintainer of the location (a museum curator annotating galleries in a large museum is one instance). Some trusted social tool could do some predicting of the person's worthiness to us also. The social tools would have to be better than most of today's variants of social networking tools as they do not have the capability for us to have a close friend, but not really like or trust their circle(s) of friends. It would be a good first pass to go through our own list of trusted people and accept a message left by any one of these people. Based on our liking or disliking of the message a rating would be associated with this person to be used over time.
Value is a measure of the worthiness of the information, normally based on the source of the message. Should the person who left the message have a high ranking for content value, it could be predicted that the message before us is of high value. If the messages are restaurant reviews, we have liked RacerX's previous reviews we found in five other cities, and RacerX just gave the restaurant we are in front of a solid review, that review likely meets our interests. But does RacerX have all the same interests we do?
Context is a difficult predictive pattern, as there are many contextual elements such as mood, weather, and what the information relates to (restaurant reviews, movie reviews, tour recommendations, etc.). Can we set our mood and the weather when predicting our interest in a message? Is our mood always the same in certain locations?
Where we are is more important than location. Yes, do we know where we are? Are we lost? Are we comfortable where we are? These are important questions that may help as predictors and that are somewhat based on our location. Our location is the physical space we occupy, but how we feel about that spot, or what is around us at that spot, may trigger our desire not to accept a location-based message. Some of us feel very comfortable and grounded in any Chinatown anywhere around the globe and we seek them out in any new city. Knowing that we are in or bordering on a red-light district may trigger a predictive nature that would turn off all location-based messages. Again, these are all personal to us and our preferences. Do our preferences stay constant over time?
Time has two variables on two planes. The first plane is our own time variables, while the other relates to the time of the messages. One variable is the current moment and the other is the historical time series. The current moment may be important to us if it is early morning and we enjoy exploring in the early morning and want to receive information that will augment our explorative nature. Current messages may be more important than historical messages to us. The other variable is historical time and how we treat the past. Some of us want all of our information to be of equal value, while others will want the most current decisions to have a stronger weight so that new events can keep information flowing that is most attuned to our current interests and desires. We may have received a virus from one of our recent messages and want to change our patterns of acceptance to reflect our new cautionary nature. We may want to limit how far back we want to read messages.
Action is a very important variable to follow when the possibility of malicious code can damage our mobile, or the information we have stored in the mobile or associated with it. Does the item we are about to receive trigger some action on our device, or is it a static, docile message? Do we want to load active messages into a sandbox on our mobile so they could not infect anything else? Or do we want to accept the active messages if they meet certain other criteria?
Lastly, message pattern involves the actual content of the message and would predict whether we would want to read the information if it is identical or similar to other messages; think attention.xml. If the Dam has 350 messages similar to "I am standing at the Dam", I think we may want to limit that to ones that meet some other criteria, or to just one, if we had the option. Do we have predictors that are based on the language patterns in messages? Does our circle of trusted message writers always have the same spellings for certain wordz?
All of these variables could lead to a tight predictive pattern that eases the information that we access. The big question is how all of this gets built into a predictive system that works for us the moment we get our mobile device and start using the predictive services. Do we have a questionnaire we fill out that creates our initial settings? Will new phones have ranking buttons for messages and calls (it would be nice to rank calls we receive so that our mobile would put certain calls directly into voice mail) so there is an easier interface for setting our preferences and patterns?
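To make the variables above a little more concrete, here is a rough sketch of that kind of predictive filter reduced to a scoring function (the weights, fields, and threshold are invented for illustration; a real system would learn them from the person's ratings over time):

<?php
// Illustrative scoring sketch for a location-based message, combining trust, value, time, and action.
function message_score(array $message, array $prefs): float
{
    $score = 0.0;

    // Trust: is the author in the person's trusted list?
    if (in_array($message['author'], $prefs['trusted_authors'], true)) {
        $score += 0.4;
    }

    // Value: the author's running rating from the person's past reactions (0..1).
    $score += 0.3 * ($prefs['author_ratings'][$message['author']] ?? 0.0);

    // Time: newer messages weigh more, decaying over roughly a year.
    $ageDays = (time() - $message['posted_at']) / 86400;
    $score  += 0.2 * max(0.0, 1.0 - $ageDays / 365);

    // Action: active content (scripts, executables) is penalized unless sandboxed.
    if (!empty($message['is_active_content'])) {
        $score -= 0.3;
    }

    return $score;
}

// Show a message only when its score clears the person's threshold.
$show = message_score(
    ['author' => 'RacerX', 'posted_at' => time() - 86400, 'is_active_content' => false],
    ['trusted_authors' => ['RacerX'], 'author_ratings' => ['RacerX' => 0.9]]
) >= 0.5;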
Getting back to remote access to location-based information: it seems, for me, to provide some excellent benefits. There are two benefits I see related to setting our predictive patterns. The first is that remote access to information could be done through a more interactive device than our mobile. Reading and ranking information from a desktop on a network, or a laptop on WiFi, could allow us to get through more information more quickly. The second benefit is helping us plan and learn from the location-based information prior to going to that location, so we could absorb the surroundings, like a museum or important architecture, with minimal local interaction with the information. Just think if we could have had our predictive service parse through the 350 messages located at the Dam, and we previewed the messages remotely and flagged four that could be of interest to us while standing at the Dam. That could be the sweet smell of information.