Journal tags: links

Fragmentions

Cennydd’s latest piece in A List Apart is the beautifully written Letter to a Junior Designer.

I really like the way that Cennydd emphasises the importance of being able to explain the reasoning behind your design decisions:

If you haven’t already, sometime in your career you’ll meet an awkward sonofabitch who wants to know why every pixel is where you put it. You should be able to articulate an answer for that person—yes, for every pixel.

That reminds me of something I read fourteen(!) years ago that’s always stayed with me. In an interview in Digital Web magazine, Joshua Davis was asked “What would you say is beauty in design?” His answer:

Being able to justify every pixel.

Here’s a link to the direct quote …except that link probably won’t work for you. Not unless you’ve installed this Chrome extension.

What the hell am I talking about? Well, this is something that Kevin Marks has been working on following on from the recent W3C annotation workshop.

It’s called fragmentions and it builds on the work done by Eric and Simon. They proposed using CSS selectors as fragment identifiers. Kevin’s idea is to use the words within the text as anchor points (like an automatic Command+F):

To tell these apart from an id link, I suggest using a double hash - ## for the fragment, and then words that identify the text. For example:

http://epeus.blogspot.com/2003_02_01_archive.html##annotate+the+web

That link will work in your browser because of this script, which Kevin has added to his site. I may well add that script to this site too.
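In case you’re wondering how the script does its magic, here’s a minimal sketch of the general idea (this is not Kevin’s actual code, just an illustration): look for a double hash in the fragment identifier, then hunt through the page’s text for the words it contains.

// A minimal fragmention sketch, not Kevin's actual script.
function goToFragmention() {
    // For a URL ending in ##annotate+the+web, location.hash is "##annotate+the+web"
    if (location.hash.indexOf('##') !== 0) return;
    var phrase = decodeURIComponent(location.hash.substring(2)).replace(/\+/g, ' ');
    // Walk the text nodes, scrolling to the first one containing the phrase
    var walker = document.createTreeWalker(document.body, NodeFilter.SHOW_TEXT, null, false);
    var node;
    while ((node = walker.nextNode())) {
        if (node.textContent.indexOf(phrase) !== -1) {
            node.parentNode.scrollIntoView();
            return;
        }
    }
}
window.addEventListener('hashchange', goToFragmention, false);
goToFragmention();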

Fragmentions are a nice idea and—to bring it back to Cennydd’s point—nicely explained.

When is a link not a link?

Google has a web page for its Chrome browser. This page provides information about the browser, but its primary purpose—its call-to-action, if you will—is to encourage you to download the browser. Hence the nice big blue button-like link that says “Download Chrome.”

Tech bloggy publication thingy The Next Web posted some words pointing out that, for a while there, the link wasn’t working. At all. There was no way to download Chrome from the page created for the purpose of letting you download Chrome.

Download Chrome

The problem was that the link isn’t a real link. I mean, technically it’s an A element, and it does have an href attribute …but the value of that attribute isn’t a resource (like, say, an installer for a web browser, or terms of service for downloading a browser). Instead it uses the JavaScript pseudo-protocol—meaning: not actually a protocol—to point to void(0).

<a class="button eula-download-button" data-g-event="cta" data-g-label="download-chrome" href="javascript:void(0)">Download Chrome</a>

So when there was a problem with the JavaScript, the link stopped working:

Uncaught TypeError: Cannot read property 'Installer' of undefined

HTML has a very fault-tolerant way of handling errors: if it sees an element or attribute it doesn’t understand, it just ignores it—it doesn’t break the page, it just moves on to the next element. Likewise with CSS. Unknown selectors, properties, or values are simply ignored. Not so with JavaScript. A syntax error stops execution of the script. That’s actually quite handy when you’re trying to debug your code, but not so handy when it’s out on the web.

Given the brittleness of JavaScript’s error-handling, it seems unwise to entrust the core functionality of your page/app/site/whatever to the most fragile part of the front-end stack …especially when that same functionality is provided by a native HTML element.

I don’t want to pick on Google in particular here—there are far too many other sites exhibiting the same kind of over-engineering:

<a href="javascript:void(0)">

<a href="#">

<span class="button">

<div class="link">

By all means add all the JavaScript whizzbangery to your site that you want. But please make sure you’re adding it on a solid base of working markup. Progressive enhancement is your friend. Just like any good friend, it will help you out when unexpected bad things happen.
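To make that concrete, here’s the kind of markup-first approach I mean. This is only a sketch, with a made-up installer path and a hypothetical enhancement function, but the principle stands: the href points at a real resource, and the script merely intercepts it.

<a href="/downloads/chrome-installer.exe" class="button">Download Chrome</a>

// The enhancement: if this script fails to load or run,
// the link above still delivers the installer.
var button = document.querySelector('a.button');
if (button) {
    button.addEventListener('click', function (event) {
        event.preventDefault();
        showDownloadDialog(this.href); // hypothetical whizzbangery
    }, false);
}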

My links, my links (my lovely lady links)

Thank you for reading my journal here at adactio.com. I appreciate your kind attention.

I feel I should point out that if you’re only reading my journal (or “blog” or “weblog” or whatever the kids call it) then you’re missing out on some good stuff over in the links section.

Just so you know, there are multiple RSS feeds you can subscribe to:

Now it might be that you’re already subscribed to an RSS feed of my links through Delicious. Whenever I post a link to my own site, it automatically gets posted to Delicious too.

Or at least it did.

Despite the assurances from the new overlords of Delicious, the API appears to be kaput. That means my links and my Delicious profile are now out of sync. The canonical source for my links is right here on my own site so if you’re currently subscribed to my Delicious RSS feed, I recommend that you update your RSS reader to point at the RSS feed for my links instead.

By the way, if you don’t want to subscribe to the firehose of all my links, you can subscribe to a specific tag instead. For example, here’s everything tagged with “futurefriendly”:

/links/tags/futurefriendly

And here’s the corresponding RSS feed:

/links/tags/futurefriendly/rss

So feel free to explore the links section and do some URL hacking.

All Our Yesterdays: the links

If you were at An Event Apart in Boston and you want to follow up on some of the things I mentioned in my talk, here are some links:

Here are some related posts of my own:

More recently, Nora Young interviewed Jason Scott on online video and digital heritage.

Full Interview: Jason Scott on online video and digital heritage | Spark | CBC Radio on Huffduffer

Jared Spool: The Secret Lives of Links

The final speaker of the first day of An Event Apart in Boston is Jared Spool. Now, when Jared gives a talk …well, you really have to be there. So I don’t know how well liveblogging is going to work but here goes anyway.

The talk is called The Secret Lives of Links. He starts by talking about one of the pre-eminent young scientists in the USA: Lisa Simpson. One day, she lost a tooth, put it in a bowl and when she later examined it under a microscope, she discovered a civilisation going about its business, all the citizens with their secret lives.

The web is like that.

Right before the threatened government shutdown, Jared was looking at news sites and how they were updating their links. Jared suggests that CNN redesign its site to simply have this list of links:

  1. The most important story.
  2. The second most important story.
  3. The third most important story.
  4. An unimportant, yet entertaining story.
  5. The Charlie Sheen story.

But of course it doesn’t work like that. It’s the content of the links that conveys their importance. Links secretly live to drive the user to their content.

Compare the old CNN design to the current one. The visual design is different but the underlying essence is the same. The links work the same way.

All the news sites were reporting the imminent government shutdown with links that had different text but were all doing the same thing.

Jared has been working on the web since 1995. That whole time, he’s been watching users use websites. The pattern he has seen is that the content speaks to the user through the links. Everything hinges on the links. They provide the scent of information.

This goes back to a theory at Xerox PARC: if you modelled user behaviour when searching for information, it’s very much like a fox sniffing a trail. The users are informavores.

We can see this in educational websites. The designs may change but links are the constant.

http://xkcd.com/773/

We’ve all felt the pain of battling the site owner who wants to prioritise content that the users aren’t that interested in.

The Walgreens site is an interesting example. One fifth of the visitors follow the “photo” link. 16% go to search. The third most important link is about refilling prescriptions. The fourth is the pharmacy link. The fifth most-used link is the store finder. Those five links add up to 59% of the total traffic …but they take up just 3.8% of the page.

This violates Fitts’s Law:

The speed with which a user can acquire a target is proportional to the size of the target and inversely proportional to the distance to the target.

Basically, the bigger and closer, the easier to hit. The Walgreens site violates that. Now, it would look ugly if the “photo” link was one fifth of the whole page, but the point remains: there’s a lot of stuff being foisted on the user by the business.
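For the record, the usual formulation of Fitts’s Law predicts the time T to acquire a target of width W at distance D as T = a + b × log2(1 + D/W), where a and b are empirically measured constants. Bigger target, shorter distance, faster acquisition.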

Another example of Fitts’s Law is those annoying giant interstitial ads with tiny “close” links.

Deliver users to their desired objective. Give them links that communicate scent in a meaningful way. Make the real estate reflect the user’s desires.

Let’s go back to an educational website: Ohio State. People come to websites for all sorts of reasons. Most people don’t go to a website just to see how it looks (except for us). People go to the Ohio State website to get information about grades and schedules. The text of these links is made up of trigger words: they trigger an action from the user. When done correctly, trigger words lead the user to their desired goal.

It’s hard to know when your information scent is good, but it’s easy to know when your information scent is bad. User behaviour will let you know: using the back button, pogo-sticking, and using search.

Jared has seen the same patterns across hundreds of sites that he’s watched people using. His team compared all the clickstreams that succeeded with all the clickstreams that failed. For 15 years there’s been a consistent 58% failure rate. That’s quite shocking.

One pattern that emerges in the failed clickstreams is the presence of the back button. If a user hits the back button, the failure rate of those clickstreams rises to above 80%. If a user hits the back button twice, the failure rate rises to 98%.

The back button is the button of doom.

The user clicks the back button when they run out of scent, just like a fox circling back. But foxes succeed ‘cause rabbits are stupid and they go back to where they live and eat, so the fox can go back there and wait. Users hit the back button hoping that the page will somehow have changed when they get back.

Pay attention to the back button. The user is telling you they’ve lost the scent.

Another behaviour is pogo-sticking: hopping back and forth between a “gallery” page with a list of links and the pages it links to. Pogo-sticking results in a failure rate of 89%. There’s a myth with e-commerce sites that users want to pogo-stick between product pages to compare products, but it’s not true: the more a user pogo-sticks, the less likely they are to find what they want and make a purchase.

Users scan a page looking for trigger words. If they find a trigger word, they click on it but if they don’t find it, they go to search. That’s the way it works on 99% of sites, although Amazon is an exception. That’s because Amazon has done a great job of training users to know that absolutely nothing on the home page is of any use.

Some sites try to imitate Google and just have a search box. Don’t do that.

A more accurate name for the search box would be B.Y.O.L.: Bring Your Own Link. What do people type into this box? Trigger words!

Pro tip: your search logs are completely filled with trigger words. Have you looked there lately? Your users are telling you what your trigger words should be. If you’re tracking where searches come from, you even know on what pages you should be putting those trigger words.

The key thing to understand is that people don’t want to search. There’s a myth that some people prefer to search. It’s the design of the site that forces them to search. The failure rate for search is 70%.

Jared imagines an experiment called the 7-11 milk experiment. Imagine that someone has run out of milk. We take them to the nearest 7-11. We give them the cash to buy milk. There should be a 100% milk-purchasing result.

That’s what Jared does with websites. He gives people the cash to buy a product, brings them to the website, and asks them to purchase the product. Ideally you should see a 100% spending rate. But the best-performing site—The Gap—got a 66% spending rate. The worst site got 6%.

The top variable that contributed to this pattern was the ratio of pages viewed to purchases made. Purchases were made at Gap.com in 11.9 pages. On the worst performers, the ratio was 51 pages per purchase. You know what patterns they saw in the worst performers: back button usage, pogo-sticking, and search.

Give users information they want. Pages that we would describe as “cluttered” don’t appear that way to a user if the content is what the user wants. Clutter is a relative term based on how much you are interested in the content.

It’s hard to show you good examples of information scent because you’re not the user looking for something specific. Good design is invisible. You don’t notice air conditioning when it’s set just right, only when it’s too hot or too cold. We don’t notice good design.

Links secretly live to look good …while still looking like links. There was a time when the prevailing belief was that links are supposed to be blue and underlined. We couldn’t have made a worse choice. Who decided that? Not designers. Astrophysicists at CERN decided. As it turns out, blue is the hardest colour to perceive. Men start to lose the ability to perceive blue at 40. Women start to lose the ability at 55 …because they’re better. Underlines change the geometry of a word, slowing down reading speed.

Thankfully we’ve moved on and we can have “links of colour.” But sometimes we take it too far, like the LA Times, where it’s hard to figure out what is and isn’t a link. Users have to wave their mouse around on the page hoping that the browser will give them the finger.

Have a consistent vocabulary. Try to make it clear which links lead to a different page and which links perform an action on the current page.

We confuse users with things that look like links, but aren’t.

Links secretly live to do what the user expects.

Place your links wisely. Don’t put links to related articles in the middle of an article that someone is reading.

Don’t use mystery meat navigation. Users don’t move their mouse until they know what they’re going to click on, so don’t hide links behind a mouseover: by the time those links are revealed, users have already decided what they’re going to click. Flyout menus are the worst.

Some of Jared’s favourite links are “Stuff our lawyers made us put here”, “Fewer choices” and “Everything else.”

In summary, this is what links secretly want to do:

  • Deliver users to their desired objective.
  • Emit the right scent.
  • Look good, while still looking like a link.
  • Do what the user expects.

Home-grown and Delicious

I’ve been using Delicious since 2005—back when it was del.icio.us. I have over 2,000 bookmarks stored there. I moved to Magnolia for a while but we all know how that ended.

Back then I wrote:

Really, I should be keeping my links here on adactio.com, maybe pinging Delicious or some other social bookmarking site as a back-up.

Recently Delicious updated its bookmarklet-conjured interface, not for the better. I thought that I could get used to the changes, but I found them getting more annoying over time. Once again, I began to toy with the idea of self-hosting my bookmarks. I even exported all my data into a big XML file.

The very next day, some of Yahoo’s shit hit the web’s fan. Delicious, it was revealed, was to be sunsetted. As someone who doesn’t randomly choose to use meteorological phenomena as verbs, I didn’t know what that meant, but it didn’t sound good.

As the twittersphere erupted in anger and indignation, I was able to share my recently-acquired knowledge:

curl https://{your username}:{your password}@api.del.icio.us/v1/posts/all to get an XML file of your Delicious bookmarks.

A lot of people immediately migrated to Pinboard, which looks like an excellent service (and happens to be the work of Maciej Ceglowski, one of the best bloggers ever to put pixels to screen).

After all that, it turns out that “sunsetting” doesn’t mean “shooting in the head”, it means something more like “flogging off”, as clarified on the Delicious blog. But the damage had been done and, anyway, I had already made up my mind to bring my bookmarks in-house, so I began a fun weekend of hacking.

Setting up a new section of the site for links and importing my Delicious bookmarks was pretty straightforward. Creating a bookmarklet was pretty easy too—I already had some experience of that with Huffduffer.

So now I’ll do my bookmarking right here on my own site. All’s well that ends well, right?

Well, not quite. Dom sounded a note of concern:

sigh. There goes the one thing I actually used delicious for, the social network. :(

Paul also pointed to the social aspect as the reason why he’s sticking with Delicious:

Personally, while I’ve always valued the site for its ability to store stuff, what’s always made Delicious most useful to me is its network pages in general, and mine in particular.

But it’s possible to have your Delicious cake and eat it at home. The Delicious API makes it quite easy to post links so I’ve added that into my own bookmarking code. Whenever I post a link here, it will also show up on my Delicious account. If you’re subscribed to my Delicious links, you should notice no change whatsoever.
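For the curious, posting a link with version one of the Delicious API is about as simple as fetching them all. Something along these lines (with placeholder credentials and a made-up example link):

curl "https://{your username}:{your password}@api.del.icio.us/v1/posts/add?url=http://example.com/&description=An+example+link&tags=example"

My bookmarking code does the equivalent of that whenever I publish a link here.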

This is exactly what Steven Pemberton was talking about when I liveblogged his XTech talk two years ago. Another Stephen, the good Mr. Hay, summed up the absurdity of the usual situation:

For a while we’ve posted our data all over the internet on all types of services. These services provide APIs so we can access the data we put into them, so that we can do things with that data. Read that again.

Now I’m hosting the canonical copies of my bookmarks, much like Tantek hosts the canonical copies of his tweets and syndicates them out to Twitter. Delicious gets to have my links as well, and I get to use Delicious as a tool for interacting with my data …only now I’m not limited to just what Delicious can offer me.

Once I had my new links section up and running, I started playing around with the Embedly API (I recently added the excellent oEmbed format to Huffduffer and I was impressed with its power). Whenever I bookmark a page with oEmbed support, I can pull content directly into my site. Take a look at the links I’ve tagged with “sci-fi” to see some examples of embedded Vimeo and Flickr content.
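The oEmbed dance itself is delightfully simple: hand a provider’s endpoint the URL of a page and it hands you back some JSON that includes ready-made embed markup. Here’s a rough sketch using Vimeo’s endpoint (the video URL and the .embedded-content element are placeholders of my own invention):

// Rough oEmbed sketch, not my actual Embedly integration.
var videoUrl = 'https://vimeo.com/36579366'; // placeholder
fetch('https://vimeo.com/api/oembed.json?url=' + encodeURIComponent(videoUrl))
    .then(function (response) { return response.json(); })
    .then(function (data) {
        // data.html holds the provider's ready-made embed markup
        document.querySelector('.embedded-content').innerHTML = data.html;
    });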

I definitely prefer this self-hosting-with-syndication way of doing things. I can use a service like Delicious without worrying about it going tits-up and taking all my data with it. The real challenge is going to be figuring out a way of applying that model to Twitter and Flickr. I’m curious to see which milestone I’ll hit first: 10,000 tweets or 10,000 photos. Either way, that’s a lot of my content on somebody else’s servers.

Revving up

I was away in Berlin for a few days, delivering a workshop to the good people at Aperto. I had a good time, made even better by some excellent Spring weather and the opportunity to meet up with Anthony and Colin while I was there.

I came home to find that, in my absence, rev="canonical" usage has gone stratospheric. First off, there are the personal sites like CollyLogic and Bokardo. Then there are the bigger fish:

Excellent! I’d just like to add one piece of advice to anyone implementing or thinking of implementing rev="canonical": if you are visibly linking to the short url of the current page, please remember to use rev="canonical" on that A element as well as on any LINK element you’ve put in the HEAD of your document. Likewise, for the coders out there, if you are thinking of implementing a rev="canonical" parser—and let’s face it, that’s a nice piece of low-hanging fruit to hack together—please remember to also check for rev attributes on A elements as well as on LINK elements. If anything, I would prioritise human-visible claims of canonicity over invisible metacrap.
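In practice, that means a page with a shortened equivalent would declare it twice, once invisibly and once visibly (the short URL here is invented for illustration):

<link rev="canonical" href="http://short.example/abc">

<p>Short URL: <a rev="canonical" href="http://short.example/abc">http://short.example/abc</a></p>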

Actually, there’s a whole bunch of nice metacrapital things you can do with your visible hyperlinks. If you link to an RSS feed in the BODY of your document, use the same rel values that you would use if you linked to the feed from a LINK element in the HEAD. If you link to an MP3 file, use the type attribute to specify the right mime-type (audio/mpeg). The same goes for linking to Word documents, PDFs and any other documents that aren’t served up with a mime-type of text/html. So, for example, here on my site, when I link to the RSS feed from the sidebar, I’m using type and rel attributes: href="/journal/rss" rel="alternate" type="application/rss+xml". I’m also quite partial to the hreflang attribute but I don’t get the chance to use that very often—this post being an exception.
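Spelled out as markup, that sidebar link looks like this, along with a hypothetical MP3 link for comparison:

<a href="/journal/rss" rel="alternate" type="application/rss+xml">Journal RSS feed</a>

<a href="/audio/talk.mp3" type="audio/mpeg">Download the talk (MP3)</a>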

The rev="canonical" convention makes a nice addition to the stable of nice semantic richness that can be added to particular flavours of hyperlinks. But it isn’t without its critics. The main thrust of the argument against this usage is that the rev attribute currently doesn’t appear in the HTML5 spec. I’ve even seen people use the past tense to refer to an as-yet unfinished specification: the rev attribute was taken out of the HTML5 spec.

As is so often the case with HTML5, the entire justification for dropping rev seems to be based on a decision made by one person. To be fair, the decision was based on available data from 2005. In light of recent activity and the sheer number of documents that are now using rev="canonical"—Flickr alone accounts for millions—I would hope that the HTML5 community will have the good sense to re-evaluate that decision. The document outlining the design principles of HTML5 states:

When a practice is already widespread among authors, consider adopting it rather than forbidding it or inventing something new.

The unbelievable speed of adoption of rev="canonical" shows that it fulfils a real need. If the HTML5 community ignore this development, not only would they not be paving a cowpath, they would be refusing to acknowledge that a well-trodden cowpath even exists.

The argument against rev seems to be that it can be confusing and could result in people using it incorrectly. By that argument, new elements like header and footer should be kept out of any future specification for the same reason. I’ve already come across confusion on the part of authors who thought that these new elements could only be used once per document. Fortunately, the spec explains their meaning.

The whole point of having a spec is to explain the meaning of elements and attributes, be it for authors or user-agents. Without a spec to explain what they mean, elements like P and A don’t make any intuitive sense. It’s no different for attributes like href or rev. To say that rev isn’t a good attribute because it requires you to read the spec is like saying that in order to write English, you need to understand the language. It’s neither a good nor bad thing, it’s just a statement of the bleedin’ obvious.

Now go grab yourself the very handy bookmarklet that Simon has written for auto-discovering short urls.

Do the right semantic thing

Jason Kottke wrote about a new site on the block called Do The Right Thing:

The site works on a modified Digg model. If you see a story you like, you click a button to declare your interest in it. But then you also rate the social impact of the subject of the story, either positive or negative.

As soon as I read this, I immediately thought of vote-links. I wrote about vote-links before in an article for 24 Ways.

Do The Right Thing is already linking to other sites with “impact” ratings shown next to each link. Depending on how people have voted on the social impact of the linked resource, this rating is either positive or negative. With the addition of rev="vote-for" or rev="vote-against", the community judgement could be explicitly encoded in the links, as sketched below.
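Encoded in markup, that might look something like this (the URLs are invented for illustration):

<a href="http://example.com/helpful-campaign" rev="vote-for">A story with positive impact</a>

<a href="http://example.com/dodgy-scheme" rev="vote-against">A story with negative impact</a>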

I signed up for Do The Right Thing so that I could use the members-only feedback form to suggest this addition. Alas, the overly clever feedback form couldn’t be submitted in Camino, my current browser of choice.

Update: The feedback form has been fixed. Not only that but the guy doing the fixing turns out to be Jarkko Laine, who I once had dinner with in Copenhagen. Small world.

Spoken

The deed is done. I had the pre-lunchtime slot at Reboot to speak about a very simple subject: the hyperlink.

It was fun. People seemed to enjoy it and there were some great questions and comments afterwards: it was humbling and gratifying to have Håkon Wium Lie and Jean-Francois Groff respond to my words.

Unlike any previous presentations I’ve done, I had written out everything I wanted to say word for word. I began by describing this as a story, a manifesto, but mostly a love letter. For once, I was going to read a pre-prepared speech. I still had slides but they were very minimal.

I ended up using two laptops. One iBook, controlled from my phone using Salling Clicker, was displaying the slides done in Keynote. I used the other iBook as a teleprompter: I wanted large text continually scrolling as I spoke.

I looked into some autocue software for the Mac but rather than fork out the cash for one of them, I wrote my own little app using XHTML, CSS and JavaScript. I bashed out a quick’n’dirty first version, then spent most of the flight to Copenhagen refining the JavaScript to make it reasonably nice. I’ll post the code up somewhere, probably over on the DOM Scripting site, in case anyone else needs a browser-based teleprompter.
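The heart of it is almost embarrassingly simple. Here’s a bare-bones sketch of the idea (not the code I’ll be posting, and written with today’s event API):

// Bare-bones teleprompter: nudge the page down on every tick,
// with the arrow keys adjusting the speed.
var speed = 1; // pixels per tick
setInterval(function () {
    window.scrollBy(0, speed);
}, 50);
document.addEventListener('keydown', function (event) {
    if (event.key === 'ArrowUp') speed = speed + 1;
    if (event.key === 'ArrowDown') speed = Math.max(0, speed - 1);
}, false);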

If you’d like to read a regular, non-scrolling version of my love letter, I’ve posted In Praise of the Hyperlink in the articles section.

Copenhagen

I’ve been seeing the inside of a lot of airports lately. Right after getting back from XTech in Amsterdam, I flew up to Manchester to deliver a one-day workshop on Ajax.

It was my first visit to the mighty Mancunian metropolis and a very pleasant visit it was, especially given the opportunity to go drinking with Patrick Lauke, James “Brothercake” Edwards, and Chris Mills in a bar that was decked out like a sci-fi version of the Hard Rock Café from a parallel grungy dimension.

Tomorrow I will once again be doing the airport shuffle. This time the airport is Stansted and the destination is Copenhagen, the setting for the eighth iteration of the Reboot conference. I’ve never been to Denmark, let alone Reboot, before. I’m really looking forward to it.

I will be speaking but for once it won’t be a code-filled techy presentation. Instead, I plan to deliver the most pretentious talk ever devised: In Praise of the Hyperlink.

I also managed to solve the mystery of the missing email and figured out that the person doing the pre-Reboot podcast was Nicole Simon. We had a little chat over Skype and you can listen to the conversation if you want to get a taste of what I’ll be talking about.

If you’re going to Reboot, I’ll see you there. If not, expect the usual cascade of Flickr pics and liveblogging.