
A question of timing

I’ve been updating my collection of design principles lately, adding in some more examples from Android and Windows. Coincidentally, Vasilis unveiled a neat little page that grabs one list of principles at random—just keep refreshing to see more.

I also added this list of seven principles of rich web applications to the collection, although they feel a bit more like engineering principles than design principles per se. That said, they’re really, really good. Every single one is rooted in performance and the user’s experience, not developer convenience.

Don’t get me wrong: developer convenience is very, very important. Nobody wants to feel like they’re doing unnecessary work. But I feel very strongly that the needs of the end user should trump the needs of the developer in almost all instances (you may feel differently and that’s absolutely fine; we’ll agree to differ).

That push and pull between developer convenience and user experience is, I think, most evident in the first principle: server-rendered pages are not optional. Now before you jump to conclusions, the author is not saying that you should never do client-side rendering, but instead points out the very important performance benefits of having the server render the initial page. After that—if the user’s browser cuts the mustard—you can use client-side rendering exclusively.
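That mustard-cutting test is usually a small piece of feature detection. Here’s a sketch of the kind of check that’s commonly used (which features to test for is a per-project judgement call, and the script URL here is hypothetical):

// Only load the client-side rendering code in browsers that
// support a baseline of features.
if ('querySelector' in document &&
    'addEventListener' in window &&
    'localStorage' in window) {
    // This browser cuts the mustard: load the enhanced experience.
    var script = document.createElement('script');
    script.src = '/js/enhancements.js'; // hypothetical URL
    document.body.appendChild(script);
}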

The issue with that hybrid approach—as I’ve discussed before—is that it’s hard. Isomorphic JavaScript (terrible name) can theoretically help here, but I haven’t seen too many examples of it in action. I suspect that’s because this approach doesn’t yet offer enough developer convenience.

Anyway, I found myself nodding along enthusiastically with that first of seven design principles. Then I got to the second one: act immediately on user input. That sounds eminently sensible, and it’s backed up with sound reasoning. But it finishes with:

Techniques like PJAX or TurboLinks unfortunately largely miss out on the opportunities described in this section.

Ah. See, I’m a big fan of PJAX. It’s essentially the same thing as the Hijax technique I talked about many years ago in Bulletproof Ajax, but with the new addition of HTML5’s History API. It’s a quick’n’dirty way of giving the illusion of a fat client: all the work is actually being done on the server, which sends back chunks of HTML that update the interface. But it’s true that, because of that round-trip to the server, there’s a bit of a delay and so you often end up briefly displaying a loading indicator.
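Stripped down to its essentials, the pattern looks something like this (a sketch with hypothetical element names; a real implementation would also handle errors and the back button via popstate):

// Intercept clicks on designated links, fetch a chunk of HTML
// from the server, swap it into the page, and update the URL
// with the History API. The class name, element id, and X-PJAX
// header are all hypothetical.
document.addEventListener('click', function (event) {
    var link = event.target.closest('a.pjax');
    if (!link) return;
    event.preventDefault();
    var request = new XMLHttpRequest();
    request.open('GET', link.href);
    // Let the server know to respond with a fragment, not a full page.
    request.setRequestHeader('X-PJAX', 'true');
    request.onload = function () {
        document.getElementById('content').innerHTML = request.responseText;
        history.pushState(null, '', link.href);
    };
    request.send();
});

That round-trip to the server in the middle is exactly where the delay, and therefore the loading indicator, comes in.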

I contend that spinners or “loading indicators” should become a rarity

I agree …but I also like using PJAX/Hijax. Now how do I reconcile what’s best for the user experience with what’s best for my own developer convenience?

I’ve come up with a compromise, and you can see it in action on The Session. There are multiple examples of PJAX in action on that site, like pretty much any page that returns paginated results: new tune settings, the latest events, and so on. The steps for initiating an Ajax request used to be:

  1. Listen for any clicks on the page,
  2. If a “previous” or “next” button is clicked, then:
  3. Display a loading indicator,
  4. Request the new data from the server, and
  5. Update the page with the new data.

In one sense, I am acting immediately on user input, because I always display the loading indicator straight away. But because the loading indicator always appears, no matter how fast or slow the server responds, it sometimes only appears very briefly—just for a flash. In that situation, I wonder if it’s serving any purpose. It might even be doing the opposite of its intended purpose—it draws attention to the fact that there’s a round-trip to the server.

“What if”, I asked myself, “I only showed the loading indicator if the server is taking too long to send a response back?”

The updated flow now looks like this:

  1. Listen for any clicks on the page,
  2. If a “previous” or “next” button is clicked, then:
  3. Start a timer, and
  4. Request the new data from the server.
  5. If the timer reaches an upper limit, show a loading indicator.
  6. When the server sends a response, cancel the timer and
  7. Update the page with the new data.
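Translated into code, that flow looks something like this (a sketch with hypothetical element and function names; the threshold value is discussed below):

// Only show the loading indicator if the server takes longer
// than the threshold; cancel it as soon as the response arrives.
var LOADING_THRESHOLD = 250; // milliseconds; see below

function loadResults(url) {
    var timer = setTimeout(showLoadingIndicator, LOADING_THRESHOLD);
    var request = new XMLHttpRequest();
    request.open('GET', url);
    request.onload = function () {
        clearTimeout(timer);    // response arrived: cancel the timer
        hideLoadingIndicator(); // in case it was already showing
        updatePage(request.responseText);
    };
    request.send();
}

function showLoadingIndicator() {
    document.getElementById('loading').hidden = false;
}

function hideLoadingIndicator() {
    document.getElementById('loading').hidden = true;
}

function updatePage(html) {
    document.getElementById('results').innerHTML = html;
}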

Even though there are more steps, there’s actually less happening from the user’s perspective. Where previously you would experience this:

  1. I click on a button,
  2. I briefly see a loading indicator,
  3. I see the new data.

Now your experience is:

  1. I click on a button,
  2. I see the new data.

…unless the server or the network is taking too long, in which case the loading indicator appears as an interim step.

The question is: how long is too long? How long do I wait before showing the loading indicator?

The Nielsen Norman Group offers this bit of research:

0.1 second is about the limit for having the user feel that the system is reacting instantaneously, meaning that no special feedback is necessary except to display the result.

So I should set my timer to 100 milliseconds. In practice, I found that I can set it to as high as 200 to 250 milliseconds and keep it feeling very close to instantaneous. Anything over that, though, and it’s probably best to display a loading indicator: otherwise the interface starts to feel a little sluggish, and slightly uncanny. (“Did that click do any—? Oh, it did.”)

You can test the response time by looking at some of the simpler pagination examples on The Session: new recordings or new discussions, for example. To see examples of when the server takes a bit longer to send a response, you can try paginating through search results. These take longer because, frankly, I’m not very good at optimising some of those search queries.

There you have it: an interface that—under optimal conditions—reacts to user input instantaneously, but falls back to displaying a loading indicator when conditions are less than ideal. The result is something that feels like a client-side web thang, even though the actual complexity is on the server.

Now to see what else I can learn from the rest of those design principles.

Angular momentum

I was chatting with some people recently about “enterprise software”, trying to figure out exactly what that phrase means (assuming it isn’t referring to the LCARS operating system favoured by the United Federation of Planets). I always thought of enterprise software as “big, bloated and buggy,” but those are properties of the software rather than a definition.

The more we discussed it, the clearer it became that the defining attribute of enterprise software is that it’s software you never chose to use: someone else in your organisation chose it for you. So the people choosing the software and the people using the software could be entirely different groups.

That old adage “No one ever got fired for buying IBM” is the epitome of the world of enterprise software: it’s about risk-aversion, and it doesn’t necessarily prioritise the interests of the end user (although it doesn’t have to be that way).

In his critique of AngularJS, PPK points to an article discussing the framework’s suitability for enterprise software and says:

Angular is aimed at large enterprise IT back-enders and managers who are confused by JavaScript’s insane proliferation of tools.

My own anecdotal experience suggests that Angular is not only suitable for enterprise software, but—assuming the definition provided above—Angular is enterprise software. In other words, the people deciding that something should be built in Angular are not necessarily the same people who will be doing the actual building.

Like I said, this is just anecdotal, but it’s happened more than once that a potential client has approached Clearleft about a project, and made it clear that they’re going to be building it in Angular. Now, to me, that seems weird: making a technical decision about what front-end technologies you’ll be using before even figuring out what your website needs to do.

Ah, but there’s the rub! It’s only weird if you think of Angular as a front-end technology. The idea of choosing a back-end technology (PHP, Ruby, Python, whatever) before knowing what your website needs to do doesn’t seem nearly as weird to me—it shouldn’t matter in the least what programming language is running on the server. But Angular is a front-end technology, right? I mean, it’s written in JavaScript and it’s executed inside web browsers. (By the way, when I say “Angular”, I’m using it as shorthand for “Angular and its ilk”—this applies to pretty much all the monolithic JavaScript MVC frameworks out there.)

Well, yes, technically Angular is a front-end framework, but conceptually and philosophically it’s much more like a back-end framework (actually, I think it’s conceptually closest to a native SDK; something more akin to writing iOS or Android apps, while others compare it to ASP.NET). That’s what PPK is getting at in his follow-up post, Front end and back end. In fact, one of the rebuttals to PPK’s original post basically makes exactly the same point that PPK was making: Angular is for making (possibly enterprise) applications that happen to be on the web, but are not of the web.

On the web, but not of the web. I’m well aware of how vague and hand-wavey that sounds, so I’d better explain what I mean by that.

The way I see it, the web is more than just a set of protocols and agreements—HTTP, URLs, HTML. It’s also built with a set of principles that—much like the principles underlying the internet itself—are founded on ideas of universality and accessibility. “Universal access” is a pretty good rallying cry for the web. Now, the great thing about the technologies we use to build websites—HTML, CSS, and JavaScript—is that universal access doesn’t have to mean that everyone gets the same experience.

Yes, like a broken record, I am once again talking about progressive enhancement. But honestly, that’s because it maps so closely to the strengths of the web: you start off by providing a service, using the simplest of technologies, that’s available to anyone capable of accessing the internet. Then you layer on all the latest and greatest browser technologies to make the best possible experience for the greatest number of people. But crucially, if any of those enhancements aren’t available to someone, that’s okay; they can still accomplish the core tasks.

So that’s one view of the web. It’s a view of the web that I share with other front-end developers with a background in web standards.

There’s another way of viewing the web. You can treat the web as a delivery mechanism. It is a very, very powerful delivery mechanism, especially if you compare it to alternatives like CD-ROMs, USB sticks, and app stores. As long as someone has the URL of your product, and they have a browser that matches the minimum requirements, they can have instant access to the latest version of your software.

That’s pretty amazing, but the snag for me is that bit about having a browser that matches the minimum requirements. For me, that clashes with the universality that lies at the heart of the World Wide Web. Sites built in this way are on the web, but are not of the web.

This isn’t anything new. If you think about it, sites that used the Flash plug-in to deliver their experience were on the web, but not of the web. They were using the web as a delivery mechanism, but they weren’t making use of the capabilities of the web for universal access. As long as you have the Flash plug-in, you get 100% of the intended experience. If you don’t have the plug-in, you get 0% of the intended experience. The modern equivalent is using a monolithic JavaScript library like Angular. As long as your browser (and network) fulfils the minimum requirements, you should get 100% of the experience. But if your browser falls short, you get nothing. In other words, Angular and its ilk treat the web as a platform, not a continuum.

If you’re coming from a programming environment where you have a very good idea of what the runtime environment will be (e.g. a native app, a server-side script) then this idea of having minimum requirements for the runtime environment makes total sense. But, for me, it doesn’t match up well with the web, because the web is accessed by web browsers. Plural.

It’s telling that we’ve fallen into the trap of talking about what “the browser” is capable of, as though it were indeed a single runtime environment. There is no single “browser”, there are multiple, varied, hostile browsers, with differing degrees of support for front-end technologies …and that’s okay. The web was ever thus, and despite the wishes of some people that we only code for a single rendering engine, the web will—I hope—always have this level of diversity and competition when it comes to web browsers (call it fragmentation if you like). I not only accept that the web is this messy, chaotic place that will be accessed by a multitude of devices, I positively welcome it!

The alternative is to play a game of “let’s pretend”: Let’s pretend that web browsers can be treated like a single runtime environment; Let’s pretend that everyone is using a capable browser on a powerful device.

The problem with playing this game of “let’s pretend” is that we’ve played it before and it never works out well: Let’s pretend that everyone has a broadband connection; Let’s pretend that everyone has a screen that’s at least 960 pixels wide.

I refused to play that game in the past and I still refuse to play it today. I’d much rather live with the uncomfortable truth of a fragmented, diverse landscape of web browsers than live with a comfortable delusion.

The alternative—to treat “the browser” as though it were a known quantity—reminds me of the punchline to all those physics jokes that go “Assume a perfectly spherical cow…”

Monolithic JavaScript frameworks like Angular assume a perfectly spherical browser.

If you’re willing to accept that assumption—and say to hell with the 250,000,000 people using Opera Mini (to pick just one example)—then Angular is a very powerful tool for helping you build something that is on the web, but not of the web.

Now I’m not saying that this way of building is wrong, just that it is at odds with my own principles. That’s why Angular isn’t necessarily a bad tool, but it’s a bad tool for me.

We often talk about opinionated software, but the truth is that all software is opinionated, because all software is built by humans, and humans can’t help but imbue their beliefs and biases into what they build (Tim Berners-Lee’s World Wide Web being a good example of that).

Software, like all technologies, is inherently political. … Code inevitably reflects the choices, biases and desires of its creators.

—Jamais Cascio

When it comes to choosing software that’s supposed to help you work faster—a JavaScript framework, for example—there are many questions you can ask: Is the code well-written? How big is the file size? What’s the browser support? Is there an active community maintaining it? But all of those questions are secondary to the most important question of all, which is “Do the beliefs and assumptions of this software match my own beliefs and assumptions?”

If the answer to that question is “yes”, then the software will help you. But if the answer is “no”, then you will be constantly butting heads with the software. At that point it’s no longer a useful tool for you. That doesn’t mean it’s a bad tool, just that it’s not a good fit for your needs.

That’s the reason why you can have one group of developers loudly proclaiming that a particular framework “rocks!” and another group proclaiming equally loudly that it “sucks!”. Neither group is right …and neither group is wrong. It comes down to how well the assumptions of that framework match your own worldview.

Now when it comes to a big MVC JavaScript framework like Angular, this issue is hugely magnified because the software is based on such a huge assumption: a perfectly spherical browser. This is exemplified by the architectural decision to do client-side rendering with client-side templates (as opposed to doing server-side rendering with server-side templates, also known as serving websites). You could try to debate the finer points of which is faster or more efficient, but it’s kind of like trying to have a debate between an atheist and a creationist about the finer points of biology—the fundamental assumptions of both parties are so far apart that it makes a rational discussion nigh-on impossible.

(Incidentally, Brett Slatkin ran the numbers to compare the speed of client-side vs. server-side rendering. His methodology is very telling: he tested in Chrome and …another Chrome. “The browser” indeed.)

So …depending on the way you view the web—“universal access” or “delivery mechanism”—Angular is either of no use to you, or is an immensely powerful tool. It’s entirely subjective.

But the problem is that if Angular is indeed enterprise software—i.e. somebody else is making the decision about whether or not you will be using it—then you could end up in a situation where you are forced to use a tool that not only doesn’t align with your principles, but is completely opposed to them. That’s a nightmare scenario.

Defining the damn thang

Chris recently documented the results from his survey which asked:

Is it useful to distinguish between “web apps” and “web sites”?

His conclusion:

There is just nothing but questions, exemptions, and gray area.

This is something I wrote about a while back:

Like obscenity and brunch, web apps can be described but not defined.

The results of Chris’s poll are telling. The majority of people believe there is a difference between sites and apps …but nobody can agree on what it is. The comments make for interesting reading too. The more people chime in with attempts to define exactly what a “web app” is, the more it proves the point that the term “web app” isn’t a useful word (in the sense that useful words should have an agreed-upon meaning).

Tyler Sticka makes a good point:

By this definition, web apps are just a subset of websites.

I like that. It avoids the false dichotomy that a product is either a site or an app.

But although it seems that the term “web app” can’t be defined, there are a lot of really smart people who still think it has some value.

I think Cennydd is right. I think the differences exist …but I also think we’re looking for those differences at the wrong scale. Rather than describing an entire product as either a website or a web app, I think it makes much more sense to distinguish between patterns.

Let’s take those two modifiers—behavioural and informational. But let’s apply them at the pattern level.

The “get stuff” sites that Jake describes will have a lot of informational patterns: how best to present a flow of text for reading, for example. Typography, contrast, whitespace; all of those attributes are important for an informational pattern.

The “do stuff” sites will probably have a lot of behavioural patterns: entering information or performing an action. Feedback, animation, speed; these are some of the possible attributes of a behavioural pattern.

But just about every product out there on the web contains a combination of both types of pattern. Like I said:

Is Wikipedia a website up until the point that I start editing an article? Are Twitter and Pinterest websites while I’m browsing through them but then flip into being web apps the moment that I post something?

Now you could make an arbitrary decision that any product with more than 50% informational patterns is a website, and any product with more than 50% behavioural patterns is a web app, but I don’t think that’s very useful.

Take a look at Brad’s collection of responsive patterns. Some of them are clearly informational (tables, images, etc.), while some of them are much more behavioural (carousels, notifications, etc.). But Brad doesn’t divide his collection into two, saying “Here are the patterns for websites” and “Here are the patterns for web apps.” That would be a dumb way to divide up his patterns, and I think it’s an equally dumb way to divide up the whole web.

What I’m getting at here is that, rather than trying to answer the question “what is a web app, anyway?”, I think it’s far more important to answer the other question I posed:

Why?

Why do you want to make that distinction? What benefit do you gain by arbitrarily dividing the entire web into two classes?

I think by making the distinction at the pattern level, that question starts to become a bit easier to answer. One possible answer is to do with the different skills involved.

For example, I know plenty of designers who are really, really good at informational patterns—they can lay out content in a beautiful, clear way. But they are less skilled when it comes to thinking through all the permutations involved in behavioural patterns—the “arrow of time” that’s part of so much interaction design. And vice-versa: a skilled interaction designer isn’t necessarily the best at old-school knowledge of type, margins, and hierarchy. But both skillsets will be required on almost every project on the web.

So I do believe there is value in distinguishing between behaviour and information …but I don’t believe there is value in trying to shoehorn entire products into just one of those categories. Making the distinction at the pattern level, though? That I can get behind.

Addendum

Incidentally, some of the respondents to Chris’s poll shared my feeling that the term “web app” was often used from a marketing perspective to make something sound more important and superior:

Perhaps it’s simply fashion. Perhaps “website” just sounds old-fashioned, and “web app” lends your product a more up-to-date, zingy feeling on par with the native apps available from the carefully-curated walled gardens of app stores.

Approaching things from the patterns perspective, I wonder if those same feelings of inferiority and superiority are driving the recent crop of behavioural patterns for informational content: parallaxy, snowfally animation patterns are being applied on top of traditional informational patterns like hierarchy, measure, and art direction. I’m not sure that the juxtaposition is working that well. Taking the single interaction involved in long-form informational patterns (that interaction would be scrolling) and then using it as a trigger for all kinds of behavioural patterns feels …uncanny.

August in America, day twelve

Today was a travel day, but it was a short travel day: the flight from Tucson to San Diego takes just an hour. It took longer to make the drive up from Sierra Vista to Tucson airport.

And what a lovely little airport it is. When we showed up, we were literally the only people checking in and the only people going through security. After security is a calm oasis, free of the distracting TV screens that plague most other airports. Also, it has free WiFi, which was most welcome. I’m relying on WiFi, not 3G, to go online on this trip.

I’ve got my iPhone with me but I didn’t do anything to guarantee myself a good data plan while I’m here in the States. Honestly, it’s not that hard to not always be connected to the internet. Here are a few things I’ve learned along the way:

  1. To avoid accidentally using data and getting charged through the nose for it, you can go into the settings of your iPhone and under General -> Cellular, you can switch “Cellular Data” to “off”. Like it says, “Turn off cellular data to restrict all data to Wi-Fi, including email, web browsing, and push notifications.”
  2. If you do that, and you normally use iMessage, make sure to switch iMessage off. Otherwise if someone with an iPhone in the States sends you an SMS, you won’t get it until the next time you connect to a WiFi network. I learned this the hard way: it happened to me twice on this trip before I realised what was going on.
  3. I use Google Maps rather than Apple Maps. It turns out you can get offline maps on iOS (something that’s been available on Android for quite some time). Open the Google Maps app while you’re still connected to a WiFi network; navigate so that the area you want to save is on the screen; type “ok maps” into the search bar; now that map is saved and zoomable for offline browsing.

August in America, day nine

Today was a day of rest. And in Arizona, that means lounging in or near the swimming pool.

Thanks to recently-installed solar panels on the roof, the water was nice and warm. Jessica did laps of the pool, while I splashed around spasmodically. Y’see, I can’t actually swim. Yes, I grew up by the sea, but you have to understand: that sea was bloody freezing.

So now I’m trying to figure out this whole swimming thing from first principles, but I’m not sure my brain has enough plasticity left to grasp the coordination involved. Still, it’s fun to attempt to swim, no matter how quixotic the goal.

It’s monsoon season in southern Arizona right now, meaning it’s almost certain to rain sometime in the afternoon. That’s why we got our swimming activities done early. Sure enough, thunder clouds started rolling in, but there wasn’t much rain in the end.

(Photos: clouds; clouds at sunset)

Fortunately the clouds had mostly dissipated by the time the sun went down, so a few hours later, when we went outside to look up and search the starry sky for the Perseids, we got to see a few pieces of Swift-Tuttle streaking across the firmament.

By any other name

I’m not a fan of false dichotomies. Chief among them on the web is the dichotomy between documents and applications, or more broadly, “websites vs. web apps”:

Remember when we were all publishing documents on the web, but then there was that all-changing event and then we all started making web apps instead? No? Me neither. In fact, I have yet to hear a definition of what exactly constitutes a web app.

I’ve heard plenty of descriptions of web apps; there are many, many facets that could be used to describe a web app …but no hard’n’fast definitions.

One pithy observation is that “a website has an RSS feed; a web app has an API.” I like that. It’s cute. But it’s also entirely inaccurate. And it doesn’t actually help nail down what a web app actually is.

Like obscenity and brunch, web apps can be described but not defined.

I think that Jake gets close by describing sites as either “get stuff” (look stuff up) or “do stuff”. But even that distinction isn’t clear. Many sites morph from one into the other. Is Wikipedia a website up until the point that I start editing an article? Are Twitter and Pinterest websites while I’m browsing through them but then flip into being web apps the moment that I post something?

I think there’s a much more fundamental question here than simply “what’s the difference between a website and a web app?” That more fundamental question is…

Why?

Why do you want to make that distinction? What benefit do you gain by arbitrarily dividing the entire web into two classes?

I think this same fundamental question applies to the usage of the term “HTML5”. That term almost never means the fifth iteration of HTML. Instead it’s used to describe everything from CSS to WebGL. It fails as a descriptive term for the same reason that “web app” does: it fails to communicate the meaning intended by the person using the term. You might say “HTML5” and mean “requires JavaScript to work”, but I might hear “HTML5” and think you mean “has a short doctype.” I think the technical term for a word like this is “buzzword”: a word that is commonly used but without any shared understanding or agreement.

In the case of “web app”, I’m genuinely curious to find out why so many designers, developers, and product owners are so keen to use the label. Perhaps it’s simply fashion. Perhaps “website” just sounds old-fashioned, and “web app” lends your product a more up-to-date, zingy feeling on par with the native apps available from the carefully-curated walled gardens of app stores.

In his recent talk at Port 80, Jack Franklin points to one of the dangers of the web app/site artificial split:

We’re all building sites that people visit, do something, and leave. Differentiating websites vs. web apps is no good to anyone. A lot of people ignore new JavaScript tools, methods or approaches because those are just for “web apps.”

That’s a good point. A lot of tools, frameworks, and libraries pitch themselves as being intended for web apps even though they might be equally useful for good ol’-fashioned websites.

In my experience, there’s an all-too-common reason why designers, developers, and product owners are eager to self-identify as the builders of web apps. It gives them a “get out of jail free” card. All the best practices that they’d apply to websites get thrown by the wayside. Progressive enhancement? Accessibility? Semantic markup? “Oh, we’d love to do that, but this is a web app, you see… that just doesn’t apply to us.”

I’m getting pretty fed up with it. I find myself grinding my teeth when I hear the term “web app” used without qualification.

We need a more inclusive term that covers both sites and apps on the web. I propose we use the word “thang.”

“Check out this web thang I’m working on.”

“Have you seen this great web thang?”

“What’s that?” “It’s a web thang.”

Now all I need is for someone to make a browser plugin (along the lines of the cloud-to-moon and cloud-to-butt plugins) to convert every instance of “website” or “web app” to “web thang.”

Play me off

One of the fun fringe events at Build in Belfast was The Standardistas’ Open Book Exam:

Unlike the typical quiz, the Open Book Exam demands the use of iPhones, iPads, Androids—even Zunes—to avail of the internet’s wealth of knowledge, required to answer many of the formidable questions.

Team Clearleft came joint third. Initially it was joint fourth but an obstreperous Andy Budd challenged the scoring.

Now one of the principles of this unusual pub quiz was that cheating was encouraged. Hence the encouragement to use internet-enabled devices to get to Google and Wikipedia as quickly as the network would allow. In that spirit, Andy suggested a strategy of “running interference.”

So while others on the team were taking information from the web, I created a Wikipedia account to add misinformation to the web.

Again, let me stress, this was entirely Andy’s idea.

The town of Clover, South Carolina ceased being twinned with Larne and became twinned with Belfast instead.

The world’s tallest roller coaster became 465 feet tall instead of its previous 456 feet (requiring a corresponding change to a list page).

But the moment I changed the entry for Keyboard Cat to alter its real name from “Fatso” to “Freddy” …BAM! Instant revert.

You can mess with geography. You can mess with measurements. But you do. Not. Mess. With. Keyboard Cat.

For some good clean Wikipedia fun, you can always try wiki racing:

To Wikirace, first select a page off the top of your head. Using “Random page” works well, as well as the featured article of the day. This will be your beginning page. Next choose a destination page. Generally, this destination page is something very unrelated to the beginning page. For example, going from apple to orange would not be challenging, as you would simply start at the apple page, click a wikilink to fruit and then proceed to orange. A race from Jesus Christ to Subway (restaurant) would be more of a challenge, however. For a true test of skill, attempt Roman Colosseum to Orthographic projection.

Then there’s the simple pleasure of getting to Philosophy:

Some Wikipedia readers have observed that clicking on the first link in the main text of a Wikipedia article, and then repeating the process for subsequent articles, usually eventually gets you to the Philosophy article.

Seriously. Try it.

Improving Reality

Much as I enjoyed myself in Tennessee, it was a shame to miss some of the Brighton Digital Festival events that were going on at the same time. I missed Barcamp and Flash On The Beach. But since getting back I’ve been making up for lost time, soaking up the geek comedy at The Caroline of Brunswick last Wednesday with Robin Ince and Helen Keen.

I also went along to the Improving Reality conference on Friday, which turned out to be an excellent event.

The title was deliberately contentious, inviting a Slavin-shaped spectre to loom over the proceedings after he closed dConstruct with his excellent talk, Reality is Plenty, wherein he placed his boot on the head of Augmented Reality, carefully pointed his rhetorical gun at its temple and repeatedly pulled the trigger.

But AR was just one of the items on the menu at Improving Reality. The day was split into three parts, each of them expertly curated: Digital Art, Cinema and Gaming. In spite of this clear delineation of topics there were a number of overlapping themes.

I’m somewhat biased but I couldn’t help but notice the influence of science fiction in all the different strands. I suppose I shouldn’t be surprised. Science fiction sets expectations for technology and culture …and I don’t just mean flying cars and jetpacks.

Mind you, this is something that cinema has always done. Matt Adams from Blast Theory asked:

How many romantic kisses had you seen before you had your first romantic kiss?

Or, on a more pedestrian level, everyone in the UK knows what an American yellow school bus is, even though they’ll probably never see one. It’s part of a pre-established world that needs no explanation. In the same way, science fiction is pre-establishing a strange world that we already inhabit.

José Luis de Vicente took us on a tour of some of this world’s stranger corners. He pointed us to the deserted Inner Mongolian city of Ordos, a perfectly Ballardian location.

We also heard about the Tower of David in Venezuela. Intended as a high-rise centre of commerce but bankrupted before completion, it is now the world’s tallest favela.

It reminds me of William Gibson’s bridge.

It isn’t hard to draw parallels between Gibson’s Spook Country and the locative art presented at Improving Reality like Julian Oliver’s mischievous creation The Artvertiser.

He describes his work as “jamming with reality”—much like Mark Shepard’s Sentient Cities.

But Julian Oliver is at pains to point out that it’s not just about messing with people’s heads. He’s attempting to point out the points of control that might otherwise go unquestioned. There’s also an important third step to his process:

  1. Identify the points of control in the infrastructure.
  2. Hack it.
  3. Show how it was done.

This stands in stark contrast to the kind of future that Aral outlined in his energetic presentation. He is striving for a world where technology is smooth and seamless, where an infrastructure of control is acceptable as long as the user experience is excellent. It’s Apple’s App Store today; it’s the starship in Wall·E tomorrow (or possibly the Starship Opryland)—a future where convenience triumphs over inquisitiveness.

As Marshall McLuhan put it, “there is no augmentation without an amputation.” In Charles Stross’s Accelerando that is literally true: when the main character—exactly the kind of superhuman cyborg that Aral envisions—has his augmentation stolen, he is effectively mentally and socially retarded.

Julian Oliver’s battle against a convenient but complacent future is clearly shown with Newstweek where William Gibson, Umberto Eco and Philip K. Dick collide in a project that skirts around the edges of morality and legality, hijacking wifi connections and altering news headlines for the lulz.

Then there’s Blast Theory’s current work on the streets of Brighton, A Machine To See With. It’s ostensibly another locative art piece but it may have more in common with a cinematic work like David Fincher’s The Game.

It’s all part of a long tradition of attempting to break down the barrier between the audience and the performance, a tradition that continues with the immersive theatre of Punchdrunk. This reminds me of the ractives in Neal Stephenson’s The Diamond Age, a form of entertainment so immersive that when a troupe attempt to perform a traditional theatrical piece, they run into problems:

The hard part was indoctrinating the audience; unless they were theatre buffs, they always wanted to run up on stage and interact, which upset the whole thing.

It’s a complete inversion of the infamous premiere by the Lumière brothers of Arrival of a Train at La Ciotat where, so the myth goes, the audience ran from the theatre in terror.

It’s probably a completely apocryphal story. But as the representative from Time’s Up said at Improving Reality: “Don’t let the truth get in the way of a good story.”

Stories were at the heart of the gaming section of Improving Reality. Stored In A Bank Vault, which is currently running in Brighton, was presented as part of PARN: Physical and Alternate Reality Narratives. These are stories where the player is empowered to become the narrator.

Incidentally, it was refreshing to hear how much contempt game designers like Tassos Stevens held for the exploitationware of “gamification”—a dehumanising topic that Stross has explored in a superbly damning piece.

There were plenty of good stories in the middle section of Improving Reality too, which began with a look at the past, present and future of cinema from Matt Hanson. Matt’s own remarkable work A Swarm Of Angels bears a striking similarity to “the footage” in Gibson’s Pattern Recognition—both are infused with the same spirit.

The subject of film funding is currently a hot topic and it’s unsurprising to see that much of the experimentation in this area can be found in sci-fi endeavours such as Iron Sky and The Cosmonaut.

Micropatronage can be very empowering. Where once we were defined (and perhaps judged) by the films we chose to watch and the books we chose to read, now we can define ourselves by the films and books we choose to fund. Instead of judging me by what’s on my bookshelf or my Last.fm profile, judge me by my Kickstarter profile. Kickstarter is one of those genuinely disruptive uses of the network that’s enabling real creativity and originality to come to the surface in projects like Adrian Hon’s A History Of The Future In 100 Objects.

This change in how we think about funding feels like the second part of a revolution. The first part was changing how we think about distribution.

Jamie King, director of Steal This Film, hammered home just how powerful Moore’s Law has been for film, music and anything else that can be digitised. Extrapolating the trend, he pointed to the year 2028 as the media singularity, when it will cost $5 to store every film ever made on a device that fits in your pocket. He evocatively described this as the moment when “the cloud settles at street level.”

It’s here, at the point where anything can be copied, where the old and new worlds clash head on in the battle for the artificial construct that has been so inaccurately labeled “intellectual property”.

Once again we were shown two potential futures; one of chaos and one of control:

  1. There’s the peer-to-peer future precipitated by BitTorrent and The Pirate Bay where anyone is free to share their hopes and dreams with the entire world …but where no distinction is drawn between a creative work of art and a hate-filled racist polemic.

  2. Then there’s the centralised future of the iPad, a future where people will gladly pay money to climb into a beautifully designed jail cell. You can have whatever you want …as long as it has been pre-approved. So you won’t, for example, ever be able to play Phone Story.

This second future—where your general-purpose computing device is broken—promises to put the genie back in the bottle and reverse the disruptive revolution in distribution and funding.

Thinking about it, it’s no surprise that payment systems are undergoing the same upheavals as distribution systems. After all, money is just another form of information that can be reduced to bits.

The much tougher problem is with atoms.

Until recently this was entirely the domain of science fiction—the post-singularity futures of replicators. But even here, with the rise of 3D thing printing, our science fictional future is becoming more evenly distributed in the present.

Improving Reality closed with a talk from Alice Taylor wherein she demoed the work being done at Makie Lab:

We’re making a new kind of toy: customisable, 3D-printed, locally made, and internet-enabled.

A year ago, this was a work of fiction by Alice’s husband. Now it’s becoming reality.

Just as Makie Lab envision a game that’s an infinite loop between the network and the physical world, I think we’ll continue to see an infinite loop between science fiction and reality.

Re-flex

I was in Minnesota last week for An Event Apart Minneapolis. A great time was had by all. Not only were the locals living up to their reputation with Amy and Kasia demonstrating that Kristina isn’t an outlier in the super-nice, super-smart Minnesotan data sample, but the conference itself was top-notch too. It even featured some impromptu on-stage acrobatics by Stan.

A recurring theme of the conference—right from Zeldman’s opening talk—was Content First. In Luke’s talk it was more than a rallying cry; it was a design pattern he recommends for mobile: content first, navigation second. It makes a lot of sense when your screen real estate is at a premium. You can see this pattern in action on the Bagcheck mobile site (a button at the top of the screen is simply a link that leads to the fragment identifier for the navigation at the bottom).

Later on, Eric was diving deep into the guts of the CSS3 flexible box layout module and I saw an opportunity to join some dots.

Let’s say I’ve got a document like this with the content first and the navigation second:

<body>
<div role="main">
<p>This is the main content</p>
</div>
<nav role="navigation">
<p>This is the navigation</p>
</nav>
</body>

Using box-orient:vertical and box-direction:reverse on the body element, I can invert the display of those two children from the order they appear in the source:

body {
    display: box;
    box-orient: vertical;
    box-direction: reverse;
}

If I wrap that in a media query, I can get the best of both worlds: content first, navigation second on small screens; navigation first, content second on larger viewports:

@media screen and (min-width: 30em) {
    body {
        display: box;
        box-orient: vertical;
        box-direction: reverse;
    }
}

Works a treat (once you include the necessary -webkit and -moz prefixes).
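For reference, the full set of declarations looks something like this in the old flexbox syntax:

@media screen and (min-width: 30em) {
    body {
        display: -webkit-box;
        display: -moz-box;
        display: box;
        -webkit-box-orient: vertical;
        -moz-box-orient: vertical;
        box-orient: vertical;
        -webkit-box-direction: reverse;
        -moz-box-direction: reverse;
        box-direction: reverse;
    }
}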

I thought I’d take it a bit further. Suppose the navigation has a list of links:

<nav role="navigation">
<p>This is the navigation.</p>
<ol>
<li><a href="#">foo</a></li>
<li><a href="#">bar</a></li>
<li><a href="#">baz</a></li>
</ol>
</nav>

I could use flexbox to lay those items out horizontally instead of vertically once the viewport is large enough:

@media screen and (min-width: 30em) {
    [role="navigation"] ol {
        display: box;
        box-orient: horizontal;
    }
    [role="navigation"] li {
        box-flex: 1;
    }
}

Here’s the weird thing: in Webkit—Safari and Chrome—the list items reverse their direction: “baz, bar, foo” instead of “foo, bar, baz.” It seems that the box-direction value of reverse is being inherited from the body element, which I’m pretty sure isn’t the right behaviour. But it can be easily counteracted by explicitly declaring box-direction: normal on the navigation.
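The counteracting declaration looks something like this (prefixed, as before):

[role="navigation"] ol {
    -webkit-box-direction: normal;
    -moz-box-direction: normal;
    box-direction: normal;
}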

What’s a little trickier to figure out is why Firefox is refusing to space the list items equally. I’ve put a gist on Github if you want to take a look for yourself and see if you can figure out what’s going on.

Update: You can see it in action on JSbin (resize the view panel).

The new CSS3 layout modules and responsive design could potentially be a match made in heaven …something that Stephen has been going on about for a while now. Check out his talk at Mobilism earlier this year.

You’ll notice that he’s using a different syntax in his presentation; that’s because the spec has changed. In my example, I’m using the syntax that’s currently supported in Webkit, Gecko and Internet Explorer. And, as Eric pointed out in his talk, even when the newer syntax is supported, the older vendor-prefixed syntax won’t be going anywhere.

Star Wars memories

It’s been a starwarsy few days.

I made the most of my brief time in Seattle with a visit to the Star Wars exhibit at the Pacific Science Center. I took many photos. Needless to say, I loved it, particularly the robot show’n’tell that intermixed fictional droids like C3PO with automata from our own timeline like Kismet. The premise of the exhibition was to essentially treat Star Wars as a work of design fiction.

From Seattle, Jessica and I took the train down to Portland. No, it didn’t go under the ocean like the Eurostar, and having WiFi on board a train wasn’t quite as thrilling as having WiFi on a plane, but it was still a lovely journey through some beautiful scenery. Do not pass Go. Do not get groped by the TSA.

Portland turns out to be delightful, just as reports suggested. There are food carts a-plenty. There’s a ma-HOO-sive book shop. There’s excellent coffee. And then there’s the beer. From Wikipedia:

With 46 microbrew outlets, Portland has more breweries and brewpubs per capita than any other city in the United States.

After consuming a few beers in the company of Portland’s finest geeks, we relocated to a true Portland institution: Ground Kontrol. It’s an arcade. But it’s a bar. But it’s an arcade! But it’s a bar!

Amongst the many, many machines packed into the place was the classic Star Wars arcade game. Just seeing it brought back a Proustian rush of memories. I had to play it. I remembered a not-so-secret tactic that results in a nice big bonus…

When you get to the trench level on the Death Star, don’t fire; instead dodge and weave to avoid the incoming fire. After about thirty seconds, the music stops. You are now using the Force. If you fire just one single shot into the exhaust port at the end of the trench, you will be rewarded with many, many bonus points.

You’re welcome.

Collective action

When I added collectives to Huffduffer, I wanted to keep the new feature fairly discrete. I knew I would have to add an add/remove device to profiles but I also wanted that device to be unobtrusive. That’s why I settled on using a small +/- button.

The action of adding someone to, or removing someone from a collective was a clear candidate for Hijax. Once I had the adding and removing working without JavaScript, I went back and sprinkled in some Ajax pixie-dust to do the adding and removing asynchronously without refreshing the whole page.
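As a sketch, that Ajax layer can be as simple as this with jQuery (the selector and the shape of the JSON response are hypothetical; without JavaScript, the form still submits and works normally):

// Intercept the add/remove form submission and send it with
// Ajax instead of a full page refresh.
$('form.collective').on('submit', function (event) {
    event.preventDefault();
    var form = $(this);
    $.post(form.attr('action'), form.serialize(), function (response) {
        // Flip the button between + and - to reflect the new state.
        form.find('button').text(response.added ? '-' : '+');
    }, 'json');
});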

That was the easy part. The challenge lies in providing some meaningful and reassuring feedback to the user that the action has been carried out. There are quite a few familiar devices for doing this; the yellow fade technique is probably the most common. Personally, I like the Humanized Messages as devised by Aza Raskin and ported to jQuery by Michael Heilemann.

I knew that, depending on the page, the user could be carrying out multiple additions or removals. Whatever feedback mechanism I provided, it shouldn’t get in the way of the user carrying out another addition or removal. That’s when I thought of a feedback mechanism from a different discipline: video games.

(Video: Super Mario Bros. Frustration Speed Run in 3:07)

Quite a few arcade games provide a discrete but clear feedback mechanism when points are scored. When the player successfully “catches” a prize, not only does the overall score in the corner of the screen update, but the amount scored appears in situ, floating briefly upwards. It doesn’t get in the way of immediately grabbing another prize but it does provide a nice tangible bit of feedback (the player usually gets some audio feedback too, which would be nice to do on the web if it weren’t so likely to get very annoying very quickly).

It wasn’t too tricky to imitate this behaviour with jQuery.
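Here’s roughly how (a sketch; the class name, offsets, and timing are all up for tweaking):

// Float a short-lived message up from the clicked button,
// arcade-style, then remove it from the document.
function showFeedback(button, message) {
    var offset = $(button).offset();
    $('<span class="feedback"></span>')
        .text(message)
        .css({
            position: 'absolute',
            left: offset.left,
            top: offset.top - 10
        })
        .appendTo('body')
        .animate({ top: '-=30', opacity: 0 }, 800, function () {
            $(this).remove(); // tidy up once the animation finishes
        });
}

Calling something like showFeedback(button, 'Added to collective') from the Ajax success handler gives that little arcade-style pop without blocking further clicks.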


This game-inspired feedback mechanism feels surprisingly familiar to me. Sign up or log in to Huffduffer to try it for yourself.

Building Reprieve

The front page of today’s Guardian ran with a story on Binyam Mohamed and his fight to stop evidence of his torture from being destroyed:

The photograph will be destroyed within 30 days of his case being dismissed by the American courts — a decision on which is due to be taken by a judge imminently, Clive Stafford Smith, Mohamed’s British lawyer and director of Reprieve, the legal charity, said today.

Reprieve recently relaunched their website and they chose Clearleft to help them. I was responsible for the front-end build; that’s my usual role. But unusually, I also had to build a CMS.

We don’t normally do back-end work at Clearleft. That’s a conscious decision; we don’t want to tie ourselves to any particular server-side language. Usually we partner up with server-side developers; either those of the client or independent agencies like New Bamboo. In the case of Reprieve, the budget didn’t allow for that option. We were faced with three possibilities:

  1. Write a CMS from scratch, probably using PHP and MySQL—the technologies I’m most comfortable with.
  2. Take an off-the-shelf platform like WordPress or Expression Engine and twist it to make it fit the needs of the client.
  3. Create a CMS using Django which would give us an admin interface for free.

There was a three person team responsible for the project: myself, Cennydd and Paul. We did a little card-sorting exercise, weighing up the pros and cons of each option. Django came out on top.

I had conflicting emotions about this. On the one hand, I was pleased to have the chance to learn a new technology. On the other hand, I was absolutely terrified that I would be completely out of my depth.

I had seen Simon giving a talk on Django just a few weeks previously. I stuck my hand up during the Q and A to ask: “Is it possible to learn Django without first learning Python?” Simon said that a year ago, he would have said no. But given the work of fellow designers like Jeff and Bryan, the answer isn’t so clear cut. Maybe Django could be a really good introduction to Python.

By far the hardest part of building a Django website was the initial set-up. Sure, installing Django was pretty straightforward …once you’ve made sure you’ve installed the right image libraries, the right database bindings, blah, blah, blah. I can deal with programming challenges but I have no desire to become a sysadmin. Setting up my local dev environment on my Mac was a hair-tearing experience. Setting up the live environment, even on a Django-friendly host like WebFaction, was almost as frustrating …no thanks to the worst. screencast. EVER.

But I persevered, I obediently followed the tutorial, and I discovered all the things that make Django such a powerful framework; the excellent separation of concerns, the superb templating system, the lack of so-called front-end “helpers” that cripple other server-side frameworks. I think Gareth was really onto something when he noticed the way that the web standards world appears to be choosing Django.

In the end, Django proved to be absolutely the right choice for Reprieve. It provided enough flexibility for me to build a site tailored to the specific needs of the client while at the same time, giving me plenty of pre-built tools like RSS and, crucially, the admin interface. The client is extremely happy with the power that the admin interface offers.

For my part, it was an honour to work on a project with this mission statement:

We investigate, we litigate and we educate, working on the frontline, providing legal support to prisoners unable to pay for it themselves. We promote the rule of law around the world, and secure each person’s right to a fair trial. And in doing so, we save lives.

The Audio of the System of the World

Four months after the curtain went down on dConstruct 2008, the final episode of the podcast of the conference has just been published. It’s the audio recording of my talk The System Of The World.

I’m very happy indeed with how the talk turned out: dense and pretentious …but in a good way, I hope. It’s certainly my favourite from the presentations I have hitherto delivered.

Feel free to:

The whole thing is licensed under a Creative Commons attribution licence. You are free—nay, encouraged—to share, copy, distribute, remix and mash up any of those files as long as you include a little attribution lovin’.

If you’ve got a Huffduffer account, feel free to huffduff it.

Danmaku

Here’s an interview with the makers of the game Geometry Wars, a game I find utterly fascinating for the way its very simple rule base quickly results in complex hallucinatory visions of beauty that are simultaneously mesmerising and baffling to watch.

After reading the interview, I moved on to the next tab I had open in my browser courtesy of Tom’s always excellent links. This was a post by Simon Wistow describing the iPhone version of the game rRootage. There I came across the word 弾幕, or danmaku:

…a sub-genre of shoot ‘em up video games in which the entire screen is often almost completely filled with enemy bullets.

Next time I’m trying to describe Geometry Wars I think I’ll just say It’s kind of danmaku.

Adventure

Andy has uncovered the gaming world’s equivalent of Tutankhamun’s tomb: a hard drive from Infocom containing details of the never-released sequel to The Hitchhiker’s Guide To The Galaxy game. In his post, he picks out the salient points from the Lost in La Mancha-like story. In the comments, much hand-wringing ensues about what is and isn’t journalism (answer: who cares?).

I missed the Hitchhiker’s game when I was growing up. I cut my teeth on 8-bit computers. While I didn’t have the chance to play Douglas Adams’ meisterwerk, there were plenty of other text-only adventure games that sucked me in. I recall some quality stuff coming from the studio.

I remember learning BASIC specifically so that I could try to create my own adventure games complete with mapped-out locations and a simple verb/noun parser. Adventure games seemed like a natural extension, but far more open to exploration (even if that openness was just a cleverly-crafted illusion). Hypertext—a term used these days almost exclusively to refer to Web-based documents—seems an entirely appropriate way to describe this kind of interactive fiction.

Later this year, I and my fellow adventure game geeks will be able to wallow in nostalgia when the documentary Get Lamp is released. The film will feature interviews with some of the Infocom movers and shakers featured in Andy’s archaeological treasure trove.

Moral panic

Thanks to Tom’s always excellent linkage, I came across an excellent in-depth article by Brenda Brathwaite called The Myth of the Media Myth, all about the perception of videogames by non-gamers. The research was prompted by a dinner conversation that highlighted the typical reactions:

It happens the same way every time: People listen and then they say what they’ve been feeling. Videogames are not good for you. Videogames are a waste of time. They isolate children. Kids never go outside to play. They just sit there and stare at the TV all day.

The tone of the opinions reminded me of the Daily Mail attitude to social networking sites. The resonances were so strong that I decided to conduct a quick experiment using my hacky little text substitution script. Here are the terms I swapped:

  videogame → social networking site
  gaming → social networking
  game designer → web designer
  game → website
  play → surf
  GTA → Facebook
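A script like that doesn’t need to be anything sophisticated. Here’s a minimal sketch of the idea (the details of the real script differ; note that longer phrases have to be swapped before their substrings):

// Apply each substitution in order. "game designer" and
// "videogame" must come before "game", or the longer phrases
// get clobbered. A real script might also worry about word
// boundaries ("display" contains "play").
var substitutions = [
    ['game designer', 'web designer'],
    ['videogame', 'social networking site'],
    ['gaming', 'social networking'],
    ['game', 'website'],
    ['play', 'surf'],
    ['GTA', 'Facebook']
];

function transmogrify(text) {
    return substitutions.reduce(function (result, pair) {
        return result.replace(new RegExp(pair[0], 'gi'), pair[1]);
    }, text);
}

Because the replacements are plain lowercase strings, sentence-initial matches come out lowercase, which you can spot in the excerpts below.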

Because the original article is paginated, I ran the print version through the transmogrifier. Please excuse any annoying print dialogue boxes. Here’s the final result.

The results are amusing, even accurate. The original article begins:

There are six of us around the table, and the conversation turns to what I do for a living, also known as “my field of study” in academia. “I’m a game designer and a professor,” I say. The dinner had been arranged by a third party in order to connect academics from various institutions for networking purposes.

“You mean videogames?” one of the teachers asks. It’s said with the same professional and courteous tone that one might reserve for asking, “Did you pass gas?”

“Videogames, yes,” I answer. “I’ve been doing it over 20 years now.” Really without any effort at all, I launch into a little love manifesto of sorts, talking about how much I enjoy being a game designer, how wonderful it is to make games, all kinds of games.

After substitution:

There are six of us around the table, and the conversation turns to what I do for a living, also known as “my field of study” in academia. “I’m a web designer and a professor,” I say. The dinner had been arranged by a third party in order to connect academics from various institutions for networking purposes.

“You mean social networking sites?” one of the teachers asks. It’s said with the same professional and courteous tone that one might reserve for asking, “Did you pass gas?”

“social networking sites, yes,” I answer. “I’ve been doing it over 20 years now.” Really without any effort at all, I launch into a little love manifesto of sorts, talking about how much I enjoy being a web designer, how wonderful it is to make websites, all kinds of websites.

The comments from interviewees also hold up. Before:

One friend complained about GTA, admitted she’d never played the game and then offered this: “If you really are interested in deep psychoanalysis… the truth of my disdain for games is from a negative relationship — [a former boyfriend] would play for hours, upon hours, upon hours. Maybe I felt neglected, ignored and disrespected.”

After:

One friend complained about Facebook, admitted she’d never surfed the website and then offered this: “If you really are interested in deep psychoanalysis… the truth of my disdain for websites is from a negative relationship — [a former boyfriend] would surf for hours, upon hours, upon hours. Maybe I felt neglected, ignored and disrespected.”

Even the analysis of the language offers parallels. Original:

“I haven’t found this kind of attitude about games per se. But in my version of your dinner party anecdote, I start with ‘I make games,’ not ‘I make videogames,’ and I’ve never had a response like the one you describe. This leads me to wonder if the very term ‘videogames’ is the problem meme.”

Substitution:

“I haven’t found this kind of attitude about websites per se. But in my version of your dinner party anecdote, I start with ‘I make websites,’ not ‘I make social networking sites,’ and I’ve never had a response like the one you describe. This leads me to wonder if the very term ‘social networking sites’ is the problem meme.”

But most telling of all are the quotes in the closing passages that haven’t been changed one jot from the original:

“If I had a choice, I would want to include these distrustful folks in finding solutions. I would prefer it if they understood. I would prefer it if they could see the long sequence of events that is going to address their fears and create the medium they will inevitably love and participate in, whether they expect to or not. What’s sad is that their ideological, ignorant, hostile, one-dimensional attitudes oversimplify one of the most beautiful problems in human history.”

Resolved

Remember when I was bitching and moaning about the way that search works on Upcoming? Well, it looks like my whining has paid off. As of today, search is fixed.

Thank you, relevant Yahoo employees.

Outgoing

As a web developer, I get annoyed by interaction design implementations all the time: Why is that a link instead of a form button? Why doesn’t that scale when I bump up the font size? Why am I being asked to enter this unnecessary information? Usually I can brush off these annoyances and continue my journey along the threads of the World Wide Web, but there’s one “feature” that has irked me to the point of distraction, and it’s all the more irritating for being on a site I use habitually: Upcoming.

As an Upcoming user, I have a default location. In my case it’s Brighton. This location is important. My location determines what content gets served up to me on the front page of the site—a useful way of discovering local events of interest.

The site also has a search feature. The search form has two components: what I’m searching for and where I’m searching for it. The “where” field defaults to my location, which is a handy little touch. If I want to search for something outside my current location—say the Future of Web Design conference in London this April—I can enter “Future of Web Design” in the “what” field and delete “Brighton” from the “where” field, replacing it with “London”. That works: I have now narrowed down my search to the location “London.”

Here’s the problem: if I now return to the front page I will find that my location is London. That’s right: simply by searching in a place, the system assumes that I now want that to be my location. You know what they say about assumptions, right? In this case, not only has it made an ass out of me, it has, over time, instilled a fear of searching.

I’ll be in San Francisco at the end of this month so I’d like to see what’s going on while I’m there. But once I’ve finished my searching I must remember to reset my location back to Brighton. Knowing this makes me hesitant to use the search form. No doubt the justification for this unexpected behaviour in the search is to second-guess what people really want: do as I want, not as I say. But when I search, I really just want to search. I suspect the same is true of most people.

Normally I wouldn’t rant about an obviously-flawed feature but in this case it’s a feature that can be easily fixed by simply being removed. Here is the current flow:

  1. The user enters a search term in the “what” field, a location in the “where” field and submits the search form.
  2. The system returns a list of search results for the specified term in the specified place.
  3. The system changes the user’s location to the specified place.

That third step is completely unnecessary. Its omission would not harm the search functionality one whit and it would make the search interface more truthful and less duplicitous.
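
In code terms, the fix really is that small. Here’s a hypothetical sketch (every name and piece of data here is invented for illustration; I have no idea what Upcoming’s actual codebase looks like):

```python
from dataclasses import dataclass

@dataclass
class User:
    default_location: str

# Toy data standing in for Upcoming's event database.
EVENTS = [
    {"name": "Future of Web Design", "place": "London"},
    {"name": "Brighton beach volleyball", "place": "Brighton"},
]

def search_events(user: User, what: str, where: str) -> list:
    """Return events matching the term in the given place. Searching
    somewhere must NOT touch the user's saved default location."""
    results = [e for e in EVENTS if what in e["name"] and e["place"] == where]
    # user.default_location = where   # <- step 3, the offending line: delete it
    return results

me = User(default_location="Brighton")
print(search_events(me, "Future of Web Design", "London"))
print(me.default_location)  # still "Brighton"
```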

I’ve already mentioned this on the Upcoming suggestion board. If you can think of a good reason why the current behaviour should stay, please add your justification there. If, like me, you’d like to see a search feature that actually just searches, please let your voice be heard there too.

Please Leonard, Neil, I kvetch because I care. I use Upcoming all the time. It would be a butt-kicking service if it weren’t for this one glaring flaw… even without a liquid layout.

Update: Fixed!

Social networking

Here’s a list of websites on which I have an account and which involve some form of social networking. I’m listing them in order of how often I visit. I’m also listing how many contacts/buddies/friends/connections/people I have on each site.

My Social Networks

Website        Visits         Connections
Flickr         Daily          154
Twitter        Daily          205
Del.icio.us    Daily          4
Upcoming       Frequently     95
Last.fm        Frequently     66
Dopplr         Frequently     96
Jaiku          Weekly         34
Anobii         Weekly         2
Cork’d         Infrequently   27
Pownce         Infrequently   22
Revish         Infrequently   9
Ficlets        Infrequently   4
Newsvine       Infrequently   4
Facebook       Infrequently   59
Ma.gnolia      Rarely         7
LinkedIn       Rarely         90
Odeo           Rarely         10
Xing           Never          2
Digg           Never          0

This is just a snapshot of activity so some of the data may be slightly skewed. Pownce, for instance, is quite a new site so my visits may increase or decrease dramatically over time. Also, though I’ve listed Del.icio.us as a daily visit, it’s really just the bookmarklet or Adactio Elsewhere that I use every day—I hardly ever visit the site itself.

Other sites that I visit on a daily basis don’t have a social networking component: blogs, news sites, Technorati, The Session (hmmm… must do something about that).

In general, the more often I use a service, the more likely I am to have many connections there. But there are some glaring exceptions. I have hardly any connections on Del.icio.us because the social networking aspect is fairly tangential to the site’s main purpose.

More interestingly, there are some exceptions that run in the other direction. I have lots of connections on LinkedIn and Facebook but I don’t use them much at all. In the case of LinkedIn, that’s because I don’t really have any incentive. I’m sure it would be a different story if I were looking for a job.

As for Facebook, I really don’t like the way it tries to be a one-stop shop for everything. It feels like a walled garden to me. I much prefer services that choose to do one thing but do it really well.

Mind you, there’s now some crossover in the events space when the events are musical in nature. The next Salter Cane concert is on Last.fm but it links off to the Upcoming event … which then loops back to Last.fm.

I haven’t settled on a book reading site yet. It’s a toss-up between Anobii and Revish; it could go either way. One of the deciding factors will be how many of my friends use each service. That’s the reason why I use Twitter more than Jaiku: Jaiku is superior in almost every way but more of my friends use Twitter. Inertia keeps me on Twitter. It’s probably just inertia that keeps me on Del.icio.us rather than Ma.gnolia.

The sum total of all my connections on all these services comes to 890. But of course most of these are the same people showing up on different sites. I reckon the total number of individual people doesn’t exceed 250. Of those, there’s probably a core of 50 people who I’ve connected to on at least 5 services. It’s for these people that I would really, really like to have portable social networks.

Each one of the services I’ve listed should follow these three steps. In order of difficulty:

  1. Provide a publicly addressable list of my connections. Nearly all the sites listed already do this.
  2. Mark up the list of connections with hCard and, where appropriate, XFN. Twitter, Flickr, Ma.gnolia, Pownce, Cork’d and Upcoming already do this.
  3. Provide a form with a field to paste the URL of another service where I have suitably marked-up connections. Parse and attempt to import connections found there.

That last step is the tricky one. Dopplr is the first site to attempt this. That’s the way to do it. Other social networking sites, take note.
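
To give a flavour of what that third step involves, here’s a bare-bones sketch using nothing but the Python standard library (no error handling, and it skips the hCard parsing that would pull out people’s names; it just plucks out links whose rel attribute carries an XFN value):

```python
from html.parser import HTMLParser
from urllib.request import urlopen

# The XFN relationship values we care about when importing contacts.
XFN_VALUES = {"contact", "friend", "acquaintance", "met", "colleague", "co-worker"}

class XFNParser(HTMLParser):
    """Collect the href of every <a> whose rel attribute contains an XFN value."""

    def __init__(self):
        super().__init__()
        self.contacts = []

    def handle_starttag(self, tag, attrs):
        if tag != "a":
            return
        attrs = dict(attrs)
        rel = set((attrs.get("rel") or "").split())
        if rel & XFN_VALUES and attrs.get("href"):
            self.contacts.append(attrs["href"])

def import_connections(url):
    """Fetch a profile page and return the URLs of its marked-up contacts."""
    html = urlopen(url).read().decode("utf-8", errors="replace")
    parser = XFNParser()
    parser.feed(html)
    return parser.contacts

# Point it at any profile page that marks up its contacts with XFN:
# print(import_connections("http://example.com/profile"))
```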

It’s time that social networking sites really made an effort to allow not just the free flow of data, but also the free flow of relationships.

Hackfight

Ninety seconds. That’s how long each team at Hackday had to present the fruits of their labours. That’s a pretty good timeframe to demonstrate the core functionality of an app but it’s nowhere near long enough to explain the background story behind Hackfight.

I was the third presenter (out of a total of 73). I knew I had to try to make every one of those ninety seconds count. At the moment that Chad Dickerson introduced me and the spotlight was cast upon my frame, I went into Simon Willison mode and began to stream out as much information as the bandwidth of the human voice allows.

“Hackfight is a mashup” I began, “but it’s a mashup of ideas: the ideas of Justin Hall with his talk of browsing as a kind of role-playing game and Gavin Bell with his ideas on provenance—your online history forming a picture of who you are.”

I was standing on stage in Alexandra Palace trying to give an elevator pitch of an idea that had been brewing in my head for quite some time.

Background

Ever since I first started talking about lifestreams I knew I wanted some kind of way of tying together all the disparate strands of my online identity. There’s a connection here with the dream of portable social networks: tying together the walled gardens of myriad social networks. The final piece fell into place when I was listening to the South by Southwest podcast of a panel discussion by Joi Ito, Ben Cerveny and Justin Hall. Justin says:

I’m working on this idea of passively multiplayer online games. Watching you surf the Web and giving you xp for using your computer. You might be as high level as Joi but just by doing what you’re doing… My model for this was looking at a D&D character sheet, which proposes to know a lot about people.

Something clicked. This idea really resonated with me but I wanted to tie it into a person’s long-term publishing history—their provenance, in other words. I started thinking about how this might work. I would definitely need some help. Then Hackday London was announced.

My recruitment drive began well before the day itself. I spoke to people at both @medias. Matt Harris—no stranger to the mechanics of role-playing games—expressed his interest. I noticed that Gareth was in search of a project for Hackday so I baggsied his brain. I even managed to turn my presentation at Reboot into a rallying cry for hackers. Riccardo and Colin were both there and added their names to the list of interested parties. Finally, I wrote a blog post right before Hackday to let everyone know that I was looking for help.

On the day itself—once the excitement of the lightning strike had worn off—I began quizzing my friends to find out who had plans and who didn’t. Ben and Natalie were both amenable to getting stuck in. An unsuspecting Paul Duncan was also roped in. I hopped on stage and put out one final call for help.

Planning

I had plenty of people. Now I needed to make sure they could work in manageable teams. I divided the work into front-end and back-end projects, appointing Nat as head of the front end and Gareth as delegator for the back end.

Before a line of code was written, we made plenty of use of the available whiteboards. We began brainstorming all the possible APIs we could potentially use. At this stage we were already thinking in terms of characteristics: how social you are, how many photos you take, how much you blog, how much you bookmark.

The long list of APIs was quickly whittled down to a manageable number. The terminology was updated to be more game-like. Here’s what we had:

Charisma
Your social networking power based on Twitter. It’s not as simple as just how many contacts you have: your followers must equal or exceed your claimed friends to get a good score.
Perception
Your power of observing the world around you as decided by Flickr. The Flickr API reveals how long you’ve been posting photos and how many you’ve posted in total. From there it’s a short step to establishing an average number of photos per day.
Memory
Your power of cataloguing the world around you as revealed through del.icio.us.
Willpower
How much influence you can exert over others. This is gleaned from Technorati’s ranking algorithm.

Testing

Assuming we could generate a number for each of these characteristics, how should gameplay proceed? Should it be as simple as Top Trumps or as complex as World of Warcraft? It was Jim Purbrick who pointed out that we were closest to having beat-em-up game mechanics.

Now we needed to consider fairness. How would we deal with the übergeek who has been blogging, Flickring and social networking for years? This was quickly christened “The Tantek Scenario.” Needless to say, I blame Tantek.

By giving each player a pool of points that always adds up to the same total, we could level the playing field. We chose the number 20 for the total points. This could then be split four ways amongst charisma, perception, memory and willpower. So even if you were superb in all four categories, you could only have a total of 5 points in each. Most people will have a high score in one or two categories and a correspondingly low score in others.
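
I’m glossing over the actual ranking formulae, but the levelling boils down to something like this sketch (the raw input scores and the rounding fix-up are my own invention):

```python
STATS = ("charisma", "perception", "memory", "willpower")
POOL = 20  # every combatant's four stats always add up to this

def normalise(raw):
    """Scale raw API-derived scores (followers, photos per day, bookmarks,
    Technorati rank... whatever each ranking algorithm spits out) so
    that the four stats always total POOL points."""
    total = sum(raw.values()) or 1  # guard against a completely blank slate
    scaled = {stat: round(raw[stat] * POOL / total) for stat in STATS}
    # Rounding can leave us a point or two adrift; settle the difference
    # against the largest stat.
    scaled[max(scaled, key=scaled.get)] += POOL - sum(scaled.values())
    return scaled

# The Tantek Scenario: stellar everywhere, yet still capped at 5 in each.
print(normalise({"charisma": 500, "perception": 500, "memory": 500, "willpower": 500}))
# -> {'charisma': 5, 'perception': 5, 'memory': 5, 'willpower': 5}
print(normalise({"charisma": 120, "perception": 40, "memory": 30, "willpower": 10}))
# -> {'charisma': 12, 'perception': 4, 'memory': 3, 'willpower': 1}
```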

Gameplay proceeded like Top Trumps but with a difference: if you are attacked in one category (say, charisma), you can defend with another category (such as memory). But you can only ever use a category once. So one fight is exactly four rounds of attack and defence. At the beginning of each fight each player has 10 health points. If a player successfully attacks, the amount of health points deducted from the other player is the difference in category points. So if I attack with a willpower of 8 and you defend with a memory of 6, you lose 2 health points.
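
Here’s that exchange as a sketch, with moves chosen at random for simplicity (this shows one direction of attack only; the real back end wrapped all this in proper classes for combatants and fights):

```python
import random

def fight(attacker, defender):
    """One fight: exactly four rounds, each category used exactly once
    by each player. A successful attack deducts the difference in
    points from the defender's 10 health points."""
    stats = ["charisma", "perception", "memory", "willpower"]
    attacks, defences = stats[:], stats[:]
    random.shuffle(attacks)   # moves chosen at random
    random.shuffle(defences)
    health = 10
    for attack, defence in zip(attacks, defences):
        damage = max(attacker[attack] - defender[defence], 0)
        health -= damage
        print(f"{attack} {attacker[attack]} attacks {defence} "
              f"{defender[defence]}: {damage} damage, defender on {health}")
    return health

# Two combatants whose stats each add up to the 20-point pool.
fight({"charisma": 8, "perception": 5, "memory": 4, "willpower": 3},
      {"charisma": 5, "perception": 5, "memory": 6, "willpower": 4})
```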

Phew! The game mechanics were starting to get complex. Would people be able to understand the gameplay? There’s only one way to find out: user testing!

I mercilessly pounced on unsuspecting passers-by like Andy and Aral and thrust sticky notes into their hands. We then played a round using these paper prototypes. We tested a slightly more complex version of the gameplay involving the ability to bet high or low but the user-testing revealed that this was probably too complex to be easily grasped.

Building

Alright. Enough planning. Enough user testing. The clock was ticking. It was time for the front-end team to start working on the design and the back-end team to get coding.

As day one drew to a close, our numbers lessened. Riccardo and Paul headed for home (or in Paul’s case, the pub and then home). But I still had two incredible teams of ludicrously dedicated people. These are the people who would build what we were now calling Hackfight.

Team Hackfight

Back end
Front end

Watching these people work through the night was a humbling experience. It quickly became clear that my programming skills weren’t nearly up to scratch. I helped out a bit with some Flickr API stuff but I mostly just left the lads to it. I even snatched one or two hours of sleep. Colin and Natalie didn’t sleep at all.

By morning, things were shaping up nicely. On the back end, we had a good database schema, ranking algorithms and classes for combatants and fights. On the front end, we had a colour scheme, a logo and beautifully shiny icons. But could we tie the two ends together and still hit the afternoon deadline?

The result

In the end it was clear that we had bitten off more than we could chew. We had a solid infrastructure and a lovely interface but there just wasn’t enough time to build the interactive elements: signing up, choosing an opponent and having a fight.

We still wanted to demonstrate what was possible with this system. If we cut out the interactive elements for now, we could at least show an example fight by having the computer pitch two people against each other. We began adding some real-world data into the system and built a fight page where the moves were chosen at random.

Here’s the result using real data from Tom and Norm!’s online publishing history. It takes a while to load because the information is being fetched from each service at runtime but… it works!

Presenting Hackfight

The final result is more of a proof-of-concept but boy, what a proof-of-concept. Watching this idea come to life in the space of 24 hours was simply magical. I honestly don’t think words can express how impressed I am with the people who built this. All I did was lay the groundwork. They pulled out all the stops to actually make something.

I had one last task. I had to get up there on stage and present Hackfight.

Ninety seconds.

Standing in the spotlight with Hackfight projected on the screen behind me, I rushed through the game mechanics and showed a sample fight. My mind was racing as fast as my mouth. I was frantically trying to think of what I absolutely needed to get across. I quickly explained that Hackfight was a platform rather than a finished hack: something that could be built upon to create all kinds of gaming experiences based on online publishing. Feeling the seconds ticking away to nothing, I closed with the one remark that it was absolutely necessary to make:

The team that put this together was awesome.

And with that, I was done. It was later that I realised I actually still had 19 seconds left on the clock. My one chance to do the team justice and I blew it.

The future

Milling with my peers at the close of Hackday, one question kept coming up: would we continue to work on this? We’ve got a good codebase. We’ve got some solid game mechanics. I think it would be great to see this taken forward. The Hackfight team all seemed pretty interested in hacking on this thing a bit more.

I think there’s a lot of potential in this idea. Forget about the basic idea of a fight confined to a web page; think about all the other possibilities: fighting via Twitter, by SMS, on Jabber, even in Second Life. As long as the ranking algorithms are in place and the game mechanics are set, there’s no limit to where and when Hackfight might exist.

The best outcome would be for Hackfight itself to become an API so that other people could hook into the system and build cool fun stuff. That’s an ambitious goal and I don’t have the resources to see it through but having seen what can be accomplished by a dedicated team of unbelievably smart and talented people, I think anything is possible.