2024 in photos
Here’s one photo from each month in 2024.
Web performance is an unalloyed good. No one has ever complained that a website is too fast.
So the benefit is pretty obvious. Users like fast websites. But there are other benefits to web performance. And they don’t all get equal airtime.
A lot of good web performance practices come down to the first half of Postel’s Law: be conservative in what you send. Images, fonts, JavaScript …remove what you don’t need and optimise the hell out of what’s left.
That can translate to savings. If you’re paying for the bandwidth every time a hefty file is downloaded, your monthly bill could get pretty big.
So apart from the indirect business benefits of happy users converting to happy customers, there can be a real nuts’n’bolts bottom-line saving to be made by having a snappy website.
This is related to the cost-savings benefit. If you’re shipping less stuff down the wire, and you’re optimising what you do send, then there’s less energy required.
Whether less energy directly translates to a smaller carbon footprint depends on how the energy is being generated. If your servers are running on 100% renewable energy sources, then reducing the size of your responses won’t reduce your carbon footprint.
But there’s an energy cost at the other end too. Think of all the devices making requests to your server. If you’re making those devices work hard—by downloading, parsing, executing lots of JavaScript, for example—then you’re draining battery life. And you can’t guarantee that the battery will be replenished from renewable energy sources.
That’s why sites like the website carbon calculator have so much crossover with web performance:
From data centres to transmission networks to the billions of connected devices that we hold in our hands, it is all consuming electricity, and in turn producing carbon emissions equal to or greater than the global aviation industry. Yikes!
There comes a point when a slow website isn’t just inconvenient, it’s inaccessible.
I’ve always liked the German phrase for accessible: barrierefrei—free of barriers. With every file you add to a website’s dependencies, you’re adding one more barrier. Eventually the barrier is insurmountable for people with older devices or slower internet connections. If they can no longer access your website, your website is quite literally inaccessible.
I’ve noticed that when it comes to making the argument in favour of better web performance, people often default to the business benefits.
I get it. We’re always being told to speak the language of business. The psychology seems pretty straightforward; if you think that the people you’re trying to convince are mostly concerned with the bottom line, use the language of commerce to change their minds.
But that’s always felt reductive to me.
Sure, those people almost certainly do care about the business. Who doesn’t? But they’re also humans. I feel like if you really want to convince them, you should speak to their hearts. Show them the bigger picture.
Eliel Saarinen said:
Always design a thing by considering it in its next larger context; a chair in a room, a room in a house, a house in an environment, an environment in a city plan.
I think the same could apply to making the case for web performance. Don’t stop at the obvious benefits. Go wider. Show the big-picture implications.
While I’m talking about the SVGs on The Session, I thought I’d share something else related to the rendering of the sheet music.
Like I said, I use the brilliant abcjs JavaScript library. It converts ABC notation into sheet music on the fly, which still blows my mind.
If you view source on the rendered SVG, you’ll see that the path and rect elements have been hard-coded with a colour value of #000000. That makes sense. You’d want to display sheet music on a light background, probably white. So it seems like a safe assumption.
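The generated markup looks something like this (a simplified, hypothetical sketch; the real output from abcjs is far more involved):

<svg xmlns="http://www.w3.org/2000/svg" viewBox="0 0 740 100">
  <!-- a stave line, with a hard-coded black stroke -->
  <path d="M 15 20 L 725 20" stroke="#000000" fill="none"></path>
  <!-- a note stem, with a hard-coded black fill -->
  <rect x="40" y="36" width="3" height="28" fill="#000000"></rect>
</svg>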
Ah, but when it comes to front-end development, assumptions are like little hidden bombs just waiting to go off!
I got an email the other day:
Hi Jeremy,
I have vision problems, so I need to use high-contrast mode (using Windows 11). In high-contrast mode, the sheet-music view is just black!
Doh! All my CSS adapts just fine to high-contrast mode, but those hardcoded hex values in the SVG aren’t going to be affected by high-contrast mode.
Stepping back, the underlying problem was that I didn’t have a full separation of concerns. Most of my styling information was in my CSS, but not all. Those hex values in the SVG should really be encoded in my style sheet.
I couldn’t remove the hardcoded hex values—not without messing around with JavaScript beyond my comprehension—so I made the fix in CSS:
/* Override the hard-coded black in the generated SVG
   so that it inherits the surrounding text colour */
[fill="#000000"] {
  fill: currentColor;
}

[stroke="#000000"] {
  stroke: currentColor;
}
That seemed to do the trick. I wrote back to the person who had emailed me, and they were pleased as punch:
Well done, Thanks! The staff, dots, etc. all appear as white on a black background. When I click “Print”, it looks like it still comes out black on a white background, as expected.
I’m very grateful that they brought the issue to my attention. If they hadn’t, that assumption would still be lying in wait, preparing to ambush someone else.
I’d like to play it cool when I announce the latest speakers for UX London 2023, like I could be all nonchalant and say, “oh yeah, did I not mention these people are also speaking…?”
But I wouldn’t be able to keep up that façade for longer than a second. The truth is I am excited to the point of skittish giggliness about this line-up.
Look, I’ll let you explore these speakers for yourself while I try to remain calm and simply enumerate the latest additions…
The line-up is almost complete now! Just one more speaker to announce.
I highly recommend you get your UX London ticket if you haven’t already. You won’t want to miss this!
Push notifications are finally arriving on iOS—hallelujah! Like I said last year, this is my number one wish for the iPhone, though not because I personally ever plan to use the feature:
When I’m evangelising the benefits of building on the open web instead of making separate iOS and Android apps, I inevitably get asked about notifications. As long as mobile Safari doesn’t support them—even though desktop Safari does—I’m somewhat stumped. There’s no polyfill for this feature other than building an entire native app, which is a bit extreme as polyfills go.
With push notifications in mobile Safari, the arguments for making proprietary apps get weaker. That’s good.
The announcement post is a bit weird though. It never uses the phrase “progressive web apps”, even though clearly the entire article is all about progressive web apps. I don’t know if this is down to Not-Invented-Here syndrome on the part of the Apple/WebKit team, or because of genuine legal concerns around using the phrase.
Instead, there are repeated references to “Home Screen apps”. This distinction makes some sense though. In order to use web push on iOS, your website needs to be added to the home screen.
I think that would be fair enough, if it weren’t for the fact that adding a website to the home screen remains such a hidden feature that even power users would be forgiven for not knowing about it. I described the steps here:
- Tap the “share” icon. It’s not labelled “share.” It’s a square with an arrow coming out of the top of it.
- A drawer pops up. The option to “add to home screen” is nowhere to be seen. You have to pull the drawer up further to see the hidden options.
- Now you must find “add to home screen” in the list:
  - Copy
  - Add to Reading List
  - Add Bookmark
  - Add to Favourites
  - Find on Page
  - Add to Home Screen
  - Markup
As long as this remains the case, we can expect usage of web push on iOS to be vanishingly low. Hardly anyone is going to add a website to their home screen when their web browser makes it so hard.
If you’d like people to install your progressive web app, you’ll almost certainly need to prompt them to do so. Here’s the page I made on thesession.org with instructions on how to add to home screen. I link to it from the home page of the site.
I wish that pages like that weren’t necessary. It’s not the best user experience. But as long as mobile Safari continues to bury the home screen option, we don’t have much choice but to tackle this ourselves.
Please put your fingers on the desk in front of you and move them up and down rapidly in the manner of a snare drum…
I’m very happy to announce the first four speakers for UX London 2023:
This is shaping up nicely! You can expect some more speaker announcements before too long.
But don’t wait too long to get your ticket—early-bird pricing ends this month on Friday, February 24th. Then the price goes up by £200. If you need to convince your boss, here are some reasons to attend.
I very much look forward to seeing you at Tobacco Dock on June 22nd and 23rd this year!
I keep thinking about this blog post I linked to last week by Jacob Kaplan-Moss. It’s called Quality Is Systemic:
Software quality is more the result of a system designed to produce quality, and not so much the result of individual performance. That is: a group of mediocre programmers working with a structure designed to produce quality will produce better software than a group of fantastic programmers working in a system designed with other goals.
I think he’s on to something. I also think this applies to design just as much as development. Maybe more so. In design, there’s maybe too much emphasis placed on the talent and skill of individual designers and not enough emphasis placed on creating and nurturing a healthy environment where anyone can contribute to the design process.
Jacob also ties this into hiring:
Instead of spending tons of time and effort on hiring because you believe that you can “only hire the best”, direct some of that effort towards building a system that produces great results out of a wider spectrum of individual performance.
I couldn’t agree more! It’s just one of the reasons why the smart long-term strategy can be to concentrate on nurturing junior designers and developers rather than head-hunting rockstars.
As an aside, if you think that the process of nurturing junior designers and developers is trickier now that we’re working remotely, I highly recommend reading Mandy’s post, Official myths:
Supporting junior staff is work. It’s work whether you’re in an office some or all of the time, and it’s work if Slack is the only office you know. Hauling staff back to the office doesn’t make supporting junior staff easier or even more likely.
Hiring highly experienced designers and developers makes total sense, at least in the short term. But I think the better long-term solution—as outlined by Jacob—is to create (and care for) a system where even inexperienced practitioners will be able to do good work by having the support and access to knowledge that they need.
I was thinking about this last week when Irina very kindly agreed to present a lunch’n’learn for Clearleft all about inclusive design.
She answered a question that had been at the front of my mind: what’s the difference between inclusive design and accessibility?
The way Irina put it, accessibility is focused on implementation. To make a website accessible, you need people with the necessary skills, knowledge and experience.
But inclusive design is about the process and the system that leads to that implementation.
To use that cliché of the double diamond, maybe inclusive design is about “building the right thing” and accessibility is about “building the thing right.”
Or to put it another way, maybe accessibility is about outputs, whereas inclusive design is about inputs. You need both, but maybe we put too much emphasis on the outputs and not enough emphasis on the inputs.
This is what made me think of Jacob’s assertion that quality is systemic.
Imagine someone who’s an expert at accessibility: they know all the details of WCAG and ARIA. Now put that person into an organisation that doesn’t prioritise accessibility. They’re going to have a hard time and they probably won’t be able to be very effective despite all their skills.
Now imagine an organisation that prioritises inclusivity. Even if their staff don’t (yet) have the skills and knowledge of an accessibility expert, just having the processes and priorities in place from the start will make it easier for everyone to contribute to a more accessible experience.
It’s possible to make something accessible in the absence of a system that prioritises inclusive design but it will be hard work. Whereas making sure inclusive design is prioritised at an organisational level makes it much more likely that the outputs will be accessible.
Not long now until the last ever dConstruct. It’s on Friday of next week, that’s the 9th of September. And there are still a few tickets available if you haven’t got yours yet.
I have got one update to the line-up to report. Sadly, Léonie Watson isn’t going to be able to make it after all. That’s a shame.
But that means there’s room to squeeze in one more brilliant speaker from the vaults of the dConstruct archive.
I’m very pleased to announce that Seb Lee-Delisle will be returning, ten years after his first dConstruct appearance.
Back then he was entertaining us with hardware hacking and programming for fun. That was before he discovered lasers. Now he’s gone laser mad.
Don’t worry though. He’s fully qualified to operate lasers so he’s not going to take anyone’s eye out at dConstruct. Probably.
The first speakers are live on the UX London 2022 site! There are only five people announced for now—just enough to give you a flavour of what to expect. There will be many, many more.
Putting together the line-up of a three-day event is quite challenging, but kind of fun too. On the one hand, each day should be able to stand alone. After all, there are one-day tickets available. On the other hand, it should feel like one cohesive conference, not three separate events.
I’ve decided to structure the three days to somewhat mimic the design process…
The first day is all about planning and preparation. This is like the first diamond in the double-diamond process: building the right thing. That means plenty of emphasis on research.
The second day is about creation and execution. It’s like that second diamond: building the thing right. This could cover potentially everything but this year the focus will be on content design.
The third day is like the third diamond in the double dia— no, wait. The third day is about growing, scaling, and maintaining design. That means there’ll be quite an emphasis on topics like design systems and design engineering, maybe design ops.
But none of the days will be exclusively about a single topic. There are evergreen topics that apply throughout the process: product design, design ethics, inclusive design.
It’s a lot to juggle! But I’m managing to overcome choice paralysis and assemble a very exciting line-up indeed. Trust me—you won’t want to miss this!
Early bird tickets are available until February 28th. That’s just a few days away. I recommend getting your tickets now—you won’t regret it!
Quite a few people are bringing their entire teams, which is perfect. UX London can be both an educational experience and a team-bonding exercise. Let’s face it, it’s been too long since any of us have had a good off-site.
If you’re one of those lucky people who’s coming along (or if you’re planning to), I’m curious: given the themes mentioned above, are there specific topics that you’d hope to see covered? Drop me a line and let me know.
Also, if you read the description of the event and think “Oh, I know the perfect speaker!” then I’d love to hear from you. Maybe that speaker is you. (Although, cards on the table; if you look like me—another middle-aged white man—I may take some convincing.)
Right. Time to get back to my crazy wall of conference curation.
Eleven years ago, I made a prediction:
The original URL for this prediction (www.longbets.org/601) will no longer be available in eleven years.
One year later, Matt called me on it and the prediction officially became a bet:
We’re playing for $1000. If I win, that money goes to the Bletchley Park Trust. If Matt wins, it goes to The Internet Archive.
I’m very happy to lose this bet.
When I made the original prediction eleven years ago that a URL on the longbets.org site would no longer be available, I did so in a spirit of mischief—it was a deliberately meta move. But it was also informed by a genuine feeling of pessimism around the longevity of links on the web. While that pessimism was misplaced in this case, it was informed by data.
The lifetime of a URL on the web remains shockingly short. What I think has changed in the intervening years is that people may have become more accustomed to the situation. People used to say “once something is online it’s there forever!”, which infuriated me because the real problem is the exact opposite: if you put something online, you have to put in real effort to keep it online. After all, we don’t really buy domain names; we just rent them. And if you publish on somebody else’s domain, you’re at their mercy: Geocities, MySpace, Facebook, Medium, Twitter.
These days my view towards the longevity of online content has landed somewhere in the middle of the two dangers. There’s a kind of Murphy’s Law around data online: anything that you hope will stick around will probably disappear and anything that you hope will disappear will probably stick around.
One huge change in the last eleven years that I didn’t anticipate is the migration of websites to HTTPS. The original URL of the prediction used HTTP. I’m glad to see that original URL now redirects to a more secure protocol. Just like most of the World Wide Web. I think we can thank Let’s Encrypt for that. But I think we can also thank Edward Snowden. We are no longer as innocent as we were eleven years ago.
I think if I could tell my past self that most of the web would be using HTTPS by 2022, my past self would be very surprised …’though not as surprised at discovering that time travel had also apparently been invented.
The Internet Archive has also been a game-changer for digital preservation. While it’s less than ideal that something isn’t reachable at its original URL, knowing that there’s probably a copy of the content at archive.org lessens the sting considerably. I couldn’t be happier that this fine institution is the recipient of the stakes of this bet.
I wrote about how I created a page on The Session with instructions for installing the site to your home screen. When I said that I included screenshots on that page, I may have underplayed the effort involved. It was a real faff.
I’ve got an iPhone so generating screenshots (and video) from that wasn’t too bad. But I don’t have access to an Android phone. I found myself scouring the web for templates that I could use to mockup a screenshot of the address bar.
That got me thinking…
Wouldn’t it be cool if there were a service that generated those screenshots for you? You give it a URL, and it spits out screenshots of the site complete with overlays showing the installation flow on Android and iOS. It could even generate the img markup, complete with differently-scaled images for the srcset attribute.
Download the images. Copy that markup. Paste it into a page on your site. Boom! Now you’ve got somewhere to point your visitors to if you’d like them to install your progressive web app.
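The markup it generated might look something like this (a sketch; the filenames, sizes, and alt text here are all invented for illustration):

<img src="/images/add-to-home-screen-640.png"
     srcset="/images/add-to-home-screen-320.png 320w,
             /images/add-to-home-screen-640.png 640w,
             /images/add-to-home-screen-1280.png 1280w"
     sizes="(min-width: 40em) 640px, 100vw"
     alt="The “add to home screen” menu option on Android"
     loading="lazy">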
There are already some services out there for generating screenshots of mobile phones, but what they’re missing is the menu overlays for adding to home screen.
The devrels at both Google and Microsoft have been doing a great job of promoting progressive web apps. They’ve built tools to help you with tasks like generating icons or creating your web app manifest. It would be sooooo nifty if those tools also generated instructional screenshots for adding to home screen!
I posted to adactio.com 968 times in 2021.
That’s considerably less than 2020 or 2019. Not sure why.
March was the busiest month with 118 posts.
I published:
Those notes include 170 photos and 162 replies.
Elsewhere in 2021 I published two seasons of the Clearleft podcast (12 episodes), and I wrote the 15 modules that comprise a course on responsive design on web.dev.
Most of my speaking engagements in 2021 were online though I did manage a little bit of travel in between COVID waves.
My travel map for the year includes one transatlantic trip: Christmas in Arizona, where I’m writing this end-of-year wrap-up before getting back on a plane to England tomorrow, Omicron willing.
If you’re not already subscribed to the Clearleft podcast, you should probably remedy that. The third season is about to drop any day now.
Once again, the season will comprise six episodes released on a weekly schedule.
That’s a cadence I more or less picked at random, but I think it’s working out well. Six episodes are enough for the podcast to sustain your interest without overstaying its welcome. And by taking nice long breaks between seasons, you’re never going to end up with that podcast problem of having a backlog of episodes that you never seem to get around to listening to.
That said, if you did fancy going through the backlog, there’s a mere twelve episodes for you to catch up on. Six from season one and six from season two. None of the episodes are overly long. Again, I don’t want this podcast to overstay its welcome. I respect your time. A typical episode is somewhere between 20 and 25 minutes of multiple viewpoints and voices.
You can subscribe to the RSS feed or use whichever service you prefer to get your podcasts from: Apple, Google, Spotify, Stitcher, Deezer, TuneIn, Castro, Pocket Casts, Player FM, or my own personal choice, Overcast.
Or you could just huffduff whichever episodes sound most appealing to you. But honestly, and I may be biased here, they’re all pretty darn great so I recommend subscribing.
If you subscribe now, then the episodes from season three will magically appear in your podcast software of choice. Again, I know I’m biased, but this is going to be an excellent season featuring some very smart folks sharing their stories.
Just to be clear, in case you haven’t listened to the Clearleft podcast before, this isn’t your usual podcast format. Yes, I interview people but I don’t release one interview per episode. Instead, each episode zeroes in on one topic, and features different opinions from different people. It’s tight and snappy with no filler. That involves a lot of production and editing work, but I think it’s worth it for the end result.
Can you tell that I’m excited?
I’m speaking at a conference this week. But unlike all the conference talks I’ve done for the past year and a half, this one won’t be online. I’m going to Zürich.
I have to admit, when I was first contacted about speaking at a real, honest-to-goodness in-person event, I assumed that things would be in a better state by the end of August 2021. The delta variant has somewhat scuppered the predicted trajectory of The Situation.
Still, this isn’t quite like going to speak at an event in 2020. I’m double-vaccinated for one thing. And although this event will be held indoors, the numbers are going to be halved and every attendee will need to show proof of vaccination along with their conference ticket. That helps to put my mind at ease.
But as the event draws nearer, I must admit to feeling uneasy. There’ll be airports and airplanes. I’m not looking forward to dealing with those. But I am looking forward to seeing some lovely people on the other end.
When I post a link, I do it for two reasons.
First of all, it’s me pointing at something and saying “Check this out!”
Secondly, it’s a way for me to stash something away that I might want to return to. I tag all my links so when I need to find one again, I just need to think “Now what would past me have tagged it with?” Then I type the appropriate URL: adactio.com/links/tags/whatever
There are some links that I return to again and again.
Back in 2008, I linked to a document called A Few Notes on The Culture. It’s a copy of a post by Iain M Banks to a newsgroup back in 1994.
Alas, that link is dead. Linkrot, innit?
But in 2013 I linked to the same document on a different domain. That link still works even though I believe it was first published around twenty(!) years ago (view source for some pre-CSS markup nostalgia).
Anyway, A Few Notes On The Culture is a fascinating look at the world-building of Iain M Banks’s Culture novels. He talks about the in-world engineering, education, biology, and belief system of his imagined utopia. The part that sticks in my mind is when he talks about economics:
Let me state here a personal conviction that appears, right now, to be profoundly unfashionable; which is that a planned economy can be more productive - and more morally desirable - than one left to market forces.
The market is a good example of evolution in action; the try-everything-and-see-what-works approach. This might provide a perfectly morally satisfactory resource-management system so long as there was absolutely no question of any sentient creature ever being treated purely as one of those resources. The market, for all its (profoundly inelegant) complexities, remains a crude and essentially blind system, and is — without the sort of drastic amendments liable to cripple the economic efficacy which is its greatest claimed asset — intrinsically incapable of distinguishing between simple non-use of matter resulting from processal superfluity and the acute, prolonged and wide-spread suffering of conscious beings.
It is, arguably, in the elevation of this profoundly mechanistic (and in that sense perversely innocent) system to a position above all other moral, philosophical and political values and considerations that humankind displays most convincingly both its present intellectual immaturity and — through grossly pursued selfishness rather than the applied hatred of others — a kind of synthetic evil.
Those three paragraphs might be the most succinct critique of unfettered capitalism I’ve come across. The invisible hand as a paperclip maximiser.
Like I said, it’s a fascinating document. In fact I realised that I should probably store a copy of it for myself.
I have a section of my site called “extras” where I dump miscellaneous stuff. Most of it is unlinked. It’s mostly for my own benefit. That’s where I’ve put my copy of A Few Notes On The Culture.
Here’s a funny thing …for all the times that I’ve revisited the link, I never knew anything about the site it was hosted on—vavatch.co.uk—so this most recent time, I did a bit of clicking around. Clearly it’s the personal website of a sci-fi-loving college student from the early 2000s. But what came as a revelation to me was that the site belonged to …Adrian Hon!
I’m impressed that he kept his old website up even after moving over to the domain mssv.net, founding Six To Start, and writing A History Of The Future In 100 Objects. That’s a great snackable book, by the way. Well worth a read.
My last long-distance trip before we were all grounded by The Situation was to San Francisco at the end of 2019. I attended Indie Web Camp while I was there, which gave me the opportunity to add a little something to my website: an “on this day” page.
I’m glad I did. While it’s probably of little interest to anyone else, I enjoy scrolling back to see how the same date unfolded over the years.
’Sfunny, when I look back at older journal entries they’re often written out of frustration, usually when something in the dev world is bugging me. But when I look back at all the links I’ve bookmarked the vibe is much more enthusiastic, like I’m excitedly pointing at something and saying “Check this out!” I feel like sentiment analyses of those two sections of my site would yield two different results.
But when I scroll down through my “on this day” page, it also feels like descending deeper into the dark waters of linkrot. For each year back in time, the probability of a link still working decreases until there’s nothing but decay.
Sadly this is nothing new. I’ve been lamenting the state of digital preservation for years now. More recently Jonathan Zittrain penned an article in The Atlantic on the topic:
Too much has been lost already. The glue that holds humanity’s knowledge together is coming undone.
In one sense, linkrot is the price we pay for the web’s particular system of hypertext. We don’t have two-way linking, so there’s no centralised repository of links (which would be prohibitively complex to maintain anyway). When you want to link to something on the web, you just do it. An a element with an href attribute. That’s it. You don’t need to check with the owner of the resource you’re linking to. You don’t need to check with anyone. You have complete freedom to link to any URL you want to.
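In markup, that whole system amounts to something like a single line:

<a href="https://example.com/some-resource">Check this out!</a>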
But it’s that same simple system that makes the act of linking a gamble. If the URL you’ve linked to goes away, you’ll have no way of knowing.
As I scroll down my “on this day” page, I come across more and more dead links that have been snapped off from the fabric of the web.
If I stop and think about it, it can get quite dispiriting. Why bother making hyperlinks at all? It’s only a matter of time until those links break.
And yet I still keep linking. I still keep pointing to things and saying “Check this out!” even though I know that over a long enough timescale, there’s little chance that the link will hold.
In a sense, every hyperlink on the World Wide Web is a little act of hope. Even though I know that when I link to something, it probably won’t last, I still harbour that hope.
If hyperlinks are built on hope, and the web is made of hyperlinks, then in a way, the World Wide Web is quite literally made out of hope.
I like that.
The French have a wonderful phrase, l’esprit de l’escalier. It describes that feeling when you’ve stormed out of the room after an argument and you’re already halfway down the stairs when you think of the perfect quip that you wish you had said.
I had a similar feeling last week but instead of wishing I had said something, I was wishing I had kept my mouth shut.
I have an annoying tendency to want to get the last word in. I don’t have a problem coming up with barbed quips. My problem is wishing I could take them back.
This happened while I was hosting the conference portion of UX Fest last week. On the one hand, I don’t want the discussions to be dull so I try to come up with thought-provoking points to bring up. But take that too far and it gets ugly. There’s a fine line between asking probing questions and just being mean (I’m reminded of a headline in The Onion, “Devil’s Advocate Turns Out To Be Just An Asshole”).
Towards the end of the conference, there was a really good robust discussion underway. But I couldn’t resist getting in the last word. In the attempt to make myself look clever I ended up saying something hurtful and clumsy.
Fucking idiot.
I apologised, and it all worked out well in the end, but damn if I haven’t spent the last week on the staircase wishing I could turn back time and say …nothing.
I’ve always liked the way that web browsers are called “user agents” in the world of web standards. It’s such a succinct summation of what browsers are for, or more accurately who browsers are for. Users.
The term makes sense when you consider that the internet is for end users. That’s not to be taken for granted. This assertion is now enshrined in the Internet Engineering Task Force’s RFC 8890—like Magna Carta for the network age. It’s also a great example of prioritisation in a design principle:
When there is a conflict between the interests of end users of the Internet and other parties, IETF decisions should favor end users.
So when a web browser—ostensibly an agent for the user—prioritises user-hostile third parties, we get upset.
Google Chrome—ostensibly an agent for the user—is running an origin trial for Federated Learning of Cohorts (FLoC). This is not a technology that serves the end user. It is a technology that serves third parties who want to target end users. The most common use case is behavioural advertising, but targeting could be applied for more nefarious purposes.
The Electronic Frontier Foundation wrote an explainer last month: Google Is Testing Its Controversial New Ad Targeting Tech in Millions of Browsers. Here’s What We Know.
Let’s back up a minute and look at why this is happening. End users are routinely targeted today (for behavioural advertising and other use cases) through third-party cookies. Some user agents like Apple’s Safari and Mozilla’s Firefox are stamping down on this, disabling third party cookies by default.
Seeing which way the wind is blowing, Google’s Chrome browser will also disable third-party cookies at some time in the future (they’re waiting to shut that barn door until the fire is good’n’raging). But Google isn’t just in the browser business. Google is also in the ad tech business. So they still want advertisers to be able to target end users.
Yes, this is quite the cognitive dissonance: one part of the business is building a user agent while a different part of the company is working on ways of tracking end users. It’s almost as if one company shouldn’t simultaneously be the market leader in three separate industries: search, advertising, and web browsing. (Seriously though, I honestly think Google’s search engine would get better if it were split off from the parent company, and I think that Google’s web browser would also get better if it were a separate enterprise.)
Anyway, one possible way of tracking users without technically tracking individual users is to assign them to buckets, or cohorts of interest based on their browsing habits. Does that make you feel safer? Me neither.
That’s what Google is testing with the origin trial of FLoC.
If you, as an end user, don’t wish to be experimented on like this, there are a few things you can do:
That last decision is interesting. On the one hand, the origin trial is supposed to be on a small scale, hence the lack of European countries. On the other hand, the origin trial is “opt out” instead of “opt in” so that they can gather a big enough data set. Weird.
The plan is that if and when FLoC launches, websites would have to opt in to it. And when I say “plan”, I mean “best guess.”
I, for one, am filled with confidence that Google would never pull a bait-and-switch with their technologies.
In the meantime, if you’re a website owner, you have to opt your website out of the origin trial. You can do this by sending a server header. A meta element won’t do the trick, I’m afraid.
I’ve done it for my sites, which are served using Apache. I’ve got this in my .conf file:
<IfModule mod_headers.c>
	# Opt this site out of Google’s FLoC origin trial
	Header always set Permissions-Policy "interest-cohort=()"
</IfModule>
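If your sites are served with nginx rather than Apache, the equivalent would be something along these lines (a sketch; adapt it to your own configuration):

# Opt this site out of Google’s FLoC origin trial
add_header Permissions-Policy "interest-cohort=()" always;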
If you don’t have access to your server, tough luck. But if your site runs on WordPress, there’s a proposal to opt out of FLoC by default.
Interestingly, none of the Chrome devs that I follow are saying anything about FLoC. They’re usually quite chatty about proposals for potential standards, but I suspect that this one might be embarrassing for them. It was a similar situation with AMP. In that case, Google abused its monopoly position in search to blackmail publishers into using Google’s format. Now Google’s monopoly in advertising is compromising the integrity of its browser. In both cases, it makes it hard for Chrome devs to claim to have the web’s best interests at heart.
But one of the advantages of having a huge share of the browser market is that Chrome can just plough ahead and unilaterally implement whatever it wants even if there’s no consensus from other browser makers. So that’s what Google is doing with FLoC. But their justification for doing this doesn’t really work unless other browsers play along.
The problem is with step three. The theory is that if FLoC gives third parties what they need, then they won’t reach for fingerprinting. Even if there were any validity to that hypothesis, the only chance it has of working is if every browser joins in with FLoC. Otherwise ad tech companies are leaving money on the table. Can you seriously imagine third parties deciding that they just won’t target iPhone or iPad users any more? Remember that Safari is the only real browser on iOS so unless FLoC is implemented by Apple, third parties can’t reach those people …unless those third parties use fingerprinting instead.
Google have set up a situation where it looks like FLoC is going head-to-head with fingerprinting. But if FLoC becomes a reality, it won’t be instead of fingerprinting, it will be in addition to fingerprinting.
Google is quite right to point out that fingerprinting is A Very Bad Thing. But their concerns about fingerprinting sound very hollow when you see that Chrome is pushing ahead and implementing a raft of browser APIs that other browser makers quite rightly point out enable more fingerprinting: Battery Status, Proximity Sensor, Ambient Light Sensor and so on.
When it comes to those APIs, the message from Google is that fingerprinting is a solvable problem.
But when it comes to third party tracking, the message from Google is that fingerprinting is inevitable and so we must provide an alternative.
Which one is it?
Google’s flimsy logic for why FLoC is supposedly good for end users just doesn’t hold up. If they were honest and said that it’s to maintain the status quo of the ad tech industry, it would make much more sense.
The flaw in Google’s reasoning is the fundamental idea that tracking is necessary for advertising. That’s simply not true. Sacrificing user privacy is fundamental to behavioural advertising …but behavioural advertising is not the only kind of advertising. It isn’t even a very good kind of advertising.
FLoC seems to be Google’s way of saving a dying business. They are trying to keep targeted ads going by making them more “privacy-friendly” and “anonymous”. But behavioral profiling and targeted advertisement is not compatible with a privacy-respecting web.
What’s striking is that the very monopolies that make Google and Facebook the leaders in behavioural advertising would also make them the leaders in contextual advertising. Almost everyone uses Google’s search engine. Almost everyone uses Facebook’s social network. An advertising model based on what you’re currently looking at would keep Google and Facebook in their dominant positions.
Google made their first many billions exclusively on contextual advertising. Google now prefers to push the message that behavioral advertising based on personal data collection is superior but there is simply no trustworthy evidence to that.
I sincerely hope that Chrome will align with Safari, Firefox, Vivaldi, Brave, Edge and every other web browser. Everyone already agrees that fingerprinting is the real enemy. Imagine the combined brainpower that could be brought to bear on that problem if all browsers made user privacy a priority.
Until that day, I’m not sure that Google Chrome can be considered a user agent.
Remember how I said I was preparing an online conference talk? Well, I’m happy to say that not only is the talk prepared, but I’ve managed to successfully record it too.
If you want to see the finished results, come along to An Event Apart Spring Summit on April 19th. To sweeten the deal, I’ve got a discount code you can use when you buy any multi-day pass: AEAJEREMY.
Recording the talk took longer than I thought it would. I think it was because I said this:
It feels a bit different to prepare a talk for pre-recording rather than live delivery on stage. In fact, it feels less like preparing a conference talk and more like making a documentary.
Once I got that idea in my head, I think I became a lot fussier about the quality of the recording. “Would David Attenborough allow his documentaries to have the sound of a keyboard audibly being pressed? No! Start again!”
I’m pleased with the final results. And I’m really looking forward to the post-presentation discussion with questions from the audience. The talk gets provocative—and maybe a bit ranty—towards the end so it’ll be interesting to see how people react to that.
It feels good to have the presentation finished, but it also feels …weird. It’s like the feeling that conference organisers get once the conference is over. You spend all this time working towards something and then, one day, it’s in the past instead of looming in the future. It can make you feel kind of empty and listless. Maybe it’s the same for big product launches.
The two big projects I’ve been working on for the past few months were this talk and season two of the Clearleft podcast. The talk is in the can and so is the final episode of the podcast season, which drops tomorrow.
On the one hand, it’s nice to have my decks cleared. Nothing work-related to keep me up at night. But I also recognise the growing feeling of doubt and moodiness, just like the post-conference blues.
The obvious solution is to start another big project, something on the scale of making a brand new talk, or organising a conference, or recording another podcast season, or even writing a book.
The other option is to take a break for a while. Seeing as the UK government has extended its furlough scheme, maybe I should take full advantage of it. I went on furlough for a while last year and found it to be a nice change of pace.
Two-factor authentication is generally considered A Good Thing™️ when you’re logging in to some online service.
The word “factor” here basically means “kind” so you’re doing two kinds of authentication. Typical factors are:

- something you know (a password, for example),
- something you have (a phone, say), and
- something you are (a fingerprint or your face).
Asking for a password and an email address isn’t two-factor authentication. They’re two pieces of identification, but they’re the same kind (something you know). Same goes for supplying your fingerprint and your face: two pieces of information, but of the same kind (something you are).
None of these kinds of authentication are foolproof. All of them can change. All of them can be spoofed. But when you combine factors, it gets a lot harder for an attacker to breach both kinds of authentication.
The most common kind of authentication on the web is password-based (something you know). When a second factor is added, it’s often connected to your phone (something you have).
Every security bod I’ve talked to recommends using an authenticator app for this if that option is available. Otherwise there’s SMS—short message service, or text message to most folks—but SMS has a weakness. Because it’s tied to a phone number, technically you’re only proving that you have access to a SIM (subscriber identity module), not a specific phone. In the US in particular, it’s all too easy for an attacker to use social engineering to get a number transferred to a different SIM card.
Still, authenticating with SMS is an option as a second factor of authentication. When you first sign up to a service, as well as providing the first-factor details (a password and a username or email address), you also verify your phone number. Then when you subsequently attempt to log in, you input your password and on the next screen you’re told to input a string that’s been sent by text message to your phone number (I say “string” but it’s usually a string of numbers).
There’s an inevitable friction for the user here. But then, there’s a fundamental tension between security and user experience.
In the world of security, vigilance is the watchword. Users need to be aware of their surroundings. Is this web page being served from the right domain? Is this email coming from the right address? Friction is an ally.
But in the world of user experience, the opposite is true. “Don’t make me think” is the rallying cry. Friction is an enemy.
With SMS authentication, the user has to manually copy the numbers from the text message (received in a messaging app) into a form on a website (in a different app—a web browser). But if the messaging app and the browser are on the same device, it’s possible to improve the user experience without sacrificing security.
If you’re building a form that accepts a passcode sent via SMS, you can use the autocomplete attribute with a value of “one-time-code”. For a six-digit passcode, your input element might look something like this:
<input type="text" maxlength="6" inputmode="numeric" autocomplete="one-time-code">
With one small addition to one HTML element, you’ve saved users some tedious drudgery.
There’s one more thing you can do to improve security, but it’s not something you add to the HTML. It’s something you add to the text message itself.
Let’s say your website is example.com and the text message you send reads:
Your one-time passcode is 123456.
Add this to the end of the text message:
@example.com #123456
So the full message reads:
Your one-time passcode is 123456.
@example.com #123456
The first line is for humans. The second line is for machines. Using the @ symbol, you’re telling the device to only pre-fill the passcode for URLs on the domain example.com. Using the # symbol, you’re telling the device the value of the passcode. Combine this with autocomplete="one-time-code" in your form and the user shouldn’t have to lift a finger.
I’m fascinated by these kinds of emergent conventions in text messages. Remember that the @ symbol and # symbol in Twitter messages weren’t ideas from Twitter—they were conventions that users started and the service then adopted.
It’s a bit different with the one-time code convention as there is a specification brewing from representatives of both Google and Apple.
Tess is leading from the Apple side and she’s got another iron in the fire to make security and user experience play nicely together using the convention of the /.well-known directory on web servers.
You can add a URL for /.well-known/change-password which redirects to the form a user would use to update their password. Browsers and password managers can then use this information if they need to prompt a user to update their password after a breach. I’ve added this to The Session.
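Setting that up can be as simple as a redirect. On an Apache server like mine, it might look like this (the destination, /account/password, is just a made-up example; point it at wherever your actual password-updating form lives):

Redirect 302 /.well-known/change-password /account/password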
Oh, and on that page where users can update their password, the autocomplete attribute is your friend again:
<input type="password" autocomplete="new-password">
If you want them to enter their current password first, use this:
<input type="password" autocomplete="current-password">
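Putting those pieces together, a password-updating form might look something like this (a sketch; the action URL and field names are hypothetical):

<form action="/account/password" method="post">
  <label for="current">Current password</label>
  <input id="current" name="current" type="password" autocomplete="current-password">
  <label for="new">New password</label>
  <input id="new" name="new" type="password" autocomplete="new-password">
  <button>Update password</button>
</form>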
All of the things I’ve mentioned—the autocomplete attribute, origin-bound one-time codes in text messages, and a well-known URL for changing passwords—have good browser support. But even if they were only supported in one browser, they’d still be worth adding. These additions do absolutely no harm to browsers that don’t yet support them. That’s progressive enhancement.