Journal tags: form

The datalist element on iOS 26

The datalist element is all fucked up on iOS. Again.

I haven’t “upgraded” my iPhone to iOS 26 and I have no plans to. The whole Liquid Glass thing is literally off-putting. So I wouldn’t have known about the latest regression in Safari if a friend hadn’t texted me about the problem.

He was trying to do a search on The Session. He was looking for the tune, The Road To Town. He started typing this into the form on the home page of the site. He got as far as “The Road To”. That’s when the entire input was obscured by a suggestion from the associated datalist.

A screenshot of The Session on an iPhone during a search on the homepage. The search input is completely obscured by the text: The Road To Lisdoonvarna.

This is incredibly annoying and seems to be a pattern of behaviour for Safari. Features are supported …technically. But the implementation is so buggy as to be unusable.

I’ll probably have to do some user-agent sniffing, which I hate. And it won’t be enough to just sniff for Safari on iOS 26. Remember that every browser on iOS is just WebKit in a trenchcoat.

Time to file a bug and then wait God knows how long for an update to get rolled out.

Update: I filed a bug, but in the meantime it looks like user-agent sniffing is going to be impossible.

Why use React?

This isn’t a rhetorical question. I genuinely want to know why developers choose to build websites using React.

There are many possible reasons. Alas, none of them relate directly to user experience, other than a trickle-down justification: happy productive developers will make better websites. Citation needed.

It’s also worth mentioning that some people don’t choose to use React; its use is mandated by their workplace (like some other more recent technologies I could mention). In that situation, React meets my definition of enterprise software: any software that you use but that you yourself didn’t choose.

Inertia

By far the most common reason for choosing React today is inertia. If it’s what you’re comfortable with, you’d need a really compelling reason not to use it. That’s generally the reason behind usage mandates too. If we “standardise” on React, then it’ll make hiring more straightforward (though the reality isn’t quite so simple, as the React ecosystem has mutated and bifurcated over time).

And you know what? Inertia is a perfectly valid reason to choose a technology. If time is of the essence, and you know it’s going to take you time to learn a new technology, it makes sense to stick with what you know, even if it’s out of date. This isn’t just true of React, it’s true of any tech stack.

This would all be absolutely fine if React weren’t a framework that gets executed in browsers. Any client-side framework is a tax on the end user. They have to download, parse, and execute the framework in order for you to benefit.

But maybe React doesn’t need to run in the browser at all. That’s the promise of server-side rendering.

The front end

There used to be a fairly clear distinction between front-end development and back-end development. The front end consisted of HTML, CSS, and client-side JavaScript. The back end was anything you wanted as long as it could spit out those bits of the front end: PHP, Ruby, Python, or even just a plain web server with static files.

Then it became possible to write JavaScript on the back end. Great! Now you didn’t need to context-switch when you were scripting for the client or the server. But this blessing also turned out to be a bit of a curse.

When you’re writing code for the back end, some things matter more than others. File size, for example, isn’t really a concern. Your code can get really long and it probably won’t slow down the execution. And if it does, you can always buy your way out of the problem by getting a more powerful server.

On the front end, your code should have different priorities. File size matters, especially with JavaScript. The code won’t be executed on your server. It’s executed on all sorts of devices on all sorts of networks running all sorts of browsers. If things get slow, you can’t buy your way out of the problem because you can’t buy every single one of your users a new device and a new network plan.

Now that JavaScript can run on the server as well as the client, it’s tempting to just treat the code the same. It’s the same language after all. But the context really matters. Some JavaScript that’s perfectly fine to run on the server can be a resource hog on the client.

And this is where it gets interesting with React. Because most of the things people like about React still apply on the back end.

React developers

When React first appeared, it was touted as a front-end tool. State management and a near-magical virtual DOM were the main selling points.

Over time, that’s changed. The claimed speed benefits of the virtual DOM turned out to be just plain false. That just left state management.

But by that time, the selling points had changed. The component-based architecture turned out to be really popular. Developers liked JSX. A lot. Once you got used to it, it was a neat way to encapsulate little bits of functionality into building blocks that can be combined in all sorts of ways.

For the longest time, I didn’t realise this had happened. I was still thinking of React as a library like jQuery. But React is a framework like Rails or Django. As a developer, it’s where you do all your work. Heck, it’s pretty much your identity.

But whereas Rails or Django run on the back end, React runs on the front end …except when it doesn’t.

JavaScript can run on the server, which means React can run on the server. It’s entirely possible to have your React cake and eat it. You can write all of your code in React without serving up a single line of React to your users.

That’s true in theory. The devil is in the tooling.

Priorities

Next.js allows you to write in React and do server-side rendering. But it really, really wants to output React to the client as well.

By default, you get the dreaded hydration pattern—do all the computing on the server in JavaScript (yay!), serve up HTML straight away (yay! yay!) …and then serve up all the same JavaScript that’s on the server anyway (ya—wait, what?).

It’s possible to get Next.js to skip that last step, but it’s not easy. You’ll be battling it every step of the way.

Astro takes a very different approach. It will do everything it can to keep the client-side JavaScript to a minimum. Developers get to keep their beloved JSX authoring environment without penalising users.

Alas, the collective inertia of the “modern” development community is bound up in the React/Next/Vercel ecosystem. That’s a shame, because Astro shows us that it doesn’t have to be this way.

Switching away from using React on the front end doesn’t mean you have to switch away from using React on the back end.

Why use React?

The titular question I asked is too broad and naïve. There are plenty of reasons to use React, just as there are plenty of reasons to use WordPress, Eleventy, or any other technology that works on the back end. If it’s what you like or what you’re comfortable with, that’s reason enough.

All I really care about is the front end. I’m not going to pass judgment on anyone’s choice of server-side framework, as long as it doesn’t impact what you can do in the client. Like Harry says:

…if you’re going to use one, I shouldn’t be able to smell it.

Here’s the question I should be asking:

Why use React in the browser?

Because if the reason you’re using React is cultural—the whole team works in JSX, it makes hiring easier—then there’s probably no need to make your users download React.

If you’re making a single-page app, then …well, the first thing you should do is ask yourself if it really needs to be a single-page app. They should be the exception, not the default. But if you’re determined to make a single-page app, then I can see why state management becomes very important.

In that situation, try shipping Preact instead of React. As a developer, you’ll almost certainly notice no difference, but your users will appreciate the refreshing lack of bloat.
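If your build setup allows it, the swap can be as small as an alias in your bundler configuration. Here’s a minimal sketch assuming a webpack-style setup (the preact/compat package is designed to stand in for both react and react-dom):

// webpack.config.js: a sketch, not a drop-in configuration
module.exports = {
    resolve: {
        alias: {
            'react': 'preact/compat',
            'react-dom': 'preact/compat'
        }
    }
};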

Mostly though, I’d encourage you to investigate what you can do with vanilla JavaScript in the browser. I totally get why you’d want to hold on to React as an authoring environment, but don’t let your framework limit what you can do on the front end. If you use React on the client, you’re not doing your users any favours.

You can continue to write in React. You can continue to use JSX. You can continue to hire React developers. But keep it on your machine. For your users, make the most of what web browsers can do.

Once you keep React on the server, then a whole world of possibilities opens up on the client. Web browsers have become incredibly powerful in what they offer you. Don’t let React-on-the-client hold you back.

And if you want to know more about what web browsers are capable of today, come to Web Day Out in Brighton on Thursday, 12th March 2026.

Reasoning

Tim recently gave a talk at Smashing Conference in New York called One Step Ahead. Based on the slides, it looks like it was an excellent talk.

Towards the end, there’s a slide that could be the tagline for Web Day Out:

Betting on the browser is our best chance at long-term success.

Most of the talk focuses on two technologies that you can add to any website with just a couple of lines of code: view transitions and speculation rules.

I’m using both of them on The Session and I can testify to their superpowers—super-snappy navigations with smooth animations.

Honestly, that takes care of 95% of the reasons for building a single-page app (the other 5% would be around managing state, which most sites—e-commerce, publishing, whatever—don’t need to bother with). Instead build a good ol’-fashioned website with pages of HTML linked together, then apply view transitions and speculation rules.
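For cross-document view transitions, the opt-in really is a couple of lines. This is a sketch of the general pattern rather than the exact code on The Session:

@view-transition {
    navigation: auto;
}

Speculation rules are similarly declarative: a single script element containing a little JSON.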

I mean, why wouldn’t you do that?

That’s not a rhetorical question. I’m genuinely interested in the reasons why people would reject a simple declarative solution in favour of the complexity of doing everything with a big JavaScript framework.

One reason might be browser support. After all, both view transitions and speculation rules are designed to be used as progressive enhancements, regardless of how many browsers happen to support them right now. If you want to attempt to have complete control, I understand why you might reach for the single-page app model, even if it means bloating the initial payload.

But think about that mindset for a second. Rather than reward the browsers that support modern features, you would instead be punishing them. You’d be treating every browser the same. Instead of taking advantage of the amazing features that some browsers have, you’d rather act as though they’re no different to legacy browsers.

I kind of understand the thinking behind that. You assume a level playing field by treating every browser as though they’re Internet Explorer. But what a waste! You ship tons of unnecessary code to perfectly capable browsers.

That could be the tagline for React.

Harry Roberts is speaking at Web Day Out

I was going to save this announcement for later, but I’m just too excited: Harry Roberts will be speaking at Web Day Out!

Goddamn, that’s one fine line-up, and it isn’t even complete yet! Get your ticket if you haven’t already.

There’s a bit of a story behind the talk that Harry is going to give…

Earlier this year, Harry posted a most excellent screed in which he said:

The web as a platform is a safe bet. It’s un-versioned by design. That’s the commitment the web makes to you—take advantage of it.

  • Opt into web platform features incrementally;
  • Embrace progressive enhancement to build fast, reliable applications that adapt to your customers’ context;
  • Write code that leans into the browser, not away from it.

Yes! Exactly!

Thing is, Harry posted this on LinkedIn. My indieweb sensibilities were affronted. So I harangued him:

You should blog this, Harry

My pestering paid off with an excellent blog post on Harry’s own site called Build for the Web, Build on the Web, Build with the Web:

The beauty of opting into web platform features as they become available is that your site becomes contextual. The same codebase adapts into its environment, playing to its strengths, rather than trying to build and ship your own environment from the ground up. Meet your users where they are.

That’s a pretty neat summation of the agenda for Web Day Out. So I thought, “Hmm …if I was able to pester Harry to turn a LinkedIn post into a really good blog post, I wonder if I could pester him to turn that blog post into a talk?”

I threw down the gauntlet. Harry accepted the challenge.

I’m sure you’re already familiar with Harry’s excellent work, but if you’re not, he’s basically Mr. Web Performance. That’s why I’m so excited to have him speak at Web Day Out—I want to hear the business case for leaning into what web browsers can do today, and he is most certainly the best person to bring receipts.

You won’t want to miss this, so be sure to get your ticket now; it’s only £225+VAT.

If you’re not ready to commit just yet, but you want to hear about more speaker announcements like this, you can sign up to the mailing list.

The Invisibles

When I was talking about monitoring web performance yesterday, I linked to the CrUX data for The Session.

CrUX is a contraction of Chrome User Experience Report. CrUX just sounds better than CEAR.

It’s data gathered from actual Chrome users worldwide. It can be handy as part of a balanced performance-monitoring diet, but it’s always worth remembering that it only shows a subset of your users: those on Chrome.

The actual CrUX data is imprisoned in some hellish Google interface so some kindly people have put more humane interfaces on it. I like Calibre’s CrUX tool as well as Treo’s.

What’s nice is that you can look at the numbers for any reasonably popular website, not just your own. Lest I get too smug about the performance metrics for The Session, I can compare them to the numbers for Wikipedia or the BBC. Both of those sites are made by people who prioritise speed, and it shows.

If you scroll down to the numbers on navigation types, you’ll see something interesting. Across the board, whether it’s The Session, Wikipedia, or the BBC, the BFcache—back/forward cache—is used around 16% to 17% of the time. This is when users use the back button (or forward button).

Unless you do something to stop them, browsers will make sure that those navigations are super speedy. You might inadvertently be sabotaging the BFcache if you’re sending a Cache-Control: no-store header or if you’re using an unload event handler in JavaScript.
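If you do need to run clean-up code when someone leaves a page, pagehide is the BFcache-friendly alternative to unload. A quick sketch (saveState is a placeholder for whatever work you need to do):

// Avoid this: an unload handler makes the page ineligible for the BFcache
// window.addEventListener('unload', saveState);

// Prefer pagehide, which also fires when the page enters the BFcache
window.addEventListener('pagehide', saveState);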

I guess it’s unsurprising the BFcache numbers are relatively consistent across three different websites. People are people, whatever website they’re browsing.

Where it gets interesting is in the differences. Take a look at pre-rendering. It’s 4% for the BBC and just 0.4% for Wikipedia. But on The Session it’s a whopping 35%!

That’s because I’m using speculation rules. They’re quite straightforward to implement and they pair beautifully with full-page view transitions for a slick, speedy user experience.

It doesn’t look like Wikipedia or the BBC are using speculation rules at all, which kind of surprises me.

Then again, because they’re a hidden technology I can understand why they’d slip through the cracks.

On any web project, I think it’s worth having a checklist of The Invisibles—things that aren’t displayed directly in the browser, but that can make a big difference to the user experience.

Some examples:

  • caching headers
  • the back/forward cache
  • speculation rules
  • service workers

If you’ve got a checklist like that in place, you can at least ask “Whose job is this?” All too often, these things are missing because there’s no clarity on who’s responsible for them. They’re sorta back-end and sorta front-end.

Databasing

A few years back, Craig wrote a great piece called Fast Software, the Best Software:

Speed in software is probably the most valuable, least valued asset. To me, speedy software is the difference between an application smoothly integrating into your life, and one called upon with great reluctance.

Nelson Elhage said much the same thing in his reflections on software performance:

I’ve really come to appreciate that performance isn’t just some property of a tool independent from its functionality or its feature set. Performance — in particular, being notably fast — is a feature in and of its own right, which fundamentally alters how a tool is used and perceived.

Or, as Robin put it:

I don’t think a website can be good until it’s fast.

Those sentiments underpin The Session. Speed is as much a priority as usability, accessibility, privacy, and security.

I’m fortunate in that the site doesn’t have an underlying business model at odds with these priorities. I’m under no pressure to add third-party code that would track users and slow down the website.

When it comes to making fast websites, most of the obstacles are put in place by front-end development, mostly JavaScript. I’ve been pretty ruthless in my pursuit of speed on The Session, removing as much JavaScript as possible. On the bigger pages, the bottleneck now is DOM size rather than parsing and executing JavaScript. As bottlenecks go, it’s not the worst.

But even with all my core web vitals looking good, I still have an issue that can’t be solved with front-end optimisations. Time to first byte (or TTFB if you’d rather use an initialism that takes just as long to say as the words it’s replacing).

When it comes to reducing the time to first byte, there are plenty of factors that are out of my control. But in the case of The Session, something I do have control over is the server set-up, specifically the database.

Now I could probably solve a lot of my speed issues by throwing money at the problem. If I got a bigger better server with more RAM and CPUs, I’m pretty sure it would improve the time to first byte. But my wallet wouldn’t thank me.

(It’s still worth acknowledging that this is a perfectly valid approach when it comes to back-end optimisation that isn’t available on the front end; you can’t buy all your users new devices.)

So I’ve been spending some time really getting to grips with the MySQL database that underpins The Session. It was already normalised and indexed to the hilt. But perhaps there were server settings that could be tweaked.

This is where I have to give a shout-out to Releem, a service that is exactly what I needed. It monitors your database and then over time suggests configuration tweaks, explaining each one along the way. It’s a seriously good service that feels as empowering as it is useful.
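To give a flavour of the kind of settings involved, here’s the sort of thing that gets tweaked. These are illustrative variables with placeholder values, not Releem’s actual recommendations for The Session:

# my.cnf (illustrative values only)
[mysqld]
innodb_buffer_pool_size = 2G    # keep the working set of data in memory
table_open_cache        = 4000  # avoid constantly re-opening tables
max_connections         = 200   # match the web server concurrency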

I wish I could afford to use Releem on an ongoing basis, but luckily there’s a free trial period that I could avail of.

Thanks to Releem, I was also able to see which specific queries were taking the longest. There was one in particular that had always bothered me…

If you’re a member of The Session, then you can see any activity related to something you submitted in the past. Say, for example, that you added a tune or an event to the site a while back. If someone else comments on that, or bookmarks it, then that shows up in your “notifications” feed.

That’s all well and good but under the hood it was relying on a fairly convoluted database query to a very large table (a table that’s effectively a log of all user actions). I tried all sorts of query optimisations but there always seemed to be some combination of circumstances where the request would take ages.

For a while I even removed the notifications functionality from the site, hoping it wouldn’t be missed. But a couple of people wrote to ask where it had gone so I figured I ought to reinstate it.

After exhausting all the technical improvements, I took a step back and thought about the purpose of this particular feature. That’s when I realised that I had been thinking about the database query too literally.

The results are ordered in reverse chronological order, which makes sense. They’re also chunked into groups of ten, which also makes sense. But I had allowed for the possibility that you could navigate through your notifications back to the very start of your time on the site.

But that’s not really how we think of notifications in other settings. What would happen if I were to limit your notifications only to activity in, say, the last month?

Boom! Instant performance improvement by orders of magnitude.
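In SQL terms, the change amounts to one extra condition on the query. This is a sketch with hypothetical table and column names, not the actual schema of The Session:

SELECT *
FROM activity_log
WHERE owner_id = ?
AND created >= NOW() - INTERVAL 1 MONTH
ORDER BY created DESC
LIMIT 10;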

I guess there’s a lesson there about switching off the over-analytical side of my brain and focusing on actual user needs.

Anyway, thanks to the time I’ve spent honing the database settings and optimising the longest queries, I’ve reduced the latency by quite a bit. I’m hoping that will result in an improvement to the time to first byte.

Time—and monitoring tools—will tell.

Uses

I don’t use large language models. My objection to using them is ethical. I know how the sausage is made.

I wanted to clarify that. I’m not rejecting large language models because they’re useless. They can absolutely be useful. I just don’t think the usefulness outweighs the ethical issues in how they’re trained.

Molly White came to the same conclusion:

The benefits, though extant, seem to pale in comparison to the costs.

Rich has similar thoughts:

What I do know is that I find LLMs useful on occasion, but every time I use one I die a little inside.

I genuinely look forward to being able to use a large language model with a clear conscience. Such a model would need to be trained ethically. When we get a free-range organic large language model I’ll be the first in line to use it. Until then, I’ll abstain. Remember:

You don’t get companies to change their behaviour by rewarding them for it. If you really want better behaviour from the purveyors of generative tools, you should be boycotting the current offerings.

Still, in anticipation of an ethical large language model someday becoming reality, I think it’s good for me to have an understanding of which tasks these tools are good at.

Prototyping seems like a good use case. My general attitude to prototyping is the exact opposite of my attitude to production code: use absolutely any tool you want and prioritise speed over quality.

When it comes to coding in general, I think Laurie is really onto something when he says:

Is what you’re doing taking a large amount of text and asking the LLM to convert it into a smaller amount of text? Then it’s probably going to be great at it. If you’re asking it to convert into a roughly equal amount of text it will be so-so. If you’re asking it to create more text than you gave it, forget about it.

In other words, despite what the hype says, these tools are far better at transforming than they are at generating.

Iris Meredith goes deeper into this distinction between transformative and compositional work:

Compositionality relies (among other things) on two core values or functions: choice and precision, both of which are antithetical to LLM functioning.

My own take on this is that transformative work is often the drudge work—take this data dump and convert it to some other format; take this mock-up and make a disposable prototype. I want my tools to help me with that.

But compositional work that relies on judgement, taste, and choice? Not only would I not use a large language model for that, it’s exactly the kind of work that I don’t want to automate away.

Transformative work is done with broad brushstrokes. Compositional work is done with a scalpel.

Large language models are big messy brushes, not scalpels.

The closing talks at UX London 2025

It’s just over one month until UX London. You should grab a ticket if you haven’t already!

The format of UX London is quite special. While the focus of each day is different—discovery, design, and delivery—each day unfolds like this…

There are four talks in the morning. You get your brain filled with ideas and learn from fantastic speakers. It’s a single track—everyone’s getting the same shared experience.

Then, after lunch, you choose from one of four workshops. Whatever you choose, it’s going to be hands-on. You can leave your laptop at home.

A day of listening to talks could get exhausting. A workshop that lasts all day could be even more exhausting. But somehow by splitting the day between both activities, the energy level is just right!

That said, we don’t want the day to end with everyone spread across four different workshop rooms. That’s why there’s one final talk at the end of each day.

These closing talks are a bit different to the morning talks. Whereas the focus of the morning talks is on practical skills that you can apply straight away, the closing talks are an opportunity to sit back and have your mind expanded. They’ll be fun and thought-provoking.

Paula Zuccotti is closing out day one with a talk about two of her projects: Every Thing We Touch and Future Archeology:

This talk invites audiences to reconsider the meaning of the objects they encounter every day and reflect on what their possessions might reveal about who we are and what we value, both now and in the years to come.

Sarah Hyndman will wrap up day two with a fun interactive talk about your senses:

Join a live expedition into our inner world to explore why we see, feel and remember.

Finally, Rachel Coldicutt is going to finish UX London with a rallying cry:

Introducing the Society of Hopeful Technologists and discussing how, in modern technology development, your practice is probably more political than you realise.

I can’t wait! Get yourself a ticket for a day or for all three days.

And as a little thank you for tolerating my excitement, use the discount code JOINJEREMY to get 20% off your ticket.

Denial

The Wikimedia Foundation, stewards of the finest projects on the web, have written about the hammering their servers are taking from the scraping bots that feed large language models.

Our infrastructure is built to sustain sudden traffic spikes from humans during high-interest events, but the amount of traffic generated by scraper bots is unprecedented and presents growing risks and costs.

Drew DeVault puts it more bluntly, saying Please stop externalizing your costs directly into my face:

Over the past few months, instead of working on our priorities at SourceHut, I have spent anywhere from 20-100% of my time in any given week mitigating hyper-aggressive LLM crawlers at scale.

And no, a robots.txt file doesn’t help.

If you think these crawlers respect robots.txt then you are several assumptions of good faith removed from reality. These bots crawl everything they can find, robots.txt be damned.

Free and open source projects are particularly vulnerable. FOSS infrastructure is under attack by AI companies:

LLM scrapers are taking down FOSS projects’ infrastructure, and it’s getting worse.

You try to do the right thing by making knowledge and tools freely available. This is how you get repaid. AI bots are destroying Open Access:

There’s a war going on on the Internet. AI companies with billions to burn are hard at work destroying the websites of libraries, archives, non-profit organizations, and scholarly publishers, anyone who is working to make quality information universally available on the internet.

My own experience with The Session bears this out.

Ars Technica has a piece on this: Open source devs say AI crawlers dominate traffic, forcing blocks on entire countries.

So does MIT Technology Review: AI crawler wars threaten to make the web more closed for everyone.

When we talk about the unfair practices and harm done by training large language models, we usually talk about it in the past tense: how they were trained on other people’s creative work without permission. But this is an ongoing problem that’s just getting worse.

The worst of the internet is continuously attacking the best of the internet. This is a distributed denial of service attack on the good parts of the World Wide Web.

If you’re using the products powered by these attacks, you’re part of the problem. Don’t pretend it’s cute to ask ChatGPT for something. Don’t pretend it’s somehow being technologically open-minded to continuously search for nails to hit with the latest “AI” hammers.

If you’re going to use generative tools powered by large language models, don’t pretend you don’t know how your sausage is made.

Style legend

There’s a new proposal for giving developers more control over styling form controls. I like it.

It’s clearly based on the fantastic work being done by the Open UI group on the select element. The proposal suggests that authors can opt in to the new styling possibilities by declaring:

appearance: base;

So basically the developer is saying “I know what I’m doing—I’m taking the controls.” But browsers can continue to ship their default form styles. No existing content will break.

The idea is that once the developer has opted in, they can then style a number of pseudo-elements.

This proposal would apply to pretty much all the form controls you can think of: all the input types, along with select, progress, meter, buttons and more.

But there’s one element more that I wish were on the list:

legend

I know, technically it’s not a form control, but legend and fieldset are only ever used within forms.

The legend element is notoriously annoying to style. So a lot of people just don’t bother using it, which is a real shame. It’s like we’re punishing people for doing the right thing.

Wouldn’t it be great if you, as a developer, had the option of saying “I know what I’m doing—I’m taking the controls”:

legend {
  appearance: base;
}

Imagine if that nuked the browser’s weird default styles, effectively turning the element into a span or div as far as styling is concerned. Then you could style it however you wanted. But crucially, if browsers shipped this, no existing content would break.

The shitty styling situation for legend (and its parent fieldset) is one of those long-standing annoyances that seems to have fallen down the back of the sofa of browser vendors. No one’s going to spend time working on it when there are more important newer features to ship. That’s why I’d love to see it sneak in to this new proposal for styling form controls.

I was in Amsterdam last week. Just like last year I was there to help out Vasilis’s students with a form-based assignment:

They’re given a PDF inheritance-tax form and told to convert it for the web.

Yes, all the excitement of taxes combined with the thrilling world of web forms.

(Side note: this time they were told to style it using the design system from the Dutch railway because the tax office was getting worried that they were making phishing sites.)

I saw a lot of the same challenges again. I saw how students wished they could specify a past date or a future date in a date picker without using JavaScript. And I saw them lamenting the time they spent styling legends that worked across all browsers.

Right now, Mason Freed has an open issue on the new proposal with his suggestion to add some more elements to consider. Both legend and fieldset are included. That gets a thumbs-up from me.

The web on mobile

Here’s a post outlining all the great things you can do in mobile web browsers today: Your App Should Have Been A Website (And Probably Your Game Too):

Today’s browsers are powerhouses. Notifications? Check. Offline mode? Check. Secure payments? Yep, they’ve got that too. And with technologies like WebAssembly and WebGPU, web games are catching up to native-level performance. In some cases, they’re already there.

This is all true. But this post from John Gruber is equally true: One Bit of Anecdata That the Web Is Languishing Vis-à-Vis Native Mobile Apps:

I won’t hold up this one experience as a sign that the web is dying, but it sure seems to be languishing, especially for mobile devices.

As John points out, the problems aren’t technical:

There’s absolutely no reason the mobile web experience shouldn’t be fast, reliable, well-designed, and keep you logged in. If one of the two should suck, it should be the app that sucks and the website that works well. You shouldn’t be expected to carry around a bundle of software from your utility company in your pocket. But it’s the other way around.

He’s right. It makes no sense, but this is the reality.

Ten or fifteen years ago, the gap between the web and native apps on mobile was entirely technical. There were certain things that you just couldn’t do in web browsers. That’s no longer the case. The web caught up quite a while back.

But the experience of using websites on a mobile device is awful. Never mind the terrible performance penalties incurred by unnecessary frameworks and libraries like React and its ilk, there’s the constant game of whack-a-mole with banners and overlays. What’s just about bearable in a large desktop viewport becomes intolerable on a small screen.

This is not a technical problem. This doesn’t get solved by web standards. This is a cultural problem.

First of all, there’s the business culture. If your business model depends on tracking people or pushing newsletter sign-ups, then it’s inevitable that your website will be shite on mobile.

Mind you, if your business model depends on tracking people, you’re more likely to try to push people to download your native app. Like Cory Doctorow says:

50% of web users are running ad-blockers. 0% of app users are running ad-blockers, because adding a blocker to an app requires that you first remove its encryption, and that’s a felony (Jay Freeman calls this ‘felony contempt of business-model’).

Matt May brings up the same point in his guide, How to grey-rock Meta:

Remove Meta apps from your devices and use only the mobile web versions. Mobile apps have greater access to your personal data, provided the app requests those privileges, and Facebook and Instagram in particular (more so than WhatsApp, another Meta property) request the vast majority of those privileges. This includes precise GPS data on where you are, whether or not you are using the app.

Ironically, it’s the strength of the web—and web browsers—that has led to such shitty mobile web experiences. The pretty decent security model on the web means that sites have to pester you.

Part of the reason why you don’t see the same egregious over-use of pop-ups and overlays in native apps is that they aren’t needed. If you’ve installed the app, you’re already being tracked.

But when I describe the dreadful UX of most websites on mobile as a cultural problem, I don’t just mean business culture.

Us, the people who make websites, designers and developers, we’re responsible for this too.

For all our talk of mobile-first design for the last fifteen years, we never really meant it, did we? Sure, we use media queries and other responsive techniques, but all we’ve really done is make sure that a terrible experience fits on the screen.

As developers, I’m sure we can tell ourselves all sorts of fairy tales about why it’s perfectly justified to make users on mobile networks download React, Tailwind, and megabytes more of third-party code.

As designers, I’m sure we can tell ourselves all sorts of fairy tales about why intrusive pop-ups and overlays are the responsibility of some other department (as though users make any sort of distinction).

Worst of all, we’ve spent the last fifteen years teaching users that if they want a good experience on their mobile device, they should look in an app store, not on the web.

Ask anyone about their experience of using websites on their mobile device. They’ll tell you plenty of stories of how badly it sucks.

It doesn’t matter that the web is the perfect medium for just-in-time delivery of information. It doesn’t matter that web browsers can now do just about everything that native apps can do.

In many ways, I wish this were a technical problem. At least then we could lobby for some technical advancement that would fix this situation.

But this is not a technical problem. This is a people problem. Specifically, the people who make websites.

We fucked up. Badly. And I don’t see any signs that things are going to change anytime soon.

But hey, websites on desktop are just great!

Making the new Salter Cane website

With the release of a new Salter Cane album I figured it was high time to update the design of the band’s website.

Here’s the old version for reference. As you can see, there’s a connection there in some of the design language. Even so, I decided to start completely from scratch.

I opened up a text editor and started writing HTML by hand. Same for the CSS. No templates. No build tools. No pipeline. Nothing. It was a blast!

And lest you think that sounds like a wasteful way of working, I pretty much had the website done in half a day.

Partly that’s because you can do so much with so little in CSS these days. Custom properties for colours, spacing, and fluid typography (thanks to Utopia). Logical properties. View transitions. None of this takes much time at all.

Because I was using custom properties, it was a breeze to add a dark mode with prefers-color-scheme. I think I might like the dark version more than the default.
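The pattern looks something like this (placeholder colour values, not the band’s actual palette):

:root {
    --background-colour: #f4f4f4;
    --text-colour: #1a1a1a;
}
@media (prefers-color-scheme: dark) {
    :root {
        --background-colour: #1a1a1a;
        --text-colour: #f4f4f4;
    }
}
body {
    background-color: var(--background-colour);
    color: var(--text-colour);
}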

The final stylesheet is pretty short. I didn’t bother with any resets. Browsers are pretty consistent with their default styles nowadays. As long as you’ve got some sensible settings on your body element, the cascade will take care of a lot.

There’s one little CSS trick I think is pretty clever…

The background image is this image. As you can see, it’s a rectangle that’s wider than it is tall. But the web pages are rectangles that are taller than they are wide.

So how should I position the background image? Centred? Anchored to the top? Anchored to the bottom?

If you open up the website in Chrome (or Safari Technology Preview), you’ll see that the background image is anchored to the top. But if you scroll down you’ll see that the background image is now anchored to the bottom. The background position has changed somehow.

This isn’t just on the home page. On any page, no matter how tall it is, the background image is anchored to the top when the top of the document is in the viewport, and it’s anchored to the bottom when you reach the bottom of the document.

In the past, this kind of thing might’ve been possible with some clever JavaScript that measured the height of the document and updated the background position every time a scroll event is triggered.

But I didn’t need any JavaScript. This is a scroll-driven animation made with just a few lines of CSS.

@keyframes parallax {
    from {
        background-position: top center;
    }
    to {
        background-position: bottom center;
    }
}
@media (prefers-reduced-motion: no-preference) {
    html {
        animation: parallax auto ease;
        animation-timeline: scroll();
    }
}

This works as a nice bit of progressive enhancement: by default the background image stays anchored to the top of the viewport, which is fine.

Once the site was ready, I spent a bit more time sweating some details, like the responsive images on the home page.

But the biggest performance challenge wasn’t something I had direct control over. There’s a Spotify embed on the home page. Ain’t no party like a third party.

I could put loading="lazy" on the iframe but in this case, it’s pretty close to the top of the document so it’s still going to start loading at the same time as some of my first-party assets.

I decided to try a little JavaScript library called “lazysizes”. Normally this would ring alarm bells for me: solving a problem with third-party code by adding …more third-party code. But in this case, it really did the trick. The library is loading asynchronously (so it doesn’t interfere with the more important assets) and only then does it start populating the iframe.
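If you want to do something similar, the markup change is small. This sketch assumes lazysizes’ default conventions (swap src for data-src and add the lazyload class); the embed URL is a placeholder:

<script src="/js/lazysizes.min.js" async></script>

<iframe class="lazyload"
    data-src="https://open.spotify.com/embed/album/abc123"
    title="Spotify player"></iframe>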

This made a huge difference. The core web vitals went from being abysmal to being perfect.

I’m pretty pleased with how the new website turned out.

Progressively enhancing maps

The Session has been online for over 20 years. When you maintain a site for that long, you don’t want to be relying on third parties—it’s only a matter of time until they’re no longer around.

Some third party APIs are unavoidable. The Session has maps for sessions and other events. When people add a new entry, they provide the address but then I need to get the latitude and longitude. So I have to use a third-party geocoding API.

My code is like a lesson in paranoia: I’ve built in the option to switch between multiple geocoding providers. When one of them inevitably starts enshittifying their service, I can quickly move on to another. It’s like having a “go bag” for geocoding.

Things are better on the client side. I’m using other people’s JavaScript libraries—like the brilliant abcjs—but at least I can self-host them.

I’m using Leaflet for embedding maps. It’s a great little library built on top of OpenStreetMap data.

A little while back I linked to a new project called OpenFreeMap. It’s a mapping provider where you even have the option of hosting the tiles yourself!

For now, I’m not self-hosting my map tiles (yet!), but I did want to switch to OpenFreeMap’s tiles. They’re vector-based rather than bitmap, so they’re lovely and crisp.

But there’s an issue.

I can use OpenFreeMap with Leaflet, but to do that I also have to use the MapLibre GL library. But whereas Leaflet is 148K of JavaScript, MapLibre GL is 800K! Yowzers!

That’s mahoosive by the standards of The Session’s performance budget. I’m not sure the loveliness of the vector maps is worth increasing the JavaScript payload by so much.

But this doesn’t have to be an either/or decision. I can use progressive enhancement to get the best of both worlds.

If you land straight on a map page on The Session for the first time, you’ll get the old-fashioned bitmap map tiles. There’s no MapLibre code.

But if you browse around The Session and then arrive on a map page, you’ll get the lovely vector maps.

Here’s what’s happening…

The maps are embedded using an HTML web component called embed-map. The fallback is a static image between the opening and closing tags. The web component then loads up Leaflet.
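The markup looks something like this. The attribute names are my guess at the shape of it, not the exact code on The Session:

<embed-map latitude="53.3498" longitude="-6.2603">
    <img src="/images/maps/static-fallback.png" alt="Map showing the location of the session">
</embed-map>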

Here’s where the enhancement comes in. When the web component is initiated (in its connectedCallback method), it uses the Cache API to see if MapLibre has been stored in a cache. If it has, it loads that library:

caches.match('/path/to/maplibre-gl.js')
.then( responseFromCache => {
    if (responseFromCache) {
        // load maplibre-gl.js
    }
});

Then when it comes to drawing the map, I can check for the existence of the maplibreGL object. If it exists, I can use OpenFreeMap tiles. Otherwise I use the old Leaflet tiles.
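That check is as simple as testing for the library’s global object (assuming the distributed build of MapLibre GL, which exposes maplibregl on window):

if (window.maplibregl) {
    // MapLibre GL was cached and loaded: use OpenFreeMap's vector tiles
} else {
    // fall back to Leaflet with the old bitmap tiles
}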

But how does the MapLibre library end up in a cache? That’s thanks to the service worker script.

During the service worker’s install event, I give it a list of static files to cache: CSS, JavaScript, and so on. That includes third-party libraries like abcjs, Leaflet, and now MapLibre GL.
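Here’s a minimal sketch of that install handler. The cache name and file paths are placeholders:

addEventListener('install', event => {
    event.waitUntil(
        caches.open('static-assets-v1')
        .then(cache => cache.addAll([
            '/js/abcjs.min.js',
            '/js/leaflet.min.js',
            '/js/maplibre-gl.js'
        ]))
    );
});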

Crucially this caching happens off the main thread. It happens in the background and it won’t slow down the loading of whatever page is currently being displayed.

That’s it. If the service worker installation works as planned, you’ll get the nice new vector maps. If anything goes wrong, you’ll get the older version.

By the way, it’s always a good idea to use a service worker and the Cache API to store your JavaScript files. As you know, JavaScript is disproportionately expensive when it comes to performance: not only does the JavaScript file have to be downloaded, it then has to be parsed and compiled. But JavaScript stored in a cache during a service worker’s install event is already parsed and compiled.

Archives

Speaking of serendipity, not long after I wrote about making a static archive of The Session for people to download and share, I came across a piece by Alex Chan about using static websites for tiny archives.

The use-case is slightly different—this is about personal archives, like paperwork, screenshots, and bookmarks. But we both came up with the same process:

I’m deliberately going low-scale, low-tech. There’s no web server, no build system, no dependencies, and no JavaScript frameworks.

And we share the same hope:

Because this system has no moving parts, and it’s just files on a disk, I hope it will last a long time.

You should read the whole thing, where Alex describes all the other approaches they took before settling on plain ol’ HTML files in a folder:

HTML is low maintenance, it’s flexible, and it’s not going anywhere. It’s the foundation of the entire web, and pretty much every modern computer has a web browser that can render HTML pages. These files will be usable for a very long time – probably decades, if not more.

I’m enjoying this approach, so I’m going to keep using it. What I particularly like is that the maintenance burden has been essentially zero – once I set up the initial site structure, I haven’t had to do anything to keep it working.

They also talk about digital preservation:

I’d love to see static websites get more use as a preservation tool.

I concur! And it’s particularly interesting for Alex to be making this observation in the context of working with the Flickr Foundation. That’s where they’re experimenting with the concept of a data lifeboat:

What should we do when a digital service sinks?

This is something that George spoke about at the final dConstruct in 2022. You can listen to the talk on the dConstruct archive.

content-visibility in Safari

Earlier this year I wrote about some performance improvements to The Session using the content-visibility property in CSS.

If you say content-visibility: auto you’re telling the browser not to bother calculating the layout and paint for an element until it needs to. But you need to combine it with the contain-intrinsic-block-size property so that the browser knows how much space to leave for the element.
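The pairing looks something like this. The selector and the size are placeholders, not the actual values used on The Session:

.long-list-item {
    content-visibility: auto;
    contain-intrinsic-block-size: auto 480px;
}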

I mentioned the browser support:

Right now content-visibility is only supported in Chrome and Edge. But that’s okay. This is a progressive enhancement. Adding this CSS has no detrimental effect on the browsers that don’t understand it (and when they do ship support for it, it’ll just start working).

Well, that’s happened! Safari 18 supports content-visibility. I didn’t have to do a thing and it just started working.

But …I think I’ve discovered a little bug in Safari’s implementation.

(I say I think it’s a bug with the browser because, like Jim, I’ve made the mistake in the past of thinking I had discovered a browser bug when in fact it was something caused by a browser extension. And when I say “in the past”, I mean yesterday.)

So here’s the issue: if you apply content-visibility: auto to an element that contains an SVG, and that SVG contains a text element, then Safari never paints that text to the screen.

To see an example, take a look at the fourth setting of Cooley’s reel on The Session archive. There’s a text element with the word “slide” (actually the text is inside a tspan element inside a text element). On Safari, that text never shows up.

I’m using a link to the archive of The Session I created recently rather than the live site because on the live site I’ve removed the content-visibility declaration for Safari until this bug gets resolved.

I’ve also created a reduced test case on Codepen. The only HTML is the element containing the SVGs. The only CSS—apart from the content-visibility stuff—is just a little declaration to push the content below the viewport so you have to scroll it into view (which is when the bug happens).

I’ve filed a bug report. I know it’s a fairly niche situation, but there are some other issues with Safari’s implementation of content-visibility so it’s possible that they’re all related.

Preventing automated sign-ups

The Session goes through periods of getting spammed with automated sign-ups. I’m not sure why. It’s not like they do anything with the accounts. They’re just created and then they sit there (until I delete them).

In the past I’ve dealt with them in an ad-hoc way. If the sign-ups were all coming from the same IP addresses, I could block them. If the sign-ups showed some pattern in the usernames or emails, I could use that to block them.

Recently though, there was a spate of sign-ups that didn’t have any patterns, all coming from different IP addresses.

I decided it was time to knuckle down and figure out a way to prevent automated sign-ups.

I knew what I didn’t want to do. I didn’t want to put any obstacles in the way of genuine sign-ups. There’d be no CAPTCHAs or other “prove you’re a human” shite. That’s the airport security model: inconvenience everyone to stop a tiny number of bad actors.

The first step I took was the bare minimum. I added two form fields—called “wheat” and “chaff”—that are randomly generated every time the sign-up form is loaded. There’s a connection between those two fields that I can check on the server.

Here’s how I’m generating the fields in PHP:

$saltstring = 'A string known only to me.';
$wheat = base64_encode(openssl_random_pseudo_bytes(16));
$chaff = password_hash($saltstring.$wheat, PASSWORD_BCRYPT);

See how the fields are generated from a combination of random bytes and a string of characters never revealed on the client? To keep it from going stale, this string—the salt—includes something related to the current date.
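Both values then travel with the form as hidden fields. A sketch of the markup:

<input type="hidden" name="wheat" value="<?php echo $wheat; ?>">
<input type="hidden" name="chaff" value="<?php echo $chaff; ?>">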

Now when the form is submitted, I can check to see if the relationship holds true:

if (!password_verify($saltstring.$_POST['wheat'], $_POST['chaff'])) {
    // Spammer!
}

That’s just the first line of defence. After thinking about it for a while, I came to the conclusion that it wasn’t enough to just generate some random form field values; I needed to generate random form field names.

Previously, the names for the form fields were easily-guessable: “username”, “password”, “email”. What I needed to do was generate unique form field names every time the sign-up page was loaded.

First of all, I create a one-time password:

$otp = base64_encode(openssl_random_pseudo_bytes(16));

Now I generate form field names by hashing that random value with known strings (“username”, “password”, “email”) together with a salt string known only to me.

$otp_hashed_for_username = md5($saltstring.'username'.$otp);
$otp_hashed_for_password = md5($saltstring.'password'.$otp);
$otp_hashed_for_email = md5($saltstring.'email'.$otp);

Those are all used for form field names on the client, like this:

<input type="text" name="<?php echo $otp_hashed_for_username; ?>">
<input type="password" name="<?php echo $otp_hashed_for_password; ?>">
<input type="email" name="<?php echo $otp_hashed_for_email; ?>">

(Remember, the name—or the ID—of the form field makes no difference to semantics or accessibility; the accessible name is derived from the associated label element.)

The one-time password also becomes a form field on the client:

<input type="hidden" name="otp" value="<?php echo $otp; ?>">

When the form is submitted, I use the value of that form field along with the salt string to recreate the field names:

$otp_hashed_for_username = md5($saltstring.'username'.$_POST['otp']);
$otp_hashed_for_password = md5($saltstring.'password'.$_POST['otp']);
$otp_hashed_for_email = md5($saltstring.'email'.$_POST['otp']);

If those form fields don’t exist, the sign-up is rejected.

As an added extra, I leave hidden honeypot fields named “username”, “password”, and “email”. If any of those fields are filled out, the sign-up is rejected.
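Something like this, although the exact markup here is my guess at the technique rather than the code on the site:

<div hidden aria-hidden="true">
    <input type="text" name="username" tabindex="-1" autocomplete="off">
    <input type="text" name="password" tabindex="-1" autocomplete="off">
    <input type="email" name="email" tabindex="-1" autocomplete="off">
</div>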

I put that code live and the automated sign-ups stopped straight away.

It’s not entirely foolproof. It would be possible to create an automated sign-up system that grabs the names of the form fields from the sign-up form each time. But this puts enough friction in the way to make automated sign-ups a pain.

You can view source on the sign-up page to see what the form fields are like.

I used the same technique on the contact page to prevent automated spam there too.

The datalist element on iOS

The datalist element is good. It was a bit bumpy there for a while, but browser implementations have improved over time. Now it’s by far the simplest and most robust way to create an autocompleting combobox widget.

Hook up an input element with a datalist element using the list and id attributes and you’re done. You can even use a bit of Ajax to dynamically update the option elements inside the datalist in response to the user’s input. The browser takes care of all the interaction. If you try to roll your own combobox implementation, it’s almost certainly going to involve a lot of JavaScript and still probably won’t account for all use cases.
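Here’s the basic pattern. The names and options are just for illustration:

<label for="tune">Search for a tune</label>
<input type="text" id="tune" name="q" list="tune-suggestions">
<datalist id="tune-suggestions">
    <option value="The Road To Lisdoonvarna">
    <option value="Cooley's">
</datalist>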

Safari on iOS—and therefore all browsers on iOS—didn’t support datalist for quite a while. But once it finally shipped, it worked really nicely. The options showed up just like autocomplete suggestions above the keyboard.

But that broke a while back.

The suggestions still appeared, but if you tapped on one of them, nothing happened. The input element didn’t get updated. You had to tap on a little downward arrow inside the input in order to see the list of options.

That was really frustrating for anybody on iOS using The Session. By far the most common task on the site is searching for a tune, something that’s greatly (progressively) enhanced with a dynamically-updating datalist.

I just updated to iOS 18 specifically to see if this bug has been fixed, and it has:

Fixed updating the input value when selecting an option from a datalist element.

Hallelujah!

But now there’s some additional behaviour that’s a little weird.

As well as showing the options in the autocomplete list above the keyboard, Safari on iOS—and therefore all browsers on iOS—also pops up the options as a list (as if you had tapped on that downward arrow). If the list is more than a few options long, it completely obscures the input element you’re typing into!

I’m not sure if this is a bug or if it’s the intended behaviour. It feels like a bug, but I don’t know if I should file something.

For now, I’ve updated the datalist elements on The Session to only ever hold three option elements in order to minimise the problem. Seeing as the autosuggest list above the keyboard only ever shows a maximum of three suggestions anyway, this feels like a reasonable compromise.

Directory enquiries

I was having a discussion with some of my peers a little while back. We were collectively commenting on the state of education and documentation for front-end development.

A lot of the old stalwarts have fallen by the wayside of late. CSS Tricks hasn’t been the same since it got bought out by Digital Ocean. A List Apart goes through fallow periods. Even the Mozilla Developer Network is looking to squander its trust by adding inaccurate “content” generated by a large language model.

The most obvious solution is to start up a brand new resource for front-end developers. But there are two problems with that:

  1. It’s really, really, really hard work, and
  2. It feels a bit 927.

I actually think there are plenty of good articles and resources on front-end development being published. But they’re not being published in any one specific place. People are publishing them on their own websites.

Ahmed, Josh, Stephanie, Andy, Lea, Rachel, Robin, Michelle …I could go on, but you get the picture.

All this wonderful stuff is distributed across the web. If you have a well-stocked RSS reader, you’re all set. But if you’re new to front-end development, how do you know where to find this stuff? I don’t think you can rely on search, unless you have a taste for slop.

I think the solution lies not with some hand-wavey “AI” algorithm that burns a forest for every query. I think the solution lies with human curation.

I take inspiration from Phil’s fantastic project, ooh.directory. Imagine taking that idea of categorisation and applying it to front-end dev resources.

Whether it’s a post on web.dev, Smashing Magazine, or someone’s personal site, it could be included and categorised appropriately.

Now, there would still be a lot of work involved, especially in listing and categorising the articles that are already out there, but it wouldn’t be nearly as much work as trying to create those articles from scratch.

I don’t know what the categories should be. Does it make sense to have top-level categories for HTML, CSS, and JavaScript, with sub-directories within them? Or does it make more sense to categorise by topics like accessibility, animation, and so on?

And this being the web, there’s no reason why one article couldn’t be tagged to simultaneously live in multiple categories.

There’s plenty of meaty information architecture work to be done. And there’d be no shortage of ongoing work to handle new submissions.

A stretch goal could be the creation of “playlists” of hand-picked articles. “Want to get started with CSS grid layout? Read that article over there, watch this YouTube video, and study this page on MDN.”

What do you think? Does this one-stop shop of hyperlinks sound like it would be useful? Does it sound feasible?

I’m just throwing this out there. I’d love it if someone were to run with it.

Speculation rules and fears

After I wrote positively about the speculation rules API I got an email from David Cizek with some legitimate concerns. He said:

I think that this kind of feature is not good, because someone else (web publisher) decides that I (my connection, browser, device) have to do work that very often is not needed. All that blurred by blackbox algorithm in the browser.

That’s fair. My hope is that the user will indeed get more say, whether that’s at the level of the browser or the operating system. I’m thinking of a prefers-reduced-data setting, much like prefers-color-scheme or prefers-reduced-motion.
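
As it happens, prefers-reduced-data already exists as a draft media feature, although browser support is close to non-existent at the time of writing. If it ever ships properly, honouring it could be as straightforward as this (a sketch, not anything I’m actually running):

// prefers-reduced-data isn't widely supported yet, so in most browsers
// this media query simply won't match and things carry on as before.
const reducedData = window.matchMedia('(prefers-reduced-data: reduce)');
if (!reducedData.matches) {
  // Only do speculative downloads for people who haven't asked
  // for reduced data usage.
  // …prefetch, prerender, precache, whatever.
}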

But this issue isn’t something new with speculation rules. We’ve already got service workers, which allow the site author to unilaterally declare that a bunch of pages should be downloaded.
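
In practical terms, that unilateral declaration is just a few lines in the service worker’s install event. Something along these lines (the cache name and the list of URLs are placeholders, not the actual code from Resilient Web Design):

// In the service worker: cache a list of pages as soon as it installs.
const cacheName = 'pages-v1';
const pagesToCache = [
  '/',
  '/introduction/',
  '/chapter1/'
  // …and so on for the rest of the site.
];

addEventListener('install', (event) => {
  event.waitUntil(
    caches.open(cacheName)
    .then((cache) => cache.addAll(pagesToCache))
  );
});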

I’m doing that for Resilient Web Design—when you visit the home page, a service worker downloads the whole site. I can justify that decision to myself because the entire site is still smaller in size than one article from Wired or the New York Times. But still, is it right that I get to make that call?

So I’m very much in favour of browsers acting as true user agents—doing what’s best for the user, even in situations where that conflicts with the wishes of a site owner.

Going back to speculation rules, David asked:

Do we really need this kind of (easily turned to evil) enhancement in the current state of (web) affairs?

That question could be asked of many web technologies.

There’s always going to be a tension with any powerful browser feature. The more power it provides, the more it can be abused. Animations, service workers, speculation rules—these are all things that can be used to improve websites or they can be abused to do things the user never asked for.

Or take the elephant in the room: JavaScript.

Right now, a site owner can link to a JavaScript file that’s tens of megabytes in size, and the browser has no alternative but to download it. I’d love it if users could specify a limit. I’d love it even more if browsers shipped with a default limit, especially if that limit is related to the device and network.

I don’t think speculation rules will be abused nearly as much as client-side JavaScript is already abused.

Speculation rules

There’s a new addition to the latest version of Chrome called speculation rules. This already existed before with a different syntax, but the new version makes more sense to me.

Notice that I called this an addition, not a standard. This is not a web standard, though it may become one in the future. Or it may not. It may wither on the vine and disappear (like most things that come from Google).

The gist of it is that you give the browser one or more URLs that the user is likely to navigate to. The browser can then pre-fetch or even pre-render those links, making that navigation really snappy. It’s a replacement for the abandoned link rel="prerender".

Because this is a unilateral feature, I’m not keen on shipping the code to all browsers. The old version of the API required a script element with a type value of “speculationrules”. That doesn’t do any harm to browsers that don’t support it—it’s a progressive enhancement. But unlike other progressive enhancements, this isn’t something that will just start working in those other browsers one day. I mean, it might. But until this API is an actual web standard, there’s no guarantee.
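
For reference, that inline version looks something like this (the URLs are placeholders):

<script type="speculationrules">
{
  "prerender": [{
    "source": "list",
    "urls": ["/about/", "/contact/"]
  }]
}
</script>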

That’s why I was pleased to see that the new version of the API allows you to use an external JSON file with your list of rules.

I say “rules”, but they’re really more like guidelines. The browser will make its own evaluation based on bandwidth, battery life, and other factors. This feature is more like srcset than source: you give the browser some options, but ultimately you can’t force it to do anything.

I’ve implemented this over on The Session. There’s a JSON file called speculationrules.json with the simplest of suggestions:

{
  "prerender": [{
    "where": {
        "href_matches": "/*"
    },
    "eagerness": "moderate"
  }]
}

The eagerness value of “moderate” says that any link can be pre-rendered if the user hovers over it for 200 milliseconds (the nuclear option would be to use a value of “immediate”).

I still need to point to that JSON file from my HTML. Usually this would be done with something like a link element, but for this particular API, I can send a response header instead:

Speculation-Rules: "/speculationrules.json"

I like that. The response header is being sent to every browser, regardless of whether they support speculation rules or not, but at least it’s just a few bytes. Those other browsers will ignore the header—they won’t download the JSON file.

Here’s the PHP I added to send that header:

header('Speculation-Rules: "/speculationrules.json"');

There’s one extra thing I had to do. The JSON file needs to be served with a MIME type of “application/speculationrules+json”. Here’s how I set that up in the .conf file for The Session on Apache:

<IfModule mod_headers.c>
  <FilesMatch "speculationrules.json">
    Header set Content-Type application/speculationrules+json
  </FilesMatch>
</IfModule>

A bit of a faff, that.

You can see it in action on The Session. Open up Chrome or Edge (same same but different), fire up the dev tools and keep the network tab open while you navigate around the site. Notice how hovering over a link will trigger a new network request. Clicking on that link will get you that page lickety-split.

Mind you, in the case of The Session, the navigations were already really fast—performance is a feature—so it’s hard to gauge how much of a practical difference it makes here. But it still seems like a no-brainer to me: taking a few minutes to add this to your site is worth doing.

Oh, there’s one more thing to be aware of when you’re implementing speculation rules. You have the option of excluding URLs from being pre-fetched or pre-rendered. You might need to do this if you’ve got links for adding items to shopping carts, or logging the user out. But my advice would instead be: stop using GET requests for those actions!
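
If you really do need an exclusion, my understanding is that the where conditions can be combined. Something like this, where the /logout URL is just a stand-in:

{
  "prerender": [{
    "where": {
      "and": [
        { "href_matches": "/*" },
        { "not": { "href_matches": "/logout" } }
      ]
    },
    "eagerness": "moderate"
  }]
}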

Most of the examples given for unsafe speculative loading conditions are textbook cases of when not to use links. Links are for navigating. They’re idempotent. For everything else, we’ve got forms.