Journal tags: ie


Why use React?

This isn’t a rhetorical question. I genuinely want to know why developers choose to build websites using React.

There are many possible reasons. Alas, none of them relate directly to user experience, other than a trickle-down justification: happy productive developers will make better websites. Citation needed.

It’s also worth mentioning that some people don’t choose to use React, but its use is mandated by their workplace (like some other more recent technologies I could mention). By my definition, this makes React enterprise software in this situation. My definition of enterprise software is any software that you use but that you yourself didn’t choose.

Inertia

By far the most common reason for choosing React today is inertia. If it’s what you’re comfortable with, you’d need a really compelling reason not to use it. That’s generally the reason behind usage mandates too. If we “standardise” on React, then it’ll make hiring more straightforward (though the reality isn’t quite so simple, as the React ecosystem has mutated and bifurcated over time).

And you know what? Inertia is a perfectly valid reason to choose a technology. If time is of the essence, and you know it’s going to take you time to learn a new technology, it makes sense to stick with what you know, even if it’s out of date. This isn’t just true of React, it’s true of any tech stack.

This would all be absolutely fine if React weren’t a framework that gets executed in browsers. Any client-side framework is a tax on the end user. They have to download, parse, and execute the framework in order for you to benefit.

But maybe React doesn’t need to run in the browser at all. That’s the promise of server-side rendering.

The front end

There used to be a fairly clear distinction between front-end development and back-end development. The front end consisted of HTML, CSS, and client-side JavaScript. The back end was anything you wanted as long as it could spit out those bits of the front end: PHP, Ruby, Python, or even just a plain web server with static files.

Then it became possible to write JavaScript on the back end. Great! Now you didn’t need to context-switch when you were scripting for the client or the server. But this blessing also turned out to be a bit of a curse.

When you’re writing code for the back end, some things matter more than others. File size, for example, isn’t really a concern. Your code can get really long and it probably won’t slow down the execution. And if it does, you can always buy your way out of the problem by getting a more powerful server.

On the front end, your code should have different priorities. File size matters, especially with JavaScript. The code won’t be executed on your server. It’s executed on all sorts of devices on all sorts of networks running all sorts of browsers. If things get slow, you can’t buy your way out of the problem because you can’t buy every single one of your users a new device and a new network plan.

Now that JavaScript can run on the server as well as the client, it’s tempting to just treat the code the same. It’s the same language after all. But the context really matters. Some JavaScript that’s perfectly fine to run on the server can be a resource hog on the client.

And this is where it gets interesting with React. Because most of the things people like about React still apply on the back end.

React developers

When React first appeared, it was touted as a front-end tool. State management and a near-magical virtual DOM were the main selling points.

Over time, that’s changed. The claimed speed benefits of the virtual DOM turned out to be just plain false. That just left state management.

But by that time, the selling points had changed. The component-based architecture turned out to be really popular. Developers liked JSX. A lot. Once you got used to it, it was a neat way to encapsulate little bits of functionality into building blocks that can be combined in all sorts of ways.

For the longest time, I didn’t realise this had happened. I was still thinking of React as a library, like jQuery. But React is a framework, like Rails or Django. As a developer, it’s where you do all your work. Heck, it’s pretty much your identity.

But whereas Rails or Django run on the back end, React runs on the front end …except when it doesn’t.

JavaScript can run on the server, which means React can run on the server. It’s entirely possible to have your React cake and eat it. You can write all of your code in React without serving up a single line of React to your users.

That’s true in theory. The devil is in the tooling.

Priorities

Next.js allows you to write in React and do server-side rendering. But it really, really wants to output React to the client as well.

By default, you get the dreaded hydration pattern—do all the computing on the server in JavaScript (yay!), serve up HTML straight away (yay! yay!) …and then serve up all the same JavaScript that’s on the server anyway (ya—wait, what?).

It’s possible to get Next.js to skip that last step, but it’s not easy. You’ll be battling it every step of the way.

Astro takes a very different approach. It will do everything it can to keep the client-side JavaScript to a minimum. Developers get to keep their beloved JSX authoring environment without penalising users.

Alas, the collective inertia of the “modern” development community is bound up in the React/Next/Vercel ecosystem. That’s a shame, because Astro shows us that it doesn’t have to be this way.

Switching away from using React on the front end doesn’t mean you have to switch away from using React on the back end.

Why use React?

The titular question I asked is too broad and naïve. There are plenty of reasons to use React, just as there are plenty of reasons to use Wordpress, Eleventy, or any other technology that works on the back end. If it’s what you like or what you’re comfortable with, that’s reason enough.

All I really care about is the front end. I’m not going to pass judgment on anyone’s choice of server-side framework, as long as it doesn’t impact what you can do in the client. Like Harry says:

…if you’re going to use one, I shouldn’t be able to smell it.

Here’s the question I should be asking:

Why use React in the browser?

Because if the reason you’re using React is cultural—the whole team works in JSX, it makes hiring easier—then there’s probably no need to make your users download React.

If you’re making a single-page app, then …well, the first thing you should do is ask yourself if it really needs to be a single-page app. They should be the exception, not the default. But if you’re determined to make a single-page app, then I can see why state management becomes very important.

In that situation, try shipping Preact instead of React. As a developer, you’ll almost certainly notice no difference, but your users will appreciate the refreshing lack of bloat.
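
If you use a bundler like Vite, the Preact docs suggest aliasing React’s package names to preact/compat, so existing React code keeps working without changes. A minimal sketch:

```javascript
// vite.config.js — alias React imports to Preact's compatibility layer,
// following the pattern recommended in the Preact documentation
import { defineConfig } from 'vite';

export default defineConfig({
  resolve: {
    alias: {
      'react': 'preact/compat',
      'react-dom': 'preact/compat',
      'react/jsx-runtime': 'preact/jsx-runtime',
    },
  },
});
```

Your components, hooks, and JSX stay exactly as they are; only the bundle your users download shrinks.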

Mostly though, I’d encourage you to investigate what you can do with vanilla JavaScript in the browser. I totally get why you’d want to hold on to React as an authoring environment, but don’t let your framework limit what you can do on the front end. If you use React on the client, you’re not doing your users any favours.

You can continue to write in React. You can continue to use JSX. You can continue to hire React developers. But keep it on your machine. For your users, make the most of what web browsers can do.

Once you keep React on the server, then a whole world of possibilities opens up on the client. Web browsers have become incredibly powerful in what they offer you. Don’t let React-on-the-client hold you back.

And if you want to know more about what web browsers are capable of today, come to Web Day Out in Brighton on Thursday, 12th March 2026.

Providers

If you’re building software, it’s generally a good idea to avoid the Not-Invented-Here syndrome. This is when you insist on writing absolutely everything from scratch even if it would make more sense to use a third-party provider.

Need your app to take payments? Don’t try to become your own payment provider—use an existing provider instead.

Need your app to send email? Don’t try to code all that up yourself—just use an existing service.

This same thinking seems to apply to JavaScript libraries too. If you don’t use a library or framework, you’ll just end up writing your own library or framework instead, right?

Except that’s not the way that JavaScript frameworks work. At least not any more.

There was a time when JavaScript libraries really did abstract away browser differences that you probably didn’t want to deal with yourself. In the early days of jQuery—before querySelector existed—trying to work with the DOM could be a real pain. Libraries like jQuery helped avoid that pain.

Maybe it was even true in the early days of Angular and React. If you were trying to handle navigations yourself, it probably made sense to use a framework.

But that’s not the case any more, and hasn’t been for quite a while.

These days, client-side JavaScript frameworks don’t abstract away the underlying platform, they instead try to be an alternative. In fact, if you attempt to use web platform features, your JavaScript framework will often get in the way. You have to wait until your framework of choice supports a feature like view transitions before you get to use it.

This is nuts. Developers are choosing to use tools that actively get in the way of the web platform.

I think that most developers have the mental model of JavaScript frameworks completely backwards. They believe that the framework saves them time and effort (just like a payment provider or an email service). Instead these frameworks are simply limiting the possibility space of what you can do in web browsers today.

When you use a JavaScript framework, that isn’t the end of your work, it’s just the beginning. You still have to write your own code that makes use of that framework. Except now your code is restricted to only what the framework can do.

And yet most developers still believe that using a JavaScript framework somehow enables them to do more.

Jim Nielsen has a great framing on this. JavaScript libraries aren’t like payment providers or email services. Rather, it’s the features built into web browsers today that are like these third-party providers. When you use these features, you’re benefiting from all the work that the browser makers have put into making them as efficient as possible:

Browser makers have teams of people who, day-in and day-out, are spending lots of time developing and optimizing their new offerings.

So if you leverage what they offer you, that gives you an advantage because you don’t have to build it yourself.

Want to do nifty page transitions? Don’t use a library. Use view transitions.

Want to animate parts of the page as the user scrolls? Don’t use a library. Use scroll-driven animations.

Want to make something happen when the user clicks? Don’t use a library. For the love of all that is holy, just use a button.
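
To take the scroll example: a reading-progress bar can be driven entirely by scroll position, no JavaScript required. A sketch, using a made-up class name:

```css
/* A reading-progress bar driven by the page's scroll position.
   The class name is made up; the technique is a scroll-driven animation. */
@keyframes grow-progress {
  from { transform: scaleX(0); }
  to   { transform: scaleX(1); }
}

.progress-bar {
  position: fixed;
  inset-block-start: 0;
  inline-size: 100%;
  block-size: 0.25em;
  transform-origin: 0 50%;
  animation: grow-progress auto linear;
  /* Tie the animation's progress to the root scroller */
  animation-timeline: scroll(root);
}
```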

If you agree that using a button makes more sense than using a div, then I encourage you to apply the same thinking to everything else your app needs to do.

Take advantage of all the wonderful things you can do in web browsers today. If instead you decide to use a JavaScript framework, you’re basically reinventing those things from scratch.

Except now all of your users pay the price because they’re the ones who have to download the JavaScript framework when they use your app.

Responses

I had a very pleasant experience last week while I was reading through the RSS feeds I’m subscribed to. I came across two blog posts that were responding to blog posts of my own.

Robin Sloan wrote a post clarifying his position after I linked to him in my post about the slipperiness of the term “AI”.

Then Jim Nielsen wrote a deliciously satirical piece in response to my pithy little parable about research.

I love it when this happens!

Elizabeth Spiers recently wrote a piece called What Made Blogging Different?:

And if they wanted to respond to you, they had to do it on their own blog, and link back. The effect of this was that there were few equivalents of the worst aspects of social media that broke through.

It’s so true. I feel like a response from someone’s own website is exponentially more valuable than a response on Bluesky, Mastodon, Instagram, or any other social media platform.

Don’t get me wrong: I absolutely love the way that Brid.gy will send those social-media responses right back here to my own site in the form of webmentions. It also pings me whenever someone likes or shares a post of mine. But I’ve noticed that I’m not that interested in those anymore.

Maybe those low-investment actions were carried over from the old days of Twitter just because that’s the way things were always done, without us really asking whether they serve much purpose.

Right now I accept these likes and shares as webmentions. I display a tally of each kind of response under my posts. But I’m not sure why I’m doing it. I don’t particularly care about these numbers. I’m pretty sure no one else cares either.

If I cared, then they’d be vanity metrics. As it is they’re more like zombie metrics. I should probably just put them out of their misery.

Jake Archibald is speaking at Web Day Out

I’m very happy to announce that the one and only Jake Jaffa-The-Cake Archibald will be speaking at Web Day Out!

Given the agenda for this event, I think you’ll agree that Jake is a perfect fit. He’s been at the forefront of championing user-centred web standards, writing specs and shipping features in browsers.

Along the way he’s also created two valuable performance tools that I use all the time: SVGOMG and Squoosh, which has a permanent place in my dock—if you need to compress images, I highly recommend adding this progressive web app to your desktop.

He’s the man behind service workers and view transitions—two of the most important features for making websites first-class citizens on any device.

So what will he talk about at Web Day Out? Image formats? Offline functionality? Smooth animations? Something else entirely?

All will be revealed soon. In the meantime, grab yourself a ticket to Web Day Out—it’s just £225+VAT—and I’ll see you in Brighton on Thursday, 12 March 2026!

Reasoning

Tim recently gave a talk at Smashing Conference in New York called One Step Ahead. Based on the slides, it looks like it was an excellent talk.

Towards the end, there’s a slide that could be the tagline for Web Day Out:

Betting on the browser is our best chance at long-term success.

Most of the talk focuses on two technologies that you can add to any website with just a couple of lines of code: view transitions and speculation rules.

I’m using both of them on The Session and I can testify to their superpowers—super-snappy navigations with smooth animations.

Honestly, that takes care of 95% of the reasons for building a single-page app (the other 5% would be around managing state, which most sites—e-commerce, publishing, whatever—don’t need to bother with). Instead build a good ol’-fashioned website with pages of HTML linked together, then apply view transitions and speculation rules.
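
Both really are just a couple of lines. Cross-document view transitions are a small CSS at-rule; speculation rules are a JSON script block. A sketch (the URL pattern here is an assumption):

```html
<!-- Opt in to cross-document view transitions -->
<style>
  @view-transition {
    navigation: auto;
  }
</style>

<!-- Ask the browser to prerender same-site links when the user
     shows intent, e.g. hovering over a link -->
<script type="speculationrules">
{
  "prerender": [{
    "where": { "href_matches": "/*" },
    "eagerness": "moderate"
  }]
}
</script>
```

Browsers that don’t support either feature simply ignore them, which is what makes them safe progressive enhancements.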

I mean, why wouldn’t you do that?

That’s not a rhetorical question. I’m genuinely interested in the reasons why people would reject a simple declarative solution in favour of the complexity of doing everything with a big JavaScript framework.

One reason might be browser support. After all, both view transitions and speculation rules are designed to be used as progressive enhancements, regardless of how many browsers happen to support them right now. If you want to attempt to have complete control, I understand why you might reach for the single-page app model, even if it means bloating the initial payload.

But think about that mindset for a second. Rather than reward the browsers that support modern features, you would instead be punishing them. You’d be treating every browser the same. Instead of taking advantage of the amazing features that some browsers have, you’d rather act as though they’re no different to legacy browsers.

I kind of understand the thinking behind that. You assume a level playing field by treating every browser as though it’s Internet Explorer. But what a waste! You ship tons of unnecessary code to perfectly capable browsers.

That could be the tagline for React.

Harry Roberts is speaking at Web Day Out

I was going to save this announcement for later, but I’m just too excited: Harry Roberts will be speaking at Web Day Out!

Goddamn, that’s one fine line-up, and it isn’t even complete yet! Get your ticket if you haven’t already.

There’s a bit of a story behind the talk that Harry is going to give…

Earlier this year, Harry posted a most excellent screed in which he said:

The web as a platform is a safe bet. It’s un-versioned by design. That’s the commitment the web makes to you—take advantage of it.

  • Opt into web platform features incrementally;
  • Embrace progressive enhancement to build fast, reliable applications that adapt to your customers’ context;
  • Write code that leans into the browser, not away from it.

Yes! Exactly!

Thing is, Harry posted this on LinkedIn. My indieweb sensibilities were affronted. So I harangued him:

You should blog this, Harry

My pestering paid off with an excellent blog post on Harry’s own site called Build for the Web, Build on the Web, Build with the Web:

The beauty of opting into web platform features as they become available is that your site becomes contextual. The same codebase adapts into its environment, playing to its strengths, rather than trying to build and ship your own environment from the ground up. Meet your users where they are.

That’s a pretty neat summation of the agenda for Web Day Out. So I thought, “Hmm …if I was able to pester Harry to turn a LinkedIn post into a really good blog post, I wonder if I could pester him to turn that blog post into a talk?”

I threw down the gauntlet. Harry accepted the challenge.

I’m sure you’re already familiar with Harry’s excellent work, but if you’re not, he’s basically Mr. Web Performance. That’s why I’m so excited to have him speak at Web Day Out—I want to hear the business case for leaning into what web browsers can do today, and he is most certainly the best person to bring receipts.

You won’t want to miss this, so be sure to get your ticket now; it’s only £225+VAT.

If you’re not ready to commit just yet, but you want to hear about more speaker announcements like this, you can sign up to the mailing list.

Dancing about dancing

I recently read two books that had writers as their main protagonists:

They were both perfectly fine. But I found it hard to get really involved in either narrative. The stakes just never felt that high.

Not that high stakes are a prerequisite for a gripping narrative. I enjoyed the films The Social Network and A Complete Unknown. Those stakes couldn’t be lower. One is about a website that might’ve ripped off its idea from another website. The other is about someone who’d like to play different kinds of music but other people would rather he played the same music. It’s a credit to the writers and directors of both films that they could create compelling stories from such objectively unimportant subjects.

Getting back to those two books, maybe there’s something navel-gazey when writers write about writing. Then again, I really like non-fiction books about writing from Anne Lamott, Stephen King, and more.

Perhaps it’s not the writing part, but the milieu of publishing.

I’m trying to think if there are any great films about film-making (Inception doesn’t count). Living In Oblivion is pretty great. But a lot of its appeal is that it’s not taking itself too seriously.

All too often when a story is set in its own medium (a book about publishing; a film about film-making) it runs the risk of over-estimating its own importance.

The most eye-rolling example of this is The Morning Show, a television show about a television show. It genuinely tries to make the case for the super-important work being done by vacuous morning chat shows.

Databasing

A few years back, Craig wrote a great piece called Fast Software, the Best Software:

Speed in software is probably the most valuable, least valued asset. To me, speedy software is the difference between an application smoothly integrating into your life, and one called upon with great reluctance.

Nelson Elhage said much the same thing in his reflections on software performance:

I’ve really come to appreciate that performance isn’t just some property of a tool independent from its functionality or its feature set. Performance — in particular, being notably fast — is a feature in and of its own right, which fundamentally alters how a tool is used and perceived.

Or, as Robin put it:

I don’t think a website can be good until it’s fast.

Those sentiments underpin The Session. Speed is as much a priority as usability, accessibility, privacy, and security.

I’m fortunate in that the site doesn’t have an underlying business model at odds with these priorities. I’m under no pressure to add third-party code that would track users and slow down the website.

When it comes to making fast websites, most of the obstacles are put in place by front-end development, mostly JavaScript. I’ve been pretty ruthless in my pursuit of speed on The Session, removing as much JavaScript as possible. On the bigger pages, the bottleneck now is DOM size rather than parsing and executing JavaScript. As bottlenecks go, it’s not the worst.

But even with all my core web vitals looking good, I still have an issue that can’t be solved with front-end optimisations. Time to first byte (or TTFB if you’d rather use an initialism that takes just as long to say as the words it’s replacing).

When it comes to reducing the time to first byte, there are plenty of factors that are out of my control. But in the case of The Session, something I do have control over is the server set-up, specifically the database.

Now I could probably solve a lot of my speed issues by throwing money at the problem. If I got a bigger better server with more RAM and CPUs, I’m pretty sure it would improve the time to first byte. But my wallet wouldn’t thank me.

(It’s still worth acknowledging that this is a perfectly valid approach when it comes to back-end optimisation that isn’t available on the front end; you can’t buy all your users new devices.)

So I’ve been spending some time really getting to grips with the MySQL database that underpins The Session. It was already normalised and indexed to the hilt. But perhaps there were server settings that could be tweaked.

This is where I have to give a shout-out to Releem, a service that is exactly what I needed. It monitors your database and then over time suggests configuration tweaks, explaining each one along the way. It’s a seriously good service that feels as empowering as it is useful.

I wish I could afford to use Releem on an ongoing basis, but luckily there’s a free trial period that I could avail of.

Thanks to Releem, I was also able to see which specific queries were taking the longest. There was one in particular that had always bothered me…

If you’re a member of The Session, then you can see any activity related to something you submitted in the past. Say, for example, that you added a tune or an event to the site a while back. If someone else comments on that, or bookmarks it, then that shows up in your “notifications” feed.

That’s all well and good but under the hood it was relying on a fairly convoluted database query to a very large table (a table that’s effectively a log of all user actions). I tried all sorts of query optimisations but there always seemed to be some combination of circumstances where the request would take ages.

For a while I even removed the notifications functionality from the site, hoping it wouldn’t be missed. But a couple of people wrote to ask where it had gone so I figured I ought to reinstate it.

After exhausting all the technical improvements, I took a step back and thought about the purpose of this particular feature. That’s when I realised that I had been thinking about the database query too literally.

The results are ordered in reverse chronological order, which makes sense. They’re also chunked into groups of ten, which also makes sense. But I had allowed for the possibility that you could navigate through your notifications back to the very start of your time on the site.

But that’s not really how we think of notifications in other settings. What would happen if I were to limit your notifications only to activity in, say, the last month?

Boom! Instant performance improvement by orders of magnitude.
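
In database terms, the change amounts to one extra date bound. Here it is sketched against a hypothetical schema; the table and column names are made up, not The Session’s actual ones:

```sql
-- Hypothetical sketch only: 'activity_log', 'item_owner' and 'created'
-- are invented names, not the real schema.

-- Before: scan the whole activity log for this member's items
SELECT * FROM activity_log
WHERE item_owner = 42
ORDER BY created DESC
LIMIT 10 OFFSET 0;

-- After: the extra date bound lets the index skip almost everything
SELECT * FROM activity_log
WHERE item_owner = 42
  AND created >= NOW() - INTERVAL 1 MONTH
ORDER BY created DESC
LIMIT 10;
```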

I guess there’s a lesson there about switching off the over-analytical side of my brain and focusing on actual user needs.

Anyway, thanks to the time I’ve spent honing the database settings and optimising the longest queries, I’ve reduced the latency by quite a bit. I’m hoping that will result in an improvement to the time to first byte.

Time—and monitoring tools—will tell.

Style your underlines

We shouldn’t rely on colour alone to indicate that something is interactive.

Take links, for example. Sure, you can give them a different colour to the surrounding text, but you shouldn’t stop there. Make sure there’s something else that distinguishes them. You could make them bold. Or you could stick with the well-understood convention of underlining links.

This is where some designers bristle. If there are a lot of links on a page, it could look awfully cluttered with underlines. That’s why some designers would rather remove the underline completely.

As Manu observed:

I’ve done a lot of audits in the first half of this year and at this point I believe that designing links without underlines is a kink. The idea that users don’t understand that links are clickable arouses some designers. I can’t explain it any other way.

But underlining links isn’t the binary decision it once was. You can use CSS to style those underlines just as you’d style any other part of your design language.

Here’s a regular underlined link.

For a start, you can adjust the distance of the underline from the text using text-underline-offset. If you’re using a generous line-height, use a generous distance here too.

Here’s a link with an offset underline.

If you’d rather have a thinner or thicker underline, use text-decoration-thickness.

Here’s a link with a thin underline.

The colour of the underline and the colour of the link don’t need to be the same. Use text-decoration-color to make them completely different colours or make the underline a lighter shade of the link colour.

Here’s a link with a translucent underline.

That’s quite a difference with just a few CSS declarations:

text-underline-offset: 0.2em;
text-decoration-thickness: 1px;
text-decoration-color: oklch(from currentColor l c h / 50%);

If that still isn’t subtle enough for you, you could even use text-decoration-style to make the underline dotted or dashed, but that might be a step too far.

Here’s a link with a dotted underline.

Whatever you decide, I hope you’ll see that underlines aren’t the enemy of good design. They’re an opportunity.

You should use underlines to keep your links accessible. But you should also use CSS to make those underlines beautiful.

Donegal to Galway to Clare

After spending a week immersed in the language and the landscape of Glencolmcille, Jessica and I were headed to Miltown Malbay for the annual Willie Clancy music week.

I could only get us accommodation from the Monday onwards so we had a weekend in between Donegal and Clare. We decided to spend it in Galway.

We hadn’t booked any travel from Glencolmcille to Galway and that worked out fine. We ended up getting a lift from a fellow student (and fellow blogger) heading home to Limerick.

Showing up in Galway on a busy Saturday afternoon was quite the change after the peace and quiet of Glencolmcille. But we dove right in and enjoyed a weekend of good food and music.

A man playing button accordion and a man playing banjo at a pub table covered with pints. A fiddle in the foreground as a man plays pipes accompanied by another man on guitar.

But I missed speaking Irish. So on the Sunday afternoon we made a trip out to Spiddal for lunch just so we could say a few words as Gaeilge.

We also got some practice in every morning getting coffee at the Plámás cafe. You get a ten-cent discount for ordering in Irish. What a lovely little piece of behaviour design—a nice gentle nudge!

From Galway we made our way down to Miltown Malbay where the Willie Clancy festival was in full swing. We were staying out in Spanish Point, so we could escape the madness of the town each evening. Mind you, there was plenty going on at the Armada hotel too.

The hotel was something of an extravagance but it was worth it—we had a beautiful view on to the beach at Spanish Point and our room was tucked away far from the wild shenanigans in the hotel bar (not to mention the céilís on the other side of the hotel!).

I have to admit, I got quite overwhelmed the first day I ventured into Miltown proper. It’s easy to have a constant state of FOMO, constantly searching for the best session. But once I calmed down and accepted the situation, I had a lovely time at some really nice sessions.

A kitchen crammed with musicians. A line of musicians playing away. A selfie with some other musicians in a pub corner. A man playing banjo and a woman playing fiddle.

Last time we were in Miltown Malbay was three years ago …and three years before that. Maybe we’ll be back in another three years.

I don’t know, though. It kind of felt like going to South By Southwest after it got crazy big and the host town could no longer bear the weight of the event.

Still, I thoroughly enjoyed our two-week excursion down a stretch of the Wild Atlantic Way from Donegal to Galway to Clare.

CSS snippets

I’ve been thinking about the kind of CSS I write by default when I start a new project.

Some of it is habitual. I now use logical properties automatically. It took me a while to rewire my brain, but now seeing left or top in a style sheet looks wrong to me.
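
For example, here are some physical properties and their logical equivalents (the class name is made up); the logical versions adapt automatically to the document’s writing mode:

```css
/* Physical properties assume left-to-right, top-to-bottom text… */
.card {
  margin-left: 1em;
  border-left: 2px solid;
  padding-top: 0.5em;
}

/* …while their logical equivalents follow the writing mode */
.card {
  margin-inline-start: 1em;
  border-inline-start: 2px solid;
  padding-block-start: 0.5em;
}
```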

When I mentioned this recently, I had some pushback from people wondering why you’d bother using logical properties if you never planned to translate the website into a language with a different writing system. I pointed out that even if you don’t plan to translate a web page, a user may still choose to. Using logical properties helps them. From that perspective, it’s kind of like using user preference queries.

That’s something else I use by default now. If I’ve got any animations or transitions in my CSS, I wrap them in a prefers-reduced-motion: no-preference query.

For instance, I’m a huge fan of view transitions and I enable them by default on every new project, but I do it like this:

@media (prefers-reduced-motion: no-preference) {
  @view-transition {
    navigation: auto;
  }
}

I’ll usually have a prefers-color-scheme query for dark mode too. This is often quite straightforward if I’m using custom properties for colours, something else I’m doing habitually. And now I’m starting to use OKLCH for those colours, even if they start as hexadecimal values.
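
A minimal sketch of that pattern; the custom property names and OKLCH values are made up for illustration:

```css
/* Define light-mode colours as custom properties… */
:root {
  --color-text: oklch(25% 0.02 260);
  --color-background: oklch(98% 0.01 260);
}

/* …then swap the values for users who prefer dark mode */
@media (prefers-color-scheme: dark) {
  :root {
    --color-text: oklch(95% 0.01 260);
    --color-background: oklch(20% 0.02 260);
  }
}

body {
  color: var(--color-text);
  background-color: var(--color-background);
}
```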

Custom properties are something else I reach for a lot, though I try to avoid premature optimisation. Generally I wait until I spot a value I’m using more than two or three times in a stylesheet; then I convert it to a custom property.

I make full use of clamp() for text sizing. Sometimes I’ll just set a fluid width on the html element and then size everything else with ems or rems. More often, I’ll use Utopia to flow between different type scales.
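
A sketch of that first approach; the numbers are illustrative, chosen so text grows from 1rem at a 20rem-wide viewport up to 1.25rem at 80rem:

```css
/* Fluid root font size: 1rem at a 20rem viewport, 1.25rem at 80rem,
   interpolating smoothly in between. Everything sized in rems
   then scales along with it. */
html {
  font-size: clamp(1rem, 0.917rem + 0.417vw, 1.25rem);
}
```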

Okay, those are all features of CSS—logical properties, preference queries, view transitions, custom properties, fluid type—but what about actual snippets of CSS that I re-use from project to project?

I’m not talking about a CSS reset, which usually involves zeroing out the initial values provided by the browser. I’m talking about tiny little enhancements just one level up from those user-agent styles.

Here’s one I picked up from Eric that I apply to the figcaption element:

figcaption {
  max-inline-size: max-content;
  margin-inline: auto;
}

That will centre-align the text until it wraps onto more than one line, at which point it’s no longer centred. Neat!

Here’s another one I start with on every project:

a:focus-visible {
  outline-offset: 0.25em;
  outline-width: 0.25em;
  outline-color: currentColor;
}

That puts a nice chunky focus ring on links when they’re tabbed to. Personally, I like having the focus ring relative to the font size of the link but I know other people prefer to use a pixel size. You do you. Using the currentColor of the focused link is usually a good starting point, though I might end up overriding this with a different highlight colour.

Then there’s typography. Rich has a veritable cornucopia of starting styles you can use to improve typography in CSS.

Something I’m reaching for now is the text-wrap property with its new values of pretty and balance:

ul,ol,dl,dt,dd,p,figure,blockquote {
  hanging-punctuation: first last;
  text-wrap: pretty;
}

And maybe this for headings, if they’re being centred:

h1,h2,h3,h4,h5,h6 {
  text-align: center;
  text-wrap: balance;
}

All of these little snippets should be easily over-writable so I tend to wrap them in a :where() selector to reduce their specificity:

:where(figcaption) {
  max-inline-size: max-content;
  margin-inline: auto;
}
:where(a:focus-visible) {
  outline-offset: 0.25em;
  outline-width: 0.25em;
  outline-color: currentColor;
}
:where(ul,ol,dl,dt,dd,p,figure,blockquote) {
  hanging-punctuation: first last;
  text-wrap: pretty;
}

But if I really want them to be easily over-writable, then the galaxy-brain move would be to put them in their own cascade layer. That’s what Manu does with his CSS boilerplate:

@layer core, third-party, components, utility;

Then I could put those snippets in the core layer, making sure they could be overwritten by the CSS in any of the other layers:

@layer core {
  figcaption {
    max-inline-size: max-content;
    margin-inline: auto;
  }
  a:focus-visible {
    outline-offset: 0.25em;
    outline-width: 0.25em;
    outline-color: currentColor;
  }
  ul,ol,dl,dt,dd,p,figure,blockquote {
    hanging-punctuation: first last;
    text-wrap: pretty;
  }
}

For now I’m just using :where() but I think I should start using cascade layers.

I also want to start training myself to use the lh value (line-height) for block spacing.
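For example (a sketch, not something lifted from an actual project):

```css
/* 1lh equals the element’s computed line-height, so block
   spacing stays in step with the text’s vertical rhythm */
p {
  margin-block: 1lh;
}
```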

And although I’m using the :has() selector, I don’t think I’ve yet trained my brain to reach for it by default.
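Something like this, say (a hypothetical example):

```css
/* Hypothetical: only indent a blockquote that contains a citation */
blockquote:has(cite) {
  padding-inline-start: 1em;
}
```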

CSS has sooooo much to offer today—I want to make sure I’m taking full advantage of it.

Style legend

There’s a new proposal for giving developers more control over styling form controls. I like it.

It’s clearly based on the fantastic work being done by the Open UI group on the select element. The proposal suggests that authors can opt in to the new styling possibilities by declaring:

appearance: base;

So basically the developer is saying “I know what I’m doing—I’m taking the controls.” But browsers can continue to ship their default form styles. No existing content will break.

The idea is that once the developer has opted in, they can then style a number of pseudo-elements.

This proposal would apply to pretty much all the form controls you can think of: all the input types, along with select, progress, meter, buttons and more.

But there’s one element more that I wish were on the list:

legend

I know, technically it’s not a form control but legend and fieldset are only ever used within forms.

The legend element is notoriously annoying to style. So a lot of people just don’t bother using it, which is a real shame. It’s like we’re punishing people for doing the right thing.

Wouldn’t it be great if you, as a developer, had the option of saying “I know what I’m doing—I’m taking the controls”:

legend {
  appearance: base;
}

Imagine if that nuked the browser’s weird default styles, effectively turning the element into a span or div as far as styling is concerned. Then you could style it however you wanted. But crucially, if browsers shipped this, no existing content would break.

The shitty styling situation for legend (and its parent fieldset) is one of those long-standing annoyances that seems to have fallen down the back of the sofa of browser vendors. No one’s going to spend time working on it when there are more important newer features to ship. That’s why I’d love to see it sneak into this new proposal for styling form controls.

I was in Amsterdam last week. Just like last year I was there to help out Vasilis’s students with a form-based assignment:

They’re given a PDF inheritance-tax form and told to convert it for the web.

Yes, all the excitement of taxes combined with the thrilling world of web forms.

(Side note: this time they were told to style it using the design system from the Dutch railway because the tax office was getting worried that they were making phishing sites.)

I saw a lot of the same challenges again. I saw how students wished they could specify a past date or a future date in a date picker without using JavaScript. And I saw them lamenting the time they spent styling legends that worked across all browsers.

Right now, Mason Freed has an open issue on the new proposal with his suggestion to add some more elements to consider. Both legend and fieldset are included. That gets a thumbs-up from me.

Reason

A couple of days ago I linked to a post by Robin Sloan called Is it okay?, saying:

Robin takes a fair and balanced look at the ethics of using large language models.

That’s how it came across to me: fair and balanced.

Robin’s central question is whether the current crop of large language models might one day lead to life-saving super-science, in which case, doesn’t that outweigh the damage they’re doing to our collective culture?

Baldur wrote a response entitled Knowledge tech that’s subtly wrong is more dangerous than tech that’s obviously wrong. (Or, where I disagree with Robin Sloan).

Baldur pointed out that one side of the scale that Robin is attempting to balance is based on pure science fiction:

There is no path from language modelling to super-science.

Robin responded pointing out that some things that we currently have would have seemed like science fiction a few years ago, right?

Well, no. Baldur debunks that in a post called Now I’m disappointed.

(By the way, can I just point out how great it is to see a blog-to-blog conversation like this, regardless of how much they might be in disagreement.)

Baldur kept bringing the receipts. That’s when it struck me that Robin’s stance is largely based on vibes, whereas Baldur’s viewpoint is informed by facts on the ground.

In a way, they’ve got something in common. They’re both advocating for an interpretation of the precautionary principle, just from completely opposite ends.

Robin’s stance is that if these tools one day yield amazing scientific breakthroughs then that’s reason enough to use them today. It’s uncomfortably close to the reasoning of the effective accelerationist nutjobs, but in a much milder form.

Baldur’s stance is that because of the present harms being inflicted by current large language models, we should be slamming on the brakes. If anything, the harms are going to multiply, not magically reduce.

I have to say, Robin’s stance doesn’t look nearly as fair and balanced as I initially thought. I’m on Team Baldur.

Michelle also weighs in, pointing out the flaw in Robin’s thinking:

AI isn’t LLMs. Or not just LLMs. It’s plausible that AI (or more accurately, Machine Learning) could be a useful scientific tool, particularly when it comes to making sense of large datasets in a way no human could with any kind of accuracy, and many people are already deploying it for such purposes. This isn’t entirely without risk (I’ll save that debate for another time), but in my opinion could feasibly constitute a legitimate application of AI.

LLMs are not this.

In other words, we’ve got a language collision:

We call them “AI”, we look at how much they can do today, and we draw a straight line to what we know of “AI” in our science fiction.

This ridiculous situation could’ve been avoided if we had settled on a more accurate buzzword like “applied statistics” instead of “AI”.

There’s one other flaw in Robin’s reasoning. I don’t think it follows that future improvements warrant present use. Quite the opposite:

The logic is completely backwards! If large language models are going to improve their ethical shortcomings (which is debatable, but let’s be generous), then that’s all the more reason to avoid using the current crop of egregiously damaging tools.

You don’t get companies to change their behaviour by rewarding them for it. If you really want better behaviour from the purveyors of generative tools, you should be boycotting the current offerings.

Anyway, this back-and-forth between Robin and Baldur (and Michelle) was interesting. But it all pales in comparison to the truth bomb that Miriam dropped in her post Tech continues to be political:

When eugenics-obsessed billionaires try to sell me a new toy, I don’t ask how many keystrokes it will save me at work. It’s impossible for me to discuss the utility of a thing when I fundamentally disagree with the purpose of it.

Boom!

Maybe we should consider the beliefs and assumptions that have been built into a technology before we embrace it? But we often prefer to treat each new toy as an abstract and unmotivated opportunity. If only the good people like ourselves would get involved early, we can surely teach everyone else to use it ethically!

You know what? I could quote every single line. Just go read the whole thing. Please.

Blog Questions Challenge

I’ve been tagged in a good ol’-fashioned memetic chain letter, first by Jon and then by Luke. Only by answering these questions can my soul find peace…

Why did you start blogging in the first place?

All the cool kids were doing it. I distinctly remember thinking it was far too late to start blogging. Clearly I had missed the boat. That was in the year 2001.

So if you’re ever thinking of starting something but you think it might be too late …it isn’t.

Back then, I wrote:

I’ll try and post fairly regularly but I don’t want to make any promises I can’t keep.

I’m glad I didn’t commit myself but I’m also glad that I’m still posting 24 years later.

What platform are you using to manage your blog and why did you choose it? Have you blogged on other platforms before?

I use my own hand-cobbled mix of PHP and MySQL. Before that I had my own hand-cobbled mix of PHP and static XML files.

On the one hand, I wouldn’t recommend anybody to do what I’ve done. Just use an off-the-shelf content management system and start publishing.

On the other hand, the code is still working fine decades later (with the occasional tweak) and the control freak in me likes knowing what every single line of code is doing.

It’s very bare-bones though.

How do you write your posts? For example, in a local editing tool, or in a panel/dashboard that’s part of your blog?

I usually open a Markdown text editor and write in that. I use the Mac app Focused, which was made by Realmac Software. I don’t think you can even get hold of it these days, but it does the job for me. Any Markdown text editor would do though.

Then I copy what I’ve written and paste it into the textarea of my hand-cobbled CMS. It’s pretty rare for me to write directly into that textarea.

When do you feel most inspired to write?

When I’m supposed to be doing something else.

Blogging is the greatest procrastination tool there is. You’re skiving off doing the thing you should be doing, but then when you’ve published the blog post, you’ve actually done something constructive so you don’t feel too bad about avoiding that thing you were supposed to be doing.

Sometimes it takes me a while to get around to posting something. I find myself blogging out loud to my friends, which is a sure sign that I need to sit down and bash out that blog post.

When there’s something I’m itching to write about but I haven’t got ’round to it yet, it feels a bit like being constipated. Then, when I finally do publish that blog post, it feels like having a very satisfying bowel movement.

No doubt it reads like that too.

Do you publish immediately after writing, or do you let it simmer a bit as a draft?

I publish immediately. I’ve never kept drafts. Usually I don’t even save the Markdown file while I’m writing—I open up the text editor, write the words, copy them, paste them into that textarea and publish it. Often it takes me longer to think of a title than it takes to write the actual post.

I try to remind myself to read it through once to catch any typos, but sometimes I don’t even do that. And you know what? That’s okay. It’s the web. I can go back and edit it at any time. Besides, if I miss a typo, someone else will catch it and let me know.

Speaking for myself, putting something into a draft (or even just putting it on a to-do list) is a guarantee that it’ll never get published. So I just write and publish. It works for me, though I totally understand that it’s not for everyone.

What’s your favourite post on your blog?

I’ve got a little section of “recommended reading” in the sidebar of my journal:

But I’m not sure I could pick just one.

I’m very proud of the time I wrote 100 posts in 100 days and each post was exactly 100 words long. That might be my favourite tag.

Any future plans for your blog? Maybe a redesign, a move to another platform, or adding a new feature?

I like making little incremental changes. Usually this happens at Indie Web Camps. I add some little feature or tweak.

I definitely won’t be redesigning. But I might add another “skin” or two. I’ve got one of those theme-switcher things, y’see. It was like a little CSS Zen Garden before that existed. I quite like having redesigns that are cumulative instead of destructive.

Next?

You. Yes, you.

25, 20, 15, 10, 5

I have a feeling that 2025 is going to be a year of reflection for me. It’s such a nice round number, 25. One quarter of a century.

That’s also how long myself and Jessica have been married. Our wedding anniversary was last week.

Top tip: if you get married in a year ending with 00, you’ll always know how long ago it was. Just lop off the first 2000 years and there’s the number.

As well as being the year we got married (at a small ceremony in an army chapel in Arizona), 2000 was also the year we moved from Freiburg to Brighton. I never thought we’d still be here 25 years later.

2005 was twenty years ago. A lot of important events happened that year. I went to South by Southwest for the first time and met people who became lifelong friends (including some dear friends no longer with us).

I gave my first conference talk. We had the first ever web conference in the UK. And myself, Rich, and Andy founded Clearleft. You can expect plenty of reminiscence and reflection on the Clearleft blog over the course of this year.

2010 was fifteen years ago. That’s when Jessica and I moved into our current home. For the first time, we were paying off a mortgage instead of paying a landlord. But I can’t bring myself to consider us “homeowners” at that time. For me, we didn’t really become homeowners until we paid that mortgage off ten years later.

2015 was ten years ago. It was relatively uneventful in the best possible way.

2020 was five years ago. It was also yesterday. The Situation was surreal, scary and weird. But the people I love came through it intact, for which I’m very grateful.

Apart from all these anniversaries, I’m not anticipating any big milestones in 2025. I hope it will be an unremarkable year.

2024

There goes 2024.

It was a year dominated by Ukraine and Gaza. Utterly horrific and unnecessary death courtesy of Putin and Netanyahu.

For me personally, 2024 was just fine. I was relatively healthy all year. The people I love were relatively healthy too. I don’t take that for granted.

Looking back on what I did and didn’t do during the year, here’s something interesting: I didn’t give a single conference talk. I spoke at a few events but as the host: Patterns Day, CSS Day, and UX London. That’s something I really enjoy and I think I’m pretty darn good at it too.

I was wondering why it was that I didn’t give a talk in 2024. Then Rachel said something:

I really miss An Event Apart.

That’s when I realised that An Event Apart would usually be the impetus behind preparing a conference talk. I’d lock myself away and spend ages crafting a presentation to match the calibre of that event. Then, once I had the talk prepared, I could give modified versions of it at other conferences.

With An Event Apart gone, I guess I just let the talk prep slide. Maybe that’ll continue into 2025 …although I’m kind of itching to give a talk on HTML web components.

In most years, speaking at conferences is what allows me to travel to interesting places. But even without being on the conference circuit I still travelled to lovely places in 2024. Turin, Belfast, Amsterdam, Freiburg, west Cork, Boston, Pittsburgh, Saint Augustine, Seville, Cáceres, Strasbourg, and Galway.

A lot of the travel was motivated by long-standing friendships. Exploring west Cork with Dan and Sue. Celebrating in Freiburg with Schorsch and Birgit. Visiting Ethan and Liz in Boston. And playing music in Pittsburgh with Brad.

Frostapolooza was a high note:

I felt frickin’ great after being part of an incredible event filled with joy and love and some of the best music I’ve ever heard.

Being on sabbatical for all of August was nice. It also meant that I had lots of annual leave to use up by the end of the year, so I ended up taking all of December off too. I enjoyed that.

I played a lot of music in 2024. I played in a couple of sessions for pretty much every week of the year. That’s got to be good for my mandolin playing. I even started bringing the tenor banjo out on occasion.

I ate lots of good food in 2024. Some of it was even food I made. I’ve been doing more and more cooking. I’ve still got a fairly limited range of dishes, but I’m enjoying the process of expanding my culinary repertoire a bit.

I read good books in 2024.

All in all, that’s a pretty nice way to spend a year: some travel, seeing some old friends, playing lots of music, reading books, and eating good food.

I hope for more of the same in 2025.

Words I wrote in 2024

People spent a lot of time and energy in 2024 talking about (and on) other people’s websites. Twitter. Bluesky. Mastodon. Even LinkedIn.

I observed it all with the dispassionate perspective of Dr. Manhattan on Mars. While I’m happy to see more people abandoning the cesspool that is Twitter, I’m not all that invested in either Mastodon or Bluesky. Or any other website, for that matter. I’m glad they’re there, but if they disappeared tomorrow, I’d carry on posting here on my own site.

I posted to my website over 850 times in 2024.

I shared over 350 links.

I posted over 400 notes.

I published just one article.

And I wrote almost 100 blog posts here in my journal this year.

Here are some cherry-picked highlights:

Books I read in 2024

I’ve been keeping track of the books I’m reading for about seven years now. I do that here on my own website, as well as on bookshop.org.

It has become something of a tradition for me to post an end-of-year summary of the books I’ve read in the previous twelve months. Maybe I should be posting my thoughts on each book right after I finish it instead. Then again, I quite like the act of thinking about a book again after letting it sit and stew for a while.

I should probably stop including stars with these little reviews. They’re fairly pointless—pretty much everything I read is right down the middle in the “good” category. But to recap, here’s how I allocate my scores:

  • One star means a book is meh.
  • Two stars means a book is perfectly fine.
  • Three stars means a book is good—consider it recommended.
  • Four stars means a book is exceptional.
  • Five stars is pretty much unheard of.

No five-star books this year, but also no one-star books.

This year I read about 29 books. A bit of an increase on previous years, but the numbers can be deceptive—not every book is equal in length.

Fiction outnumbered non-fiction by quite a margin. I’m okay with that.

The Wager by David Grann

“A tale of shipwreck, mutiny and murder” is promised on the cover and this book delivers. What’s astonishing is that it’s a true story. If it were fiction it would be dismissed as too far-fetched. It’s well told, and it’s surely only a matter of time before an ambitious film-maker takes on its Rashomon-like narrative.

★★★☆☆

Bridge by Lauren Beukes

I think this might be Lauren’s best book since Zoo City. The many-worlds hypothesis has been mined to depletion in recent years but Bridge still manages to have a fresh take on it. The well-rounded characters ensure that you’re invested in the outcome.

★★★☆☆

The Penelopiad by Margaret Atwood

Part of my ongoing kick of reading retellings of Greek myths, this one focuses on a particularly cruel detail of Odysseus’s return.

★★★☆☆

Elektra by Jennifer Saint

Keeping with the Greek retellings, this was the year that I read most of Jennifer Saint’s books. All good stuff, though I must admit that in my memory it’s all starting to blend together with other books like Costanza Casati’s Clytemnestra.

★★★☆☆

Children Of Memory by Adrian Tchaikovsky

The final book in the trilogy, this doesn’t have the same wham-bam page-turning breathlessness as Children Of Time, but I have to say it’s really stuck with me. Whereas the previous books looked at the possibilities of biological intelligence (in spiders and octopuses), this one focuses inwards.

I don’t want to say any more because I don’t want to spoil the culmination. I’ll just say that I think that by the end it posits a proposition that I don’t recall any other sci-fi work doing before.

Y’know what? Just because of how this one has lodged in my mind I’m going to give it an extra star.

★★★★☆

Stone Blind by Natalie Haynes

I think this is my favourite Natalie Haynes book so far. It also makes a great companion piece to another book I read later in the year…

★★★☆☆

The Great Hunger by Patrick Kavanagh

I picked up this little volume of poems when I was in Amsterdam—they go down surprisingly well with some strong beer and bitterballen. I was kind of blown away by how funny some of these vignettes were. There’s plenty of hardship too, but that’s the human condition for you.

★★★★☆

Europe In Autumn, Europe At Midnight, Europe In Winter, and Europe At Dawn by Dave Hutchinson

I read the Fractured Europe series throughout the year and thoroughly enjoyed it. I’ll readily admit that I didn’t always follow what was going on but that’s part of the appeal. The world-building is terrific. It’s an alternative version of a Brexity Europe that by the end of the first book starts to go in an unexpected direction. Jonathan Strange meets George Smiley.

★★★☆☆

The Odyssey by Homer translated by Robert Fagles

Seeing as I’m reading all the modern retellings, it’s only fair that I have the source material to hand. This is my coffee table book that I dip into sporadically. I’ve got a copy of the prequel too.

I am not going to assign stars to this.

Faith, Hope and Carnage by Nick Cave and Seán O’Hagan

Fairly navel-gazing stuff, and you get the impression that Nick Cave thinks so too. Just as Neil Young would rather talk about his model trains, Nick Cave would rather discuss his pottery. The music stands on its own, but this is still better than most books about music.

★★☆☆☆

Julia by Sandra Newman

Now this is an audacious move! Retelling 1984 from Julia’s perspective. Not only does it work, it also shines a light on some flaws in Orwell’s original (and I say that as someone who’s read everything Orwell ever wrote). I’m happy to say that the execution of this book matches its ambition.

★★★☆☆

Hamnet by Maggie O’Farrell

So if I’ve been reading alternative perspectives on Homer and Orwell, why not Shakespeare too? This is beautifully evocative and rich in detail. It’s also heartbreaking. A gorgeous work.

★★★★☆

Pandora’s Jar: Women in the Greek Myths by Natalie Haynes

I didn’t enjoy this as much as I enjoyed Natalie Haynes’s novels. It’s all good informative stuff, but it feels a bit like a collection of separate essays rather than a coherent piece.

★★☆☆☆

Best Of British Science Fiction 2023 edited by Donna Scott

I was lucky enough to get a pre-release copy of this from one of the authors. I love a good short story collection and this one is very good indeed.

★★★☆☆

Ithaca and House Of Odysseus by Claire North

Remember how I said that some of the Greek retellings started to blend together? Well, no fear of that with this terrific series. Like Margaret Atwood’s retelling, Penelope is the main character here. Each book is narrated by a different deity, and yet there is little to no supernatural intervention. I’m really looking forward to reading the third and final book in the series.

★★★☆☆

The Shadow Of Perseus by Claire Heywood

This is the one I was hinting at above that makes a great companion piece to Natalie Haynes’s Stone Blind. Two different—but equally sympathetic—takes on Medusa. This one is grittily earthbound—no gods here—and it’s a horrifying examination of toxic masculinity. And don’t expect any natural justice here.

★★★☆☆

Dogs Of War by Adrian Tchaikovsky

Adrian Tchaikovsky has a real knack for getting inside the animal mind. This story is smaller in scale than his Children Of Time series but it succeeds in telling its provocative tale snappily.

★★★☆☆

84K by Claire North

I described Dave Hutchinson’s Fractured Europe series as Brexity, but this Claire North book is one that pushes Tory austerity to its dystopian logical conclusion. It’s all too believable, if maybe a little over-long. Grim’n’good.

★★★☆☆

Ariadne by Jennifer Saint

The first of Jennifer Saint’s books is also my favourite. There’s a fantastically vivid description of the arrival of Dionysus into the narrative.

★★★☆☆

The Female Man by Joanna Russ

I’ve been meaning to read this one for years, but in the end I didn’t end up finishing it. That’s no slight on the book; I just wasn’t in the right frame of mind for it. I’m actually kind of proud of myself for putting a book down—I’m usually stubbornly completionist, which is stupid because life is too short. I hope to return to this at some future time.

Atalanta by Jennifer Saint

Another vividly-written tale by Jennifer Saint, but maybe suffers from trying to cram in all the varied accounts of Atalanta’s deeds and trials—the character’s motivations are hard to reconcile at different points.

★★★☆☆

Polostan by Neal Stephenson

This was …fine. It’s the first in a series called Bomb Light. Maybe I’ll appreciate it more in its final context. As a standalone work, there’s not quite enough there to carry it (including the cutesiness of making a young Richard Feynman a side character).

★★☆☆☆

Tomorrow, and Tomorrow, and Tomorrow by Gabrielle Zevin

This too was …fine. I know some people really love this, and maybe that raised my expectations, but in the end it was a perfectly good if unremarkable novel.

★★★☆☆

The Fates by Rosie Garland

Pairs nicely with Jennifer Saint’s Atalanta. A decent yarn.

★★★☆☆

Earth Abides by George R. Stewart

I’ve just started this post-apocalyptic classic from 1949. Tune in next year to find out if I end up enjoying it.

Okay, so that was my reading for 2024. Nothing that completely blew me away but nothing that thoroughly disappointed me either. Plenty of good solid books. If I had to pick a favourite it would probably be Maggie O’Farrell’s Hamnet. And that Patrick Kavanagh collection of poems.

If you fancy going back in time, here are my previous round-ups:

Progressively enhancing maps

The Session has been online for over 20 years. When you maintain a site for that long, you don’t want to be relying on third parties—it’s only a matter of time until they’re no longer around.

Some third party APIs are unavoidable. The Session has maps for sessions and other events. When people add a new entry, they provide the address but then I need to get the latitude and longitude. So I have to use a third-party geocoding API.

My code is like a lesson in paranoia: I’ve built in the option to switch between multiple geocoding providers. When one of them inevitably starts enshittifying their service, I can quickly move on to another. It’s like having a “go bag” for geocoding.

Things are better on the client side. I’m using other people’s JavaScript libraries—like the brilliant abcjs—but at least I can self-host them.

I’m using Leaflet for embedding maps. It’s a great little library built on top of OpenStreetMap data.

A little while back I linked to a new project called OpenFreeMap. It’s a mapping provider where you even have the option of hosting the tiles yourself!

For now, I’m not self-hosting my map tiles (yet!), but I did want to switch to OpenFreeMap’s tiles. They’re vector-based rather than bitmap, so they’re lovely and crisp.

But there’s an issue.

I can use OpenFreeMap with Leaflet, but to do that I also have to use the MapLibre GL library. But whereas Leaflet is 148K of JavaScript, MapLibre GL is 800K! Yowzers!

That’s mahoosive by the standards of The Session’s performance budget. I’m not sure the loveliness of the vector maps is worth increasing the JavaScript payload by so much.

But this doesn’t have to be an either/or decision. I can use progressive enhancement to get the best of both worlds.

If you land straight on a map page on The Session for the first time, you’ll get the old-fashioned bitmap map tiles. There’s no MapLibre code.

But if you browse around The Session and then arrive on a map page, you’ll get the lovely vector maps.

Here’s what’s happening…

The maps are embedded using an HTML web component called embed-map. The fallback is a static image between the opening and closing tags. The web component then loads up Leaflet.

Here’s where the enhancement comes in. When the web component is connected (in its connectedCallback method), it uses the Cache API to see if MapLibre has been stored in a cache. If it has, it loads that library:

caches.match('/path/to/maplibre-gl.js')
.then( responseFromCache => {
    if (responseFromCache) {
        // load maplibre-gl.js
    }
});

Then when it comes to drawing the map, I can check for the existence of the maplibreGL object. If it exists, I can use OpenFreeMap tiles. Otherwise I use the old Leaflet tiles.
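In outline, that check amounts to something like this (the function name and return values here are my own illustration, not the actual code from The Session):

```javascript
// Sketch: decide which tiles to draw based on whether MapLibre GL
// made it out of the cache and onto the page.
function pickTileSource(globalScope) {
  // MapLibre GL exposes a maplibregl global when it has loaded
  if (typeof globalScope.maplibregl !== 'undefined') {
    return 'openfreemap-vector'; // lovely crisp vector tiles
  }
  return 'leaflet-bitmap'; // fall back to the old bitmap tiles
}
```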

But how does the MapLibre library end up in a cache? That’s thanks to the service worker script.

During the service worker’s install event, I give it a list of static files to cache: CSS, JavaScript, and so on. That includes third-party libraries like abcjs, Leaflet, and now MapLibre GL.

Crucially this caching happens off the main thread. It happens in the background and it won’t slow down the loading of whatever page is currently being displayed.

That’s it. If the service worker installation works as planned, you’ll get the nice new vector maps. If anything goes wrong, you’ll get the older version.

By the way, it’s always a good idea to use a service worker and the Cache API to store your JavaScript files. As you know, JavaScript is unduly expensive for performance; not only does the JavaScript file have to be downloaded, it then has to be parsed and compiled. But JavaScript stored in a cache during a service worker’s install event is already parsed and compiled.

Syndicating to Bluesky

Last year I described how I syndicate my posts to different social networks.

Back then my approach to syndicating to Bluesky was to piggy-back off my micro.blog account (which is really just the RSS feed of my notes):

Micro.blog can also cross-post to other services. One of those services is Bluesky. I gave permission to micro.blog to syndicate to Bluesky so now my notes show up there too.

It worked well enough, but it wasn’t real-time and I didn’t have much control over the formatting. As Bluesky is having quite a moment right now, I decided to upgrade my syndication strategy and use the Bluesky API.

Here’s how it works…

First you need to generate an app password. You’ll need this so that you can generate a token. You need the token so you can generate …just kidding; the chain of generated gobbledegook stops there.

Here’s the PHP I’m using to generate a token. You’ll need your Bluesky handle and the app password you generated.

Now that I’ve got a token, I can send a post. Here’s the PHP I’m using.
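For reference, here’s the general shape of those two API calls, sketched in JavaScript rather than PHP (the endpoint names are Bluesky’s public ones; the function names and example values are mine):

```javascript
// Sketch of the two Bluesky API payloads. createSession exchanges
// your handle and app password for a token; createRecord then
// publishes the post using that token.
function buildSessionRequest(handle, appPassword) {
  return {
    url: 'https://bsky.social/xrpc/com.atproto.server.createSession',
    body: { identifier: handle, password: appPassword }
  };
}

function buildPostRequest(did, text) {
  return {
    url: 'https://bsky.social/xrpc/com.atproto.repo.createRecord',
    body: {
      repo: did, // your account’s DID, returned by createSession
      collection: 'app.bsky.feed.post',
      record: {
        $type: 'app.bsky.feed.post',
        text: text,
        createdAt: new Date().toISOString()
      }
    }
  };
}
```

The session response includes an accessJwt, which then goes in the Authorization header of the createRecord request as a bearer token.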

There’s some extra code in there to spot URLs and turn them into links. Bluesky has a very weird way of doing this.

It didn’t take too long to get posting working. After some more tinkering I got images working too. Now I can post straight from my website to my Bluesky profile. The Bluesky API returns an ID for the post that I’ve created there so I can link to it from the canonical post here on my website.

I’ve updated my posting interface to add a toggle for Bluesky right alongside the toggle for Mastodon. There used to be a toggle for Twitter. That’s long gone.

Now when I post a note to my website, I can choose if I want to send a copy to Mastodon or Bluesky or both.

One day Bluesky will go away. It won’t matter much to me. My website will still be here.