Journal tags: browsers

Simplify

I was messing about with some images on a website recently and while I was happy enough with the arrangement on large screens, I thought it would be better to have the images in a kind of carousel on smaller screens—a swipable gallery.

My old brain immediately thought this would be fairly complicated to do, but actually it’s ludicrously straightforward. Just stick this bit of CSS on the containing element inside a media query (or better yet, a container query):

display: flex;
overflow-x: auto;

That’s it.

Oh, and you can swap out overflow-x for overflow-inline if, like me, you’re a fan of logical properties. But support for that only just landed in Safari so I’d probably wait a little while before removing the old syntax.
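
Putting that together, the whole thing might look something like this (the class names and the size threshold are placeholders, not anything prescriptive):

.gallery-wrapper {
  container-type: inline-size;
}

@container (inline-size < 40em) {
  .gallery {
    display: flex;
    overflow-x: auto;      /* the widely-supported syntax */
    overflow-inline: auto; /* the logical equivalent, for browsers that understand it */
  }
}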

Here’s an example using pictures of some of the lovely people who will be speaking at Web Day Out:

Jemima Abu, Rachel Andrew, Lola Odelola, Richard Rutter, Harry Roberts

While you’re at it, add this:

overscroll-behavior-inline: contain;

That prevents the user from accidentally triggering a backwards/forwards navigation when they’re swiping.

You could add some more little niceties like this, but you don’t have to:

scroll-snap-type: inline mandatory;
scroll-behavior: smooth;

And maybe this on the individual items:

scroll-snap-align: center;

You could progressively enhance even more with the new pseudo-elements like ::scroll-button() and ::scroll-marker for Chromium browsers.
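
I haven’t shipped those pseudo-elements myself yet, but the rough shape of it in Chromium looks something like this (the selectors are placeholders, and the syntax may well shift as these features mature):

.gallery {
  /* generate a group of markers, one per item, after the carousel */
  scroll-marker-group: after;
}
.gallery::scroll-button(inline-start) {
  content: "⬅" / "Previous";
}
.gallery::scroll-button(inline-end) {
  content: "➡" / "Next";
}
.gallery > *::scroll-marker {
  /* each marker needs some generated content in order to exist */
  content: "";
}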

Apart from that last bit, none of this is particularly new or groundbreaking. But it was a pleasant reminder for me that interactions that used to be complicated to implement are now very straightforward indeed.

Here’s another example that Ana Tudor brought up yesterday:

You have a section with a p on the left & an img on the right. How do you make the img height always be determined by the p with the tiniest bit of CSS? 😼

No changing the HTML structure in any way, no pseudos, no background declarations, no JS. Just a tiny bit of #CSS.

Old me would’ve said it can’t be done. But with a little bit of investigating, I found a nice straightforward solution:

section > img {
  contain: size;
  place-self: stretch;
  object-fit: cover;
}

That’ll work whether the section has its display set to flex or grid.
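
Just to spell out the kind of markup we’re talking about (the image source is obviously a placeholder):

<section>
  <p>A paragraph of any length. The image alongside it should always match its height.</p>
  <img src="placeholder.jpg" alt="">
</section>

section {
  display: flex; /* or grid */
}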

There’s something very, very satisfying in finding a simple solution to something you thought would be complicated.

Honestly, I feel like web developers are constantly being gaslit into thinking that complex over-engineered solutions are the only option. When the discourse is being dominated by people invested in frameworks and libraries, all our default thinking will involve frameworks and libraries. That’s not good for users, and I don’t think it’s good for us either.

Of course, the trick is knowing that the simpler solution exists. The information probably isn’t going to fall in your lap—especially when the discourse is dominated by overly-complex JavaScript.

So get yourself a ticket for Web Day Out. It’s on Thursday, March 12th, 2026 right here in Brighton.

I guarantee you’ll hear about some magnificent techniques that will allow you to rip out plenty of complex code in favour of letting the browser do the work.

Harry Roberts is speaking at Web Day Out

I was going to save this announcement for later, but I’m just too excited: Harry Roberts will be speaking at Web Day Out!

Goddamn, that’s one fine line-up, and it isn’t even complete yet! Get your ticket if you haven’t already.

There’s a bit of a story behind the talk that Harry is going to give…

Earlier this year, Harry posted a most excellent screed in which he said:

The web as a platform is a safe bet. It’s un-versioned by design. That’s the commitment the web makes to you—take advantage of it.

  • Opt into web platform features incrementally;
  • Embrace progressive enhancement to build fast, reliable applications that adapt to your customers’ context;
  • Write code that leans into the browser, not away from it.

Yes! Exactly!

Thing is, Harry posted this on LinkedIn. My indieweb sensibilities were affronted. So I harangued him:

You should blog this, Harry

My pestering paid off with an excellent blog post on Harry’s own site called Build for the Web, Build on the Web, Build with the Web:

The beauty of opting into web platform features as they become available is that your site becomes contextual. The same codebase adapts into its environment, playing to its strengths, rather than trying to build and ship your own environment from the ground up. Meet your users where they are.

That’s a pretty neat summation of the agenda for Web Day Out. So I thought, “Hmm …if I was able to pester Harry to turn a LinkedIn post into a really good blog post, I wonder if I could pester him to turn that blog post into a talk?”

I threw down the gauntlet. Harry accepted the challenge.

I’m sure you’re already familiar with Harry’s excellent work, but if you’re not, he’s basically Mr. Web Performance. That’s why I’m so excited to have him speak at Web Day Out—I want to hear the business case for leaning into what web browsers can do today, and he is most certainly the best person to bring receipts.

You won’t want to miss this, so be sure to get your ticket now; it’s only £225+VAT.

If you’re not ready to commit just yet, but you want to hear about more speaker announcements like this, you can sign up to the mailing list.

Speaking at Web Day Out

Half of the line-up of speakers for Web Day Out is already on the site. One more is already confirmed.

I’m ridiculously excited about the way the line-up is taking shape, and judging by the zippiness of ticket sales, so are lots of my peers. Seriously, don’t wait to get your ticket or you might end up missing out completely.

I’ve already got a shortlist of other people I could imagine on the line-up, but I’m open to more suggestions. If you’d like to speak at Web Day Out—or you know someone you think would be great—send an email to jeremy@clearleft.com

I won’t be checking my work email while I’m away on holiday next week but it would be lovely to come back to an inbox of exciting suggestions.

A couple of pointers…

I’d rather not have too many people like me on the line-up. White dudes are already over-represented in this industry, especially at conferences.

If you’ve never given a talk before, don’t worry. I’d love to help you put your talk together and coach you in presenting it. I have some experience in this area.

No product pitches. That includes JavaScript frameworks and CSS libraries.

If I get even a whiff of “AI”, your proposal doesn’t stand a chance. There are many, many, many other events that are only too happy to have wall-to-wall talks about …that sort of thing.

If you end up speaking at Web Day Out you will, of course, be paid. We will, of course, cover travel and accommodation too. We can’t afford the travel costs of bringing anyone in from outside Europe though (and we’d like to keep the carbon footprint of the event as small as possible).

Web Day Out has an opinionated agenda all about showing what’s possible in web browsers today. Some potential topics include:

The emphasis should be on using stuff in production rather than theoretical demos.

If you’ve got a case study about using the web platform—perhaps migrating away from a framework-driven approach—that would fit the bill perfectly.

How’s all that sounding? Know someone who could deliver the goods? Let me know!

Announcing Web Day Out

I’m going to cut right to the chase: Clearleft is putting on a brand new conference in 2026. It’s called Web Day Out. It’ll be on Thursday, March 12th right here in Brighton. Tickets are just £225+VAT. You should be there!

If you’ve ever been to Responsive Day Out or Patterns Day, the format will be familiar to you. There are going to be eight 30-minute talks. Bam! Bam! Bam!

Like those other one-day conferences, this one has a laser-sharp focus.

Web Day Out is all about what you can do in web browsers today. You can expect talks that showcase hands-on practical uses for the latest advances in HTML, CSS, and JavaScript APIs. There will be no talks about libraries, frameworks or build tools, and I can guarantee there will be absolutely no so-called “AI”.

As you might have gathered, this is an opinionated conference.

If you care about performance, accessibility, and progressive enhancement, Web Day Out is the event for you.

Or if you’ve been living in React-land but you’re starting to feel that maybe you’re missing out on what’s been shipping in web browsers, Web Day Out is the event for you too. And I’m not talking about cute demos here. This is very much about shipping to production.

I’ve got half of the line-up assembled already:

Jemima’s talk gives you a flavour of what to expect at Web Day Out:

In this talk, we’ll take a look at how to use HTML and CSS to build simpler alternatives to popular JavaScript components such as accordions, modals, scroll transitions, carousels, etc. We’ll also take a look at the performance and accessibility benefits and real-life applications and use-cases of these components.

Web Day Out will be in The Studio Theatre of the Brighton Dome, which is a fantastic intimate venue. That means that places are limited, so get your ticket now!

Progressive web apps

There was a time when you needed to make a native app in order to take advantage of specific technologies. That time has passed.

Now you can do all of these things on the web:

  • push notifications,
  • offline storage,
  • camera access,
  • and more.

Take a look at the home screen on your phone. Looking at the apps you’ve downloaded from an app store, ask yourself how many of them could’ve been web apps.

Social media apps, airline apps, shopping apps …none of them are using technologies that aren’t widely available on the web.

“But”, you might be thinking, “it feels different having a nice icon on my homescreen that launches a standalone app compared to navigating to a bookmark in my web browser.”

I agree! And you can do that with a web app. All it takes is the addition of one manifest file that lists which icons and colours to use.

If that file exists for a website, then once the user adds the website to their homescreen it will behave just like a native app.
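
If you haven’t made one before, it’s genuinely just two pieces: a link in the head of your pages, and a small chunk of JSON (all the values below are illustrative):

<link rel="manifest" href="/manifest.json">

And the manifest itself:

{
  "name": "My Website",
  "short_name": "MySite",
  "start_url": "/",
  "display": "standalone",
  "background_color": "#ffffff",
  "theme_color": "#226daa",
  "icons": [
    { "src": "/icon-192.png", "sizes": "192x192", "type": "image/png" },
    { "src": "/icon-512.png", "sizes": "512x512", "type": "image/png" }
  ]
}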

Try it for yourself. Go to instagram.com in your mobile browser and add it to your homescreen (on the iPhone, you get to the “add to home screen” option from the sharing icon—scroll down the list of options to find it).

See how it’s now an icon on your home screen just like all your other apps? Tap that icon to see how it launches just like a native app with no browser chrome around it.

This doesn’t just work on mobile. Desktop browsers like Chrome, Edge, and Safari also allow you to install web apps straight from the browser and into your dock.

About half of the icons in my dock are actually web apps and I honestly can’t tell which is which. Mastodon, Instagram, Google Calendar, Google Docs …I’m sure most of those services are available as downloadable desktop apps, but why would I bother doing that when I get exactly the same experience by adding the sites to my dock?

From a business perspective, it makes so much sense to build a web app (or simply turn your existing website into a web app with the addition of a manifest file). No need for separate iOS or Android developer teams. No need to play the waiting game with updates to your app in the app store—on the web, updates are instant.

You can even use an impressive-sounding marketing term for this approach: progressive web apps:

A Progressive Web App (PWA) is a web app that uses progressive enhancement to provide users with a more reliable experience, uses new capabilities to provide a more integrated experience, and can be installed. And, because it’s a web app, it can reach anyone, anywhere, on any device, all with a single codebase. Once installed, a PWA looks like any other app, specifically:

  • It has an icon on the home screen, app launcher, launchpad, or start menu.
  • It appears when you search for apps on the device.
  • It opens in a standalone window, wholly separated from a browser’s user interface.
  • It has access to higher levels of integration with the OS, for example, URL handling or title bar customization.
  • It works offline.

But there’s still one thing that native apps do better than the web. If you want to be able to monitor and track users to an invasive degree, the web can’t compete with the capabilities of native apps. That’s why you’ll see so many websites on your mobile device that implore you to install their app from the app store.

If that’s not a priority for you, then you can differentiate yourself from your competitors by offering your users a progressive web app. Instead of having links to Apple and Google’s app stores, you can link to a page on your own site with installation instructions.

I can guarantee you that users won’t be able to tell the difference between a native app they installed from an app store and a web app they’ve added to their home screen.

Command and control

I’ve been banging on for a while now about how much I’d like a declarative option for the Web Share API. I was thinking that the type attribute on the button element would be a good candidate for this (there’s prior art in the way we extended the type attribute on the input element for HTML5).

I wrote about the reason for a share button type as well as creating a polyfill. I also wrote about how this idea would work for other button types: fullscreen, print, copy to clipboard, that sort of thing.

Since then, I’ve been very interested in the idea of “invokers” being pursued by the Open UI group. Rather than extending the type attribute, they’ve been looking at adding a new attribute. Initially it was called invoketarget (so something like button invoketarget="share").

Things have been rolling along and invoketarget has now become the command attribute (there’s also a new commandfor attribute that you can point to an element with an ID). Here’s a list of potential values for the command attribute on a button element.

Right now they’re focusing on providing declarative options for launching dialogs and other popovers. That’s already shipping.
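
For example, a button can open and close a dialog with no JavaScript at all (the ID here is just for illustration):

<button commandfor="my-dialog" command="show-modal">Open dialog</button>

<dialog id="my-dialog">
  <p>Hello!</p>
  <button commandfor="my-dialog" command="close">Close</button>
</dialog>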

The next step is to use command and commandfor for controlling audio and video, as well as some form controls. I very much approve! I love the idea of being able to build and style a fully-featured media player without any JavaScript.

I’m hoping that after that we’ll see the command attribute get expanded to cover JavaScript APIs that require a user interaction. These seem like the ideal candidates:

There’s also scope for declarative options for navigating the browser’s history stack:

  • button command="back"
  • button command="forward"
  • button command="refresh"

Whatever happens next, I’m very glad to see that so much thinking is being applied to declarative solutions for common interface patterns.

Style legend

There’s a new proposal for giving developers more control over styling form controls. I like it.

It’s clearly based on the fantastic work being done by the Open UI group on the select element. The proposal suggests that authors can opt-in to the new styling possibilities by declaring:

appearance: base;

So basically the developer is saying “I know what I’m doing—I’m taking the controls.” But browsers can continue to ship their default form styles. No existing content will break.

The idea is that once the developer has opted in, they can then style a number of pseudo-elements.

This proposal would apply to pretty much all the form controls you can think of: all the input types, along with select, progress, meter, buttons and more.

But there’s one element more that I wish were on the list:

legend

I know, technically it’s not a form control but legend and fieldset are only ever used within forms.

The legend element is notoriously annoying to style. So a lot of people just don’t bother using it, which is a real shame. It’s like we’re punishing people for doing the right thing.

Wouldn’t it be great if you, as a developer, had the option of saying “I know what I’m doing—I’m taking the controls”:

legend {
  appearance: base;
}

Imagine if that nuked the browser’s weird default styles, effectively turning the element into a span or div as far as styling is concerned. Then you could style it however you wanted. But crucially, if browsers shipped this, no existing content would break.

The shitty styling situation for legend (and its parent fieldset) is one of those long-standing annoyances that seems to have fallen down the back of the sofa of browser vendors. No one’s going to spend time working on it when there are more important newer features to ship. That’s why I’d love to see it sneak in to this new proposal for styling form controls.

I was in Amsterdam last week. Just like last year I was there to help out Vasilis’s students with a form-based assignment:

They’re given a PDF inheritance-tax form and told to convert it for the web.

Yes, all the excitement of taxes combined with the thrilling world of web forms.

(Side note: this time they were told to style it using the design system from the Dutch railway because the tax office was getting worried that they were making phishing sites.)

I saw a lot of the same challenges again. I saw how students wished they could specify a past date or a future date in a date picker without using JavaScript. And I saw them lamenting the time they spent getting legend styling to work across all browsers.

Right now, Mason Freed has an open issue on the new proposal with his suggestion to add some more elements to consider. Both legend and fieldset are included. That gets a thumbs-up from me.

The web on mobile

Here’s a post outlining all the great things you can do in mobile web browsers today: Your App Should Have Been A Website (And Probably Your Game Too):

Today’s browsers are powerhouses. Notifications? Check. Offline mode? Check. Secure payments? Yep, they’ve got that too. And with technologies like WebAssembly and WebGPU, web games are catching up to native-level performance. In some cases, they’re already there.

This is all true. But this post from John Gruber is equally true: One Bit of Anecdata That the Web Is Languishing Vis-à-Vis Native Mobile Apps:

I won’t hold up this one experience as a sign that the web is dying, but it sure seems to be languishing, especially for mobile devices.

As John points out, the problems aren’t technical:

There’s absolutely no reason the mobile web experience shouldn’t be fast, reliable, well-designed, and keep you logged in. If one of the two should suck, it should be the app that sucks and the website that works well. You shouldn’t be expected to carry around a bundle of software from your utility company in your pocket. But it’s the other way around.

He’s right. It makes no sense, but this is the reality.

Ten or fifteen years ago, the gap between the web and native apps on mobile was entirely technical. There were certain things that you just couldn’t do in web browsers. That’s no longer the case now. The web caught up quite a while back.

But the experience of using websites on a mobile device is awful. Never mind the terrible performance penalties incurred by unnecessary frameworks and libraries like React and its ilk, there’s the constant game of whack-a-mole with banners and overlays. What’s just about bearable in a large desktop viewport becomes intolerable on a small screen.

This is not a technical problem. This doesn’t get solved by web standards. This is a cultural problem.

First of all, there’s the business culture. If your business model depends on tracking people or pushing newsletter sign-ups, then it’s inevitable that your website will be shite on mobile.

Mind you, if your business model depends on tracking people, you’re more likely to try to push people to download your native app. Like Cory Doctorow says:

50% of web users are running ad-blockers. 0% of app users are running ad-blockers, because adding a blocker to an app requires that you first remove its encryption, and that’s a felony (Jay Freeman calls this ‘felony contempt of business-model’).

Matt May brings up the same point in his guide, How to grey-rock Meta:

Remove Meta apps from your devices and use only the mobile web versions. Mobile apps have greater access to your personal data, provided the app requests those privileges, and Facebook and Instagram in particular (more so than WhatsApp, another Meta property) request the vast majority of those privileges. This includes precise GPS data on where you are, whether or not you are using the app.

Ironically, it’s the strength of the web—and web browsers—that has led to such shitty mobile web experiences. The pretty decent security model on the web means that sites have to pester you.

Part of the reason why you don’t see the same egregious over-use of pop-ups and overlays in native apps is that they aren’t needed. If you’ve installed the app, you’re already being tracked.

But when I describe the dreadful UX of most websites on mobile as a cultural problem, I don’t just mean business culture.

Us, the people who make websites, designers and developers, we’re responsible for this too.

For all our talk of mobile-first design for the last fifteen years, we never really meant it, did we? Sure, we use media queries and other responsive techniques, but all we’ve really done is make sure that a terrible experience fits on the screen.

As developers, I’m sure we can tell ourselves all sorts of fairy tales about why it’s perfectly justified to make users on mobile networks download React, Tailwind, and megabytes more of third-party code.

As designers, I’m sure we can tell ourselves all sorts of fairy tales about why intrusive pop-ups and overlays are the responsibility of some other department (as though users make any sort of distinction).

Worst of all, we’ve spent the last fifteen years teaching users that if they want a good experience on their mobile device, they should look in an app store, not on the web.

Ask anyone about their experience of using websites on their mobile device. They’ll tell you plenty of stories of how badly it sucks.

It doesn’t matter that the web is the perfect medium for just-in-time delivery of information. It doesn’t matter that web browsers can now do just about everything that native apps can do.

In many ways, I wish this were a technical problem. At least then we could lobby for some technical advancement that would fix this situation.

But this is not a technical problem. This is a people problem. Specifically, the people who make websites.

We fucked up. Badly. And I don’t see any signs that things are going to change anytime soon.

But hey, websites on desktop are just great!

Making the new Salter Cane website

With the release of a new Salter Cane album I figured it was high time to update the design of the band’s website.

Here’s the old version for reference. As you can see, there’s a connection there in some of the design language. Even so, I decided to start completely from scratch.

I opened up a text editor and started writing HTML by hand. Same for the CSS. No templates. No build tools. No pipeline. Nothing. It was a blast!

And lest you think that sounds like a wasteful way of working, I pretty much had the website done in half a day.

Partly that’s because you can do so much with so little in CSS these days. Custom properties for colours, spacing, and fluid typography (thanks to Utopia). Logical properties. View transitions. None of this takes much time at all.

Because I was using custom properties, it was a breeze to add a dark mode with prefers-color-scheme. I think I might like the dark version more than the default.
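
The pattern is straightforward enough—define the custom properties, redefine them inside a prefers-color-scheme media query, and use them everywhere (these colour values are stand-ins, not the actual Salter Cane palette):

:root {
  --background-colour: #f4f0e8;
  --text-colour: #1a1a1a;
}
@media (prefers-color-scheme: dark) {
  :root {
    --background-colour: #1a1a1a;
    --text-colour: #f4f0e8;
  }
}
body {
  background-color: var(--background-colour);
  color: var(--text-colour);
}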

The final stylesheet is pretty short. I didn’t bother with any resets. Browsers are pretty consistent with their default styles nowadays. As long as you’ve got some sensible settings on your body element, the cascade will take care of a lot.

There’s one little CSS trick I think is pretty clever…

The background image is this image. As you can see, it’s a rectangle that’s wider than it is tall. But the web pages are rectangles that are taller than they are wide.

So how should I position the background image? Centred? Anchored to the top? Anchored to the bottom?

If you open up the website in Chrome (or Safari Technology Preview), you’ll see that the background image is anchored to the top. But if you scroll down you’ll see that the background image is now anchored to the bottom. The background position has changed somehow.

This isn’t just on the home page. On any page, no matter how tall it is, the background image is anchored to the top when the top of the document is in the viewport, and it’s anchored to the bottom when you reach the bottom of the document.

In the past, this kind of thing might’ve been possible with some clever JavaScript that measured the height of the document and updated the background position every time a scroll event is triggered.

But I didn’t need any JavaScript. This is a scroll-driven animation made with just a few lines of CSS.

@keyframes parallax {
    from {
        background-position: top center;
    }
    to {
        background-position: bottom center;
    }
}
@media (prefers-reduced-motion: no-preference) {
  html {
    animation: parallax auto ease;
    animation-timeline: scroll();
  }
}

This works as a nice bit of progressive enhancement: by default the background image stays anchored to the top of the viewport, which is fine.

Once the site was ready, I spent a bit more time sweating some details, like the responsive images on the home page.

But the biggest performance challenge wasn’t something I had direct control over. There’s a Spotify embed on the home page. Ain’t no party like a third party.

I could put loading="lazy" on the iframe, but in this case it’s pretty close to the top of the document so it’s still going to start loading at the same time as some of my first-party assets.

I decided to try a little JavaScript library called “lazysizes”. Normally this would ring alarm bells for me: solving a problem with third-party code by adding …more third-party code. But in this case, it really did the trick. The library is loading asynchronously (so it doesn’t interfere with the more important assets) and only then does it start populating the iframe.
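
The set-up is minimal: load the library asynchronously and swap the iframe’s src attribute for a data-src (the embed URL and the file path here are placeholders):

<script src="/js/lazysizes.min.js" async></script>

<iframe class="lazyload" data-src="https://open.spotify.com/embed/album/…" title="Spotify embed"></iframe>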

This made a huge difference. The core web vitals went from being abysmal to being perfect.

I’m pretty pleased with how the new website turned out.

Making the website for Research By The Sea

UX London isn’t the only event from Clearleft coming your way in 2025. There’s a brand new spin-off event dedicated to user research happening in February. It’s called Research By The Sea.

I’m not curating this one, though I will be hosting it. The curation is being carried out most excellently by Benjamin, who has written more about how he’s doing it:

We’ve invited some of the best thinkers and doers from the research space to explore how researchers might respond to today’s most gnarly and pressing problems. They’ll challenge current perspectives, tools, practices and thinking styles, and provide practical steps for getting started today to shape a better tomorrow.

If that sounds like your cup of tea, you should put February 27th 2025 in your calendar and grab yourself a ticket.

Although I’m not involved in curating the line-up for the event, I offered Benjamin my swor… my web dev skillz. I made the website for Research By The Sea and I really enjoyed doing it!

These one-day events are a great chance to have a bit of fun with the website. I wrote about how enjoyable it was making the website for this year’s Patterns Day:

I felt like I was truly designing in the browser. Adjusting spacing, playing around with layout, and all that squishy stuff. Some of the best results came from happy accidents—the way that certain elements behaved at certain screen sizes would lead me into little experiments that yielded interesting results.

I took the same approach with Research By The Sea. I had a design language to work with, based on UX London, but with more of a playful, brighter feel. The idea was that the website (and the event) should feel connected to UX London, while also being its own thing.

I kept the typography of the UX London site more or less intact. The page structure is also very similar. That was my foundation. From there I was free to explore some other directions.

I took the opportunity to explore some new features of CSS. But before I talk about the newer stuff, I want to mention the bits of CSS that I don’t consider new. These are the things that are just the way things are done ‘round here.

Custom properties. They’ve been around for years now, and they’re such a life-saver, especially on a project like this where I’m messing around with type, colour, and spacing. Even on a small site like this, it’s still worth having a section at the start where you define your custom properties.

Logical properties. Again, they’ve been around for years. At this point I’ve trained my brain to use them by default. Now when I see a left, right, width or height in a style sheet, it looks like a bug to me.

Fluid type. It’s kind of a natural extension of responsive design to me. If a website’s typography doesn’t adjust to my viewport, it feels slightly broken. On this project I used Utopia because I wanted different type scales as the viewport increased. On other projects I’ve just used one clamp declaration on the body element, which can also get the job done.
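
That single-declaration approach looks something like this (the numbers depend entirely on the design):

body {
  font-size: clamp(1rem, 0.875rem + 0.5vw, 1.25rem);
}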

Okay, so those are the things that feel standard to me. So what could I play around with that was new?

View transitions. So easy! Just point to an element on two different pages and say “Hey, do a magic move!” You can see this in action with the logo as you move from the homepage to, say, the venue page. I’ve also added view transitions to the speaker headshots on the homepage so that when you click through to their full page, you get a nice swoosh.
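
For cross-document view transitions it really is just a few declarations: opt in on both pages, then give the element the same view-transition-name on each (the name and selector here are illustrative):

@view-transition {
  navigation: auto;
}
.logo {
  view-transition-name: logo;
}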

Unless, like me, you’re using Firefox. In that case, you won’t see any view transitions. That’s okay. They are very much an enhancement. Speaking of which…

Scroll-driven animations. You’ll only get these in Chromium browsers right now, but again, they’re an enhancement. I’ve got multiple background images—a bunch of cute SVG shapes. I’m using scroll-driven animations to change the background positions and sizes as you scroll. It’s a bit silly, but hopefully kind of cute.

You might be wondering how I calculated the movements of each background image. Good question. I basically just messed around with the values. I had fun! But imagine what an actually-skilled interaction designer could do.

That brings up an interesting observation about both view transitions and scroll-driven animations: Figma will not help you here. You need to be in a web browser with dev tools popped open. You’ve got to roll up your sleeves and get your hands into the machine. I know that sounds intimidating, but it’s also surprisingly enjoyable and empowering.

Oh, and I made sure to wrap both the view transitions and the scroll-driven animations in a prefers-reduced-motion: no-preference @media query.

I’m pleased with how the website turned out. It feels fun. More importantly, it feels fast. There is zero JavaScript. That’s the main reason why it’s very, very performant (and accessible).

Smooth transitions across pages; smooth animations as you scroll: it’s great what you can do with just HTML and CSS.

content-visibility in Safari

Earlier this year I wrote about some performance improvements to The Session using the content-visibility property in CSS.

If you say content-visibility: auto you’re telling the browser not to bother calculating the layout and paint for an element until it needs to. But you need to combine it with the contain-intrinsic-block-size property so that the browser knows how much space to leave for the element.
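
In practice that’s just a couple of declarations (the selector and the placeholder size here are illustrative):

.listing {
  content-visibility: auto;
  /* reserve roughly this much space until the real layout is needed */
  contain-intrinsic-block-size: auto 10rem;
}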

I mentioned the browser support:

Right now content-visibility is only supported in Chrome and Edge. But that’s okay. This is a progressive enhancement. Adding this CSS has no detrimental effect on the browsers that don’t understand it (and when they do ship support for it, it’ll just start working).

Well, that’s happened! Safari 18 supports content-visibility. I didn’t have to do a thing and it just started working.

But …I think I’ve discovered a little bug in Safari’s implementation.

(I say I think it’s a bug with the browser because, like Jim, I’ve made the mistake in the past of thinking I had discovered a browser bug when in fact it was something caused by a browser extension. And when I say “in the past”, I mean yesterday.)

So here’s the issue: if you apply content-visibility: auto to an element that contains an SVG, and that SVG contains a text element, then Safari never paints that text to the screen.

To see an example, take a look at the fourth setting of Cooley’s reel on The Session archive. There’s a text element with the word “slide” (actually the text is inside a tspan element inside a text element). On Safari, that text never shows up.

I’m using a link to the archive of The Session I created recently rather than the live site because on the live site I’ve removed the content-visibility declaration for Safari until this bug gets resolved.

I’ve also created a reduced test case on Codepen. The only HTML is the element containing the SVGs. The only CSS—apart from the content-visibility stuff—is just a little declaration to push the content below the viewport so you have to scroll it into view (which is when the bug happens).

I’ve filed a bug report. I know it’s a fairly niche situation, but there are some other issues with Safari’s implementation of content-visibility so it’s possible that they’re all related.

Mismatch

This seems to be the attitude of many of my fellow nerds—designers and developers—when presented with tools based on large language models that produce dubious outputs based on the unethical harvesting of other people’s work and requiring staggering amounts of energy to run:

This is the future! I need to start using these tools now, even if they’re flawed, because otherwise I’ll be left behind. They’ll only get better. It’s inevitable.

Whereas this seems to be the attitude of those same designers and developers when faced with stable browser features that can be safely used today without frameworks or libraries:

I’m sceptical.

Ad tech

Back when South by Southwest wasn’t terrible, there used to be an annual panel called Browser Wars populated with representatives from the main browser vendors (except for Apple, obviously, who would never venture onto a stage outside of their own events).

I remember getting into a heated debate with the panelists during the 2010 edition. I was mad about web fonts.

Just to set the scene, web fonts didn’t exist back in 2010. That’s what I was mad about.

There was no technical reason why we couldn’t have web fonts. The reason why we didn’t get web fonts for years and years was because browser makers were concerned about piracy on behalf of type foundries.

That’s nice and all, but as I said during that panel, I don’t recall any such concerns being raised for photographers when the img element was shipped. Neither was the original text-only web held back by writers’ legitimate fear of plagiarism.

My point was not that these concerns weren’t important, but that it wasn’t the job of web browsers to shore up existing business models. To use standards-speak, these concerns are orthogonal.

I’m reminded of this when I see browser makers shoring up the business of behavioural advertising.

I subscribe to the RSS feed of updates to Chrome. Not all of it is necessarily interesting to me, but all of it is supposedly aimed at developers. And yet, in amongst the posts about APIs and features, there’ll be something about the Orwellianly-titled “privacy sandbox”.

This is only of interest to one specific industry: behavioural online advertising driven by surveillance and tracking. I don’t see any similar efforts being made for teachers, cooks, architects, doctors or lawyers.

It’s a ludicrous situation that I put down to the fact that Google, the company that makes Chrome, is also the company that makes its money from targeted advertising.

But then Mozilla started with the same shit.

Now, it’s one thing to roll out a new so-called “feature” to benefit behavioural advertising. It’s quite another to make it enabled by default. That’s a piece of deceptive design that has no place in Firefox. Defaults matter. Browser makers know this. It’s no accident that this “feature” was enabled by default.

This disgusts me.

It disgusts me all the more that it’s all for nothing. Notice that I’ve repeatedly referred to behavioural advertising. That’s the kind that relies on tracking and surveillance to work.

There is another kind of advertising. Contextual advertising is when you show an advertisement related to the content of the page the user is currently on. The advertiser doesn’t need to know anything about the user, just the topic of the page.

Conventional wisdom has it that behavioural advertising is much more effective than contextual advertising. After all, why would there be such a huge industry built on tracking and surveillance if it didn’t work? See, for example, this footnote by John Gruber:

So if contextual ads generate, say, one-tenth the revenue of targeted ads, Meta could show 10 times as many ads to users who opt out of targeting. I don’t think 10× is an outlandish multiplier there — given how remarkably profitable Meta’s advertising business is, it might even need to be higher than that.

Seems obvious, right?

But the idea that behavioural advertising works better than contextual advertising has no basis in reality.

If you think you know otherwise, Jon Bradshaw would like to hear from you:

Bradshaw challenges industry to provide proof that data-driven targeting actually makes advertising more effective – or in fact makes it worse. He’s spoiling for a debate – and has three deep, recent studies that show: broad reach beats targeting for incremental growth; that the cost of targeting outweighs the return; and that second and third party data does not outperform a random sample. First party data does beat the random sample – but contextual ads massively outperform even first party data. And they are much, much cheaper. Now, says Bradshaw, let’s see some counter-evidence from those making a killing.

If targeted advertising is going to get preferential treatment from browser makers, I too would like to see some evidence that it actually works.

Further reading:

Web App install API

My bug report on Apple’s websites-in-the-dock feature on desktop has me thinking about how starkly different it is on mobile.

On iOS if you want to add a website to your home screen, good luck. The option is buried within the “share” menu.

First off, it makes no sense that adding something to your homescreen counts as sharing. Secondly, how is anybody supposed to know that unless they’re explicitly told?

It’s a similar situation on Android. In theory you can prompt the user to install a progressive web app using the botched BeforeInstallPromptEvent. In practice it’s a mess. What it actually does is defer the installation prompt so you can offer it at a more suitable time. But it only works if the browser was going to offer an installation prompt anyway.

When does Chrome on Android decide to offer the installation prompt? It’s a mix of required criteria—a web app manifest, some icons—and an algorithmic spell determined by the user’s engagement.

Other browser makers don’t agree with this arbitrary set of criteria. They quite rightly say that a user should be able to add any website to their home screen if they want to.

What we really need is an installation API: a way to programmatically invoke the add-to-homescreen flow.

Now, I know what you’re going to say. The security and UX implications would be dire. But this should obviously be like geolocation or notifications, only available in secure contexts and gated by user interaction.

Think of it like adding something to the clipboard: it’s something the user can do manually, but the API offers a way to do it programmatically without opening it up to abuse.

(I’d really love it if this API also had a declarative equivalent, much like I want button type="share" for the Web Share API. How about button type="install"?)

People expect this to already exist.

The beforeinstallprompt flow is an absolute mess. Users deserve better.

CSS Day 2024

My stint as one of the hosts of CSS Day went very well indeed. I enjoyed myself and people seemed to like the cut of my jib.

During the event there was a real buzz on Mastodon, which was heartening to see. I was beginning to worry that hashtagging events was going to be collateral damage from Elongate, but there was plenty of conference-induced FOMO to be experienced on the fediverse.

The event itself was, as always, excellent. Both in terms of content and organisation.

Some themes emerged during CSS Day, which I always love to see. These emergent properties are partly down to curation and partly down to serendipity.

The last few years of CSS Day have felt like getting a firehose of astonishing new features being added to the language. There was still plenty of cutting-edge stuff this year—masonry! anchor positioning!—but there was also a feeling of consolidation, asking how to get all this amazing new stuff into our workflows.

Matthias’s opening talk on day one and Stephen’s closing talk on the same day complemented one another perfectly. Both managed to inspire while looking into the nitty-gritty practicalities of the web design process.

It was, astoundingly, Matthias’s first ever conference talk. I have no doubt it won’t be the last—it was great!

I gave Stephen a good-natured roast in my introduction, partly because it was his birthday, partly because we’re old friends, but mostly because it was enjoyable for me to watch him squirm. Of course his talk was, as always, superb. Don’t tell him, but he might be one of my favourite speakers.

The topic of graphic design tools came up more than once. It’s interesting to see how the issues with them have changed. It used to be that design tools—Photoshop, Sketch, Figma—were frustrating because they were writing cheques that CSS couldn’t cash. Now the frustration is the exact opposite. Our graphic design tools aren’t capable of the kind of fluid declarative design we can now accomplish in web browsers.

But the biggest rift remains not with tools or technologies, but with people and mindsets. Our tools can reinforce mindsets but the real divide happens in how different people approach CSS.

Both Josh and Kevin get to the heart of this in their tremendous tutorials, and that was reflected in their talks. They showed the difference between having the bare minimum understanding of CSS in order to get something done as quickly as possible, and truly understanding how CSS works in order to open up a world of possibilities.

For people in the first category, Sarah Dayan was there to sing the praises of utility-first CSS AKA atomic CSS. I commend her bravery!

During the Q&A, I restrained myself from being too Paxmanish. But I did have l’esprit d’escalier afterwards when I realised that the entire talk—and all the answers afterwards—depended on two mutually-incompatible claims:

  1. The great thing about atomic CSS is that it’s a constrained vocabulary so your team has to conform, and
  2. The other great thing about it is that it’s utility-first, not utility-only so you can break out of it and use regular CSS if you want.

Insert .gif of character from The Office looking to camera.

Most of the questions coming in during the Q&A reflected my own take: how about we use utility classes for some things, but not all things. Seems sensible.

Anyway, regardless of what I or anyone else thinks about the substance of what Sarah was saying, there was no denying that it was a great presentation. They were all great presentations. That’s unusual, and I say that as a conference organiser as well as an attendee. Everyone brings their A-game to CSS Day.

Mind you, it is exhausting. I say it every year, but it always feels like one talk too many. Not that any individual talk wasn’t good, but the sheer onslaught of deep dives into the innards of CSS has my brain exploding before the day is done.

A highlight for me was getting to introduce Fantasai’s talk on the design principles of CSS, which was right up my alley. I don’t think most people realise just how much we owe her for her years of work on standards. The web would be in a worse place without the Herculean work she’s done behind the scenes.

Another highlight was getting to see some of the students I met back in March. They were showing some of their excellent work during the breaks. I find what they’re doing just as inspiring as the speakers on stage.

In fact, when I was filling in the post-conference feedback form, there was a question: “Who would you like to see speak at CSS Day next year?” I was racking my brains because everyone I could immediately think of has already spoken at some point. So I wrote, “It would be great to see some of those students speaking about their work.”

I think it would be genuinely fascinating to get their perspective on what we consider modern CSS, which to them is just CSS.

Either way I’ll be back next year for sure.

It’s funny, but usually when a conference is described as “inspiring” it’s because it’s tackling big galaxy-brain questions. But CSS Day is as nitty-gritty as it gets and I found it truly inspiring. Like, I couldn’t wait to open up my laptop and start writing some CSS. That kind of inspiring.

Browser support

There was a discussion at Clearleft recently about browser support. Rich has more details but the gist of it is that, even though we were confident that we had a good approach to browser support, we hadn’t written it down anywhere. Time to fix that.

This is something I had been thinking about recently anyway—see my post about Baseline and progressive enhancement—so it didn’t take too long to put together a document explaining our approach.

You can find it at browsersupport.clearleft.com

We’re not just making it public. We’re releasing it under a Creative Commons attribution license. You can copy this browser-support policy verbatim, you can tweak it, you can change it, you can do what you like. As long as you include a credit to Clearleft, you’re all set.

I think this browser-support policy makes a lot of sense. It certainly beats trying to pin browser support to specific browsers or version numbers:

We don’t base our browser support on specific browser names and numbers. Instead, our support policy is based on the capabilities of those browsers.

The more organisations adopt this approach, the better it is for everyone. Hence the liberal licensing.

So next time your boss or your client is asking what your official browser-support policy is, feel free to use browsersupport.clearleft.com

Speculation rules and fears

After I wrote positively about the speculation rules API I got an email from David Cizek with some legitimate concerns. He said:

I think that this kind of feature is not good, because someone else (web publisher) decides that I (my connection, browser, device) have to do work that very often is not needed. All that blurred by blackbox algorithm in the browser.

That’s fair. My hope is that the user will indeed get more say, whether that’s at the level of the browser or the operating system. I’m thinking of a prefers-reduced-data setting, much like prefers-color-scheme or prefers-reduced-motion.

But this issue isn’t something new with speculation rules. We’ve already got service workers, which allow the site author to unilaterally declare that a bunch of pages should be downloaded.
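
The pattern will be familiar if you’ve written a service worker: precache whichever URLs you like in the install handler (this is a generic sketch, not the actual Resilient Web Design code):

const cacheName = 'pages-v1';
const pagesToCache = [
  '/',
  '/chapter-one/',
  '/chapter-two/'
];

addEventListener('install', event => {
  // the user never asked for these pages, but they get downloaded anyway
  event.waitUntil(
    caches.open(cacheName)
      .then(cache => cache.addAll(pagesToCache))
  );
});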

I’m doing that for Resilient Web Design—when you visit the home page, a service worker downloads the whole site. I can justify that decision to myself because the entire site is still smaller in size than one article from Wired or the New York Times. But still, is it right that I get to make that call?

So I’m very much in favour of browsers acting as true user agents—doing what’s best for the user, even in situations where that conflicts with the wishes of a site owner.

Going back to speculation rules, David asked:

Do we really need this kind of (easily turned to evil) enhancement in the current state of (web) affairs?

That question could be asked of many web technologies.

There’s always going to be a tension with any powerful browser feature. The more power it provides, the more it can be abused. Animations, service workers, speculation rules—these are all things that can be used to improve websites or they can be abused to do things the user never asked for.

Or take the elephant in the room: JavaScript.

Right now, a site owner can link to a JavaScript file that’s tens of megabytes in size, and the browser has no alternative but to download it. I’d love it if users could specify a limit. I’d love it even more if browsers shipped with a default limit, especially if that limit is related to the device and network.

I don’t think speculation rules will be abused nearly as much as client-side JavaScript is already abused.

Speculation rules

There’s a new addition to the latest version of Chrome called speculation rules. This already existed before with a different syntax, but the new version makes more sense to me.

Notice that I called this an addition, not a standard. This is not a web standard, though it may become one in the future. Or it may not. It may wither on the vine and disappear (like most things that come from Google).

The gist of it is that you give the browser one or more URLs that the user is likely to navigate to. The browser can then pre-fetch or even pre-render those links, making that navigation really snappy. It’s a replacement for the abandoned link rel="prerender".

Because this is a unilateral feature, I’m not keen on shipping the code to all browsers. The old version of the API required a script element with a type value of “speculationrules”. That doesn’t do any harm to browsers that don’t support it—it’s a progressive enhancement. But unlike other progressive enhancements, this isn’t something that will just start working in those other browsers one day. I mean, it might. But until this API is an actual web standard, there’s no guarantee.
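
To illustrate, here’s what embedding the rules in the page itself looks like—it works, but that markup gets shipped to every browser whether it understands it or not:

<script type="speculationrules">
{
  "prerender": [{
    "where": {
      "href_matches": "/*"
    },
    "eagerness": "moderate"
  }]
}
</script>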

That’s why I was pleased to see that the new version of the API allows you to use an external JSON file with your list of rules.

I say “rules”, but they’re really more like guidelines. The browser will make its own evaluation based on bandwidth, battery life, and other factors. This feature is more like srcset than source: you give the browser some options, but ultimately you can’t force it to do anything.

I’ve implemented this over on The Session. There’s a JSON file called speculationrules.json with the simplest of suggestions:

{
  "prerender": [{
    "where": {
        "href_matches": "/*"
    },
    "eagerness": "moderate"
  }]
}

The eagerness value of “moderate” says that any link can be pre-rendered if the user hovers over it for 200 milliseconds (the nuclear option would be to use a value of “immediate”).

I still need to point to that JSON file from my HTML. Usually this would be done with something like a link element, but for this particular API, I can send a response header instead:

Speculation-Rules: "/speculationrules.json"

I like that. The response header is being sent to every browser, regardless of whether they support speculation rules or not, but at least it’s just a few bytes. Those other browsers will ignore the header—they won’t download the JSON file.

Here’s the PHP I added to send that header:

header('Speculation-Rules: "/speculationrules.json"');

There’s one extra thing I had to do. The JSON file needs to be served with a mime-type of “application/speculationrules+json”. Here’s how I set that up in the .conf file for The Session on Apache:

<IfModule mod_headers.c>
  <FilesMatch "speculationrules.json">
    Header set Content-type application/speculationrules+json
  </FilesMatch>
</IfModule>

A bit of a faff, that.

You can see it in action on The Session. Open up Chrome or Edge (same same but different), fire up the dev tools and keep the network tab open while you navigate around the site. Notice how hovering over a link will trigger a new network request. Clicking on that link will get you that page lickety-split.

Mind you, in the case of The Session, the navigations were already really fast—performance is a feature—so it’s hard to gauge how much of a practical difference it makes in this case, but it still seems like a no-brainer to me: taking a few minutes to add this to your site is worth doing.

Oh, there’s one more thing to be aware of when you’re implementing speculation rules. You have the option of excluding URLs from being pre-fetched or pre-rendered. You might need to do this if you’ve got links for adding items to shopping carts, or logging the user out. But my advice would instead be: stop using GET requests for those actions!
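
For what it’s worth, exclusions are expressed with a “not” condition—the /logout path here is purely illustrative—but really, those shouldn’t be links in the first place:

{
  "prerender": [{
    "where": {
      "and": [
        { "href_matches": "/*" },
        { "not": { "href_matches": "/logout" } }
      ]
    },
    "eagerness": "moderate"
  }]
}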

Most of the examples given for unsafe speculative loading conditions are textbook cases of when not to use links. Links are for navigating. They’re idempotent. For everything else, we’ve got forms.

Baseline progressive enhancement

Support for view transitions for regular websites (as opposed to single-page apps) will ship in Chrome 126. As someone who’s a big fan—to put it mildly—I am very happy about this!

Hopefully Firefox and Safari won’t be too far behind. But it’s still worth adding view transitions to your website even if not every browser supports them. They’re the perfect example of a progressive enhancement.

The browsers that don’t yet support view transitions won’t be harmed in any way if you give them the CSS for view transitions. They’ll just ignore it. For users of those browsers, nothing changes.

Then when those browsers do ship support for view transitions, your website automatically gets an upgrade for those users. Code you’ve already written starts working from one day to the next.

Don’t wait, is what I’m saying.

I really like the Baseline initiative as a way to track browser support. It’s great to see it in use on MDN and Can I Use. It’s very handy having a glanceable indication of which browser features are newly available and which are widely available.

But…

Not all browser features work the same way. For features that work as progressive enhancements you don’t need to wait for them to be widely available.

Service workers. Preference queries. View transitions.

If a browser doesn’t support one of those features, that’s fine. Your website won’t break in that browser.

Now that’s not true of all browser features, particularly some JavaScript APIs. If a feature is critical for your site to function then you definitely want to wait until it’s widely supported.

Baseline won’t tell you the difference between those two different kinds of features.

I don’t want Baseline to get too complicated. Like I said, I really like how it’s nice and glanceable right now. But it would be nice if there was some indication that a newly-available feature is a progressive enhancement.

For now it’s up to us to make that distinction. So don’t fall into the trap of thinking that just because a feature isn’t listed as widely-available you can’t use it yet.

Really you want to ask two questions:

  1. How widely available is this feature?
  2. Can this feature be used as a progressive enhancement?

If Baseline tells you that the answer to the first question is “newly-available”, move on to the second question. If the answer to that is “no, it can’t be used as a progressive enhancement”, don’t ship that feature in production just yet.

But if the answer to that second question is “hell yeah, it’s a progressive enhancement!” then go for it, regardless of the answer to the first question.

Y’know, there’s a real irony in a common misunderstanding around progressive enhancement: some people seem to think it’s about not being able to use advanced browser features. In reality it’s the opposite. Progressive enhancement allows you to use advanced browser features even before they’re widely supported.

Displaying HTML web components

Those HTML web components I made for date inputs are very simple. All they do is slightly extend the behaviour of the existing input elements.

This would be the ideal use-case for the is attribute:

<input is="input-date-future" type="date">

Alas, Apple have gone on record to say that they will never ship support for customized built-in elements.

So instead we have to make HTML web components by wrapping existing elements in new custom elements:

<input-date-future>
  <input type="date">
</input-date-future>
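
On the JavaScript side, the wrapper gets registered as a custom element. A skeletal version might look something like this (the min-date enhancement is just a guess at what such a component would do):

customElements.define('input-date-future', class extends HTMLElement {
  connectedCallback() {
    const input = this.querySelector('input[type="date"]');
    if (!input) return;
    // one plausible enhancement: only allow dates from today onwards
    input.min = new Date().toISOString().split('T')[0];
  }
});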

The end result is the same. Mostly.

Because there’s now an additional element in the DOM, there could be unexpected styling implications. Like, suppose the original element was a direct child of a flex or grid container. Now that will no longer be true.

So something I’ve started doing with HTML web components like these is adding something like this inside the connectedCallback method:

connectedCallback() {
  this.style.display = 'contents';
  …
}

This tells the browser that, as far as styling is concerned, there’s nothing to see here. Move along.

Or you could (and probably should) do it in your stylesheet instead:

input-date-future {
  display: contents;
}

Just to be clear, you should only use display: contents if your HTML web component is augmenting what’s within it. If you add any behaviours or styling to the custom element itself, then don’t add this style declaration.

It’s a bit of a hack to work around the lack of universal support for the is attribute, but it’ll do.