Performance-Optimized Video Embeds with Zero JavaScript – Frontend Masters Blog
This is a clever technique: a CSS/HTML-only way of just-in-time loading iframes using details and summary.
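If I understand the technique correctly, it leans on lazy loading: an iframe with loading="lazy" tucked inside a closed details element is never rendered, so the browser holds off fetching it until the user opens the disclosure. A minimal sketch, with a placeholder video URL:

```html
<details>
  <summary>Play video</summary>
  <!-- While the <details> is closed, this lazy iframe isn't rendered,
       so the embed (and all its third-party baggage) isn't fetched.
       Opening the disclosure triggers the load just in time. -->
  <iframe
    src="https://www.youtube-nocookie.com/embed/VIDEO_ID"
    width="560" height="315"
    loading="lazy"
    title="Embedded video"
    allowfullscreen></iframe>
</details>
```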
Frameworks like React are often perceived as accelerators, or even as the only sensible way to do web development. There’s this notion that a more “modern” stack (read: JS-heavy, where the JS ends up running on the user’s browser) allows you to be more agile, release more often with fewer bugs, make code more maintainable, and ultimately launch better sites. In short, the claim is that this approach will offer huge improvements to developer experience, and that these DevEx benefits will trickle down to the user.
But over the years, this narrative has proven to be unrealistic, at best. In reality, for any decently sized JS-heavy project, you should expect that what you build will be slower than advertised, it will keep getting slower over time while it sees ongoing work, and it will take more effort to develop and especially to maintain than what you were led to believe, with as many bugs as any other approach.
When it comes to performance, the important thing to note is that a JS-heavy approach (and particularly one based on React & friends) will most likely not be a good starting point; in fact, it will probably prove to be a performance minefield that you will need to keep revisiting, risking a detonation with every new commit.
The datalist element is all fucked up on iOS. Again.
I haven’t “upgraded” my iPhone to iOS 26 and I have no plans to. The whole Liquid Glass thing is literally off-putting. So I wouldn’t have known about the latest regression in Safari if a friend hadn’t texted me about the problem.
He was trying to do a search on The Session. He was looking for the tune, The Road To Town. He started typing this into the form on the home page of the site. He got as far as “The Road To”. That’s when the entire input was obscured by a suggestion from the associated datalist.
This is incredibly annoying and seems to be a pattern of behaviour for Safari. Features are supported …technically. But the implementation is so buggy as to be unusable.
I’ll probably have to do some user-agent sniffing, which I hate. And it won’t be enough to just sniff for Safari on iOS 26. Remember that every browser on iOS is just WebKit in a trenchcoat.
Time to file a bug and then wait God knows how long for an update to get rolled out.
Update: I filed a bug, but in the meantime it looks like user-agent sniffing is going to be impossible.
Every millisecond you spend executing JavaScript is a millisecond the browser can’t spend responding to a click, updating a scroll position, or acknowledging that the user did just try to type something. When your code runs long, you’re not causing “jank” in some abstract technical sense; you’re ignoring someone who’s trying to talk to you.
This is a great way to think about client-side JavaScript!
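In practice, that means breaking long-running work into chunks and handing control back to the browser between them. A rough sketch, where processItem() is a hypothetical stand-in for your own synchronous work, with a fallback because scheduler.yield() is still fairly new:

```js
async function processAll(items) {
  for (const item of items) {
    processItem(item); // hypothetical: your own synchronous work

    // Hand the main thread back so clicks, scrolls, and keystrokes
    // can be handled between chunks of work.
    if (globalThis.scheduler?.yield) {
      await scheduler.yield();
    } else {
      await new Promise((resolve) => setTimeout(resolve, 0));
    }
  }
}
```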
Also:
Before your application code runs a single line, your framework has already spent some of the user’s main thread budget on initialization, hydration, and virtual DOM reconciliation.
You might not need (much) JavaScript for these common interface patterns.
While we all love the power and flexibility JS provides, we should also respect it, and our users, by limiting its use to only what it needs to do.
Yes! Client-side JavaScript should do what only client-side JavaScript can do.
This is a spot-on analysis of how CSS-in-JS failed to deliver on any of its promises:
CSS-in-JS was born out of good intentions — modularity, predictability and componentization. But what we got was complexity disguised as progress.
React is no longer just a library. It’s a full ecosystem that defines how frontend developers are allowed to think.
Browsers now ship View Transitions, Container Queries, and smarter scheduling primitives. The platform keeps evolving at a fair pace, but most teams won’t touch these capabilities until React officially wraps them in a hook or they show up in Next.js docs.
Innovation keeps happening right across the ecosystem, but for many it only becomes “real” once React validates the approach. Which is fine, assuming you enjoy waiting for permission to use the platform you’re already building on.
Zing!
The critique isn’t that React is bad, but that treating any single framework as infrastructure creates blind spots in how we think and build. When React becomes the lens through which we see the web, we stop noticing what the platform itself can already do, and we stop reaching for native solutions because we’re waiting for the framework-approved version to show up first.
If your team’s evolution depends on a single framework’s roadmap, you are not steering your product; you are waiting for permission to move.
This isn’t a rhetorical question. I genuinely want to know why developers choose to build websites using React.
There are many possible reasons. Alas, none of them relate directly to user experience, other than a trickle-down justification: happy productive developers will make better websites. Citation needed.
It’s also worth mentioning that some people don’t choose to use React, but its use is mandated by their workplace (like some other more recent technologies I could mention). By my definition, this makes React enterprise software in this situation. My definition of enterprise software is any software that you use but that you yourself didn’t choose.
By far the most common reason for choosing React today is inertia. If it’s what you’re comfortable with, you’d need a really compelling reason not to use it. That’s generally the reason behind usage mandates too. If we “standardise” on React, then it’ll make hiring more straightforward (though the reality isn’t quite so simple, as the React ecosystem has mutated and bifurcated over time).
And you know what? Inertia is a perfectly valid reason to choose a technology. If time is of the essence, and you know it’s going to take you time to learn a new technology, it makes sense to stick with what you know, even if it’s out of date. This isn’t just true of React, it’s true of any tech stack.
This would all be absolutely fine if React weren’t a framework that gets executed in browsers. Any client-side framework is a tax on the end user. They have to download, parse, and execute the framework in order for you to benefit.
But maybe React doesn’t need to run in the browser at all. That’s the promise of server-side rendering.
There used to be a fairly clear distinction between front-end development and back-end development. The front end consisted of HTML, CSS, and client-side JavaScript. The back end was anything you wanted as long as it could spit out those bits of the front end: PHP, Ruby, Python, or even just a plain web server with static files.
Then it became possible to write JavaScript on the back end. Great! Now you didn’t need to context-switch when you were scripting for the client or the server. But this blessing also turned out to be a bit of a curse.
When you’re writing code for the back end, some things matter more than others. File size, for example, isn’t really a concern. Your code can get really long and it probably won’t slow down the execution. And if it does, you can always buy your way out of the problem by getting a more powerful server.
On the front end, your code should have different priorities. File size matters, especially with JavaScript. The code won’t be executed on your server. It’s executed on all sorts of devices on all sorts of networks running all sorts of browsers. If things get slow, you can’t buy your way out of the problem because you can’t buy every single one of your users a new device and a new network plan.
Now that JavaScript can run on the server as well as the client, it’s tempting to just treat the code the same. It’s the same language after all. But the context really matters. Some JavaScript that’s perfectly fine to run on the server can be a resource hog on the client.
And this is where it gets interesting with React. Because most of the things people like about React still apply on the back end.
When React first appeared, it was touted as a front-end tool. State management and a near-magical virtual DOM were the main selling points.
Over time, that’s changed. The claimed speed benefits of the virtual DOM turned out to be just plain false. That just left state management.
But by that time, the selling points had changed. The component-based architecture turned out to be really popular. Developers liked JSX. A lot. Once you got used to it, it was a neat way to encapsulate little bits of functionality into building blocks that can be combined in all sorts of ways.
For the longest time, I didn’t realise this had happened. I was still thinking of React as a library like jQuery. But React is a framework like Rails or Django. As a developer, it’s where you do all your work. Heck, it’s pretty much your identity.
But whereas Rails or Django run on the back end, React runs on the front end …except when it doesn’t.
JavaScript can run on the server, which means React can run on the server. It’s entirely possible to have your React cake and eat it. You can write all of your code in React without serving up a single line of React to your users.
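As a sketch of what that could look like, here’s React reduced to a server-side templating engine. Express and the HomePage component are illustrative assumptions, not a prescription:

```js
import express from 'express';
import { createElement } from 'react';
import { renderToString } from 'react-dom/server';
import HomePage from './pages/HomePage.js'; // hypothetical component

const app = express();

app.get('/', (request, response) => {
  // renderToString produces plain HTML on the server;
  // nothing React-shaped gets shipped to the browser.
  const html = renderToString(createElement(HomePage));
  response.send('<!DOCTYPE html>' + html);
});

app.listen(3000);
```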
That’s true in theory. The devil is in the tooling.
Next.js allows you to write in React and do server-side rendering. But it really, really wants to output React to the client as well.
By default, you get the dreaded hydration pattern—do all the computing on the server in JavaScript (yay!), serve up HTML straight away (yay! yay!) …and then serve up all the same JavaScript that’s on the server anyway (ya—wait, what?).
It’s possible to get Next.js to skip that last step, but it’s not easy. You’ll be battling it every step of the way.
Astro takes a very different approach. It will do everything it can to keep the client-side JavaScript to a minimum. Developers get to keep their beloved JSX authoring environment without penalising users.
Alas, the collective inertia of the “modern” development community is bound up in the React/Next/Vercel ecosystem. That’s a shame, because Astro shows us that it doesn’t have to be this way.
Switching away from using React on the front end doesn’t mean you have to switch away from using React on the back end.
The titular question I asked is too broad and naïve. There are plenty of reasons to use React, just as there are plenty of reasons to use WordPress, Eleventy, or any other technology that works on the back end. If it’s what you like or what you’re comfortable with, that’s reason enough.
All I really care about is the front end. I’m not going to pass judgment on anyone’s choice of server-side framework, as long as it doesn’t impact what you can do in the client. Like Harry says:
…if you’re going to use one, I shouldn’t be able to smell it.
Here’s the question I should be asking:
Why use React in the browser?
Because if the reason you’re using React is cultural—the whole team works in JSX, it makes hiring easier—then there’s probably no need to make your users download React.
If you’re making a single-page app, then …well, the first thing you should do is ask yourself if it really needs to be a single-page app. They should be the exception, not the default. But if you’re determined to make a single-page app, then I can see why state management becomes very important.
In that situation, try shipping Preact instead of React. As a developer, you’ll almost certainly notice no difference, but your users will appreciate the refreshing lack of bloat.
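If your build tool allows it, the swap can be as small as an alias. Here’s a sketch assuming a Vite build; Preact’s documentation has the full alias list:

```js
// vite.config.js
import { defineConfig } from 'vite';

export default defineConfig({
  resolve: {
    alias: {
      // Resolve React imports to Preact's compatibility layer
      react: 'preact/compat',
      'react-dom': 'preact/compat',
      'react/jsx-runtime': 'preact/jsx-runtime',
    },
  },
});
```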
Mostly though, I’d encourage you to investigate what you can do with vanilla JavaScript in the browser. I totally get why you’d want to hold on to React as an authoring environment, but don’t let your framework limit what you can do on the front end. If you use React on the client, you’re not doing your users any favours.
You can continue to write in React. You can continue to use JSX. You can continue to hire React developers. But keep it on your machine. For your users, make the most of what web browsers can do.
Once you keep React on the server, then a whole world of possibilities opens up on the client. Web browsers have become incredibly powerful in what they offer you. Don’t let React-on-the-client hold you back.
And if you want to know more about what web browsers are capable of today, come to Web Day Out in Brighton on Thursday, 12th March 2026.
A very, very deep dive into a like-for-like comparison of JavaScript frameworks. The takeaway:
Nuxt demonstrates that established “big three” frameworks can achieve next-gen performance when properly configured. Vue’s architecture allows competitive mobile web performance while maintaining a mature ecosystem. React and Angular show no path to similar results.
And the real takeaway:
Mobile is the web. These measurements matter because mobile web is the primary internet for billions of people. If your app is accessible via URL, people will use it on phones with cellular connections. Optimizing for desktop and hoping mobile is good enough is backwards. The web is mobile. Build for that reality.
React exists as a profound perversion of the web platform. React has failed upwards to widespread adoption because it provides a “developer experience” that bypasses the hard parts. Like learning HTML, or CSS, or JavaScript. Even learning React itself is discouraged; that’s for adults, you should use meta-frameworks. React devs are burdened with multi-megabyte monstrosities before they’ve written a single line of code. You cannot fix “too much JavaScript” with more JavaScript and yet React devs are trained to npm install until their problems become their users’ problems.
So instead of asking yourself, “How can I write code that does what I want?” Consider asking yourself, “Can I write code that ties together things the browser already does to accomplish what I want (or close enough to it)?”
Grrr… it turns out that browsers exhibit some very frustrating behaviour when it comes to the video element. Rob has the details…
When I set about writing this article, I intended it to be a strong argument for progressive rendering. But after digging into it, my feelings are less certain.
Tim recently gave a talk at Smashing Conference in New York called One Step Ahead. Based on the slides, it looks like it was an excellent talk.
Towards the end, there’s a slide that could be the tagline for Web Day Out:
Betting on the browser is our best chance at long-term success.
Most of the talk focuses on two technologies that you can add to any website with just a couple of lines of code: view transitions and speculation rules.
I’m using both of them on The Session and I can testify to their superpowers—super-snappy navigations with smooth animations.
Honestly, that takes care of 95% of the reasons for building a single-page app (the other 5% would be around managing state, which most sites—e-commerce, publishing, whatever—don’t need to bother with). Instead build a good ol’-fashioned website with pages of HTML linked together, then apply view transitions and speculation rules.
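For reference, “a couple of lines” really is all it takes. Here’s a sketch of both; the blanket prerender rule is just one possible policy, and you can scope it far more tightly:

```html
<style>
  /* Opt a multi-page site into cross-document view transitions */
  @view-transition {
    navigation: auto;
  }
</style>

<script type="speculationrules">
{
  "prerender": [{
    "where": { "href_matches": "/*" },
    "eagerness": "moderate"
  }]
}
</script>
```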
I mean, why wouldn’t you do that?
That’s not a rhetorical question. I’m genuinely interested in the reasons why people would reject a simple declarative solution in favour of the complexity of doing everything with a big JavaScript framework.
One reason might be browser support. After all, both view transitions and speculation rules are designed to be used as progressive enhancements, regardless of how many browsers happen to support them right now. If you want to attempt to have complete control, I understand why you might reach for the single-page app model, even if it means bloating the initial payload.
But think about that mindset for a second. Rather than reward the browsers that support modern features, you would instead be punishing them. You’d be treating every browser the same. Instead of taking advantage of the amazing features that some browsers have, you’d rather act as though they’re no different to legacy browsers.
I kind of understand the thinking behind that. You assume a level playing field by treating every browser as though they’re Internet Explorer. But what a waste! You ship tons of unnecessary code to perfectly capable browsers.
That could be the tagline for React.
I don’t get out to gigs as much as I’d like. But for some reason, the past week has been packed with live music.
On Tuesday I saw Ye Vagabonds. I’m particularly partial to their mandolin playing. It was a nice concert that felt like being in a Greenwich Village folk club in the ’60s. It’s great to see how popular Ye Vagabonds are with indie kids, even if I’m slightly perplexed by the extent of the popularity—see also Lankum.
On Thursday it was time for Robert Forster and his band. I’m a huge fan of The Go-Betweens, as well as Forster’s solo work. He gave us a thoroughly enjoyable show, interspersing some select Go-Betweens tracks, including quite a few off 16 Lovers Lane.
On Saturday Jessica and I made the journey over to Lewes to see The Wilderness Yet at the folk club. We know Rowan and Rosie from when they used to live ‘round here and it was lovely to see and hear them again.
Then last night we went out to see DakhaBrakha. The Ukrainian population of Brighton came out to give them a very warm welcome. The band themselves were, unsurprisingly, brilliant. Like I said last time they came to town:
Imagine if Tom Waits and Cocteau Twins came from Eastern Europe and joined forces. Well, DakhaBrakha are even better than that.
A good week of music from Ireland, Australia, England, and Ukraine.
A profile of Tim and the World Wide Web.
I was going to save this announcement for later, but I’m just too excited: Harry Roberts will be speaking at Web Day Out!
Goddamn, that’s one fine line-up, and it isn’t even complete yet! Get your ticket if you haven’t already.
There’s a bit of a story behind the talk that Harry is going to give…
Earlier this year, Harry posted a most excellent screed in which he said:
The web as a platform is a safe bet. It’s un-versioned by design. That’s the commitment the web makes to you—take advantage of it.
- Opt into web platform features incrementally;
- Embrace progressive enhancement to build fast, reliable applications that adapt to your customers’ context;
- Write code that leans into the browser, not away from it.
Yes! Exactly!
Thing is, Harry posted this on LinkedIn. My indieweb sensibilities were affronted. So I harangued him:
You should blog this, Harry
My pestering paid off with an excellent blog post on Harry’s own site called Build for the Web, Build on the Web, Build with the Web:
The beauty of opting into web platform features as they become available is that your site becomes contextual. The same codebase adapts into its environment, playing to its strengths, rather than trying to build and ship your own environment from the ground up. Meet your users where they are.
That’s a pretty neat summation of the agenda for Web Day Out. So I thought, “Hmm …if I was able to pester Harry to turn a LinkedIn post into a really good blog post, I wonder if I could pester him to turn that blog post into a talk?”
I threw down the gauntlet. Harry accepted the challenge.
I’m sure you’re already familiar with Harry’s excellent work, but if you’re not, he’s basically Mr. Web Performance. That’s why I’m so excited to have him speak at Web Day Out—I want to hear the business case for leaning into what web browsers can do today, and he is most certainly the best person to bring receipts.
You won’t want to miss this, so be sure to get your ticket now; it’s only £225+VAT.
If you’re not ready to commit just yet, but you want to hear about more speaker announcements like this, you can sign up to the mailing list.
When I was talking about monitoring web performance yesterday, I linked to the CrUX data for The Session.
CrUX is a contraction of Chrome User Experience Report. CrUX just sounds better than CEAR.
It’s data gathered from actual Chrome users worldwide. It can be handy as part of a balanced performance-monitoring diet, but it’s always worth remembering that it only shows a subset of your users: those on Chrome.
The actual CrUX data is imprisoned in some hellish Google interface so some kindly people have put more humane interfaces on it. I like Calibre’s CrUX tool as well as Treo’s.
What’s nice is that you can look at the numbers for any reasonably popular website, not just your own. Lest I get too smug about the performance metrics for The Session, I can compare them to the numbers for Wikipedia or the BBC. Both of those sites are made by people who prioritise speed, and it shows.
If you scroll down to the numbers on navigation types, you’ll see something interesting. Across the board, whether it’s The Session, Wikipedia, or the BBC, the BFcache—back/forward cache—is used around 16% to 17% of the time. This is when users use the back button (or forward button).
Unless you do something to stop them, browsers will make sure that those navigations are super speedy. You might inadvertently be sabotaging the BFcache if you’re sending a Cache-Control: no-store header or if you’re using an unload event handler in JavaScript.
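If you do need to run code as someone leaves the page, the fix for that second sabotage is usually a straight swap to pagehide. A sketch, where the /analytics endpoint is a made-up example:

```js
// unload handlers disqualify pages from the BFcache in many browsers;
// pagehide fires in the same situations without that penalty.
window.addEventListener('pagehide', (event) => {
  // event.persisted is true when the page is entering the BFcache
  navigator.sendBeacon('/analytics', event.persisted ? 'bfcache' : 'exit');
});
```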
I guess it’s unsurprising the BFcache numbers are relatively consistent across three different websites. People are people, whatever website they’re browsing.
Where it gets interesting is in the differences. Take a look at pre-rendering. It’s 4% for the BBC and just 0.4% for Wikipedia. But on The Session it’s a whopping 35%!
That’s because I’m using speculation rules. They’re quite straightforward to implement and they pair beautifully with full-page view transitions for a slick, speedy user experience.
It doesn’t look like Wikipedia or the BBC are using speculation rules at all, which kind of surprises me.
Then again, because they’re a hidden technology I can understand why they’d slip through the cracks.
On any web project, I think it’s worth having a checklist of The Invisibles—things that aren’t displayed directly in the browser, but that can make a big difference to the user experience.
Some examples:
- meta elements in the head of documents so they “unroll” nicely when the link is shared.
- A Speculation-Rules header that points to a JSON file (sketched below).
- The lang attribute on the body of every page.

If you’ve got a checklist like that in place, you can at least ask “Whose job is this?” All too often, these things are missing because there’s no clarity on who’s responsible for them. They’re sorta back-end and sorta front-end.
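For what it’s worth, the header version looks something like this. The filename is a placeholder, and as I understand it the JSON file it points to should be served with the application/speculationrules+json content type:

```http
Speculation-Rules: "/speculationrules.json"
```

The rules file itself takes exactly the same JSON format as an inline speculationrules script.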
A few years back, Craig wrote a great piece called Fast Software, the Best Software:
Speed in software is probably the most valuable, least valued asset. To me, speedy software is the difference between an application smoothly integrating into your life, and one called upon with great reluctance.
Nelson Elhage said much the same thing in his reflections on software performance:
I’ve really come to appreciate that performance isn’t just some property of a tool independent from its functionality or its feature set. Performance — in particular, being notably fast — is a feature in and of its own right, which fundamentally alters how a tool is used and perceived.
Or, as Robin put it:
I don’t think a website can be good until it’s fast.
Those sentiments underpin The Session. Speed is as much a priority as usability, accessibility, privacy, and security.
I’m fortunate in that the site doesn’t have an underlying business model at odds with these priorities. I’m under no pressure to add third-party code that would track users and slow down the website.
When it comes to making fast websites, most of the obstacles are put in place by front-end development, mostly JavaScript. I’ve been pretty ruthless in my pursuit of speed on The Session, removing as much JavaScript as possible. On the bigger pages, the bottleneck now is DOM size rather than parsing and executing JavaScript. As bottlenecks go, it’s not the worst.
But even with all my core web vitals looking good, I still have an issue that can’t be solved with front-end optimisations. Time to first byte (or TTFB if you’d rather use an initialism that takes just as long to say as the words it’s replacing).
When it comes to reducing the time to first byte, there are plenty of factors that are out of my control. But in the case of The Session, something I do have control over is the server set-up, specifically the database.
Now I could probably solve a lot of my speed issues by throwing money at the problem. If I got a bigger better server with more RAM and CPUs, I’m pretty sure it would improve the time to first byte. But my wallet wouldn’t thank me.
(It’s still worth acknowledging that this is a perfectly valid approach when it comes to back-end optimisation that isn’t available on the front end; you can’t buy all your users new devices.)
So I’ve been spending some time really getting to grips with the MySQL database that underpins The Session. It was already normalised and indexed to the hilt. But perhaps there were server settings that could be tweaked.
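For context, this is the kind of knob involved. The values here are purely illustrative rather than recommendations; the right numbers depend entirely on your server’s RAM and workload:

```ini
# my.cnf (illustrative values only)
[mysqld]
# How much memory InnoDB can use to keep the working set cached;
# often the single most influential setting for read-heavy sites.
innodb_buffer_pool_size = 2G
```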
This is where I have to give a shout-out to Releem, a service that is exactly what I needed. It monitors your database and then over time suggests configuration tweaks, explaining each one along the way. It’s a seriously good service that feels as empowering as it is useful.
I wish I could afford to use Releem on an ongoing basis, but luckily there’s a free trial period that I could avail of.
Thanks to Releem, I was also able to see which specific queries were taking the longest. There was one in particular that had always bothered me…
If you’re a member of The Session, then you can see any activity related to something you submitted in the past. Say, for example, that you added a tune or an event to the site a while back. If someone else comments on that, or bookmarks it, then that shows up in your “notifications” feed.
That’s all well and good but under the hood it was relying on a fairly convoluted database query to a very large table (a table that’s effectively a log of all user actions). I tried all sorts of query optimisations but there always seemed to be some combination of circumstances where the request would take ages.
For a while I even removed the notifications functionality from the site, hoping it wouldn’t be missed. But a couple of people wrote to ask where it had gone so I figured I ought to reinstate it.
After exhausting all the technical improvements, I took a step back and thought about the purpose of this particular feature. That’s when I realised that I had been thinking about the database query too literally.
The results are ordered in reverse chronological order, which makes sense. They’re also chunked into groups of ten, which also makes sense. But I had allowed for the possibility that you could navigate through your notifications back to the very start of your time on the site.
But that’s not really how we think of notifications in other settings. What would happen if I were to limit your notifications only to activity in, say, the last month?
Boom! Instant performance improvement by orders of magnitude.
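The table and column names below are hypothetical, but the shape of the change is roughly this. Adding a time bound means the query only ever scans a month of the log instead of the whole thing:

```sql
SELECT *
FROM activity_log            -- hypothetical log-of-all-user-actions table
WHERE owner_id = ?           -- the member whose notifications these are
  AND created_at >= NOW() - INTERVAL 1 MONTH
ORDER BY created_at DESC     -- reverse chronological
LIMIT 10;                    -- chunked into groups of ten
```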
I guess there’s a lesson there about switching off the over-analytical side of my brain and focusing on actual user needs.
Anyway, thanks to the time I’ve spent honing the database settings and optimising the longest queries, I’ve reduced the latency by quite a bit. I’m hoping that will result in an improvement to the time to first byte.
Time—and monitoring tools—will tell.
I’m almost certainly preaching to the choir here because I bet you’re reading these very words in a feed reader, but what Molly White has written here is too good not to share:
RSS offers readers and writers a path away from unreliable, manipulative, and hostile platforms and intermediaries. In a media landscape dominated by algorithmic feeds that aim to manipulate and extract, sometimes the most radical thing you can do is choose to read what you want, when you want, without anyone watching over your shoulder.