
Saturday, March 1st, 2025

google webfonts helper

Google Fonts only lets you download .ttf files, meaning that if you want to self-host your fonts (and you should), you first have to convert them to .woff2 files.

Luckily this tool has been online for over a decade, doing what Google Fonts should be doing by default.
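Once you have the .woff2 file, self-hosting is a single @font-face rule. Here's a minimal sketch; the paths and family name are placeholders:

@font-face {
    font-family: "My Web Font";
    src: url("/fonts/my-web-font.woff2") format("woff2");
    font-weight: 400;
    font-style: normal;
    font-display: swap;
}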

Wednesday, February 19th, 2025

The web on mobile

Here’s a post outlining all the great things you can do in mobile web browsers today: Your App Should Have Been A Website (And Probably Your Game Too):

Today’s browsers are powerhouses. Notifications? Check. Offline mode? Check. Secure payments? Yep, they’ve got that too. And with technologies like WebAssembly and WebGPU, web games are catching up to native-level performance. In some cases, they’re already there.

This is all true. But this post from John Gruber is equally true: One Bit of Anecdata That the Web Is Languishing Vis-à-Vis Native Mobile Apps:

I won’t hold up this one experience as a sign that the web is dying, but it sure seems to be languishing, especially for mobile devices.

As John points out, the problems aren’t technical:

There’s absolutely no reason the mobile web experience shouldn’t be fast, reliable, well-designed, and keep you logged in. If one of the two should suck, it should be the app that sucks and the website that works well. You shouldn’t be expected to carry around a bundle of software from your utility company in your pocket. But it’s the other way around.

He’s right. It makes no sense, but this is the reality.

Ten or fifteen years ago, the gap between the web and native apps on mobile was entirely technical. There were certain things that you just couldn’t do in web browsers. That’s no longer the case. The web caught up quite a while back.

But the experience of using websites on a mobile device is awful. Never mind the terrible performance penalties incurred by unnecessary frameworks and libraries like React and its ilk, there’s the constant game of whack-a-mole with banners and overlays. What’s just about bearable in a large desktop viewport becomes intolerable on a small screen.

This is not a technical problem. This doesn’t get solved by web standards. This is a cultural problem.

First of all, there’s the business culture. If your business model depends on tracking people or pushing newsletter sign-ups, then it’s inevitable that your website will be shite on mobile.

Mind you, if your business model depends on tracking people, you’re more likely to try to push people to download your native app. Like Cory Doctorow says:

50% of web users are running ad-blockers. 0% of app users are running ad-blockers, because adding a blocker to an app requires that you first remove its encryption, and that’s a felony (Jay Freeman calls this ‘felony contempt of business-model’).

Matt May brings up the same point in his guide, How to grey-rock Meta:

Remove Meta apps from your devices and use only the mobile web versions. Mobile apps have greater access to your personal data, provided the app requests those privileges, and Facebook and Instagram in particular (more so than WhatsApp, another Meta property) request the vast majority of those privileges. This includes precise GPS data on where you are, whether or not you are using the app.

Ironically, it’s the strength of the web—and web browsers—that has led to such shitty mobile web experiences. The pretty decent security model on the web means that sites have to pester you.

Part of the reason why you don’t see the same egregious over-use of pop-ups and overlays in native apps is that they aren’t needed. If you’ve installed the app, you’re already being tracked.

But when I describe the dreadful UX of most websites on mobile as a cultural problem, I don’t just mean business culture.

Us, the people who make websites, designers and developers, we’re responsible for this too.

For all our talk of mobile-first design for the last fifteen years, we never really meant it, did we? Sure, we use media queries and other responsive techniques, but all we’ve really done is make sure that a terrible experience fits on the screen.

As developers, I’m sure we can tell ourselves all sorts of fairy tales about why it’s perfectly justified to make users on mobile networks download React, Tailwind, and megabytes more of third-party code.

As designers, I’m sure we can tell ourselves all sorts of fairy tales about why intrusive pop-ups and overlays are the responsibility of some other department (as though users make any sort of distinction).

Worst of all, we’ve spent the last fifteen years teaching users that if they want a good experience on their mobile device, they should look in an app store, not on the web.

Ask anyone about their experience of using websites on their mobile device. They’ll tell you plenty of stories of how badly it sucks.

It doesn’t matter that the web is the perfect medium for just-in-time delivery of information. It doesn’t matter that web browsers can now do just about everything that native apps can do.

In many ways, I wish this were a technical problem. At least then we could lobby for some technical advancement that would fix this situation.

But this is not a technical problem. This is a people problem. Specifically, the people who make websites.

We fucked up. Badly. And I don’t see any signs that things are going to change anytime soon.

But hey, websites on desktop are just great!

Wednesday, February 12th, 2025

AI wants to rule the World, but it can’t handle dairy.

AI has the same problem that I saw ten years ago at IBM. And remember that IBM has been at this AI game for a very long time. Much longer than OpenAI or any of the new kids on the block. All of the shit we’re seeing today? Anyone who worked on or near Watson saw or experienced the same problems long ago.

Saturday, February 8th, 2025

UI Pace Layers - Jim Nielsen’s Blog

Every UI control you roll yourself is a liability. You have to design it, test it, ship it, document it, debug it, maintain it — the list goes on.

It makes you wonder why we insist on rolling (or styling) our own common UI controls so often. Perhaps we’d be better off asking: What is the fewest number of components we have to build to deliver value to our users?

Saturday, February 1st, 2025

Making the new Salter Cane website

With the release of a new Salter Cane album I figured it was high time to update the design of the band’s website.

Here’s the old version for reference. As you can see, there’s a connection there in some of the design language. Even so, I decided to start completely from scratch.

I opened up a text editor and started writing HTML by hand. Same for the CSS. No templates. No build tools. No pipeline. Nothing. It was a blast!

And lest you think that sounds like a wasteful way of working, I pretty much had the website done in half a day.

Partly that’s because you can do so much with so little in CSS these days. Custom properties for colours, spacing, and fluid typography (thanks to Utopia). Logical properties. View transitions. None of this takes much time at all.

Because I was using custom properties, it was a breeze to add a dark mode with prefers-color-scheme. I think I might like the dark version more than the default.
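As a rough sketch of that pattern (the property names and values here are illustrative, not the site’s actual styles):

:root {
    --colour-background: #f6f3ee;
    --colour-text: #1a1a1a;
}

@media (prefers-color-scheme: dark) {
    :root {
        --colour-background: #1a1a1a;
        --colour-text: #f6f3ee;
    }
}

body {
    background-color: var(--colour-background);
    color: var(--colour-text);
}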

The final stylesheet is pretty short. I didn’t bother with any resets. Browsers are pretty consistent with their default styles nowadays. As long as you’ve got some sensible settings on your body element, the cascade will take care of a lot.

There’s one little CSS trick I think is pretty clever…

The background image is this image. As you can see, it’s a rectangle that’s wider than it is tall. But the web pages are rectangles that are taller than they are wide.

So how should I position the background image? Centred? Anchored to the top? Anchored to the bottom?

If you open up the website in Chrome (or Safari Technology Preview), you’ll see that the background image is anchored to the top. But if you scroll down you’ll see that the background image is now anchored to the bottom. The background position has changed somehow.

This isn’t just on the home page. On any page, no matter how tall it is, the background image is anchored to the top when the top of the document is in the viewport, and it’s anchored to the bottom when you reach the bottom of the document.

In the past, this kind of thing might’ve been possible with some clever JavaScript that measured the height of the document and updated the background position every time a scroll event fired.

But I didn’t need any JavaScript. This is a scroll-driven animation made with just a few lines of CSS.

@keyframes parallax {
    from {
        background-position: top center;
    }
    to {
        background-position: bottom center;
    }
}
@media (prefers-reduced-motion: no-preference) {
    html {
        animation: parallax auto ease;
        animation-timeline: scroll();
    }
}

This works as a nice bit of progressive enhancement: by default the background image stays anchored to the top of the viewport, which is fine.

Once the site was ready, I spent a bit more time sweating some details, like the responsive images on the home page.

But the biggest performance challenge wasn’t something I had direct control over. There’s a Spotify embed on the home page. Ain’t no party like a third party.

I could put loading="lazy" on the iframe but in this case, it’s pretty close to the top of the document so it’s still going to start loading at the same time as some of my first-party assets.

I decided to try a little JavaScript library called “lazysizes”. Normally this would ring alarm bells for me: solving a problem with third-party code by adding …more third-party code. But in this case, it really did the trick. The library loads asynchronously (so it doesn’t interfere with the more important assets) and only then does it populate the iframe.
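The pattern looks something like this. The lazyload class and data-src attribute are lazysizes’ documented API; the script path and embed URL are placeholders:

<!-- lazysizes loads in the background, then swaps data-src
     into src once the iframe approaches the viewport -->
<script src="/js/lazysizes.min.js" async></script>

<iframe class="lazyload"
    data-src="https://open.spotify.com/embed/album/…"
    width="300" height="380"
    title="Spotify embed"></iframe>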

This made a huge difference. The core web vitals went from being abysmal to being perfect.

I’m pretty pleased with how the new website turned out.

Saturday, January 25th, 2025

Build for the Web, Build on the Web, Build with the Web – Web Performance and Site Speed Consultant

If I was only able to give one bit of advice to any company: iterate quickly on a slow-moving platform.

Excellent advice from Harry (who first cast his pearls before the swine of LinkedIn but I talked him ‘round to posting this on his own site).

  1. Opt into web platform features incrementally
  2. Embrace progressive enhancement to build fast, reliable applications that adapt to your customers’ context
  3. Write code that leans into the browser, not away from it

I’m not against front-end frameworks, and, believe me, I’m not naive enough to believe that the only thing a front-end framework provides is soft navigations, but if you’re going to use one, I shouldn’t be able to smell it.

Tuesday, January 21st, 2025

The New York Good Times

Better than the real thing. All true too.

Refresh for more.

Moving on from React, a Year Later

Many interactions are not possible without JavaScript, but that doesn’t mean we should look to write more than we have to. The server doing something useful is a requirement for building an interesting business. The client doing something is often a nice-to-have.

There’s also this:

It’s really fast

One of the arguments for a SPA is that it provides a more reactive customer experience. I think that’s mostly debunked at this point, due to the performance creep and complexity that comes in with a more complicated client-server relationship.

What I’ve learned about writing AI apps so far | Seldo.com

LLMs are good at transforming text into less text

Laurie is really onto something with this:

This is the biggest and most fundamental thing about LLMs, and a great rule of thumb for what’s going to be an effective LLM application. Is what you’re doing taking a large amount of text and asking the LLM to convert it into a smaller amount of text? Then it’s probably going to be great at it. If you’re asking it to convert into a roughly equal amount of text it will be so-so. If you’re asking it to create more text than you gave it, forget about it.

Depending on how much of the hype around AI you’ve taken on board, the idea that they “take text and turn it into less text” might seem like a gigantic back-pedal away from previous claims of what AI can do. But taking text and turning it into less text is still an enormous field of endeavour, and a huge market. It’s still very exciting, all the more exciting because it’s got clear boundaries and isn’t hype-driven over-reaching, or dependent on LLMs overnight becoming way better than they currently are.

Friday, January 17th, 2025

una.im | Updates to the customizable select API

It’s great to see the evolution of HTML happening in response to real use-cases—the turbo-charging of the select element just gets better and better!

Thursday, January 16th, 2025

Daring Fireball: One Bit of Anecdata That the Web Is Languishing Vis-à-Vis Native Mobile Apps

I have to agree with John here:

There’s absolutely no reason the mobile web experience shouldn’t be fast, reliable, well-designed, and keep you logged in. If one of the two should suck, it should be the app that sucks and the website that works well. You shouldn’t be expected to carry around a bundle of software from your utility company in your pocket. But it’s the other way around.

There’s absolutely no technical reason why it should be this way around. This is a cultural problem with “modern front-end web development”.

Wednesday, January 15th, 2025

Prescriptive and Descriptive Information Architectures | Jorge Arango

Interesting—this is exactly the same framing I used to talk about design systems a few years ago.

Friday, January 10th, 2025

Website Speed Test

Here’s a handy free tool from Calibre that’ll give your website a performance assessment.

Sunday, December 22nd, 2024

Full RSS feed

Oh, this is a very handy service from Paul—given the URL of an RSS feed that only has summaries, it will attempt to get the full post content from the HTML.

Sunday, December 15th, 2024

Progressively enhancing maps

The Session has been online for over 20 years. When you maintain a site for that long, you don’t want to be relying on third parties—it’s only a matter of time until they’re no longer around.

Some third party APIs are unavoidable. The Session has maps for sessions and other events. When people add a new entry, they provide the address but then I need to get the latitude and longitude. So I have to use a third-party geocoding API.

My code is like a lesson in paranoia: I’ve built in the option to switch between multiple geocoding providers. When one of them inevitably starts enshittifying their service, I can quickly move on to another. It’s like having a “go bag” for geocoding.
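The post doesn’t show the server-side code, but the idea is a thin adapter layer: every provider gets wrapped in the same interface, so swapping is a one-line change. Here’s a hypothetical sketch in JavaScript; the wrapper and names are mine, not The Session’s actual code:

// Each provider maps an address string to { lat, lon }
// behind the same interface.
const providers = {
    nominatim: async (address) => {
        const url = 'https://nominatim.openstreetmap.org/search?format=json&q='
            + encodeURIComponent(address);
        const [first] = await (await fetch(url)).json();
        return { lat: Number(first.lat), lon: Number(first.lon) };
    },
    // …other geocoding services with the same shape
};

// The “go bag”: change this one line when a provider enshittifies.
const geocode = providers.nominatim;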

Things are better on the client side. I’m using other people’s JavaScript libraries—like the brilliant abcjs—but at least I can self-host them.

I’m using Leaflet for embedding maps. It’s a great little library built on top of Open Street Map data.

A little while back I linked to a new project called OpenFreeMap. It’s a mapping provider where you even have the option of hosting the tiles yourself!

For now, I’m not self-hosting my map tiles (yet!), but I did want to switch to OpenFreeMap’s tiles. They’re vector-based rather than bitmap, so they’re lovely and crisp.

But there’s an issue.

I can use OpenFreeMap with Leaflet, but to do that I also have to use the MapLibre GL library. But whereas Leaflet is 148K of JavaScript, MapLibre GL is 800K! Yowzers!

That’s mahoosive by the standards of The Session’s performance budget. I’m not sure the loveliness of the vector maps is worth increasing the JavaScript payload by so much.

But this doesn’t have to be an either/or decision. I can use progressive enhancement to get the best of both worlds.

If you land straight on a map page on The Session for the first time, you’ll get the old-fashioned bitmap map tiles. There’s no MapLibre code.

But if you browse around The Session and then arrive on a map page, you’ll get the lovely vector maps.

Here’s what’s happening…

The maps are embedded using an HTML web component called embed-map. The fallback is a static image between the opening and closing tags. The web component then loads up Leaflet.
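In the markup, that might look something like this. The element name comes from the post; the attribute names and paths are my guesses:

<embed-map latitude="53.349" longitude="-6.260">
    <!-- fallback for browsers without JavaScript or custom elements -->
    <img src="/images/static-map.png" alt="Map showing the location of the session">
</embed-map>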

Here’s where the enhancement comes in. When the web component is initiated (in its connectedCallback method), it uses the Cache API to see if MapLibre has been stored in a cache. If it has, it loads that library:

caches.match('/path/to/maplibre-gl.js')
.then( responseFromCache => {
    if (responseFromCache) {
        // load maplibre-gl.js
    }
});

Then when it comes to drawing the map, I can check for the existence of the maplibreGL object. If it exists, I can use OpenFreeMap tiles. Otherwise I use the old Leaflet tiles.
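Something like this hypothetical sketch. The function is mine, and the style URL and plugin usage are assumptions (pairing MapLibre GL with Leaflet usually goes through the maplibre-gl-leaflet plugin, which the post doesn’t name):

function drawMap(container, latitude, longitude) {
    const map = L.map(container).setView([latitude, longitude], 13);
    if (typeof maplibregl !== 'undefined') {
        // The cached MapLibre GL script was found: crisp vector tiles.
        L.maplibreGL({
            style: 'https://tiles.openfreemap.org/styles/liberty'
        }).addTo(map);
    } else {
        // No MapLibre in the cache: fall back to bitmap tiles.
        L.tileLayer('https://tile.openstreetmap.org/{z}/{x}/{y}.png', {
            attribution: '© OpenStreetMap contributors'
        }).addTo(map);
    }
}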

But how does the MapLibre library end up in a cache? That’s thanks to the service worker script.

During the service worker’s install event, I give it a list of static files to cache: CSS, JavaScript, and so on. That includes third-party libraries like abcjs, Leaflet, and now MapLibre GL.
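A minimal sketch of that install step; the cache name and file paths are placeholders:

const STATIC_CACHE = 'static-v1';

self.addEventListener('install', (event) => {
    event.waitUntil(
        caches.open(STATIC_CACHE).then((cache) =>
            cache.addAll([
                '/css/styles.css',
                '/js/abcjs.js',
                '/js/leaflet.js',
                '/js/maplibre-gl.js'
            ])
        )
    );
});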

Crucially this caching happens off the main thread. It happens in the background and it won’t slow down the loading of whatever page is currently being displayed.

That’s it. If the service worker installation works as planned, you’ll get the nice new vector maps. If anything goes wrong, you’ll get the older version.

By the way, it’s always a good idea to use a service worker and the Cache API to store your JavaScript files. As you know, JavaScript is unduly expensive to performance; not only does the JavaScript file have to be downloaded, it then has to be parsed and compiled. But JavaScript stored in a cache during a service worker’s install event is already parsed and compiled.

Century-Scale Storage

This magnificent piece by Maxwell Neely-Cohen—with some tasteful art-direction—is right up my alley!

This piece looks at a single question. If you, right now, had the goal of digitally storing something for 100 years, how should you even begin to think about making that happen? How should the bits in your stewardship be stored with such a target in mind? How do our methods and platforms look when considered under the harsh unknowns of a century? There are plenty of worthy related subjects and discourses that this piece does not touch at all. This is not a piece about the sheer volume of data we are creating each day, and how we might store all of it. Nor is it a piece about the extremely tough curatorial process of deciding what is and isn’t worth preserving and storing. It is about longevity, about the potential methods of preserving what we make for future generations, about how we make bits endure. If you had to store something for 100 years, how would you do it? That’s it.

Monday, December 2nd, 2024

If Not React, Then What? - Infrequently Noted

Put the kettle on; it’s another epic data-driven screed from Alex. The footnotes on this would be a regular post on any other blog (and yes, even the footnotes have footnotes).

This is a spot-on description of the difference between back-end development and front-end development:

Code that runs on the server can be fully costed. Performance and availability of server-side systems are under the control of the provisioning organisation, and latency can be actively managed by developers and DevOps engineers.

Code that runs on the client, by contrast, is running on The Devil’s Computer. Nothing about the experienced latency, client resources, or even available APIs are under the developer’s control.

Client-side web development is perhaps best conceived of as influence-oriented programming. Once code has left the datacenter, all a web developer can do is send thoughts and prayers.

As a result, an unreasonably effective strategy is to send less code. In practice, this means favouring HTML and CSS over JavaScript, as they degrade gracefully and feature higher compression ratios. Declarative forms generate more functional UI per byte sent. These improvements in resilience and reductions in costs are beneficial in compounding ways over a site’s lifetime.

Thursday, November 7th, 2024

Information literacy and chatbots as search • Buttondown

If someone uses an LLM as a replacement for search, and the output they get is correct, this is just by chance. Furthermore, a system that is right 95% of the time is arguably more dangerous than one that is right 50% of the time. People will be more likely to trust the output, and likely less able to fact check the 5%.

Daring Fireball: Kottke on the Art and Power of Hypertextual Writing

Hypertext links are an information-density multiplier.

The way I’ve long thought about it is that traditional writing — like for print — feels two-dimensional. Writing for the web adds a third dimension. It’s not an equal dimension, though. It doesn’t turn writing from a flat plane into a full three-dimensional cube. It’s still primarily about the same two dimensions as old-fashioned writing. What hypertext links provide is an extra layer of depth. Just the fact that the links are there — even if you, the reader, don’t follow them — makes a sentence read slightly differently. It adds meaning in a way that is unique to the web as a medium for prose.

Sunday, October 20th, 2024

Archives

Speaking of serendipity, not long after I wrote about making a static archive of The Session for people to download and share, I came across a piece by Alex Chan about using static websites for tiny archives.

The use-case is slightly different—this is about personal archives, like paperwork, screenshots, and bookmarks. But we both came up with the same process:

I’m deliberately going low-scale, low-tech. There’s no web server, no build system, no dependencies, and no JavaScript frameworks.

And we share the same hope:

Because this system has no moving parts, and it’s just files on a disk, I hope it will last a long time.

You should read the whole thing, where Alex describes all the other approaches they took before settling on plain ol’ HTML files in a folder:

HTML is low maintenance, it’s flexible, and it’s not going anywhere. It’s the foundation of the entire web, and pretty much every modern computer has a web browser that can render HTML pages. These files will be usable for a very long time – probably decades, if not more.

I’m enjoying this approach, so I’m going to keep using it. What I particularly like is that the maintenance burden has been essentially zero – once I set up the initial site structure, I haven’t had to do anything to keep it working.

They also talk about digital preservation:

I’d love to see static websites get more use as a preservation tool.

I concur! And it’s particularly interesting for Alex to be making this observation in the context of working with the Flickr Foundation. That’s where they’re experimenting with the concept of a data lifeboat:

What should we do when a digital service sinks?

This is something that George spoke about at the final dConstruct in 2022. You can listen to the talk on the dConstruct archive.