Journal tags: cat

Session spider

Here’s some code to show the distance to the nearest airports on a map.

Here’s a modified version that shows the distance to the nearest Gregg’s. The hub-and-spoke visualisation overlaid on the map changes as you pan around, making it look like a spider bestriding the landscape.

Jonty’s version shows the distance to the nearest Pret a Manger.

I got nerdsniped by someone saying:

@adactio This would be cool for sessions 😉

He’s right, dammit! So here you go:

Session spider.

Now you can see how far you are from the nearest traditional Irish music sessions.

It’s using data from the weekly data dumps from thesession.org—I added a GeoJSON file in there.
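
If you’re curious about the mechanics, it boils down to a nearest-neighbour calculation. Here’s a rough JavaScript sketch (not the actual code behind the demo), assuming a GeoJSON FeatureCollection of session locations:

// Haversine distance in kilometres between two [longitude, latitude] pairs.
function distanceKm([lng1, lat1], [lng2, lat2]) {
  const toRad = deg => deg * Math.PI / 180;
  const R = 6371; // mean Earth radius in km
  const dLat = toRad(lat2 - lat1);
  const dLng = toRad(lng2 - lng1);
  const a = Math.sin(dLat / 2) ** 2 +
    Math.cos(toRad(lat1)) * Math.cos(toRad(lat2)) * Math.sin(dLng / 2) ** 2;
  return 2 * R * Math.asin(Math.sqrt(a));
}

// Return the `count` nearest features to the point [lng, lat], with distances.
function nearestSessions(geojson, here, count = 5) {
  return geojson.features
    .map(feature => ({
      feature,
      km: distanceKm(here, feature.geometry.coordinates)
    }))
    .sort((a, b) => a.km - b.km)
    .slice(0, count);
}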

Pure silliness, but it does make me wonder what kind of actually good data visualisations could be made with all this scrumptious data.

Twittotage

I left Twitter in 2022. With every day that has passed since then, that decision has proven to be correct.

(I’m honestly shocked that some people I know still have active Twitter accounts. At this point there is no justification for giving your support to a place that’s literally run by a nazi.)

I also used to have some Twitter bots. There were Twitter accounts for my blog and for my links. A simple If-This-Then-That recipe would poll my RSS feeds and then post an update whenever there was a new item.

I had something similar going for The Session. Its Twitter bot has been replaced with automated accounts on Mastodon and Bluesky (I couldn’t use IFTTT directly to post to Bluesky from RSS, but I was able to set up Buffer to do the job).

I figured The Session’s Twitter account would probably just stop working at some point, but it seems like it’s still going.

Hah! I spoke too soon. I just decided to check that URL and nothing is loading. Now, that may just be a temporary glitch because Alan Musk has decided to switch off a server or something. Or it might be that the account has been cancelled because of how I modified its output.

I’ve altered the IFTTT recipe so that whenever there’s a new item in an RSS feed, the update is posted to Twitter along with a message like “Please use Bluesky or Mastodon instead of Twitter” or “Please stop using Twitter/X”, or “Get off Twitter—please. It’s a cesspit” or “If you’re still on Twitter, you’re supporting a fascist.”

That’s a start but I need to think about how I can get the bot to do as much damage as possible before it’s destroyed.

Choosing a geocoding provider

Yesterday when I mentioned my paranoia of third-party dependencies on The Session, I said:

I’ve built in the option to switch between multiple geocoding providers. When one of them inevitably starts enshittifying their service, I can quickly move on to another. It’s like having a “go bag” for geocoding.

(Geocoding, by the way, is when you provide a human-readable address and get back latitude and longitude coordinates.)

My paranoia is well-founded. I’ve been using Google’s geocoding API, which is changing its pricing model from next March.

You wouldn’t know it from the breathlessly excited emails they’ve been sending about it, but this is not a good change for me. I don’t do that much geocoding on The Session—around 13,000 or 14,000 requests a month. With the new pricing model that’ll be around $15 to $20 a month. Currently I slip by under the radar with the free tier.

So it might be time for me to flip that switch in my code. But which geocoding provider should I use?

There are plenty of slop-like listicles out there enumerating the various providers, but they’re mostly just regurgitating the marketing blurbs from the provider websites. What I need is more like a test kitchen.

Here’s what I did…

I took a representative sample of six recent additions to the sessions section of thesession.org. These examples represent places in the USA, Ireland, England, Scotland, Northern Ireland, and Spain, so a reasonable spread.

For each one of those sessions, I’m taking:

  • the venue name,
  • the town name,
  • the area name, and
  • the country.

I’m deliberately not including the street address. Quite often people don’t bother including this information so I want to see how well the geocoding APIs cope without it.

I’ve scored the results on a simple scale of good, so-so, and just plain wrong.

  • A good result gets a score of one. This is when the result gives back an accurate street-level result.
  • A so-so result gets a score of zero. This is when it’s got the right coordinates for the town, but no more than that.
  • A wrong result gets a score of minus one. This is when the result is like something from a large language model: very confident but untethered from reality, like claiming the address is in a completely different country. Being wrong is worse than being vague, hence the difference in scoring.

Then I tot up those results for an overall score for each provider.
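
In case it’s useful, here’s the shape of that test kitchen as a JavaScript sketch. It isn’t the code I actually ran: geocode() stands in for whatever request each provider needs, and score() is where the human judgement about good, so-so, and wrong happens.

// Run every sample address through every provider and tot up the scores.
async function rankProviders(providers, samples, geocode, score) {
  const totals = {};
  for (const provider of providers) {
    totals[provider] = 0;
    for (const sample of samples) {
      // Venue, town, area and country (but deliberately no street address).
      const query = [sample.venue, sample.town, sample.area, sample.country].join(', ');
      const result = await geocode(provider, query);
      totals[provider] += score(result, sample); // 1, 0, or -1
    }
  }
  // Highest total wins.
  return Object.entries(totals).sort((a, b) => b[1] - a[1]);
}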

When I tried my six examples with twelve different geocoding providers, these were the results:

Geocoding providers

Provider        USA  England  Ireland  Spain  Scotland  Northern Ireland  Total
Google            1        1        1      1         1                 1      7
Mapquest          1        1        1      1         1                 1      7
Geoapify          0        1        1      0         1                 0      3
Here              1        1        0      1         0                 0      3
Mapbox            1        1        0      1         1                -1      3
Bing              1        0        0      0         0                 0      1
Nominatim         0        0        0      0        -1                 1      0
OpenCage         -1        1        0      0         0                -1     -1
Tom Tom          -1       -1        0      0        -1                 1     -2
Positionstack     0       -1        0     -1         1                -1     -2
Locationiq       -1        0       -1      0         0                -1     -3
Map Maker        -1        0       -1     -1        -1                -1     -5

Some interesting results there. I was surprised by how crap Bing is. I was also expecting better results from Mapbox.

Most interesting for me, Mapquest is right up there with Google.

So now that I’ve got a good scoring system, my next question is around pricing. If Google and Mapquest are roughly comparable in terms of accuracy, how would the pricing work out for each of them?

Let’s say I make 15,000 API requests a month. Under Google’s new pricing plan, that works out at $25. Not bad.

But if I’ve understood Mapquest’s pricing correctly, I reckon I’ll just squeak in under the free tier.

Looks like I’m flipping the switch to Mapquest.

If you’re shopping around for geocoding providers, I hope this is useful to you. But I don’t think you should just look at my results; they’re very specific to my needs. Come up with your own representative sample of tests and try putting the providers through their paces with your data.

If, for some reason, you want to see the terrible PHP code I’m using for geocoding on The Session, here it is.

Progressively enhancing maps

The Session has been online for over 20 years. When you maintain a site for that long, you don’t want to be relying on third parties—it’s only a matter of time until they’re no longer around.

Some third party APIs are unavoidable. The Session has maps for sessions and other events. When people add a new entry, they provide the address but then I need to get the latitude and longitude. So I have to use a third-party geocoding API.

My code is like a lesson in paranoia: I’ve built in the option to switch between multiple geocoding providers. When one of them inevitably starts enshittifying their service, I can quickly move on to another. It’s like having a “go bag” for geocoding.
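
The real thing is written in PHP, but here’s a JavaScript sketch of the idea: every provider gets a little adapter with the same signature, and a single config value decides which adapter is live. The adapter internals here are illustrative, so check each provider’s documentation for the actual endpoints and response formats.

const GOOGLE_API_KEY = '…'; // placeholder

const providers = {
  async nominatim(address) {
    const url = 'https://nominatim.openstreetmap.org/search?format=json&q=' +
      encodeURIComponent(address);
    const [first] = await (await fetch(url)).json();
    return first ? { lat: Number(first.lat), lng: Number(first.lon) } : null;
  },
  async google(address) {
    const url = 'https://maps.googleapis.com/maps/api/geocode/json?key=' +
      GOOGLE_API_KEY + '&address=' + encodeURIComponent(address);
    const data = await (await fetch(url)).json();
    const location = data.results?.[0]?.geometry?.location;
    return location ? { lat: location.lat, lng: location.lng } : null;
  }
};

// Flipping the switch is a one-line change.
const ACTIVE_PROVIDER = 'nominatim';

function geocode(address) {
  return providers[ACTIVE_PROVIDER](address);
}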

Things are better on the client side. I’m using other people’s JavaScript libraries—like the brilliant abcjs—but at least I can self-host them.

I’m using Leaflet for embedding maps. It’s a great little library built on top of Open Street Map data.

A little while back I linked to a new project called OpenFreeMap. It’s a mapping provider where you even have the option of hosting the tiles yourself!

For now, I’m not self-hosting my map tiles (yet!), but I did want to switch to OpenFreeMap’s tiles. They’re vector-based rather than bitmap, so they’re lovely and crisp.

But there’s an issue.

I can use OpenFreeMap with Leaflet, but to do that I also have to use the MapLibre GL library. But whereas Leaflet is 148K of JavaScript, MapLibre GL is 800K! Yowzers!

That’s mahoosive by the standards of The Session’s performance budget. I’m not sure the loveliness of the vector maps is worth increasing the JavaScript payload by so much.

But this doesn’t have to be an either/or decision. I can use progressive enhancement to get the best of both worlds.

If you land straight on a map page on The Session for the first time, you’ll get the old-fashioned bitmap map tiles. There’s no MapLibre code.

But if you browse around The Session and then arrive on a map page, you’ll get the lovely vector maps.

Here’s what’s happening…

The maps are embedded using an HTML web component called embed-map. The fallback is a static image between the opening and closing tags. The web component then loads up Leaflet.

Here’s where the enhancement comes in. When the web component is initiated (in its connectedCallback method), it uses the Cache API to see if MapLibre has been stored in a cache. If it has, it loads that library:

caches.match('/path/to/maplibre-gl.js')
.then( responseFromCache => {
    if (responseFromCache) {
        // load maplibre-gl.js
    }
});

Then when it comes to drawing the map, I can check for the existence of the maplibregl object. If it exists, I can use OpenFreeMap tiles. Otherwise I use the old Leaflet tiles.
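
In other words, something along these lines (a simplified sketch; the element ID and the OpenFreeMap style URL are placeholders rather than what’s actually on the site):

function drawMap(lat, lng, zoom) {
  if (window.maplibregl) {
    // Vector tiles via OpenFreeMap (see their docs for current style URLs).
    new maplibregl.Map({
      container: 'map',
      style: 'https://tiles.openfreemap.org/styles/liberty',
      center: [lng, lat],
      zoom: zoom
    });
  } else {
    // Bitmap tiles via Leaflet.
    const map = L.map('map').setView([lat, lng], zoom);
    L.tileLayer('https://tile.openstreetmap.org/{z}/{x}/{y}.png', {
      attribution: '© OpenStreetMap contributors'
    }).addTo(map);
  }
}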

But how does the MapLibre library end up in a cache? That’s thanks to the service worker script.

During the service worker’s install event, I give it a list of static files to cache: CSS, JavaScript, and so on. That includes third-party libraries like abcjs, Leaflet, and now MapLibre GL.
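
Here’s a stripped-down sketch of that install handler. The file names are placeholders, not the site’s actual assets:

const staticCacheName = 'static-v1';

self.addEventListener('install', event => {
  // Pre-cache static assets, including the big optional libraries.
  event.waitUntil(
    caches.open(staticCacheName).then(cache =>
      cache.addAll([
        '/css/styles.css',
        '/js/abcjs.js',
        '/js/leaflet.js',
        '/js/maplibre-gl.js'
      ])
    )
  );
});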

Crucially this caching happens off the main thread. It happens in the background and it won’t slow down the loading of whatever page is currently being displayed.

That’s it. If the service worker installation works as planned, you’ll get the nice new vector maps. If anything goes wrong, you’ll get the older version.

By the way, it’s always a good idea to use a service worker and the Cache API to store your JavaScript files. As you know, JavaScript is unduly expensive to performance; not only does the JavaScript file have to be downloaded, it then has to be parsed and compiled. But JavaScript stored in a cache during a service worker’s install event is already parsed and compiled.

Cocolingo

This year I decided I wanted to get better at speaking Irish.

Like everyone brought up in Ireland, I sort of learned the Irish language in school. It was a compulsory subject, along with English and maths.

But Irish wasn’t really taught like a living conversational language. It was all about learning enough to pass the test. Besides, if there’s one thing that’s guaranteed to put me off something, it’s making it compulsory.

So for the first couple of decades of my life, I had no real interest in the Irish language, just as I had no real interest in traditional Irish music. They were both tainted by some dodgy political associations. They were both distinctly uncool.

But now? Well, Irish traditional music rules my life. And I’ve come to appreciate the Irish language as a beautiful expressive thing.

I joined a WhatsApp group for Irish language learners here in Brighton. The idea is that we’d get together to attempt some conversation as Gaeilge but we’re pretty lax about actually doing that.

Then there’s Duolingo. I started …playing? doing? Not sure what the verb is.

Duolingo is a bit of a mixed bag. I think it works pretty well for vocabulary acquisition. But it’s less useful for grammar. I was glad that I had some rudiments of Irish from school or I would’ve been completely lost.

Duolingo will tell you what the words are, but it never tells you why. For that I’m going to have to knuckle down with some Irish grammar books, videos, or tutors.

Duolingo is famous for its gamification. It mostly worked on me. I had to consciously remind myself sometimes that the purpose was to get better at Irish, not to score more points and ascend a league table.

Oh, did I ascend that league table!

But I can’t take all the credit. That must go to Coco, the cat.

It’s not that Coco is particularly linguistically gifted. Quite the opposite. She never says a word. But she did introduce a routine that lent itself to doing Duolingo every day.

Coco is not our cat. But she makes herself at home here, for which we feel inordinately honoured.

Coco uses our cat flap to come into the house pretty much every morning. Then she patiently waits for one of us to get up. I’m usually up first, so I’m the one who gives Coco what she wants. I go into the living room and sit on the sofa. Coco then climbs on my lap.

It’s a lovely way to start the day.

But of course I can’t just sit there alone with my own thoughts and a cat. I’ve got to do something. So rather than starting the day with some doomscrolling, I start with some Irish on Duolingo.

After an eleven-month streak, something interesting happened; I finished.

I’m not used to things on the internet having an end. Had I been learning a more popular language I’m sure there would’ve been many more lessons. But Irish has a limited lesson plan.

Of course the Duolingo app doesn’t say “You did it! You can delete the app now!” It tries to get me to do refresher exercises, but we both know that there are diminishing returns and we’d just be going through the motions. It’s time for us to part ways.

I’ve started seeing other apps. Mango is really good so far. It helps that they’ve made some minority languages available for free, Irish included.

I’m also watching programmes on TG4, the Irish language television station that has just about everything in its schedule available online for free anywhere in the world. I can’t bring myself to get stuck into Ros na Rún, the trashy Irish language soap opera, but I have no problem binging on CRÁ, the gritty Donegal crime drama.

There are English subtitles available for just about everything on TG4. I wish that Irish subtitles were also available—it’s really handy to hear and read Irish at the same time—but only a few shows offer that, like the kids’ cartoon Lí Ban.

Oh, and I’ve currently got a book on Irish grammar checked out of the local library. So now when Coco comes to visit in the morning, she can keep me company while I try to learn from that.

Syndicating to Bluesky

Last year I described how I syndicate my posts to different social networks.

Back then my approach to syndicating to Bluesky was to piggy-back off my micro.blog account (which is really just the RSS feed of my notes):

Micro.blog can also cross-post to other services. One of those services is Bluesky. I gave permission to micro.blog to syndicate to Bluesky so now my notes show up there too.

It worked well enough, but it wasn’t real-time and I didn’t have much control over the formatting. As Bluesky is having quite a moment right now, I decided to upgrade my syndication strategy and use the Bluesky API.

Here’s how it works…

First you need to generate an app password. You’ll need this so that you can generate a token. You need the token so you can generate …just kidding; the chain of generated gobbledegook stops there.

Here’s the PHP I’m using to generate a token. You’ll need your Bluesky handle and the app password you generated.

Now that I’ve got a token, I can send a post. Here’s the PHP I’m using.

There’s some extra code in there to spot URLs and turn them into links. Bluesky has a very weird way of doing this.
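
For the curious, here’s a JavaScript sketch of those two steps: creating a session and then creating a post with a link facet. It isn’t my PHP, and the details are illustrative, but the endpoints are the standard AT Protocol ones.

// Step one: swap your handle and app password for a session token.
async function createSession(handle, appPassword) {
  const response = await fetch('https://bsky.social/xrpc/com.atproto.server.createSession', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ identifier: handle, password: appPassword })
  });
  return response.json(); // contains accessJwt and did
}

// Step two: create a post. Links aren't autodetected: you have to describe
// each one as a "facet" with byte offsets (not character offsets) into the text.
async function postToBluesky(session, text, url) {
  const byteStart = new TextEncoder().encode(text).length; // URL appended at the end
  const record = {
    $type: 'app.bsky.feed.post',
    text: text + url,
    createdAt: new Date().toISOString(),
    facets: [{
      index: { byteStart, byteEnd: byteStart + new TextEncoder().encode(url).length },
      features: [{ $type: 'app.bsky.richtext.facet#link', uri: url }]
    }]
  };
  const response = await fetch('https://bsky.social/xrpc/com.atproto.repo.createRecord', {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      'Authorization': 'Bearer ' + session.accessJwt
    },
    body: JSON.stringify({
      repo: session.did,
      collection: 'app.bsky.feed.post',
      record
    })
  });
  return response.json(); // includes the new record's uri and cid
}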

It didn’t take too long to get posting working. After some more tinkering I got images working too. Now I can post straight from my website to my Bluesky profile. The Bluesky API returns an ID for the post that I’ve created there so I can link to it from the canonical post here on my website.

I’ve updated my posting interface to add a toggle for Bluesky right alongside the toggle for Mastodon. There used to be a toggle for Twitter. That’s long gone.

Now when I post a note to my website, I can choose if I want to send a copy to Mastodon or Bluesky or both.

One day Bluesky will go away. It won’t matter much to me. My website will still be here.

Feed reading

I described using my feed reader like this:

I would hate if catching up on RSS feeds felt like catching up on email.

Instead it’s like this:

When I open my RSS reader to catch up on the feeds I’m subscribed to, it doesn’t feel like opening my email client. It feels more like opening a book.

It also feels different to social media. Like Lucy Bellwood says:

I have a richer picture of the group of people in my feed reader than I did of the people I regularly interacted with on social media platforms like Instagram.

There’s also the blessed lack of any algorithm:

Because blogs are much quieter than social media, there’s also the ability to switch off that awareness that Someone Is Always Watching.

Cory Doctorow has been praising the merits of RSS:

This conduit is anti-lock-in, it works for nearly the whole internet. It is surveillance-resistant, far more accessible than the web or any mobile app interface.

Like Lucy, he emphasises the lack of algorithm:

By default, you’ll get everything as it appears, in reverse-chronological order.

Does that remind you of anything? Right: this is how social media used to work, before it was enshittified. You can single-handedly disenshittify your experience of virtually the entire web, just by switching to RSS, traveling back in time to the days when Facebook and Twitter were more interested in showing you the things you asked to see, rather than the ads and boosted content someone else would pay to cram into your eyeballs.

The only algorithm at work in my feed reader—or on Mastodon—is good old-fashioned serendipity, when posts just happened to rhyme or resonate. Like this morning, when I read this from Alice:

There is no better feeling than walking along, lost in my own thoughts, and feeling a small hand slip into mine. There you are. Here I am. I love you, you silly goose.

And then I read this from Denise:

I pass a mother and daughter, holding hands. The little girl is wearing a sequinned covered jacket. She looks up at her mother who says “…And the sun’s going to come out and you’re just going to shine and shine and shine.”

What price?

I’ve noticed a really strange justification from people when I ask them about their use of generative tools that use large language models (colloquially and inaccurately labelled as artificial intelligence).

I’ll point out that the training data requires the wholesale harvesting of creative works without compensation. I’ll also point out the ludicrously profligate energy use required not just for the training, but for the subsequent queries.

And here’s the thing: people will acknowledge those harms but they will justify their actions by saying “these things will get better!”

First of all, there’s no evidence to back that up.

If anything, as the well gets poisoned by their own outputs, large language models may well end up eating their own slop and getting their own version of mad cow disease. So this might be as good as they’re ever going to get.

And when it comes to energy usage, all the signals from NVIDIA, OpenAI, and others are that power usage is going to increase, not decrease.

But secondly, what the hell kind of logic is that?

It’s like saying “It’s okay for me to drive my gas-guzzling SUV now, because in the future I’ll be driving an electric vehicle.”

The logic is completely backwards! If large language models are going to improve their ethical shortcomings (which is debatable, but let’s be generous), then that’s all the more reason to avoid using the current crop of egregiously damaging tools.

You don’t get companies to change their behaviour by rewarding them for it. If you really want better behaviour from the purveyors of generative tools, you should be boycotting the current offerings.

I suspect that most people know full well that the “they’ll get better!” defence doesn’t hold water. But you can convince yourself of anything when everyone around is telling you that this is the future baby, and you’d better get on board or you’ll be left behind.

Baldur reminds us that this is how people talked about asbestos:

Every time you had an industry campaign against an asbestos ban, they used the same rhetoric. They focused on the potential benefits – cheaper spare parts for cars, cheaper water purification – and doing so implicitly assumed that deaths and destroyed lives, were a low price to pay.

This is the same strategy that’s being used by those who today talk about finding productive uses for generative models without even so much as gesturing towards mitigating or preventing the societal or environmental harms.

It reminds me of the classic Ursula Le Guin short story, The Ones Who Walk Away from Omelas, which depicts:

…the utopian city of Omelas, whose prosperity depends on the perpetual misery of a single child.

Once citizens are old enough to know the truth, most, though initially shocked and disgusted, ultimately acquiesce to this one injustice that secures the happiness of the rest of the city.

It turns out that most people will blithely accept injustice and suffering not for a utopia, but just for some bland hallucinated slop.

Don’t get me wrong: I’m not saying large language models are without their uses. I love seeing what Simon and Matt are doing when it comes to coding. And large language models can be great for transforming content from one format to another, like transcribing speech into text. But the balance sheet just doesn’t add up.

As Molly White put it: AI isn’t useless. But is it worth it?:

Even as someone who has used them and found them helpful, it’s remarkable to see the gap between what they can do and what their promoters promise they will someday be able to do. The benefits, though extant, seem to pale in comparison to the costs.

Directory enquiries

I was having a discussion with some of my peers a little while back. We were collectively commenting on the state of education and documentation for front-end development.

A lot of the old stalwarts have fallen by the wayside of late. CSS Tricks hasn’t been the same since it got bought out by Digital Ocean. A List Apart goes through fallow periods. Even the Mozilla Developer Network is looking to squander its trust by adding inaccurate “content” generated by a large language model.

The most obvious solution is to start up a brand new resource for front-end developers. But there are two problems with that:

  1. It’s really, really, really hard work, and
  2. It feels a bit 927.

I actually think there are plenty of good articles and resources on front-end development being published. But they’re not being published in any one specific place. People are publishing them on their own websites.

Ahmed, Josh, Stephanie, Andy, Lea, Rachel, Robin, Michelle …I could go on, but you get the picture.

All this wonderful stuff is distributed across the web. If you have a well-stocked RSS reader, you’re all set. But if you’re new to front-end development, how do you know where to find this stuff? I don’t think you can rely on search, unless you have a taste for slop.

I think the solution lies not with some hand-wavey “AI” algorithm that burns a forest for every query. I think the solution lies with human curation.

I take inspiration from Phil’s fantastic project, ooh.directory. Imagine taking that idea of categorisation and applying it to front-end dev resources.

Whether it’s a post on web.dev, Smashing Magazine, or someone’s personal site, it could be included and categorised appropriately.

Now, there would still be a lot of work involved, especially in listing and categorising the articles that are already out there, but it wouldn’t be nearly as much work as trying to create those articles from scratch.

I don’t know what the categories should be. Does it make sense to have top-level categories for HTML, CSS, and JavaScript, with sub-directories within them? Or does it make more sense to categorise by topics like accessibility, animation, and so on?

And this being the web, there’s no reason why one article couldn’t be tagged to simultaneously live in multiple categories.

There’s plenty of meaty information architecture work to be done. And there’d be no shortage of ongoing work to handle new submissions.

A stretch goal could be the creation of “playlists” of hand-picked articles. “Want to get started with CSS grid layout? Read that article over there, watch this YouTube video, and study this page on MDN.”

What do you think? Does this one-stop shop of hyperlinks sound like it would be useful? Does it sound feasible?

I’m just throwing this out there. I’d love it if someone were to run with it.

Filters

My phone rang today. I didn’t recognise the number so although I pressed the big button to answer the call, I didn’t say anything.

I didn’t say anything because usually when I get a call from a number I don’t know, it’s some automated spam. If I say nothing, the spam voice doesn’t activate.

But sometimes it’s not a spam call. Sometimes after a few seconds of silence a human at the other end of the call will say “Hello?” in an uncertain tone. That’s the point when I respond with a cheery “Hello!” of my own and feel bad for making this person endure those awkward seconds of silence.

Those spam calls have made me so suspicious that real people end up paying the price. False positives caught in my spam-detection filter.

Now it’s happening on the web.

I wrote about how Google search, Bing, and the Mozilla Developer Network are squandering trust:

Trust is a precious commodity. It takes a long time to build trust. It takes a short time to destroy it.

But it’s not just limited to specific companies. I’ve noticed more and more suspicion related to any online activity.

I’ve seen members of a community site jump to the conclusion that a new member’s pattern of behaviour was a sure sign that this was a spambot. But it could just as easily have been the behaviour of someone who isn’t neurotypical or who doesn’t speak English as their first language.

Jessica was looking at some pictures on an AirBnB listing recently and found herself examining some photos that seemed a little too good to be true, questioning whether they were in fact output by some generative tool.

Every email that lands in my inbox is like a little mini Turing test. Did a human write this?

Our guard is up. Our filters are activated. Our default mode is suspicion.

This is most apparent with web search. We’ve always needed to filter search results through our own personal lenses, but now it’s like playing whack-a-mole. First we have to find workarounds for avoiding slop, and then when we click through to a web page, we have to evaluate whether it’s been generated by some SEO spammer making full use of the new breed of content-production tools.

There’s been a lot of hand-wringing about how this could spell doom for the web. I don’t think that’s necessarily true. It might well spell doom for web search, but I’m okay with that.

Back before its enshittification—an enshittification that started even before all the recent AI slop—Google solved the problem of accurate web searching with its PageRank algorithm. Before that, the only way to get to trusted information was to rely on humans.

Humans made directories like Yahoo! or DMOZ where they categorised links. Humans wrote blog posts where they linked to something that they, a human, vouched for as being genuinely interesting.

There was life before Google search. There will be life after Google search.

Look, there’s even a new directory devoted to cataloging blogs: websites made by humans. Life finds a way.

All of the spam and slop that’s making us so suspicious may end up giving us a new appreciation for human curation.

It wouldn’t be a straightforward transition to move away from search. It would be uncomfortable. It would require behaviour change. People don’t like change. But when needs must, people adapt.

The first bit of behaviour change might be a rediscovery of bookmarks. It used to be that when you found a source you trusted, you bookmarked it. Browsers still have bookmarking functionality but most people rely on search. Maybe it’s time for a bookmarking revival.

A step up from that would be using a feed reader. In many ways, a feed reader is a collection of bookmarks, but all of the bookmarks get polled regularly to see if there are any updates. I love using my feed reader. Everything I’ve subscribed to in there is made by humans.

The ultimate bookmark is an icon on the homescreen of your phone or in the dock of your desktop device. A human source you trust so much that you want it to be as accessible as any app.

Right now the discovery mechanism for that is woeful. I really want that to change. I want a web that empowers people to connect with other people they trust, without any intermediary gatekeepers.

The evangelists of large language models (who may coincidentally have invested heavily in the technology) like to proclaim that a slop-filled future is inevitable, as though we have no choice, as though we must simply accept enshittification as though it were a force of nature.

But we can always walk away.

InstAI

If you use Instagram, there may be a message buried in your notifications. It begins:

We’re getting ready to expand our AI at Meta experiences to your region.

Fuck that. Here’s the important bit:

To help bring these experiences to you, we’ll now rely on the legal basis called legitimate interests for using your information to develop and improve AI at Meta. This means that you have the right to object to how your information is used for these purposes. If your objection is honoured, it will be applied going forwards.

Follow that link and fill in the form. For the field labelled “Please tell us how this processing impacts you” I wrote:

It’s fucking rude.

That did the trick. I got an email saying:

We’ve reviewed your request and will honor your objection.

Mind you, there’s still this:

We may still process information about you to develop and improve AI at Meta, even if you object or don’t use our products and services.

Labels

I love libraries. I think they’re one of humanity’s greatest inventions.

My local library here in Brighton is terrific. It’s well-stocked, it’s got a welcoming atmosphere, and it’s in a great location.

But it has an information architecture problem.

Like most libraries, it’s using the Dewey Decimal system. It’s not a great system, but every classification system is going to have flaws—wherever you draw boundaries, there will be disagreement.

The Dewey Decimal class of 900 is for history and geography. Within that class, those 100 numbers (900 to 999) are further subdivided into groups of 10. For example, everything from 940 to 949 is for the history of Europe.

Dewey Decimal number 941 is for the history of the British Isles. The term “British Isles” is a geographical designation. It’s not a good geographical designation, but technically it’s not a political term. So it’s actually pretty smart to use a geographical rather than a political term for categorisation: geology moves a lot slower than politics.

But the Brighton Library is using the wrong label for their shelves. Everything under 941 is labelled “British History.”

The island of Ireland is part of the British Isles.

The Republic of Ireland is most definitely not part of Britain.

Seeing books about the history of Ireland, including post-colonial history, on a shelf labelled “British History” is …not good. Frankly, it’s offensive.

(I mentioned this situation to an English friend of mine, who said “Well, Ireland was once part of the British Empire”, to which I responded that all the books in the library about India should also be filed under “British History” by that logic.)

Just to be clear, I’m not saying there’s a problem with the library using the Dewey Decimal system. I’m saying they’re technically not using the system. They’ve deviated from the system’s labels by treating “History of the British Isles” and “British History” as synonymous.

I spoke to the library manager. They told me to write an email. I’ve written an email. We’ll see what happens.

You might think I’m being overly pedantic. That’s fair. But the fact this is happening in a library in England adds to the problem. It’s not just technically incorrect, it’s culturally clueless.

Mind you, I have noticed that quite a few English people have a somewhat fuzzy idea about the Republic of Ireland. Like, they understand it’s a different country, but they think it’s a different country in the way that Scotland is a different country, or Wales is a different country. They don’t seem to grasp that Ireland is a different country like France is a different country or Germany is a different country.

It would be charming if not for, y’know, those centuries of subjugation, exploitation, and forced starvation.

British history.

Update: They fixed it!

Federation syndication

I’m quite sure this is of no interest to anyone but me, but I finally managed to fix a longstanding weird issue with my website.

I realise that me telling you about a bug specific to my website is like me telling you about a dream I had last night—fascinating for me; incredibly dull for you.

For some reason, my site was being brought to its knees anytime I syndicated a note to Mastodon. I rolled up my sleeves to try to figure out what the problem could be. I was fairly certain the problem was with my code—I’m not much of a back-end programmer.

My tech stack is classic LAMP: Linux, Apache, MySQL and PHP. When I post a note, it gets saved to my database. Then I make a curl request to the Mastodon API to syndicate the post over there. That’s when my CPU starts climbing and my server gets all “bad gateway!” on me.

After spending far too long pulling apart my PHP and curl code, I had to come to the conclusion that I was doing nothing wrong there.

I started watching which processes were making the server fall over. It was MySQL. That seemed odd, because I’m not doing anything too crazy with my database reads.

Then I realised that the problem wasn’t any particular query. The problem was volume. But it only happened when I posted a note to Mastodon.

That’s when I had a lightbulb moment about how the fediverse works.

When I post a note to Mastodon, it includes a link back to the original note on my site. At this point Mastodon does its federation magic and starts spreading the post to all the instances subscribed to my account. And every single one of them follows the link back to the note on my site …all at the same time.

This isn’t a problem when I syndicate my blog posts, because I’ve got a caching mechanism in place for those. I didn’t think I’d need any caching for little ol’ notes. I was wrong.

A simple solution would be not to include the link back to the original note. But I like the reminder that what you see on Mastodon is just a copy. So now I’ve got the same caching mechanism for my notes as I do for my journal (and I did my links while I was at it). Everything is hunky-dory. I can syndicate to Mastodon with impunity.
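
The actual implementation is PHP talking to MySQL, but the shape of the fix is a plain old read-through cache. A minimal sketch:

// A burst of fediverse requests for the same note should only cost one query.
const noteCache = new Map();
const CACHE_TTL = 60 * 60 * 1000; // an hour, say

async function getNote(id, queryDatabase) {
  const cached = noteCache.get(id);
  if (cached && Date.now() - cached.storedAt < CACHE_TTL) {
    return cached.html; // serve the cached rendering
  }
  const html = await queryDatabase(id); // only hit the database on a cache miss
  noteCache.set(id, { html, storedAt: Date.now() });
  return html;
}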

See? I told you it would only be of interest to me. Although I guess there’s a lesson here. Something something caching.

Switching costs

Cory has published the transcript of his talk at the Transmediale festival in Berlin. It’s all about enshittification, and what we can collectively do to reverse it.

He succinctly describes the process of enshittification like this:

First, platforms are good to their users; then they abuse their users to make things better for their business customers; finally, they abuse those business customers to claw back all the value for themselves. Then, they die.

More importantly, he describes the checks and balances that keep enshittification from happening, all of which have been dismantled over time: competition, regulation, self-help, and workers.

One of the factors that allows enshittification to proceed is a high switching cost:

Switching costs are everything you have to give up when you leave a product or service. In Facebook’s case, it was all the friends there that you followed and who followed you. In theory, you could have all just left for somewhere else; in practice, you were hamstrung by the collective action problem.

It’s hard to get lots of people to do the same thing at the same time.

We’ve seen this play out over at Twitter, where people I used to respect are still posting there as if it hasn’t become a cesspool of far-right racist misogyny reflecting its new owner’s values. But for a significant number of people—including myself and anyone with a modicum of decency—the switching cost wasn’t enough to stop us getting the hell out of there. Echoing Robin’s observation, Cory says:

…the difference between “I hate this service but I can’t bring myself to quit it,” and “Jesus Christ, why did I wait so long to quit? Get me the hell out of here!” is razor thin.

If users can’t leave because everyone else is staying, then when everyone starts to leave, there’s no reason not to go, too.

That’s terminal enshittification, the phase when a platform becomes a pile of shit. This phase is usually accompanied by panic, which tech bros euphemistically call ‘pivoting.’

Anyway, I bring this up because I recently read something else about switching costs, but in a very different context. Jake Lazaroff was talking about JavaScript frameworks:

I want to talk about one specific weakness of JavaScript frameworks: interoperability, or the lack thereof. Almost without exception, each framework can only render components written for that framework specifically.

As a result, the JavaScript community tends to fragment itself along framework lines. Switching frameworks has a high cost, especially when moving to a less popular one; it means leaving most of the third-party ecosystem behind.

That switching cost stunts framework innovation by heavily favoring incumbents with large ecosystems.

Sounds a lot like what Cory was describing with incumbents like Google, Facebook, Twitter, and Amazon.

And let’s not kid ourselves, when we’re talking about incumbent client-side JavaScript frameworks, we might mention Vue or some other contender, but really we’re talking about React.

React has massive switching costs. For over a decade now, companies have been hiring developers based on one criterion: do they know React?

“An expert in CSS you say? No thanks.”

“Proficient in vanilla JavaScript? Don’t call us, we’ll call you.”

Heck, if I were advising someone who was looking for a job in front-end development (as opposed to actually being good at front-end development; two different things), I’d tell them to learn React.

Just as everyone ended up on Facebook because everyone was on Facebook, everyone ended up using React because everyone was using React.

You can probably see where I’m going with this: the inevitable enshittification of React.

Just to be clear, I’m not talking about React getting shittier in terms of what it does. It’s always been a shitty technology for end users:

React is legacy tech from 2013 when browsers didn’t have template strings or a BFCache.

No, I’m talking about the enshittification of the developer experience …the developer experience being the thing that React supposedly has going for it, though as Simon points out, the developer experience has always been pretty crap:

Whether on purpose or not, React took advantage of this situation by continuously delivering or promising to deliver changes to the library, with a brand new API being released every 12 to 18 months. Those new APIs and the breaking changes they introduce are the new shiny objects you can’t help but chase. You spend multiple cycles learning the new API and upgrading your application. It sure feels like you are doing something, but in reality, you are only treading water.

Well, it seems like the enshittification of the React ecosystem is well underway. Cassidy is kind of annoyed at React. Tom is increasingly miffed about the state of React releases, and Matteo asks React, where are you going?

Personally, I would love it if more people were complaining about the dreadful user experience inflicted by client-side React. Instead the complaints are universally about the developer experience.

I guess doing the right thing for the wrong reasons is fine. It’s just a little dispiriting.

I sometimes feel like I’m living that old joke, where I’m the one in the restaurant saying “the food here is terrible!” and most of my peers are saying “I know! And such small portions!”

adactio.com on Mastodon

I’ve been on Mastodon since 2017, but now my website is on there too. I’m @adactio@mastodon.social. My website is @adactio.com@adactio.com—search for adactio.com in your Mastodon client of choice.

What’s the difference? Well, with my mastodon.social account, I’m syndicating stuff—most of my notes, and all of my journal—and including a link back to the original source here on my site. With the adactio.com account, it is the original source.

I thought about migrating over to my adactio.com account from my mastodon.social account, but I actually like having a separate profile. I browse Mastodon a lot more than I post, and browsing is a lot easier to do with a regular account.

If you’d like your website to be available on Mastodon, Bridgy Fed is the magic tool that makes it happen. You’ll need to be able to send webmentions, and you’ll need to configure some .well-known directives. If you’ve already got webmention-sending set up, it’s all quite straightforward (though you will also need to update your HTML to include a link back to fed.brid.gy in each entry you want syndicated).

For some reason the syndication didn’t seem to be working at first, but then when I followed @adactio.com@adactio.com from my @adactio@mastodon.social account, it started working. Maybe there needs to be at least one follower.

Also, my links don’t seem to be showing up on Mastodon even though it looks like everything is posting okay. Not sure what that is about.

Anyway, if you want to follow me on Mastodon, you now have a choice. There’s me, @adactio@mastodon.social, or there’s my website, @adactio.com@adactio.com.

The syndicate

Social networks come and social networks go.

Right now, there’s a whole bunch of social networks coming (Blewski, Freds, Mastication) and one big one going, thanks to Elongate.

Me? I watch all of this unfold like Doctor Manhattan on Mars. I have no great connection to any of these places. They’re all just syndication endpoints to me.

I used to have a checkbox in my posting interface that said “Twitter”. If I wanted to add a copy of one of my notes to Twitter, I’d enable that toggle.

I have, of course, now removed that checkbox. Twitter is dead to me (and it should be dead to you too).

I used to have another checkbox next to that one that said “Flickr”. If I was adding a photo to one of my notes, I could toggle that to send a copy to my Flickr account.

Alas, that no longer works. Flickr only allows you to post 1000 photos before requiring a pro account. Fair enough. I’ve actually posted 20 times that amount since 2005, but I let my pro membership lapse a while back.

So now I’ve removed the “Flickr” checkbox too.

Instead I’ve now got a checkbox labelled “Mastodon” that sends a copy of a note to my Mastodon account.

When I publish a blog post like the one you’re reading now here on my journal, there’s yet another checkbox that says “Medium”. Toggling that checkbox sends a copy of my post to my page on Ev’s blog.

At least it used to. At some point that stopped working too. I was going to start debugging my code, but when I went to the documentation for the Medium API, I saw this:

This repository has been archived by the owner on Mar 2, 2023. It is now read-only.

I guess I missed the memo. I guess Medium also missed the memo, because developers.medium.com is still live. It proudly proclaims:

Medium’s Publishing API makes it easy for you to plug into the Medium network, create your content on Medium from anywhere you write, and expand your audience and your influence.

Not a word of that is accurate.

That page also has a link to the Medium engineering blog. Surely the announcement of the API deprecation would be published there?

Crickets.

Moving on…

I have an account on Bluesky. I don’t know why.

I was idly wondering about sending copies of my notes there when I came across a straightforward solution: micro.blog.

That’s yet another place where I have an account. They make syndication very straightforward. You can go to your account and point to a feed from your own website.

That’s it. Syndication enabled.

It gets better. Micro.blog can also cross-post to other services. One of those services is Bluesky. I gave permission to micro.blog to syndicate to Bluesky so now my notes show up there too.

It’s like dominoes falling: I post something on my website which updates my RSS feed which gets picked up by micro.blog which passes it on to Bluesky.

I noticed that one of the other services that micro.blog can post to is Medium. Hmmm …would that still work given the abandonment of the API?

I gave permission to micro.blog to cross-post to Medium when my feed of blog posts is updated. It seems to have worked!

We’ll see how long it lasts. We’ll see how long any of them last. Today’s social media darlings are tomorrow’s Friendster and MySpace.

When the current crop of services wither and die, my own website will still remain in full bloom.

Permission

Back when the web was young, it wasn’t yet clear what the rules were. Like, could you really just link to something without asking permission?

Then came some legal rulings to establish that, yes, on the web you can just link to anything without checking if it’s okay first.

What about search engines and directories? Technically they’re rifling through all the stuff we publish and reposting snippets of it. Is that okay?

Again, through some legal precedents—but mostly common agreement—everyone decided that on balance it was fine. After all, those snippets they publish are helping your site get traffic.

In short order, search came to rule the web. And Google came to rule search.

The mutually beneficial arrangement persisted uneasily. Despite Google’s search results pages getting worse and worse in recent years, the company’s huge market share of search means you generally want to be in their good books.

Google’s business model relies on us publishing web pages so that they can put ads around the search results linking to that content, and we rely on Google to send people to our websites by responding smartly to search queries.

That has now changed. Instead of responding to search queries by linking to the web pages we’ve made, Google is instead generating dodgy summaries rife with hallucina… lies (a psychic hotline, basically).

Google still benefits from us publishing web pages. We no longer benefit from Google slurping up those web pages.

With AI, tech has broken the web’s social contract:

Google has steadily been manoeuvring their search engine results to more and more replace the pages in the results.

As Chris puts it:

Me, I just think it’s fuckin’ rude.

Google is a portal to the web. Google is an amazing tool for finding relevant websites to go to. That was useful when it was made, and it’s nothing but grown in usefulness. Google should be encouraging and fighting for the open web. But now they’re like, actually we’re just going to suck up your website, put it in a blender with all other websites, and spit out word smoothies for people instead of sending them to your website. Instead.

Ben proposes an update to robots.txt that would allow us to specify licensing information:

Robots.txt needs an update for the 2020s. Instead of just saying what content can be indexed, it should also grant rights.

Like crawl my site only to provide search results not train your LLM.

It’s a solid proposal. But Google has absolutely no incentive to implement it. They hold all the power.

Or do they?

There is still the nuclear option in robots.txt:

User-agent: Googlebot
Disallow: /

That’s what Vasilis is doing:

I have been looking for ways to not allow companies to use my stuff without asking, and so far I couldn’t find any. But since this policy change I realised that there is a simple one: block google’s bots from visiting your website.

The general consensus is that this is nuts. “If you don’t appear in Google’s results, you might as well not be on the web!” is the common cry.

I’m not so sure. At least when it comes to personal websites, search isn’t how people get to your site. They get to your site from RSS, newsletters, links shared on social media or on Slack.

And isn’t it an uncomfortable feeling to think that there’s a third party service that you absolutely must appease? It’s the same kind of justification used by people who are still on Twitter even though it’s now a right-wing transphobic cesspit. “If I’m not on Twitter, I might as well not be on the web!”

The situation with Google reminds me of what Robin said about Twitter:

The speed with which Twitter recedes in your mind will shock you. Like a demon from a folktale, the kind that only gains power when you invite it into your home, the platform melts like mist when that invitation is rescinded.

We can rescind our invitation to Google.

Sunday

Today was a good day. The weather was beautiful.

Jessica and I did a little bit of work in the garden—nothing too sweaty. Then Jessica cut my hair. It looks good. And it feels good to have my neck freed up.

We went for a Sunday roast at the nearest pub, which does a most excellent carvery. It was tasty and plentiful so after strolling home, I wanted to do nothing more than sit around.

I sat outside in the back garden under the dappled shade offered by the overhanging trees. I had a good book. I had my mandolin to hand. I’d reach for it occasionally to play a tune or two.

Coco the cat—not our cat—sat nearby, stretching her paws out lazily in the warm muggy air.

It was a good day.

Push

Push notifications are finally arriving on iOS—hallelujah! Like I said last year, this is my number one wish for the iPhone, though not because I personally ever plan to use the feature:

When I’m evangelising the benefits of building on the open web instead of making separate iOS and Android apps, I inevitably get asked about notifications. As long as mobile Safari doesn’t support them—even though desktop Safari does—I’m somewhat stumped. There’s no polyfill for this feature other than building an entire native app, which is a bit extreme as polyfills go.

With push notifications in mobile Safari, the arguments for making proprietary apps get weaker. That’s good.

The announcement post is a bit weird though. It never uses the phrase “progressive web apps”, even though clearly the entire article is all about progressive web apps. I don’t know if this is down to Not-Invented-Here syndrome by the Apple/Webkit team, or because of genuine legal concerns around using the phrase.

Instead, there are repeated references to “Home Screen apps”. This distinction makes some sense though. In order to use web push on iOS, your website needs to be added to the home screen.
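
Once a site is on the home screen, the subscription code itself is the standard web push dance. A rough sketch (the VAPID key handling is elided):

async function subscribeToPush(vapidPublicKey) {
  // Must be called from a user gesture, like a tap on a button.
  const permission = await Notification.requestPermission();
  if (permission !== 'granted') return null;

  const registration = await navigator.serviceWorker.ready;
  const subscription = await registration.pushManager.subscribe({
    userVisibleOnly: true,
    applicationServerKey: vapidPublicKey // Uint8Array of your VAPID public key
  });
  // Send the subscription to your server so it can send pushes later.
  return subscription;
}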

I think that would be fair enough, if it weren’t for the fact that adding a website to the home screen remains such a hidden feature that even power users would be forgiven for not knowing about it. I described the steps here:

  1. Tap the “share” icon. It’s not labelled “share.” It’s a square with an arrow coming out of the top of it.
  2. A drawer pops up. The option to “add to home screen” is nowhere to be seen. You have to pull the drawer up further to see the hidden options.
  3. Now you must find “add to home screen” in the list:
  • Copy
  • Add to Reading List
  • Add Bookmark
  • Add to Favourites
  • Find on Page
  • Add to Home Screen
  • Markup
  • Print

As long as this remains the case, we can expect usage of web push on iOS to be vanishingly low. Hardly anyone is going to add a website to their home screen when their web browser makes it so hard.

If you’d like people to install your progressive web app, you’ll almost certainly need to prompt them to do so. Here’s the page I made on thesession.org with instructions on how to add to home screen. I link to it from the home page of the site.

I wish that pages like that weren’t necessary. It’s not the best user experience. But as long as mobile Safari continues to bury the home screen option, we don’t have much choice but to tackle this ourselves.

One morning in the future

I had a video call this morning with someone who was in India. The call went great, except for a few moments when the video stalled.

“Sorry about that”, said the person I was talking to. “It’s the monkeys. They like messing with the cable.”

There’s something charming about an intercontinental internet-enabled meeting being slightly disrupted by some fellow primates being unruly.

It also made me stop and think about how amazing it was that we were having the call in the first place. I remembered Arthur C. Clarke’s predictions from 1964:

I’m thinking of the incredible breakthrough which has been possible by developments in communications, particularly the transistor and, above all, the communications satellite.

These things will make possible a world in which we can be in instant contact with each other wherever we may be, where we can contact our friends anywhere on Earth even if we don’t know their actual physical location.

It will be possible in that age—perhaps only 50 years from now—for a man to conduct his business from Tahiti or Bali just as well as he could from London.

The casual sexism of assuming that it would be a “man” conducting business hasn’t aged well. And it’s not the communications satellite that enabled my video call, but old-fashioned undersea cables, many in the same locations as their telegraphic antecedents. But still; not bad, Arthur.

After my call, I caught up on some email. There was a new newsletter from Ariel who’s currently in Antarctica.

Just thinking about the fact that I know someone who’s in Antarctica—who sent me a postcard from Antarctica—gave me another rush of feeling like I was living in the future. As I started to read the contents of the latest newsletter, that feeling became even more specific. Doesn’t this sound exactly like something straight out of a late ’80s/early ’90s cyberpunk novel?

Four of my teammates head off hiking towards the mountains to dig holes in the soil in hopes of finding microscopic animals contained within them. I hang back near the survival bags with the remaining teammate and begin unfolding my drone to get a closer look at the glaciers. After filming the textures of the land and ice from multiple angles for 90 minutes, my batteries are spent, my hands are cold and my stomach is growling. I land the drone, fold it up into my bright yellow Pelican case, and pull out an expired granola bar to keep my hunger pangs at bay.