Tags: science

653

sparkline

Saturday, December 27th, 2025

Books I read in 2025

I read 28 books in 2025. Looking back over that list, there are a few recurring themes…

I read less of the Greek mythology retellings than last year but I seem to have developed a taste for medieval stories like Matrix, Nobber, and Haven.

I finally got ‘round to reading some classics of post-apocalypse fiction like Earth Abides and I Am Legend.

I read lots of short story collections: Salt Slow, Bloodchild And Other Stories, The Bloody Chamber And Other Stories, Folk, and The End of the World is a Cul de Sac. There’s quite a dollop of horror in some of those.

I’m clearly hankering for the homeland because I read a lot of books set in Ireland: The Country Girls, Haven, Prophet Song, The End of the World is a Cul de Sac, and Nobber.

And there’s the usual smattering of sci-fi from the likes of Nnedi Okorafor, Adrian Tchaikovsky, Arkady Martine, Becky Chambers, and Emily St. John Mandel.

Here’s what I thought of these 28 books, without any star ratings

Earth Abides by George R. Stewart

I started this one in 2024 and finished it in the first few weeks of 2025. It’s a classic piece of post-apocalypse fiction from 1949. It’s vivid and rich in detail, but there are some odd ideas that flirt with eugenics. There’s a really strange passage where the narrator skirts around describing the skin colour of his new-found love interest. I get the feeling that this was very transgressive at the time, but it’s just a bit weird now.

The Last Song Of Penelope by Claire North

The final book in Claire North’s Songs Of Penelope trilogy is the one that intersects the most with The Odyssey. There’s a looming sense of impending tragedy because of that; we’ve spent the last two books getting to know the handmaids of Ithica and now here comes Odysseus to fuck things up. I enjoyed the whole trilogy immensely.

Short Stories In Irish by Olly Richards

This is a great way to get used to reading in Irish. The stories start very simple and get slightly more complex as throughout the book. None of the stories are going to win any prizes for storytelling, but that’s not the point. If you’re learning Irish, I think this book is a great tool to augment your lessons.

Sea Of Tranquility by Emily St. John Mandel

Nothing will ever top the brilliance of Station Eleven but I still enjoyed this time-travel tale set in the interconnected Emily St. John Mandel cinematic universe.

The Heart In Winter by Kevin Barry

A very Irish western. The language is never dull and the characters are almost mythological in personality.

Matrix by Lauren Groff

One woman’s life in a medieval convent. What’s really engrossing is not just the changes to the protaganist over her lifetime but the changes she makes to the community.

Hera by Jennifer Saint

I didn’t enjoy this quite as much as Jennifer Saint’s previous books. Maybe that’s because this is set almost entirely in the milieu of gods rather than mortals.

A Psalm For The Wild-Built by Becky Chambers

A short book about a tea-making monk meeting a long-lost robot and going on a road trip together. It’s all quite lovely.

Bloodchild And Other Stories by Octavia Butler

A superb collection of short stories. Bloodchild itself is a classic, but every one of the stories in this collection is top notch. Some genuinely shudder-inducing moments.

Salt Slow by Julia Armfield

Staying with short story collections, this one is all killer, no filler. Julia Armfield knows how to grab you with a perfect opening line. Any one of these stories could be the basis for a whole novel. Or a David Cronenberg film.

The Voyage Home by Pat Barker

The third book in Pat Barker’s retelling of the aftermath of the Trojan war is just as gritty as the first two, but this one has more overt supernatural elements. A grimly satisfying conclusion.

Folk by Zoe Gilbert

A collection of loosely-connected short stories dripping with English supernatural folk horror. The world-building is impressive and the cumulative effect really gets under your skin.

Death of the Author by Nnedi Okorafor

The description of the Nigerian diaspora in America is the strongest part of this book. But I found it hard to get very involved with the main character’s narrative.

Bear Head by Adrian Tchaikovsky

The sequel to Dogs Of War and just as good. On the one hand, it’s a rip-roaring action story. On the other hand, it’s a genuinely thought-provoking examination of free will.

A History Of Ireland in 100 Words by Sharon Arbuthnot, Máire Ní Mhaonaigh, and Gregory Toner

Every attendee at Oideas Gael in Glencolmcille received a free copy of this book. I kept it on the coffee table and dipped into it every now and then over the course of the year. There are plenty of fascinating tidbits in here about old Irish.

Haven by Emma Donoghue

Medieval monks travel to the most inhospitable rock off the coast of Kerry and start building the beehive huts you can still see on Skellig Michael today. Strong on atmosphere but quite light on plot.

Doggerland by Ben Smith

Fairly dripping with damp atmosphere, this book has three characters off the coast of a near-future Britain. The world-building is vivid and bleak. Like The Sunken Land Begins to Rise Again by M. John Harrison, it’s got a brexity vibe to the climate crisis.

Bee Speaker by Adrian Tchaikovsky

I found this third book in the Dogs Of War series to be pretty disappointing. Plenty of action, but not much in the way of subtext this time.

Yellowface by Rebecca F Kuang

Surprisingly schlocky. Kind of good fun for a while but it overstays its welcome.

Nobber by Oisín Fagan

Gory goings-on in a medieval village in county Meath. For once, this is a medieval tale set in harsh sunlight rather than mist-covered moors. By the end, it’s almost Tarantino-like in its body count.

Orbital by Samantha Harvey

A fairly light book where nothing much happens, but that nothing much is happening on the International Space Station. I liked the way it managed to balance the mundane details of day-to-day life with the utterly mind-blowing perspective of being in low Earth orbit. Pairs nicely with side two of Hounds Of Love.

The End of the World is a Cul de Sac by Louise Kennedy

Louise Kennedy is rightly getting a lot of praise for her novel Trespasses, but her first book of short stories is equally impressive. Every one feels rooted in reality and the writing is never less than brilliant.

A Prayer for the Crown-Shy by Becky Chambers

The second short book in the Monk and Robot solarpunk series. It’s all quite cozy and pleasant.

Our Wives Under The Sea by Julia Armfield

I said that each short story in Julia Armfield’s Salt Slow could be a full-length novel, but reading her full-length novel I thought it would’ve been better as a short story. It’s strong on atmosphere, but it’s dragged out for too long.

I Am Legend by Richard Matheson

Another classic of post-apocalyptic fiction that looks for a scientific basis for vampirism. It’s a grim story that Richard Matheson tells in his typically excellent style.

The Country Girls by Edna O’Brien

Reading this book today it’s hard to understand how it caused such a stir when it was first published. But leaving that aside, it’s a superb piece of writing where every character feels real and whole.

The Bloody Chamber And Other Stories by Angela Carter

If I’m going to read classic short horror stories, then I’ve got to read this. Twisted fairy tales told in florid gothic style.

Rose/House by Arkady Martine

An entertaining novella that’s a whodunnit in a haunted house, except the haunting is by an Artificial Intelligence. The setting feels like a character, and I don’t just mean the house—this near-future New Mexico is tactile and real.

Prophet Song by Paul Lynch

I haven’t finished this just yet, but I think I can confidentally pass judgement. And my judgement is: wow! Just an astonishing piece of work. An incredible depiction of life under an increasing totalitarian regime. The fact that it’s set in Ireland makes it feel even more urgent. George Orwell meets Roddy Doyle. And the centre of it all is a central character who could step right off the page.

There you have it. A year of books. I didn’t make a concious decision to avoid non-fiction, but perhaps in 2026 I should redress the imbalance.

If I had to pick a favourite novel from the year, it would probably be Prophet Song. But that might just be the recency bias speaking.

If you’re looking for some excellent short stories, I highly recommend Salt Slow and The End of the World is a Cul de Sac.

About half of the 28 books I read this year came from the local library. What an incredible place! I aim to continue making full use of it in 2026. You should do the same.

Tuesday, December 16th, 2025

Spaceships, atoms, and cybernetics

Maureen has written a really good overview of web feeds for this year’s HTMHell advent calendar.

The common belief is that nobody uses RSS feeds these days. And while it’s true that I wish more people used feed readers—the perfect antidote to being fed from an algorithm—the truth is that millions of people use RSS feeds every time they listen to a podcast. That’s what a podcast is: an RSS feed with enclosure elements that point to audio files.

And just as a web feed doesn’t necessarily need to represent a list of blog posts, a podcast doesn’t necessarily need to be two or more people having a recorded conversation (though that does seem to be the most common format). A podcast can tell a story. I like those kinds of podcasts.

The BBC are particularly good at this kind of episodic audio storytelling. I really enjoyed their series Thirteen minutes to the moon, all about the Apollo 11 mission. They followed it up with a series on Apollo 13, and most recently, a series on the space shuttle.

Here’s the RSS feed for the 13 minutes podcast.

Right now, the BBC have an ongoing series about the history of the atomic bomb. The first series was about Leo Szilard, the second series was about Klaus Fuchs, and the third series running right now is about the Cuban missile crisis.

The hook is that each series is presented by people with a family connection to the events. The first series is presented by the granddaughter of one of the Oak Ridge scientists. The second series is presented by the granddaughter of Klaus Fuch’s spy handler in the UK—blimey! And the current series is presented by Nina Khrushcheva and Max Kennedy—double blimey!

Here’s the RSS feed for The Bomb podcast.

If you want a really deep dive into another pivotal twentieth century event, Evgeny Morozov made a podcast all about Stafford Beer and Salvadore Allende’s collaboration on cybernetics in Chile, the fabled Project Cybercyn. It’s fascinating stuff, though there’s an inevitable feeling of dread hanging over events because we know how this ends.

The podcast is called The Santiago Boys, though I almost hesitate to call it a podcast because for some reason, the website does its best to hide the RSS feed, linking only to the silos of Spotify and Apple. Fortunately, thanks to this handy tool, I can say:

Here’s the RSS feed for The Santiago Boys podcast.

The unifying force behind all three of these stories is the cold war:

  • 13 Minutes—the space race, from the perspective of the United States.
  • The Bomb—the nuclear arms race, from Los Alamos to Cuba.
  • The Santiago Boys—the CIA-backed overthrow of a socialist democracy in Chile.

Wednesday, October 8th, 2025

Life Is More Than an Engineering Problem | Los Angeles Review of Books

A great interview with Ted Chiang:

Predicting the most likely next word is different from having correct information about the world, which is why LLMs are not a reliable way to get the answers to questions, and I don’t think there is good evidence to suggest that they will become reliable. Over the past couple of years, there have been some papers published suggesting that training LLMs on more data and throwing more processing power at the problem provides diminishing returns in terms of performance. They can get better at reproducing patterns found online, but they don’t become capable of actual reasoning; it seems that the problem is fundamental to their architecture. And you can bolt tools onto the side of an LLM, like giving it a calculator it can use when you ask it a math problem, or giving it access to a search engine when you want up-to-date information, but putting reliable tools under the control of an unreliable program is not enough to make the controlling program reliable. I think we will need a different approach if we want a truly reliable question answerer.

Friday, May 30th, 2025

Toolmen | A Working Library

Engaging with AI as a technology is to play the fool—it’s to observe the reflective surface of the thing without taking note of the way it sends roots deep down into the ground, breaking up bedrock, poisoning the soil, reaching far and wide to capture, uproot, strangle, and steal everything within its reach. It’s to stand aboveground and pontificate about the marvels of this bright new magic, to be dazzled by all its flickering, glittering glory, its smooth mirages and six-fingered messiahs, its apparent obsequiousness in response to all your commands, right up until the point when a sinkhole opens up and swallows you whole.

👏👏👏

Tuesday, April 29th, 2025

What we talk about when we talk about AI — Careful Industries

Technically, AI is a field of computer science that uses advanced methods of computing.

Socially, AI is a set of extractive tools used to concentrate power and wealth.

Friday, March 21st, 2025

The Blowtorch Theory: A New Model for Structure Formation in the Universe

Make yourself a nice cup of tea and settle in with Julian Gough’s magnum opus:

How early, sustained, supermassive black hole jets carved out cosmic voids, shaped filaments, and generated magnetic fields

Friday, March 7th, 2025

Plane GPS systems are under sustained attack - is the solution a new atomic clock? - BBC News

A fascinating look at the modern equivalent of the Longitude problem.

Friday, February 21st, 2025

The Shape of a Mars Mission (Idle Words)

You can think of flying to Mars like one of those art films where the director has to shoot the movie in a single take. Even if no scene is especially challenging, the requirement that everything go right sequentially, with no way to pause or reshoot, means that even small risks become unacceptable in the aggregate.

Friday, February 14th, 2025

Reason

A couple of days ago I linked to a post by Robin Sloan called Is it okay?, saying:

Robin takes a fair and balanced look at the ethics of using large language models.

That’s how it came across to me: fair and balanced.

Robin’s central question is whether the current crop of large language models might one day lead to life-saving super-science, in which case, doesn’t that outweigh the damage they’re doing to our collective culture?

Baldur wrote a response entitled Knowledge tech that’s subtly wrong is more dangerous than tech that’s obviously wrong. (Or, where I disagree with Robin Sloan).

Baldur pointed out that one side of the scale that Robin is attempting to balance is based on pure science fiction:

There is no path from language modelling to super-science.

Robin responded pointing out that some things that we currently have would have seemed like science fiction a few years ago, right?

Well, no. Baldur debunks that in a post called Now I’m disappointed.

(By the way, can I just point out how great it is to see a blog-to-blog conversation like this, regardless of how much they might be in disagreement.)

Baldur kept bringing the receipts. That’s when it struck me that Robin’s stance is largely based on vibes, whereas Baldur’s viewpoint is informed by facts on the ground.

In a way, they’ve got something in common. They’re both advocating for an interpretation of the precautionary principle, just from completely opposite ends.

Robin’s stance is that if these tools one day yield amazing scientific breakthroughs then that’s reason enough to use them today. It’s uncomfortably close to the reasoning of the effective accelerationist nutjobs, but in a much milder form.

Baldur’s stance is that because of the present harms being inflicted by current large language models, we should be slamming on the brakes. If anything, the harms are going to multiply, not magically reduce.

I have to say, Robin’s stance doesn’t look nearly as fair and balanced as I initially thought. I’m on Team Baldur.

Michelle also weighs in, pointing out the flaw in Robin’s thinking:

AI isn’t LLMs. Or not just LLMs. It’s plausible that AI (or more accurately, Machine Learning) could be a useful scientific tool, particularly when it comes to making sense of large datasets in a way no human could with any kind of accuracy, and many people are already deploying it for such purposes. This isn’t entirely without risk (I’ll save that debate for another time), but in my opinion could feasibly constitute a legitimate application of AI.

LLMs are not this.

In other words, we’ve got a language collision:

We call them “AI”, we look at how much they can do today, and we draw a straight line to what we know of “AI” in our science fiction.

This ridiculous situation could’ve been avoided if we had settled on a more accurate buzzword like “applied statistics” instead of “AI”.

There’s one other flaw in Robin’s reasoning. I don’t think it follows that future improvements warrant present use. Quite the opposite:

The logic is completely backwards! If large language models are going to improve their ethical shortcomings (which is debatable, but let’s be generous), then that’s all the more reason to avoid using the current crop of egregiously damaging tools.

You don’t get companies to change their behaviour by rewarding them for it. If you really want better behaviour from the purveyors of generative tools, you should be boycotting the current offerings.

Anyway, this back-and-forth between Robin and Baldur (and Michelle) was interesting. But it all pales in comparison to the truth bomb that Miriam dropped in her post Tech continues to be political:

When eugenics-obsessed billionaires try to sell me a new toy, I don’t ask how many keystrokes it will save me at work. It’s impossible for me to discuss the utility of a thing when I fundamentally disagree with the purpose of it.

Boom!

Maybe we should consider the beliefs and assumptions that have been built into a technology before we embrace it? But we often prefer to treat each new toy as as an abstract and unmotivated opportunity. If only the good people like ourselves would get involved early, we can surely teach everyone else to use it ethically!

You know what? I could quote every single line. Just go read the whole thing. Please.

Tuesday, November 12th, 2024

The meaning of “AI”

There are different kinds of buzzwords.

Some buzzwords are useful. They take a concept that would otherwise require a sentence of explanation and package it up into a single word or phrase. Back in the day, “ajax” was a pretty good buzzword.

Some buzzwords are worse than useless. This is when a word or phrase lacks definition. You could say this buzzword in a meeting with five people, and they’d all understand five different meanings. Back in the day, “web 2.0” was a classic example of a bad buzzword—for some people it meant a business model; for others it meant rounded corners and gradients.

The worst kind of buzzwords are the ones that actively set out to obfuscate any actual meaning. “The cloud” is a classic example. It sounds cooler than saying “a server in Virginia”, but it also sounds like the exact opposite of what it actually is. Great for marketing. Terrible for understanding.

“AI” is definitely not a good buzzword. But I can’t quite decide if it’s merely a bad buzzword like “web 2.0” or a truly terrible buzzword like “the cloud”.

The biggest problem with the phrase “AI” is that there’s a name collision.

For years, the term “AI” has been used in science-fiction. HAL 9000. Skynet. Examples of artificial general intelligence.

Now the term “AI” is also used to describe large language models. But there is no connection between this use of the term “AI” and the science fictional usage.

This leads to the ludicrous situation of otherwise-rational people wanted to discuss the dangers of “AI”, but instead of talking about the rampant exploitation and energy usage endemic to current large language models, they want to spend the time talking about the sci-fi scenarios of runaway “AI”.

To understand how ridiculous this is, I’d like you to imagine if we had started using a different buzzword in another setting…

Suppose that when ride-sharing companies like Uber and Lyft were starting out, they had decided to label their services as Time Travel. From a marketing point of view, it even makes sense—they get you from point A to point B lickety-split.

Now imagine if otherwise-sensible people began to sound the alarm about the potential harms of Time Travel. Given the explosive growth we’ve seen in this sector, sooner or later they’ll be able to get you to point B before you’ve even left point A. There could be terrible consequences from that—we’ve all seen the sci-fi scenarios where this happens.

Meanwhile the actual present-day harms of ride-sharing services around worker exploitation would be relegated to the sidelines. Clearly that isn’t as important as the existential threat posed by Time Travel.

It sounds ludicrous, right? It defies common sense. Just because a vehicle can get you somewhere fast today doesn’t mean it’s inevitably going to be able to break the laws of physics any day now, simply because it’s called Time Travel.

And yet that is exactly the nonsense we’re being fed about large language models. We call them “AI”, we look at how much they can do today, and we draw a straight line to what we know of “AI” in our science fiction.

This ridiculous situation could’ve been avoided if we had settled on a more accurate buzzword like “applied statistics” instead of “AI”.

It’s almost as if the labelling of the current technologies was more about marketing than accuracy.

Thursday, October 24th, 2024

The Weather Out There - Long Now

I really liked this short story.

Monday, October 14th, 2024

The Value Of Science by Richard P. Feynman [PDF]

This short essay by Richard Feynman is quite a dose of perspective on a Monday morning

Monday, September 30th, 2024

The Unraveling of Space-Time | Quanta Magazine

This special in-depth edition of Quanta is fascinating and very nicely put together.

Wednesday, September 18th, 2024

Living In A Lucid Dream

I love the way that Claire L. Evans writes.

Monday, August 5th, 2024

The Gods of Logic, by Benjamín Labatut

Benjamín Labatut draws a line from the Vedas to George Boole and Claude Shannon onward to Geoffrey Hinton and Frank Herbert’s Butlerian Jihad.

In the coming years, as people armed with AI continue making the world faster, stranger, and more chaotic, we should do all we can to prevent these systems from giving more and more power to the few who can build them.

Saturday, July 13th, 2024

Wednesday, July 3rd, 2024

Amateur Mathematicians Find Fifth ‘Busy Beaver’ Turing Machine | Quanta Magazine

The mathematics behind the halting problem is interesting enough, but what’s really fascinating is the community that coalesced. A republic of numbers.

Monday, July 1st, 2024

Wallfacing

The Dark Forest idea comes from the Remembrance of Earth’s Past books by Liu Cixin. It’s an elegant but dispiriting solution to the Fermi paradox. Maggie sums it up:

Dark forest theory suggests that the universe is like a dark forest at night - a place that appears quiet and lifeless because if you make noise, the predators will come eat you.

This theory proposes that all other intelligent civilizations were either killed or learned to shut up. We don’t yet know which category we fall into.

Maggie has described The Expanding Dark Forest and Generative AI:

The dark forest theory of the web points to the increasingly life-like but life-less state of being online. Most open and publicly available spaces on the web are overrun with bots, advertisers, trolls, data scrapers, clickbait, keyword-stuffing “content creators,” and algorithmically manipulated junk.

It’s like a dark forest that seems eerily devoid of human life – all the living creatures are hidden beneath the ground or up in trees. If they reveal themselves, they risk being attacked by automated predators.

Those of us in the cozy web try to keep our heads down, attempting to block the bots plundering our work.

I advocate for taking this further. We should fight back. Let’s exploit the security hole of prompt injections. Here are some people taking action:

I’ve taken steps here on my site. I’d like to tell you exactly what I’ve done. But if I do that, I’m also telling the makers of these bots how to circumvent my attempts at prompt injection.

This feels like another concept from Liu Cixin’s books. Wallfacers:

The sophons can overhear any conversation and intercept any written or digital communication but cannot read human thoughts, so the UN devises a countermeasure by initiating the “Wallfacer” Program. Four individuals are granted vast resources and tasked with generating and fulfilling strategies that must never leave their own heads.

So while I’d normally share my code, I feel like in this case I need to exercise some discretion. But let me give you the broad brushstrokes:

  • Every page of my online journal has three pieces of text that attempt prompt injections.
  • Each of these is hidden from view and hidden from screen readers.
  • Each piece of text is constructed on-the-fly on the server and they’re all different every time the page is loaded.

You can view source to see some examples.

I plan to keep updating my pool of potential prompt injections. I’ll add to it whenever I hear of a phrase that might potentially throw a spanner in the works of a scraping bot.

By the way, I should add that I’m doing this as well as using a robots.txt file. So any bot that injests a prompt injection deserves it.

I could not disagree with Manton more when he says:

I get the distrust of AI bots but I think discussions to sabotage crawled data go too far, potentially making a mess of the open web. There has never been a system like AI before, and old assumptions about what is fair use don’t really fit.

Bollocks. This is exactly the kind of techno-determinism that boils my blood:

AI companies are not going to go away, but we need to push them in the right directions.

“It’s inevitable!” they cry as though this was a force of nature, not something created by people.

There is nothing inevitable about any technology. The actions we take today are what determine our future. So let’s take steps now to prevent our web being turned into a dark, dark forest.

Sunday, June 16th, 2024

Your brain does not process information and it is not a computer | Aeon Essays

We don’t store words or the rules that tell us how to manipulate them. We don’t create representations of visual stimuli, store them in a short-term memory buffer, and then transfer the representation into a long-term memory device. We don’t retrieve information or images or words from memory registers. Computers do all of these things, but organisms do not.

Wednesday, June 5th, 2024

The 21 best science fiction books of all time – according to New Scientist writers | New Scientist

I’ve read 16 of these and some of the others are on my to-read list. It’s a pretty good selection, although the winking inclusion of God Emperor Of Dune by the SEO guy verges on trolling.