Tags: hype

Tuesday, December 9th, 2025

Pluralistic: The Reverse-Centaur’s Guide to Criticizing AI (05 Dec 2025) – Pluralistic: Daily links from Cory Doctorow

The promise of AI – the promise AI companies make to investors – is that there will be AIs that can do your job, and when your boss fires you and replaces you with AI, he will keep half of your salary for himself, and give the other half to the AI company.

That’s it.

That’s the $13T growth story that Morgan Stanley is telling. It’s why big investors and institutionals are giving AI companies hundreds of billions of dollars. And because they are piling in, normies are also getting sucked in, risking their retirement savings and their family’s financial security.

Now, if AI could do your job, this would still be a problem. We’d have to figure out what to do with all these technologically unemployed people.

But AI can’t do your job. It can help you do your job, but that doesn’t mean it’s going to save anyone money.

Monday, December 1st, 2025

On not choosing nice versions of AI – This day’s portion

Whenever anyone states that “AI is the future, so…” or “many people are using AI anyway, so…” they are not only expressing an opinion — they’re shaping that future.

Thursday, November 27th, 2025

The line and the stream. — Ethan Marcotte

I’ve come to realize that statements about the future aren’t predictions: they’re more like spells. When someone describes something to you as the future, they’re sharing a heartfelt belief that this something will be part of whatever comes next. “Artificial intelligence isn’t going anywhere” quite literally involves casting a technology forward into time. How could that be anything else but a kind of magic?

Tuesday, October 28th, 2025

Cryosleep

On the last day of UX London this year, I was sitting and chatting with Rachel Coldicutt, who was going to be giving the closing keynote. Inevitably the topic of conversation worked its way ’round to “AI”. I remember Rachel having a good laugh when I summarised my overall feeling:

I kind of wish I could go into suspended animation and be woken up when all this is over and things have settled down one way or another.

I still feel that way. Like Gina, I’d welcome a measured approach to this technology. As Anil puts it:

Technologies like LLMs have utility, but the absurd way they’ve been over-hyped, the fact they’re being forced on everyone, and the insistence on ignoring the many valid critiques about them make it very difficult to focus on legitimate uses where they might add value.

I very much look forward to using language models (probably small and local) to automate genuinely tedious tasks. That’s a very different vision to what the slopagandists are pushing. Or, like Paul Ford says:

Make it boring. That’s what’s interesting.

Fortunately, my cryosleep-awakening probably isn’t too far off. You can smell it in the air, that whiff of a bubble about to burst. And while it will almost certainly be messy, it’s long overdue.

Paul Ford again:

I’ve felt so alienated from tech over the past couple of years. Part of it is the craven authoritarianism. It dampens the mood. But another part is the monolithic narrative—the fact that we live in a world where there seem to be only a few companies, only a few stories going at any time, and everything reduces to politics. God, please let it end.

Monday, October 27th, 2025

Measured AI | Note to Self

It’s creepy to tell people they’ll lose their jobs if they don’t use AI. It’s weird to assume AI critics hate progress and are resisting some inevitable future.

Sunday, October 26th, 2025

The AI Gold Rush Is Cover for a Class War

Under the guise of technological inevitability, companies are using the AI boom to rewrite the social contract — laying off employees, rehiring them at lower wages, intensifying workloads, and normalizing precarity. In short, these are political choices masquerading as technical necessities; AI is not the cause of the layoffs but their justification.

Saturday, October 18th, 2025

The Majority AI View - Anil Dash

Technologies like LLMs have utility, but the absurd way they’ve been over-hyped, the fact they’re being forced on everyone, and the insistence on ignoring the many valid critiques about them make it very difficult to focus on legitimate uses where they might add value.

Monday, October 13th, 2025

Where’s the AI design renaissance?

I’ve had some incredibly productive moments with AI design tools. But I’ve had at least as many slogs, where I can’t get it to do some basic thing I should’ve done myself 45 minutes ago.

My hunch: vibe coding is a lot like stock-picking – everyone’s always blabbing about their big wins. Ask what their annual rate of return is above the S&P, and it’s a quieter conversation 🤫

This, in my opinion, is how we end up with a firehose of AI hype, and yet zero signs of a software renaissance. As Mike Judge points out, the following graphs are flat: (a) new app store releases, (b) new domain names registered, (c) new GitHub repositories.

Wednesday, October 8th, 2025

Coattails

When I talk about large language models, I make sure to call them large language models, not “AI”. I know it’s a lost battle, but the terminology matters to me.

The term “AI” can encompass everything from a series of if/else statements right up to Skynet and HAL 9000. I’ve written about this naming collision before.

It’s not just that the term “AI” isn’t useful, it’s so broad as to be actively duplicitous. While talking about one thing—like, say, large language models—you can point to a completely different thing—like, say, machine learning or computer vision—and claim that they’re basically the same because they’re both labelled “AI”.

If a news outlet runs a story about machine learning in the context of disease prevention or archaeology, the headline will inevitably contain the phrase “AI”. That story will then gleefully be used by slopagandists looking to inflate the usefulness of large language models.

Conflating these different technologies is the fallacy at the heart of Robin Sloan’s faulty logic:

If these machines churn through all media, and then, in their deployment, discover several superconductors and cure all cancers, I’d say, okay … we’re good.

John Scalzi recently wrote:

“AI” is mostly a marketing phrase for a bunch of different processes and tools which in a different era would have been called “machine learning” or “neural networks” or something else now horribly unsexy.

But I’ve noticed something recently. More than once I’ve seen genuinely useful services refer to their technology as “traditional machine learning”.

First off, I find that endearing. Like machine learning is akin to organic farming or hand-crafted furniture.

Secondly, perhaps it points to a severing of the ways between machine learning and large language models.

Up until now it may have been mutually beneficial for them to share the same marketing term, but with the bubble about to burst, anything to do with large language models might become toxic by association, including the term “AI”. Hence the desire to shake the large language model grifters from the coattails of machine learning and computer vision.

Thursday, September 25th, 2025

Against the protection of stocking frames. — Ethan Marcotte

I don’t think it’s controversial to suggest that LLMs haven’t measured up to any of the lofty promises made by their vendors. But in more concrete terms, consumers dislike “AI” when it shows up in products, and it makes them actively mistrust the brands that employ it. In other words, we’re some three years into the hype cycle, and LLMs haven’t met any markers of success we’d apply to, well, literally any other technology.

Tuesday, August 19th, 2025

Every Reason Why I Hate AI and You Should Too

If I were to photocopy this article, nobody would argue that my photocopier wrote it and therefore can think. But add enough convolutedness to the process, and it looks a lot like maybe it did and can.

In reality, all we’ve created is a bot which is almost perfect at mimicking human-like natural language use, and the rest is people just projecting other human qualities on to it. Quite simply, “LLMs are doing reasoning” is the “look, my dog is smiling” of technology. In exactly the same way that dogs don’t convey their emotions via human-like facial expressions, there’s no reason to believe that even if a computer could think, it’d perfectly mirror what looks like human reasoning.

Friday, June 20th, 2025

JavaScript broke the web (and called it progress) - Jono Alderson

Semantic HTML? Optional. Server-side rendering? Rebuilt from scratch. Accessibility? Maybe, if there’s time. Performance? Who cares, when you can save costs by putting loading burdens onto the user’s device, instead of your server?

So gradually, the web became something you had to compile before you could publish. Not because users needed it. But because developers wanted it to feel modern.

Everything’s optimised for developers – and hostile to everyone else.

This isn’t accidental. It’s cultural. We’ve created an industry where complexity is celebrated. Where cleverness is rewarded. Where engineering sophistication is valued more than clarity, usability, or commercial effectiveness.

Tuesday, June 17th, 2025

The Recurring Cycle of ‘Developer Replacement’ Hype

Here’s what the “AI will replace developers” crowd fundamentally misunderstands: code is not an asset—it’s a liability. Every line must be maintained, debugged, secured, and eventually replaced. The real asset is the business capability that code enables.

If AI makes writing code faster and cheaper, it’s really making it easier to create liability. When you can generate liability at unprecedented speed, the ability to manage and minimize that liability strategically becomes exponentially more valuable.

This is particularly true because AI excels at local optimization but fails at global design. It can optimize individual functions but can’t determine whether a service should exist in the first place, or how it should interact with the broader system. When implementation speed increases dramatically, architectural mistakes get baked in before you realize they’re mistakes.

Wednesday, May 14th, 2025

In 2025, venture capital can’t pretend everything is fine any more – Pivot to AI

Here is the state of venture capital in early 2025:

  • Venture capital is moribund except AI.
  • AI is moribund except OpenAI.
  • OpenAI is a weird scam that wants to burn money so fast it summons AI God.
  • Nobody can cash out.

Wednesday, April 30th, 2025

An Entirely Other Day: The Triumph of Triumphalism

Scratch the skin of wild-eyed AI proponents, and a thick syrup oozes out, made up of the blendered remains of Roko’s Basilisk, barely sublimated Christian end-times thinking, and the mis-remembered plot of that one cool science-fiction story they read when they were twelve. This is the basis for the new order, just like the blockchain was a couple of years ago, and a dead-eyed, low-poly, pantsless rendering of Mark Zuckerberg was a couple of years before that.

“You’re going to be left behind” is only the latest version of “Have fun staying poor.” It’s got every ounce of the smug self-satisfaction that it shouldn’t need if the inevitability it promises were actually inevitable.

Tuesday, March 18th, 2025

Another uncalled-for blog post about the ethics of using AI | Clagnut by Richard Rutter

This is a really thoughtful piece by Rich, who’s got conflicted feelings about large language models in the design process. I suspect a lot of people can relate to this.

What I do know is that I find LLMs useful on occasion, but every time I use one I die a little inside.

Saturday, March 1st, 2025

Through Lines 247 | Scott Boms

I miss being excited by technology. I wish I could see a way out of the endless hype cycles that continue to elicit little more than cynicism from me. The version of technology that we’re mostly being sold today has almost nothing to do with improving lives, but instead stuffing the pockets of those who already need for nothing. It’s not making us smarter. It’s not helping heal a damaged planet. It’s not making us happier or more generous towards each other. And it’s entrenched in everything — meaning a momentous challenge to re-wire or meticulously disconnect. I’m slowly finding my own ways of breaking free to regain a sense of self and purpose.

Tuesday, February 18th, 2025

The Generative AI Con

I Feel Like I’m Going Insane

Everywhere you look, the media is telling you that OpenAI and their ilk are the future, that they’re building “advanced artificial intelligence” that can take “human-like actions,” but when you look at any of this shit for more than two seconds it’s abundantly clear that it absolutely isn’t and absolutely can’t.

Despite the hype, the marketing, the tens of thousands of media articles, the trillions of dollars in market capitalization, none of this feels real, or at least real enough to sustain this miserable, specious bubble.

We are in the midst of a group delusion — a consequence of an economy ruled by people that do not participate in labor of any kind outside of sending and receiving emails and going to lunches that last several hours — where the people with the money do not understand or care about human beings.

Their narrative is built on a mixture of hysteria, hype, and deeply cynical hope in the hearts of men that dream of automating away jobs that they would never, ever do themselves.

Generative AI is a financial, ecological and social time bomb, and I believe that it’s fundamentally damaging the relationship between the tech industry and society, while also shining a glaring, blinding light on the disconnection between the powerful and regular people. The fact that Sam Altman can ship such mediocre software and get more coverage and attention than every meaningful scientific breakthrough of the last five years combined is a sign that our society is sick, our media is broken, and that the tech industry thinks we’re all fucking morons.

Friday, February 14th, 2025

Reason

A couple of days ago I linked to a post by Robin Sloan called Is it okay?, saying:

Robin takes a fair and balanced look at the ethics of using large language models.

That’s how it came across to me: fair and balanced.

Robin’s central question is whether the current crop of large language models might one day lead to life-saving super-science, in which case, doesn’t that outweigh the damage they’re doing to our collective culture?

Baldur wrote a response entitled Knowledge tech that’s subtly wrong is more dangerous than tech that’s obviously wrong. (Or, where I disagree with Robin Sloan).

Baldur pointed out that one side of the scale that Robin is attempting to balance is based on pure science fiction:

There is no path from language modelling to super-science.

Robin responded pointing out that some things that we currently have would have seemed like science fiction a few years ago, right?

Well, no. Baldur debunks that in a post called Now I’m disappointed.

(By the way, can I just point out how great it is to see a blog-to-blog conversation like this, regardless of how much they might be in disagreement.)

Baldur kept bringing the receipts. That’s when it struck me that Robin’s stance is largely based on vibes, whereas Baldur’s viewpoint is informed by facts on the ground.

In a way, they’ve got something in common. They’re both advocating for an interpretation of the precautionary principle, just from completely opposite ends.

Robin’s stance is that if these tools one day yield amazing scientific breakthroughs then that’s reason enough to use them today. It’s uncomfortably close to the reasoning of the effective accelerationist nutjobs, but in a much milder form.

Baldur’s stance is that because of the present harms being inflicted by current large language models, we should be slamming on the brakes. If anything, the harms are going to multiply, not magically reduce.

I have to say, Robin’s stance doesn’t look nearly as fair and balanced as I initially thought. I’m on Team Baldur.

Michelle also weighs in, pointing out the flaw in Robin’s thinking:

AI isn’t LLMs. Or not just LLMs. It’s plausible that AI (or more accurately, Machine Learning) could be a useful scientific tool, particularly when it comes to making sense of large datasets in a way no human could with any kind of accuracy, and many people are already deploying it for such purposes. This isn’t entirely without risk (I’ll save that debate for another time), but in my opinion could feasibly constitute a legitimate application of AI.

LLMs are not this.

In other words, we’ve got a language collision:

We call them “AI”, we look at how much they can do today, and we draw a straight line to what we know of “AI” in our science fiction.

This ridiculous situation could’ve been avoided if we had settled on a more accurate buzzword like “applied statistics” instead of “AI”.

There’s one other flaw in Robin’s reasoning. I don’t think it follows that future improvements warrant present use. Quite the opposite:

The logic is completely backwards! If large language models are going to improve their ethical shortcomings (which is debatable, but let’s be generous), then that’s all the more reason to avoid using the current crop of egregiously damaging tools.

You don’t get companies to change their behaviour by rewarding them for it. If you really want better behaviour from the purveyors of generative tools, you should be boycotting the current offerings.

Anyway, this back-and-forth between Robin and Baldur (and Michelle) was interesting. But it all pales in comparison to the truth bomb that Miriam dropped in her post Tech continues to be political:

When eugenics-obsessed billionaires try to sell me a new toy, I don’t ask how many keystrokes it will save me at work. It’s impossible for me to discuss the utility of a thing when I fundamentally disagree with the purpose of it.

Boom!

Maybe we should consider the beliefs and assumptions that have been built into a technology before we embrace it? But we often prefer to treat each new toy as an abstract and unmotivated opportunity. If only the good people like ourselves would get involved early, we can surely teach everyone else to use it ethically!

You know what? I could quote every single line. Just go read the whole thing. Please.