Link tags: ai

I am in an abusive relationship with the technology industry

The cognitive overload of AI trying to Make You More Productive™️ whilst you’re actually trying to be productive is so shockingly absurd. And yet, we are being made to feel like we are stagnating, being left behind, not good enough, that we are luddites should we not adopt this imposing technology. We are being told we’re missing out, even though we’re probably doing just fine. The technology is gaslighting us.

LLMs Are Antithetical to Writing and Humanity

If you’re dyslexic and just trying to communicate more clearly in writing, or you’ve got a bullshit job and you just want to get your bullshit job’s bullshit tasks out of the way so you can move on to more meaningful endeavors, or at least move past the day-to-day slog that permeates your workday and serves no real purpose other than to pay the bills, then I cede; I cannot fault you.

But if, say, you’re a “writer” and you’re using an LLM to “help you” “write” or “think” because it’s easier and takes less time and thought, then I stand my ground; I can and do fault you.

The nature of the job

Large language models help you build the thing faster, which is the primary end goal for your company but only sometimes for you. My primary goal might be to build the thing faster, but it also might be to learn something durably, to enjoy the work, to look forward to Monday.

I don’t like the mental fragility of not fully understanding how my own code works, where AI-generated code is “mine” in that it’s attributed to me in the git blame and I’m its maintainer going forward.

Webspace Invaders · Matthias Ott

There’s a power imbalance at work here that’s hard to ignore. Large “AI” companies, the ones with billions in venture capital, send their bots to harvest free content. Not only from big publishers or Wikipedia, but from small, independent websites, too. But we, the people running these sites – often as passion projects, as ways to freely share what we’ve learned, as digital gardens we tend in our spare time – we’re the ones paying for the bandwidth and server resources to handle all those additional requests while those companies profit from the training data they extract. It’s an asymmetric battle: small systems absorbing the demands generated at an entirely different, industrial scale.

Constraints and the Lost Art of Optimization — Den Odell

The entire intellectual and creative output of a team that reinvented personal computing fits in a space that, today, we wouldn’t think twice about wasting on a single font file.

Somewhere in the years that followed we’ve lost the creative solutions, the art of optimization, that being constrained in that way produces.

The best engineers I’ve worked with carry this instinct even when others might think it crazy. They impose their own constraints. They ask what this would look like if it had to be half the size, or run twice as fast, or use a tenth of the memory. Not because anyone demanded it, but because just by thinking there could be a better, more efficient solution, one often emerges.

Smaller and dumber - daverupert.com

The principle of least power expressed nicely:

Smaller, dumber things have more applications, go more places, and require less maintenance.

I guess I kinda get why people hate AI

To be clear, I think AI will ultimately be extremely helpful. I'm still using it on my projects. I am going to use it at my next job. I, personally, don’t hate AI.

But I can’t deny that the vibes right now are awful.

Not just bad, awful. It’s not just the “chat we’re cooked you’re the permanent underclass” stuff influencers say. It’s not just the “everybody is fucked” hyperbole CEOs spout. It’s the actual, day-to-day experience with the technology. I’m a programmer—AI actually helps me a lot. But for normal people, their interactions are profoundly more negative, and none of the people behind this technology seem to care.

blakewatson.com - I used Claude Code and GSD to build the accessibility tool I’ve always wanted

You know my thoughts on generative tools based on large language models, but this example of personal empowerment is undeniably liberating.

The Mythology Of Conscious AI

This superb essay by Anil Seth won the 2025 Berggruen Prize Essay Competition.

The future history of AI is not yet written. There is no inevitability to the directions AI might yet take. To think otherwise is to be overly constrained by our conceptual inheritance, weighed down by the baggage of bad science fiction and submissive to the self-serving narrative of tech companies laboring to make it to the next financial quarter. Time is short, but collectively we can still decide which kinds of AI we really want and which we really don’t.

Performance-Optimized Video Embeds with Zero JavaScript – Frontend Masters Blog

This is a clever technique for a CSS/HTML only way of just-in-time loading of iframes using details and summary.
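The article’s exact markup isn’t reproduced here, but the general shape of the technique, as I understand it, looks something like this sketch (the class name and VIDEO_ID placeholder are illustrative, not taken from the article):

```html
<!-- Sketch of a details/summary video facade; class name and
     VIDEO_ID are placeholders, not the article's own markup. -->
<details class="video-embed">
  <summary>
    <!-- Lightweight poster image shown until the user opts in -->
    <img src="poster.jpg" alt="Play video" width="560" height="315">
  </summary>
  <!-- Inside the closed details the iframe has no rendered box,
       so loading="lazy" defers the network request until the
       element is opened and the frame becomes visible. -->
  <iframe
    src="https://www.youtube-nocookie.com/embed/VIDEO_ID"
    title="Embedded video"
    loading="lazy"
    allowfullscreen></iframe>
</details>
```

With a little CSS the summary can be styled as a thumbnail with a play button, so the heavy third-party embed costs nothing until someone actually clicks.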

Training your replacement | Go Make Things

I’ve had a lot of people recently tell me AI is “inevitable.” That this is “the future” and “we all better get used to it.”

For the last decade, I’ve had a lot of people tell me the same thing about React.

And over that decade of React being “the future” and “inevitable,” I worked on many, many projects without it. I’ve built a thriving career.

AI feels like that in many ways. It also feels different in that non-technical people also won’t shut the fuck up about it.

A considered approach to generative AI in front-end… | Clearleft

A thoughtful approach from Sam:

  1. Use AI only for tasks you already know how to do, on occasions when the time that would be spent completing the task can be better spent on other problems.
  2. When using AI, provide the chosen tool with something you’ve made as an input along with a specific prompt.
  3. Always comprehensively review the output from an AI tool for quality.

A programmer’s loss of identity - ratfactor

We value learning. We value the merits of language design, type systems, software maintenance, levels of abstraction, and yeah, if I’m honest, minute syntactical differences, the color of the bike shed, and the best way to get that perfectly smooth shave on a yak. I’m not sure what we’re called now, “heirloom programmers”?

Do I sound like a machine code programmer in the 1950s refusing to learn structured programming and compiled languages? I reject that comparison. I love a beautiful abstraction just as much as I love a good low-level trick.

If the problem is that we’ve painted our development environments into a corner that requires tons of boilerplate, then that is the problem. We should have been chopping the cruft away and replacing it with deterministic abstractions like we’ve always done. That’s what that Larry Wall quote about good programmers being lazy was about. It did not mean that we would be okay with pulling a damn slot machine lever a couple times to generate the boilerplate.

Deep Blue

My social networks are currently awash with Deep Blue:

…the sense of psychological ennui leading into existential dread that many software developers are feeling thanks to the encroachment of generative AI into their field of work.

How Generative and Agentic AI Shift Concern from Technical Debt to Cognitive Debt

I recently wrote:

The issue isn’t with the code itself, but with the understanding of the code.

That’s the difference between technical debt and cognitive debt.

John has written lots more on this.

10 Thoughts On “AI,” February 2026 Edition | Whatever

  1. I don’t and won’t use “AI” in the text of any of my published work.
  2. I’m not worried about “AI” replacing me as a novelist.
  3. People in general are burning out on “AI.”
  4. I’m supporting human artists, including as they relate to my own work.
  5. “AI” is Probably Sticking Around In Some Form.
  6. “AI” is a marketing term, not a technical one, and encompasses different technologies.
  7. There were and are ethical ways to have trained generative “AI” but because they weren’t done, the entire field is suspect.
  8. The various processes lumped into “AI” are likely to be integrated into programs and applications that are in business and creative workflows.
  9. It’s all right to be informed about the state of the art when it comes to “AI.”
  10. Some people are being made to use “AI” as a condition of their jobs. Maybe don’t give them too much shit for it.

JS-heavy approaches are not compatible with long-term performance goals

Frameworks like React are often perceived as accelerators, or even as the only sensible way to do web development. There’s this notion that a more “modern” stack (read: JS-heavy, where the JS ends up running on the user’s browser) allows you to be more agile, release more often with fewer bugs, make code more maintainable, and ultimately launch better sites. In short, the claim is that this approach will offer huge improvements to developer experience, and that these DevEx benefits will trickle down to the user.

But over the years, this narrative has proven to be unrealistic, at best. In reality, for any decently sized JS-heavy project, you should expect that what you build will be slower than advertised, it will keep getting slower over time while it sees ongoing work, and it will take more effort to develop and especially to maintain than what you were led to believe, with as many bugs as any other approach.

Where it comes to performance, the important thing to note is that a JS-heavy approach (and particularly one based on React & friends) will most likely not be a good starting point; in fact, it will probably prove to be a performance minefield that you will need to keep revisiting, risking a detonation with every new commit.

I miss thinking hard.

There are two wolves inside you…

My Builder side won’t let me just sit and think about unsolved problems, and my Thinker side is starving while I vibe-code. I am not sure if there will ever be a time again when both needs can be met at once.

Progress Without Disruption - Christopher Butler

We’ve been taught that technological change must be chaotic, uncontrolled, and socially destructive — that anything less isn’t real innovation.

The conflation of progress with disruption serves specific interests. It benefits those who profit from rapid, uncontrolled deployment. “You can’t stop progress” is a very convenient argument when you’re the one profiting from the chaos, when your business model depends on moving fast and breaking things before anyone can evaluate whether those things should be broken.

We’ve internalized technological determinism so completely that choosing not to adopt something — or choosing to adopt it slowly, carefully, with conditions — feels like naive resistance to inevitable progress. But “inevitable” is doing a lot of work in that sentence. Inevitable for whom? Inevitable according to whom?