Tags: ethics

Saturday, December 13th, 2025

Dissent | blarg

I suppose it’s not clear to me what a ‘good’ window into unreliable, systemically toxic systems accomplishes, or how it changes anything that matters for the better, or what that idea even means at all. I don’t understand how “ethical AI” isn’t just “clean coal” or “natural gas.” The power of normalization as four generations are raised breathing low doses of aerosolized neurotoxins; the alternative was called “unleaded”, but the poison was called “regular gas”.

There’s a real technology here, somewhere. Stochastic pattern recognition seems like a powerful tool for solving some problems. But solving a problem starts at the problem, not working backwards from the tools.

Monday, September 15th, 2025

When All You Have Is a Robots.txt Hammer – Pixel Envy

I write here for you, not for the benefit of building the machines producing a firehose of spam, scams, and slop. The artificial intelligence companies have already violated the expectations of even a public web. Regardless of the benefits they have created — and I do believe there are benefits to these technologies — they have behaved unethically. Defensive action is the only control a publisher can assume right now.

Saturday, August 30th, 2025

Thursday, August 28th, 2025

I Am An AI Hater | moser’s frame shop

I wanted to quote an excerpt of this post, but honestly I couldn’t choose just one part—the whole thing is perfect. You should read it for the beauty of the language alone.

(This is Anthony Moser’s first blog post. I fear he has created his Citizen Kane.)

Monday, August 11th, 2025

This website is for humans - localghost

This website is for humans, and LLMs are not welcome here.

Cosigned.

Monday, August 4th, 2025

You Should Probably Leave Substack | How to Leave Substack.

Substack willingly platforms bad actors and allows them to monetize hate speech and misinformation.

Says who?

Here are some well-reasoned pieces on the subject so you can educate yourself and decide.

Thursday, July 24th, 2025

The sound of inevitability | My place to put things

People advancing an inevitabilist world view state that the future they perceive will inevitably come to pass. It follows, relatively straightforwardly, that the only sensible way to respond to this is to prepare as best you can for that future.

This is a fantastic framing method. Anyone who sees the future differently to you can be brushed aside as “ignoring reality”, and the only conversations worth engaging are those that already accept your premise.

Friday, June 20th, 2025

The Imperfectionist: Navigating by aliveness

Most obviously, aliveness is what generally feels absent from the written and visual outputs of ChatGPT and its ilk, even when they’re otherwise of high quality. I’m not claiming I couldn’t be fooled into thinking AI writing or art was made by a human (I’m sure I already have been); but that when I realise something’s AI, either because it’s blindingly obvious or when I find out, it no longer feels so alive to me. And that this change in my feelings about it isn’t irrelevant: that it means something.

More subtly, it feels like our own aliveness is what’s at stake when we’re urged to get better at prompting LLMs to provide the most useful responses. Maybe that’s a necessary modern skill; but still, the fact is that we’re being asked to think less like ourselves and more like our tools.

Tuesday, June 17th, 2025

Large Language Muddle • Jason Santa Maria

It feels like someone just harvested lumber from a forest I helped grow, and now wants to sell me the furniture they made with it.

Critical questions for design leaders working with artificial intelligence, New York 2025 | Leading Design

AI presents design leaders with a quandary, requiring us to tread a fine line between what is acceptable and useful, and what is problematic and harmful.

This document is not a manifesto or an agenda. It is a series of prompts written by design leaders for design leaders, conceived to help us navigate these tricky waters.

The Recurring Cycle of ‘Developer Replacement’ Hype

Here’s what the “AI will replace developers” crowd fundamentally misunderstands: code is not an asset—it’s a liability. Every line must be maintained, debugged, secured, and eventually replaced. The real asset is the business capability that code enables.

If AI makes writing code faster and cheaper, it’s really making it easier to create liability. When you can generate liability at unprecedented speed, the ability to manage and minimize that liability strategically becomes exponentially more valuable.

This is particularly true because AI excels at local optimization but fails at global design. It can optimize individual functions but can’t determine whether a service should exist in the first place, or how it should interact with the broader system. When implementation speed increases dramatically, architectural mistakes get baked in before you realize they’re mistakes.

Friday, May 30th, 2025

Ensloppification – David Bushell – Web Dev (UK)

Frankly, I’d rather quit my career than live in the future they’re selling. It’s the sheer dystopian drabness of it. Mediocrity as a service.

I tried the tab-completion slot machines; not my cup of tea. I tried image generation and was overcome with literal depression. I don’t want a future as a “prompt artist”.

I’m mostly linking this for what it says, but oh boy, do I love the way it says it with this wonderful HTML web component.

Tuesday, May 27th, 2025

Uses

I don’t use large language models. My objection to using them is ethical. I know how the sausage is made.

I wanted to clarify that. I’m not rejecting large language models because they’re useless. They can absolutely be useful. I just don’t think the usefulness outweighs the ethical issues in how they’re trained.

Molly White came to the same conclusion:

The benefits, though extant, seem to pale in comparison to the costs.

Rich has similar thoughts:

What I do know is that I find LLMs useful on occasion, but every time I use one I die a little inside.

I genuinely look forward to being able to use a large language model with a clear conscience. Such a model would need to be trained ethically. When we get a free-range organic large language model I’ll be the first in line to use it. Until then, I’ll abstain. Remember:

You don’t get companies to change their behaviour by rewarding them for it. If you really want better behaviour from the purveyors of generative tools, you should be boycotting the current offerings.

Still, in anticipation of an ethical large language model someday becoming reality, I think it’s good for me to have an understanding of which tasks these tools are good at.

Prototyping seems like a good use case. My general attitude to prototyping is the exact opposite of my attitude to production code: use absolutely any tool you want and prioritise speed over quality.

When it comes to coding in general, I think Laurie is really onto something when he says:

Is what you’re doing taking a large amount of text and asking the LLM to convert it into a smaller amount of text? Then it’s probably going to be great at it. If you’re asking it to convert into a roughly equal amount of text it will be so-so. If you’re asking it to create more text than you gave it, forget about it.

In other words, despite what the hype says, these tools are far better at transforming than they are at generating.

Iris Meredith goes deeper into this distinction between transformative and compositional work:

Compositionality relies (among other things) on two core values or functions: choice and precision, both of which are antithetical to LLM functioning.

My own take on this is that transformative work is often the drudge work—take this data dump and convert it to some other format; take this mock-up and make a disposable prototype. I want my tools to help me with that.
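To give a concrete (and entirely made-up) example of that kind of drudge work, here’s the sort of format-shuffling I’d happily delegate and then check afterwards. The file names are hypothetical:

```python
import csv
import json

def csv_to_json(csv_path, json_path):
    """Convert a CSV data dump into JSON.

    Pure format-shuffling: no judgement or taste required,
    and the output is easy to verify.
    """
    with open(csv_path, newline="") as csv_file:
        rows = list(csv.DictReader(csv_file))
    with open(json_path, "w") as json_file:
        json.dump(rows, json_file, indent=2)

# Hypothetical file names, for illustration only.
csv_to_json("datadump.csv", "datadump.json")
```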

But compositional work that relies on judgement, taste, and choice? Not only would I not use a large language model for that, it’s exactly the kind of work that I don’t want to automate away.

Transformative work is done with broad brushstrokes. Compositional work is done with a scalpel.

Large language models are big messy brushes, not scalpels.

Saturday, May 24th, 2025

The luxury of saying no.

If I’m understanding Greg correctly here, he’s saying it’s okay for people to use large language models …because they’re being forced to?

Friday, May 23rd, 2025

Tools

One persistent piece of slopaganda you’ll hear is this:

“It’s just a tool. What matters is how you use it.”

This isn’t a new tack. The same justification has been applied to many technologies.

Leaving aside Kranzberg’s first law, large language models are the very antithesis of a neutral technology. They’re imbued with bias and political decisions at every level.

There’s the obvious problem of where the training data comes from. It’s stolen. Everyone knows this, but some people would rather pretend they don’t know how the sausage is made.

But if you set aside how the tool is made, it’s still just a tool, right? A building is still a building even if it’s built on stolen land.

Except with large language models, the training data is just the first step. After that you need to traumatise an underpaid workforce to remove the most horrifying content. Then you build an opaque black box that end-users have no control over.

Take temperature, for example. That’s the amount of randomness a large language model applies when choosing the next token. Dial the temperature too low and the tool will parrot its training data too closely, making it a plagiarism machine. Dial the temperature too high and the tool generates what we kindly call “hallucinations”.

Either way, you have no control over that dial. Someone else is making that decision for you.
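In case that dial sounds abstract, here’s a toy sketch of what temperature does under the hood. The token scores are invented and real models are vastly more complicated, but the mechanism is the same: scale the scores, turn them into probabilities, roll the dice.

```python
import math
import random

def sample_next_token(logits, temperature):
    """Pick the next token from a model's raw scores (logits).

    Low temperature sharpens the distribution (predictable, closer to
    parroting); high temperature flattens it (more surprising, more
    likely to go off the rails).
    """
    # Scale the raw scores by the temperature before softmax.
    scaled = [score / temperature for score in logits.values()]
    # Softmax: turn the scaled scores into probabilities.
    biggest = max(scaled)
    exps = [math.exp(s - biggest) for s in scaled]
    total = sum(exps)
    probabilities = [e / total for e in exps]
    # Sample one token according to those probabilities.
    return random.choices(list(logits), weights=probabilities, k=1)[0]

# Invented scores for three candidate tokens.
scores = {"the": 2.0, "a": 1.5, "banana": 0.1}
print(sample_next_token(scores, temperature=0.2))  # almost always "the"
print(sample_next_token(scores, temperature=2.0))  # anything goes
```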

A large language model is as neutral as an AK-47.

I understand why people want to feel in control of the tools they’re using. I know why people will use large language models for some tasks—brainstorming, rubber ducking—but strictly avoid them for any outputs intended for human consumption.

You could even convince yourself that a large language model is like a bicycle for the mind. In truth, a large language model is more like one of those hover chairs on the spaceship in WALL·E.

Large language models don’t amplify your creativity and agency. Large language models stunt your creativity and rob you of agency.

When someone applies a large language model, it is an example of tool use. But the large language model isn’t the tool.

Sunday, May 18th, 2025

EU ruling: tracking-based advertising by Google, Microsoft, Amazon, X, across Europe has no legal basis - Irish Council for Civil Liberties

It’s official. No matter how many annoying cookie consent banners you slap on a website, real-time bidding for behavioural adverts is illegal in Europe.

And before you go crying about advertising-supported businesses, this only applies to behavioural advertising, not contextual advertising …which works better anyway.

Wednesday, May 7th, 2025

Figure and ground • Buttondown

Man, this resonates:

At one end, you prioritise your own interests. Slap on the SPF and enjoy the cricket; ignore the emails; nip to Paris for the day. But egocentrism erodes social goods. It harms other people. So perhaps you reject it and skew the other way, anchoring your wellbeing to the trajectory of the world. But that undertow will easily drown you. The beneficence of caring only about others seems noble, but in truth few of us can endure that level of self-sacrifice. Total empathy harms you. And so most of us stumble in the fog between these extremes, recoiling from either end when the shame or the sadness becomes too much to bear. I plug away at my pleasant life with heartache for what’s happening to us. Perhaps you feel similarly, smiling but seconds from tears.

Wednesday, April 30th, 2025

Codewashing

I have little understanding for people using large language models to generate slop; words and images that nobody asked for.

I have more understanding for people using large language models to generate code. Code isn’t the thing in the same way that words or images are; code is the thing that gets you to the thing.

And if a large language model hallucinates some code, you’ll find out soon enough:

With code you get a powerful form of fact checking for free. Run the code, see if it works.
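To make that concrete, here’s a toy demonstration. The module and function below are deliberately fictional, the kind of thing a model might hallucinate; simply running the snippet exposes the fabrication:

```python
# Suppose a model hands you this snippet. The module "text_utils" and
# the function "slugify_text" are fictional; nothing provides them.
generated_snippet = """
from text_utils import slugify_text
print(slugify_text("Hello, World!"))
"""

# Running the code is the fact check: a hallucinated import or function
# blows up immediately instead of lurking until production.
try:
    exec(generated_snippet)
except (ImportError, NameError) as error:
    print(f"Hallucination caught: {error}")
```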

But I want to push back on one justification I see repeatedly about using large language models to write code. Here’s Craig:

There are many moral and ethical issues with using LLMs, but building software feels like one of the few truly ethically “clean”(er) uses (trained on open source code, etc.)

That’s not how this works. Yes, the large language models are trained on lots of code (most of it open source), but they’re not only trained on that. That’s on top of everything else: all the stolen books, all the unpaid creative work of others.

Even Robin Sloan, who first says:

I think the case of code is especially clear, and, for me, basically settled.

…goes on to acknowledge:

But, again, it’s important to say: the code only works because of Everything. Take that data away, train a model using GitHub alone, and you’ll get a far less useful tool.

When large language models are trained on domain-specific data, it’s always in addition to the mahoosive amount of content they’ve already stolen. It’s that mahoosive amount of content—not the domain-specific data—that enables them to parse your instructions.

(Note that I’m being very deliberate in saying “parse”, not “understand”. Though make no mistake, I’m astonished at how good these tools are at parsing instructions. I say that as someone who tried to write natural language parsers for text-only adventure games back in the 1980s.)
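For flavour, this is roughly the level of parsing I mean: a hand-rolled verb-and-noun matcher, sketched from memory rather than lifted from any actual 1980s listing. It matches vocabulary; it understands nothing.

```python
# A toy two-word parser of the sort hobbyists wrote for text adventures.
VERBS = {"take", "drop", "open", "go"}
NOUNS = {"lamp", "door", "key", "north"}
FILLER = {"the", "a", "an"}

def parse_command(line):
    words = [w for w in line.lower().split() if w not in FILLER]
    if len(words) == 2 and words[0] in VERBS and words[1] in NOUNS:
        return (words[0], words[1])
    return None  # "I don't understand that."

print(parse_command("take the lamp"))     # ('take', 'lamp')
print(parse_command("ponder existence"))  # None
```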

So, sure, go ahead and use large language models to write code. But don’t fool yourself into thinking that it’s somehow ethical.

What I said here applies to code too:

If you’re going to use generative tools powered by large language models, don’t pretend you don’t know how your sausage is made.

An Entirely Other Day: The Triumph of Triumphalism

Scratch the skin of wild-eyed AI proponents, and a thick syrup oozes out, made up of the blendered remains of Roko’s Basilisk, barely sublimated Christian end-times thinking, and the mis-remembered plot of that one cool science-fiction story they read when they were twelve. This is the basis for the new order, just like the blockchain was a couple of years ago, and a dead-eyed, low-poly, pantsless rendering of Mark Zuckerberg was a couple of years before that.

“You’re going to be left behind” is only the latest version of “Have fun staying poor.” It’s got every ounce of the smug self-satisfaction that it shouldn’t need if the inevitability it promises were actually inevitable.