On not choosing nice versions of AI
It doesn’t bother me much that bleeding-edge ML technology sometimes gets things wrong. It bothers me a lot when it gives no warnings, cites no sources, and provides no confidence interval.
Yes! Like I said:
Expose the wires. Show the workings-out.
Whenever anyone states that “AI is the future, so…” or “many people are using AI anyway, so…” they are not only expressing an opinion — they’re shaping that future.
I’ve come to realize that statements about the future aren’t predictions: they’re more like spells. When someone describes *something* to you as the future, they’re sharing a heartfelt belief that this *something* will be part of whatever comes next. “Artificial intelligence isn’t going anywhere” quite literally involves casting a technology forward into time. How could that be anything else but a kind of magic?
I love the web, and this thing is bad for the web.
- Atlas substitutes its own AI-generated content for the web, but it looks like it’s showing you the web
- The user experience makes you guess what commands to type instead of clicking on links
- You’re the agent for the browser; it’s not being an agent for you
It’s very clear that a lot of the new AI era is about dismantling the web’s original design.
I’ve had some incredibly productive moments with AI design tools. But I’ve had at least as many slogs, where I can’t get it to do some basic thing I should’ve done myself 45 minutes ago.
My hunch: vibe coding is a lot like stock-picking – everyone’s always blabbing about their big wins. Ask what their annual rate of return is above the S&P, and it’s a quieter conversation 🤫
This, in my opinion, is how we end up with a firehose of AI hype, and yet zero signs of a software renaissance. As Mike Judge points out, the following graphs are flat: (a) new app store releases, (b) new domain names registered, (c) new GitHub repositories.
People advancing an inevitabilist world view state that the future they perceive will inevitably come to pass. It follows, relatively straightforwardly, that the only sensible way to respond to this is to prepare as best you can for that future.
This is a fantastic framing method. Anyone who sees the future differently to you can be brushed aside as “ignoring reality”, and the only conversations worth engaging with are those that already accept your premise.