Where’s the AI design renaissance?

I’ve had some incredibly productive moments with AI design tools. But I’ve had at least as many slogs, where I can’t get it to do some basic thing I should’ve done myself 45 minutes ago.

My hunch: vibe coding is a lot like stock-picking – everyone’s always blabbing about their big wins. Ask what their annual rate of return is above the S&P, and it’s a quieter conversation 🤫

This, in my opinion, is how we end up with a firehose of AI hype, and yet zero signs of a software renaissance. As Mike Judge points out, the following graphs are flat: (a) new app store releases, (b) new domain names registered, (c) new GitHub repositories.

The Programmer Identity Crisis ❈ Simon Højberg ❈ Principal Frontend Engineer

I prefer my tools to help me with repetitive tasks (and there are many of those in programming), understanding codebases, and authoring correct programs. I take offense at products that are designed to think for me. To remove the agency of my own understanding of the software I produce, and to cut connections with my coworkers. Even if LLMs lived up to the hype, we would still stand to lose all of that and our craft.

A cartoonist’s review of AI art - The Oatmeal

Stick with this. It’s worth it.

Life Is More Than an Engineering Problem | Los Angeles Review of Books

A great interview with Ted Chiang:

Predicting the most likely next word is different from having correct information about the world, which is why LLMs are not a reliable way to get the answers to questions, and I don’t think there is good evidence to suggest that they will become reliable. Over the past couple of years, there have been some papers published suggesting that training LLMs on more data and throwing more processing power at the problem provides diminishing returns in terms of performance. They can get better at reproducing patterns found online, but they don’t become capable of actual reasoning; it seems that the problem is fundamental to their architecture. And you can bolt tools onto the side of an LLM, like giving it a calculator it can use when you ask it a math problem, or giving it access to a search engine when you want up-to-date information, but putting reliable tools under the control of an unreliable program is not enough to make the controlling program reliable. I think we will need a different approach if we want a truly reliable question answerer.

Against the protection of stocking frames. — Ethan Marcotte

I don’t think it’s controversial to suggest that LLMs haven’t measured up to any of the lofty promises made by their vendors. But in more concrete terms, consumers dislike “AI” when it shows up in products, and it makes them actively mistrust the brands that employ it. In other words, we’re some three years into the hype cycle, and LLMs haven’t met any markers of success we’d apply to, well, literally any other technology.

When All You Have Is a Robots.txt Hammer – Pixel Envy

I write here for you, not for the benefit of building the machines producing a firehose of spam, scams, and slop. The artificial intelligence companies have already violated the expectations of even a public web. Regardless of the benefits they have created — and I do believe there are benefits to these technologies — they have behaved unethically. Defensive action is the only control a publisher can assume right now.
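
For what it's worth, the defensive action in question mostly amounts to a robots.txt file along these lines, listing a couple of the AI crawler user-agents the companies themselves document (GPTBot for OpenAI, CCBot for Common Crawl); whether those crawlers actually honour it is, of course, entirely up to them:

User-agent: GPTBot
Disallow: /

User-agent: CCBot
Disallow: /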

In the Future All Food Will Be Cooked in a Microwave, and if You Can’t Deal With That Then You Need to Get Out of the Kitchen – Random Thoughts

A microwave isn’t going to take your job; a chef who knows how to use a microwave is going to take your job.

I Am An AI Hater | moser’s frame shop

I wanted to quote an excerpt of this post, but honestly I couldn’t choose just one part—the whole thing is perfect. You should read it for the beauty of the language alone.

(This is Anthony Moser’s first blog post. I fear he has created his Citizen Kane.)

Every Reason Why I Hate AI and You Should Too

If I were to photocopy this article, nobody would argue that my photocopier wrote it and therefore can think. But add enough convolutedness to the process, and it looks a lot like maybe it did and can.

In reality, all we’ve created is a bot which is almost perfect at mimicking human-like natural language use, and the rest is people just projecting other human qualities on to it. Quite simply, “LLMs are doing reasoning” is the “look, my dog is smiling” of technology. In exactly the same way that dogs don’t convey their emotions via human-like facial expressions, there’s no reason to believe that even if a computer could think, it’d perfectly mirror what looks like human reasoning.

This website is for humans - localghost

This website is for humans, and LLMs are not welcome here.

Cosigned.

Vibe code is legacy code | Val Town Blog

When you vibe code, you are incurring tech debt as fast as the LLM can spit it out. Which is why vibe coding is perfect for prototypes and throwaway projects: It’s only legacy code if you have to maintain it!

The worst possible situation is to have a non-programmer vibe code a large project that they intend to maintain. This would be the equivalent of giving a credit card to a child without first explaining the concept of debt.

If you don’t understand the code, your only recourse is to ask AI to fix it for you, which is like paying off credit card debt with another credit card.

The sound of inevitability | My place to put things

People advancing an inevitabilist world view state that the future they perceive will inevitably come to pass. It follows, relatively straightforwardly, that the only sensible way to respond to this is to prepare as best you can for that future.

This is a fantastic framing method. Anyone who sees the future differently to you can be brushed aside as “ignoring reality”, and the only conversations worth engaging are those that already accept your premise.

A human review | Trys Mudford

Following on from my earlier link about AI etiquette, what Trys experienced here is utterly deflating:

I spent a couple of hours working through my notes and writing up a review before sending it to my manager, awaiting their equivalent review for me.

However, the review I received back was, quite simply, quintessential AI slop.

When slopagandists talk about “AI” boosting productivity, this is the kind of shite they’re talking about.

Butlerian Jihad

This page collects my blog posts on the topic of fighting off spam bots, search engine spiders and other non-humans wasting the precious resources we have on Earth.

It’s rude to show AI output to people | Alex Martsinovich

For the longest time, writing was more expensive than reading. If you encountered a body of written text, you could be sure that at the very least, a human spent some time writing it down. The text used to have an innate proof-of-thought, a basic token of humanity.

Now, AI has made text very, very, very cheap. … Any text can be AI slop. If you read it, you’re injured in this war. You engaged and replied – you’re as good as dead. The dead internet is not just dead, it’s poisoned.

I think that realistically, our main weapon in this war is AI etiquette.

Vibe coding and Robocop

The short version of what I want to say is: vibe coding seems to live very squarely in the land of prototypes and toys. Promoting software that’s been built entirely using this method would be akin to sending a hacked weekend prototype to production and expecting it to be stable.

Remy is taking a very sensible approach here:

I’ve used it myself to solve really bespoke problems where the user count is one.

Would I put this out to production: absolutely not.

Frame of preference – Aresluna

Marcin has outdone himself this time. Not only has he created an exhaustive history of the settings controls in Apple interfaces, he’s gone and made them all interactive!

While it’s easy to be blown away by the detail of the interactive elements here, it’s also worth taking a moment to appreciate just how good the writing is too.

Bravo!

The Imperfectionist: Navigating by aliveness

Most obviously, aliveness is what generally feels absent from the written and visual outputs of ChatGPT and its ilk, even when they’re otherwise of high quality. I’m not claiming I couldn’t be fooled into thinking AI writing or art was made by a human (I’m sure I already have been); but that when I realise something’s AI, either because it’s blindingly obvious or when I find out, it no longer feels so alive to me. And that this change in my feelings about it isn’t irrelevant: that it means something.

More subtly, it feels like our own aliveness is what’s at stake when we’re urged to get better at prompting LLMs to provide the most useful responses. Maybe that’s a necessary modern skill; but still, the fact is that we’re being asked to think less like ourselves and more like our tools.

Large Language Muddle • Jason Santa Maria

It feels like someone just harvested lumber from a forest I helped grow, and now wants to sell me the furniture they made with it.

Critical questions for design leaders working with artificial intelligence, New York 2025 | Leading Design

AI presents design leaders with a quandary, requiring us to tread a fine line between what is acceptable and useful, and what is problematic and harmful.

This document is not a manifesto or an agenda. It is a series of prompts written by design leaders for design leaders, conceived to help us navigate these tricky waters.