Like

We use metaphors all the time. To quote George Lakoff, we live by them.

We use analogies some of the time. They’re particularly useful when we’re wrapping our heads around something new. By comparing something novel to something familiar, we can make a shortcut to comprehension, or at least, categorisation.

But we need a certain amount of vigilance when it comes to analogies. Just because something is like something else doesn’t mean it’s the same.

With that in mind, here are some ways that people are describing generative machine learning tools. Large language models are like…


Related posts

Tools

A large language model is as neutral as an AK-47.

Codewashing

Whether you’re generating slop or code, underneath it’s the same shoggoth with a smiley face.

Reason

Please read Miriam’s latest blog post.

Changing

I’m trying to be open to changing my mind when presented with new evidence.

The meaning of “AI”

Naming things is hard, and sometimes harmful.

Related links

Life Is More Than an Engineering Problem | Los Angeles Review of Books

A great interview with Ted Chiang:

Predicting the most likely next word is different from having correct information about the world, which is why LLMs are not a reliable way to get the answers to questions, and I don’t think there is good evidence to suggest that they will become reliable. Over the past couple of years, there have been some papers published suggesting that training LLMs on more data and throwing more processing power at the problem provides diminishing returns in terms of performance. They can get better at reproducing patterns found online, but they don’t become capable of actual reasoning; it seems that the problem is fundamental to their architecture. And you can bolt tools onto the side of an LLM, like giving it a calculator it can use when you ask it a math problem, or giving it access to a search engine when you want up-to-date information, but putting reliable tools under the control of an unreliable program is not enough to make the controlling program reliable. I think we will need a different approach if we want a truly reliable question answerer.


Against the protection of stocking frames. — Ethan Marcotte

I don’t think it’s controversial to suggest that LLMs haven’t measured up to any of the lofty promises made by their vendors. But in more concrete terms, consumers dislike “AI” when it shows up in products, and it makes them actively mistrust the brands that employ it. In other words, we’re some three years into the hype cycle, and LLMs haven’t met any markers of success we’d apply to, well, literally any other technology.


Every Reason Why I Hate AI and You Should Too

If I were to photocopy this article, nobody would argue that my photocopier wrote it and therefore can think. But add enough convolutedness to the process, and it looks a lot like maybe it did and can.

In reality, all we’ve created is a bot which is almost perfect at mimicking human-like natural language use, and the rest is people just projecting other human qualities on to it. Quite simply, “LLMs are doing reasoning” is the “look, my dog is smiling” of technology. In exactly the same way that dogs don’t convey their emotions via human-like facial expressions, there’s no reason to believe that even if a computer could think, it’d perfectly mirror what looks like human reasoning.


The sound of inevitability | My place to put things

People advancing an inevitabilist world view state that the future they perceive will inevitably come to pass. It follows, relatively straightforwardly, that the only sensible way to respond to this is to prepare as best you can for that future.

This is a fantastic framing method. Anyone who sees the future differently to you can be brushed aside as “ignoring reality”, and the only conversations worth engaging are those that already accept your premise.


The Imperfectionist: Navigating by aliveness

Most obviously, aliveness is what generally feels absent from the written and visual outputs of ChatGPT and its ilk, even when they’re otherwise of high quality. I’m not claiming I couldn’t be fooled into thinking AI writing or art was made by a human (I’m sure I already have been); but that when I realise something’s AI, either because it’s blindingly obvious or when I find out, it no longer feels so alive to me. And that this change in my feelings about it isn’t irrelevant: that it means something.

More subtly, it feels like our own aliveness is what’s at stake when we’re urged to get better at prompting LLMs to provide the most useful responses. Maybe that’s a necessary modern skill; but still, the fact is that we’re being asked to think less like ourselves and more like our tools.


Previously on this day

4 years ago I wrote A bug with progressive web apps on iOS

Opening an external link in a web view appears to trigger a reload of the parent page without credentials.

4 years ago I wrote Both plagues on your one house

February, man.

7 years ago I wrote Unsolved Problems by Beth Dean

A presentation at An Event Apart Seattle 2019.

8 years ago I wrote Minimal viable service worker

Boosting performance with a general-purpose service worker script.

9 years ago I wrote Empire State

Non-humans of New York.

19 years ago I wrote Southby

It’s that time of year again: South by Southwest is almost upon us.

23 years ago I wrote They. They, they, they shine on.

Hidden away on the listings page for the Sussex Arts Club is the regular singer/songwriter Thursday night slot for March 20th.

23 years ago I wrote Call and response

I love it when the web works like this.

23 years ago I wrote Do not adjust your set

The colours really are that vivid.