Steam

Picture someone tediously going through a spreadsheet that someone else has filled in by hand and finding yet another error.

“I wish to God these calculations had been executed by steam!” they cry.

The year was 1821 and technically the spreadsheet was a book of logarithmic tables. The frustrated cry came from Charles Babbage, who channeled his frustration into a scheme to create the world’s first computer.

His difference engine didn’t work out. Neither did his analytical engine. He’d spend his later years taking his frustrations out on street musicians, which—as a former busker myself—earns him a hairy eyeball from me.

But we’ve all been there, right? Some tedious task that feels soul-destroying in its monotony. Surely this is exactly what machines should be doing?

I have a hunch that this is where machine learning and large language models might turn out to be most useful. Not in creating breathtaking works of creativity, but in menial tasks that nobody enjoys.

Someone was telling me earlier today about how they took a bunch of haphazard notes in a client meeting. When the meeting was done, they needed to organise those notes into a coherent summary. Boring! But ChatGPT handled it just fine.

I don’t think that use-case is going to appear on the cover of Wired magazine anytime soon but it might be a truer glimpse of the future than any of the breathless claims being eagerly bandied about in Silicon Valley.

You know the way we no longer remember phone numbers, because, well, why would we now that we have machines to remember them for us? I’d be quite happy if machines did that for the annoying little repetitive tasks that nobody enjoys.

I’ll give you an example based on my own experience.

Regular expressions are my kryptonite. I’m rubbish at them. Any time I have to figure one out, the knowledge seeps out of my brain before long. I think that’s because I kind of resent having to internalise that knowledge. It doesn’t feel like something a human should have to know. “I wish to God these regular expressions had been calculated by steam!”

Now I can get a chatbot with a large language model to write the regular expression for me. I still need to describe what I want, so I need to write the instructions clearly. But all the gobbledygook that I’m writing for a machine now gets written by a machine. That seems fair.

Mind you, I wouldn’t blindly trust the output. I’d take that regular expression and run it through a chatbot, maybe a different chatbot running on a different large language model. “Explain what this regular expression does,” would be my prompt. If my input into the first chatbot matches the output of the second, I’d have some confidence in using the regular expression.
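Another way to gain that confidence, without involving a second chatbot at all, is to run the generated regular expression against test cases whose answers I already know. Here's a minimal sketch in Python; the pattern itself stands in for a hypothetical chatbot answer to "match dates in YYYY-MM-DD format":

```python
import re

# Hypothetical chatbot output: a pattern for dates in YYYY-MM-DD format.
pattern = re.compile(r"^\d{4}-(0[1-9]|1[0-2])-(0[1-9]|[12]\d|3[01])$")

# Before trusting it, run it against cases I can verify by hand.
assert pattern.match("2023-03-23")
assert not pattern.match("2023-13-01")   # month out of range
assert not pattern.match("23-03-2023")   # fields in the wrong order
```

If the pattern survives a handful of hand-picked good and bad inputs, that's at least as reassuring as a second chatbot's paraphrase.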

A friend of mine told me about using a large language model to help write SQL statements. He described his database structure to the chatbot, and then described what he wanted to select.

Again, I wouldn’t use that output without checking it first. But again, I might use another chatbot to do that checking. “Explain what this SQL statement does.”
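The same trick works for SQL: run the generated statement against a toy copy of the database where the right answer is already known. A minimal sketch using Python's built-in `sqlite3` module (the table, columns, and query here are all invented for illustration, not my friend's actual schema):

```python
import sqlite3

# A toy stand-in for the database structure described to the chatbot.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE clients (id INTEGER PRIMARY KEY, name TEXT, city TEXT)")
db.executemany(
    "INSERT INTO clients (name, city) VALUES (?, ?)",
    [("Acme", "Brighton"), ("Globex", "London"), ("Initech", "Brighton")],
)

# Hypothetical chatbot-written statement: select client names in one city.
generated_sql = "SELECT name FROM clients WHERE city = ? ORDER BY name"

# Run it against data where I already know what should come back.
rows = [row[0] for row in db.execute(generated_sql, ("Brighton",))]
assert rows == ["Acme", "Initech"]
```

If the query returns the expected rows on data I control, I can use it on the real database with a lot more confidence.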

Playing chatbots off against each other like this is kinda like a technique used in machine learning itself: generative adversarial networks, where one model generates output and a second model tries to catch its mistakes.

Of course, the task of having to validate the output of a chatbot by checking it with another chatbot could get quite tedious. “I wish to God these large language model outputs had been validated by steam!”

Sounds like a job for machines.


Responses

Mark Root-Wiley

@adactio It strikes me that an LLM is not the best tool for validation. Wouldn’t a tool like RegExr.com that literally explains an expression for you with 100% accuracy (and provides a sweet testing tool!) work better for Step 2? Sometimes I feel like LLMs make me quickly forget about old special purpose tools that are more powerful in their tiny little domain (and may always be?).



