Tags: auto

Saturday, June 15th, 2024

Rise of the Ghost Machines - The Millions

This thing that we’ve been doing collectively with our relentless blog posts and pokes and tweets and uploads and news story shares, all 30-odd years of fuck-all pointless human chatter, it’s their tuning fork. Like when a guitarist plays a chord on a guitar and compares the sound to a tuner, adjusts the pegs, plays the chord again; that’s what has happened here, that’s what all my words are, what all our words are, a thing to mimic, a mockingbird’s feast.

Every time you ask AI to create words, to generate an answer, it analyzes the words you input, compares those words to the trillions of relations and concepts it has already categorized, and then responds with words that match the most likely response. The chatbot is not thinking, but that doesn’t matter: in the moment, it feels like it’s responding to you. It feels like you’re not alone. But you are.

Wednesday, June 5th, 2024

Fine-tuning Text Inputs

Garrett talks through some handy HTML attributes: spellcheck, autofocus, autocapitalize, autocomplete, and autocorrect:

While they feel like small details, when we set these attributes on inputs, we streamline things for visitors while also guiding the browser on when it should just get out of the way.
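
As a rough, hypothetical illustration (the field and attribute values below are mine, not from Garrett’s post), here’s how an email input might set these attributes so the browser autofills the address but doesn’t try to capitalize, autocorrect, or spellcheck it:

<!-- Hypothetical example: an email field where autofill helps,
     but capitalization, autocorrect, and spellcheck just get in the way -->
<label for="email">Email address</label>
<input
  type="email"
  id="email"
  name="email"
  autocomplete="email"
  autocapitalize="none"
  autocorrect="off"
  spellcheck="false"
>

<!-- autofocus puts the cursor in a single field on page load; use it sparingly -->
<input type="search" name="q" autofocus>

Browser support for autocorrect still varies, so it’s best treated as a progressive enhancement.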

Wednesday, May 29th, 2024

The Danger Of Superhuman AI Is Not What You Think - NOEMA

Once you have reduced the concept of human intelligence to what the markets will pay for, then suddenly, all it takes to build an intelligent machine — even a superhuman one — is to make something that generates economically valuable outputs at a rate and average quality that exceeds your own economic output. Anything else is irrelevant.

By describing as superhuman a thing that is entirely insensible and unthinking, an object without desire or hope but relentlessly productive and adaptable to its assigned economically valuable tasks, we implicitly erase or devalue the concept of a “human” and all that a human can do and strive to become. Of course, attempts to erase and devalue the most humane parts of our existence are nothing new; AI is just a new excuse to do it.

Thursday, May 23rd, 2024

Generative AI is for the idea guys

Generative AI is like the ultimate idea guy’s idea! Imagine… if all they needed to create a business, software or art was their great idea, and a computer. No need to engage (or pay) any of those annoying makers who keep talking about limitations, scope, standards, artistic integrity etc. etc.

Thursday, May 16th, 2024

What Are We Actually Doing With A.I. Today? – Pixel Envy

The marketing of A.I. reminds me less of the cryptocurrency and Web3 boom, and more of 5G. Carriers and phone makers promised world-changing capabilities thanks to wireless speeds faster than a lot of residential broadband connections. Nothing like that has yet materialized.

Wednesday, May 15th, 2024

AI Safety for Fleshy Humans: a whirlwind tour

This is a terrifically entertaining, level-headed, in-depth explanation of AI safety. By the end of this year, all three parts will be published; right now the first part is ready for you to read and enjoy.

This 3-part series is your one-stop-shop to understand the core ideas of AI & AI Safety — explained in a friendly, accessible, and slightly opinionated way!

(Related phrases: AI Risk, AI X-Risk, AI Alignment, AI Ethics, AI Not-Kill-Everyone-ism. There is no consensus on what these phrases do & don’t mean, so I’m just using “AI Safety” as a catch-all.)

Saturday, May 4th, 2024

AI is not like you and me

AI is the most anthropomorphized technology in history, starting with the name—intelligence—and plenty of other words thrown around the field: learning, neural, vision, attention, bias, hallucination. These references only make sense to us because they are hallmarks of being human.

But ascribing human qualities to AI is not serving us well. Anthropomorphizing statistical models leads to confusion about what AI does well, what it does poorly, what form it should take, and our agency over all of the above.

There is something kind of pathological going on here. One of the most exciting advances in computer science ever achieved, with so many promising uses, and we can’t think beyond the most obvious, least useful application? What, because we want to see ourselves in this technology?

Meanwhile, we are under-investing in more precise, high-value applications of LLMs that treat generative A.I. models not as people but as tools.

Anthropomorphizing AI not only misleads, but suggests we are on equal footing with, even subservient to, this technology, and there’s nothing we can do about it.

Wednesday, May 1st, 2024

Tim Paul | Automation and the Jevons paradox

This is insightful:

AI and automation are often promoted as a way of handling complexity. But handling complexity isn’t the same as reducing it.

In fact, by getting better at handling complexity we’re increasing our tolerance for it. And if we become more tolerant of it we’re likely to see it grow, not shrink.

From that perspective, large language models are over-engineered band-aids. They might appear helpful at the surface level, but they’re never going to help tackle the underlying root causes.

Thursday, April 18th, 2024

AI isn’t useless. But is it worth it?

I find my feelings about AI are actually pretty similar to my feelings about blockchains: they do a poor job of much of what people try to do with them, they can’t do the things their creators claim they one day might, and many of the things they are well suited to do may not be altogether that beneficial. And while I do think that AI tools are more broadly useful than blockchains, they also come with similarly monstrous costs.

A very even-handed take.

I’m glad that I took the time to experiment with AI tools, both because I understand them better and because I have found them to be useful in my day-to-day life. But even as someone who has used them and found them helpful, it’s remarkable to see the gap between what they can do and what their promoters promise they will someday be able to do. The benefits, though extant, seem to pale in comparison to the costs.

Tuesday, April 16th, 2024

The dancing bear, part 1

I don’t believe the greatest societal risk is that a sentient artificial intelligence is going to kill us all. I think our undoing is simpler than that. I think that most of our lives are going to be shorter and more miserable than they could have been, thanks to the unchecked greed that’s fed this rally. (Okay, this and crypto.)

I like this analogy:

AI is like a dancing bear. This was a profitable sideshow dating back to the middle ages: all it takes is a bear, some time, and a complete lack of ethics. Today, our carnival barkers are the AI startups and their CEOs. They’re trying to convince you that if they can show you a bear that can dance, then you’ll believe it can draw, write coherent sentences, and help you with your app’s marketing strategy.

Part of the curiosity of a dancing bear is the implicit risk that it’ll remember at some point that it’s a bear, and maul whoever is nearby. The fear is a selling point. Likewise, some AI vendors have even learned that the product is more compelling if it’s perceived as dangerous. It’s common for AI startup execs to say things like, “of course there’s a real risk that an army of dancing bears will eventually kill us all. Anyway, here’s what we’re working on…” How brave of them.

Saturday, March 23rd, 2024

Conway’s Game of Hope

A beautifully Borgesian fable.

Tuesday, March 19th, 2024

The growing backlash against AI

You are not creative and then create something, you become creative by working on something, creativity is a byproduct of work.

In this way “AI” is deeply dehumanizing: Making the spaces and opportunities for people to grow and be human smaller and smaller. Applying a straitjacket of past mediocrity to our minds and spirits.

And that is what is being booed: The salespeople of mediocrity who’ve made it their mission to speak lies from power. The lie that only tech can and will save us. The lie that a bit of statistics and colonial, mostly white, mostly western data is gonna create a brilliant future. The lie that we have no choice, no alternatives.

Sunday, March 3rd, 2024

On Nielsen’s ideas about generative UI for resolving accessibility

Per Axbom quite rightly tears Jakob Nielsen a new one.

I particularly like his suggestion that you re-read Nielsen’s argument but replace the word “accessibility” with “usability”:

Assessed this way, the usability movement has been a miserable failure.

Usability is too expensive for most companies to be able to afford everything that’s needed with the current, clumsy implementation.

Thursday, February 8th, 2024

How independent writers are turning to AI

I missed this article when it was first published, but I have to say this is some truly web-native art direction: bravo!

Thursday, January 25th, 2024

MastoFeed - Send your RSS Feeds to Mastodon

This looks like a handy RSS-to-Mastodon service.

Saturday, January 13th, 2024

Why Would I Buy This Useless, Evil Thing? - Aftermath

To be honest, you can skip the “review”, but I just had to link to this for the perfection of the opening three sentences, which sum up my feelings exactly:

I resent AI. Not AI itself–that’s just code, despite what tech guys with flashlights under their chins tell you. I resent the imposition, the idea that since LLMs exist, it follows that they should exist in every facet of my life.

Sunday, January 7th, 2024

Clippy returned (as an unnecessary “AI”) | hidde.blog

Personally, I want software to push me not towards reusing what exists, but away from that (and that’s harder). Whether I’m producing a plan or hefty biography, push me towards thinking critically about the work, rather than offering a quick way out.

Wednesday, January 3rd, 2024

LLMs and Programming in the first days of 2024

What strikes me about my personal experience with LLMs is that I have learned precisely when to use them and when their use would only slow me down. I have also learned that LLMs are a bit like Wikipedia and all the video courses scattered on YouTube: they help those with the will, ability, and discipline, but they are of marginal benefit to those who have fallen behind. I fear that at least initially, they will only benefit those who already have an advantage.

Tuesday, December 19th, 2023

Don’t Let the Robots Get You Down

If you do work that is hard, kind of a grind sometimes, and involves lots of little and small decisions, I think you’re pretty safe for a while. As a computer person who has spent a lot of this year messing with AI, and someone who has kept an eye on AI promises for decades, the things they’re saying about the future seem really far away. There’s tons of progress ahead, but it’s not a mistake to get a mortgage or plan a vacation.