
Dynamic Datalist: Autocomplete from an API :: Aaron Gustafson

Great minds think alike! I have a very similar HTML web component on the front page of The Session called input-autosuggest.
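
The core trick is small: fetch suggestions as the user types and rewrite the `<option>`s inside a `<datalist>`. Here's a minimal sketch of that pattern; the endpoint URL, the response shape (an array of strings), and the 200 ms debounce are my own illustrative assumptions, not taken from Aaron's post or from input-autosuggest.

```javascript
// Pure helper: turn an array of suggestion strings into <option> markup.
function optionsMarkup(suggestions) {
  return suggestions
    .map((s) => `<option value="${s}"></option>`)
    .join("");
}

// Wire it up (browser-only): on input, fetch suggestions after a short
// debounce and pour them into the <datalist> the input points at.
function attachAutocomplete(input, listId, endpoint) {
  let timer;
  input.addEventListener("input", () => {
    clearTimeout(timer);
    timer = setTimeout(async () => {
      const res = await fetch(`${endpoint}?q=${encodeURIComponent(input.value)}`);
      const suggestions = await res.json(); // assumed: an array of strings
      document.getElementById(listId).innerHTML = optionsMarkup(suggestions);
    }, 200); // debounce so every keystroke doesn't hit the API
  });
}
```

The nice part of the `<datalist>` approach is that the browser handles all the keyboard navigation and selection UI; the script only has to keep the options fresh.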

Measured AI | Note to Self

It’s creepy to tell people they’ll lose their jobs if they don’t use AI. It’s weird to assume AI critics hate progress and are resisting some inevitable future.

The AI Gold Rush Is Cover for a Class War

Under the guise of technological inevitability, companies are using the AI boom to rewrite the social contract — laying off employees, rehiring them at lower wages, intensifying workloads, and normalizing precarity. In short, these are political choices masquerading as technical necessities: AI is not the cause of the layoffs but their justification.

The Programmer Identity Crisis ❈ Simon Højberg ❈ Principal Frontend Engineer

I prefer my tools to help me with repetitive tasks (and there are many of those in programming), understanding codebases, and authoring correct programs. I take offense at products that are designed to think for me. To remove the agency of my own understanding of the software I produce, and to cut connections with my coworkers. Even if LLMs lived up to the hype, we would still stand to lose all of that and our craft.

Decentralizing quality || Matt Ström-Awn, designer-leader

I’ve personally struggled to implement a decentralized approach to quality in many of my teams. I believe in it from an academic standpoint, but in practice it works against the grain of every traditional management structure. Managers want ‘one neck to wring’ when things go wrong. Decentralized quality makes that impossible. So I’ve compromised, centralized, become the bottleneck I know slows things down. It’s easier to defend in meetings. But when I’ve managed to decentralize quality — most memorably when I was running a small agency and could write the org chart myself — I’ve been able to do some of the best work of my career.

AI doesn’t need to think. We do! - craigabbott.co.uk

A good overview of how large language models work:

The words flow together because they’ve been seen together many times. But that doesn’t mean they’re right. It just means they’re coherent.

What I’ve learned about writing AI apps so far | Seldo.com

LLMs are good at transforming text into less text

Laurie is really onto something with this:

This is the biggest and most fundamental thing about LLMs, and a great rule of thumb for what’s going to be an effective LLM application. Is what you’re doing taking a large amount of text and asking the LLM to convert it into a smaller amount of text? Then it’s probably going to be great at it. If you’re asking it to convert into a roughly equal amount of text it will be so-so. If you’re asking it to create more text than you gave it, forget about it.

Depending on how much of the hype around AI you’ve taken on board, the idea that they “take text and turn it into less text” might seem like a gigantic back-pedal away from previous claims of what AI can do. But taking text and turning it into less text is still an enormous field of endeavour, and a huge market. It’s still very exciting, all the more exciting because it’s got clear boundaries and isn’t hype-driven over-reaching, or dependent on LLMs overnight becoming way better than they currently are.

Why “AI” projects fail

“AI” is heralded (by those who claim it will replace workers as well as by those who argue for it as a mere tool) as a thing to drop into your workflows to create whatever gains were promised. It’s magic in the literal sense. You learn a few spells/prompts and your problems go poof. But that was already bullshit when we talked about introducing other digital tools into our workflows.

And we’ve been doing this for decades now: with every new technology we spend a lot of money to get a lot of bloody noses for way too little outcome. Because we keep not looking at the actual, real problems in front of us – problems that the people affected by them could probably tell you at least a significant part of the solution to. No, we want a magic tool to make the problem disappear. Which is a significantly different thing than solving it.

Does AI benefit the world? – Chelsea Troy

Our ethical struggle with generative models derives in part from the fact that we…sort of can’t have them ethically, right now, to be honest. We have known how to build models like this for a long time, but we did not have the necessary volume of parseable data available until recently—and even then, to get it, companies have to plunder the internet. Sitting around and waiting for consent from all the parties that wrote on the internet over the past thirty years probably didn’t even cross Sam Altman’s mind.

On the environmental front, fans of generative model technology insist that eventually we’ll possess sufficiently efficient compute power to train and run these models without the massive carbon footprint. That is not the case at the moment, and we don’t have a concrete timeline for it. Again, waiting around for a thing we don’t have yet doesn’t appeal to investors or executives.

Aboard Newsletter: Why So Bad, AI Ads?

The human desire to connect with others is very profound, and the desire of technology companies to interject themselves even more into that desire—either by communicating on behalf of humans, or by pretending to be human—works in the opposite direction. These technologies don’t seem to be encouraging connection as much as commoditizing it.

Pop Culture

Despite all of this hype, all of this media attention, all of this incredible investment, the supposed “innovations” don’t even seem capable of replacing the jobs that they’re meant to — not that I think they should, just that I’m tired of being told that this future is inevitable.

The reality is that generative AI isn’t good at replacing jobs, but commoditizing distinct acts of labor, and, in the process, the early creative jobs that help people build portfolios to advance in their industries.

One of the fundamental misunderstandings of the bosses replacing these workers with generative AI is that you are not just asking for a thing, but outsourcing the risk and responsibility.

Generative AI costs far too much, isn’t getting cheaper, uses too much power, and doesn’t do enough to justify its existence.

Ideas Aren’t Worth Anything - The Biblioracle Recommends

The fact that writing can be hard is one of the things that makes it meaningful. Removing this difficulty removes that meaning.

There is significant enthusiasm for this attitude inside the companies that produce and distribute media like books, movies, and music, for obvious reasons. Removing the expense of humans making art is a real savings to the bottom line.

But the idea of this being an example of democratizing creativity is absurd. Outsourcing is not democratizing. Ideas are not the most important part of creation, execution is.

How do we build the future with AI? – Chelsea Troy

This is the transcript of a fantastic talk called “The Tools We Still Need to Build with AI.”

Absorb every word!

The mainstreaming of ‘AI’ scepticism – Baldur Bjarnason

  1. Tech is dominated by “true believers” and those who tag along to make money.
  2. Politicians seem to be forever gullible to the promises of tech.
  3. Management loves promises of automation and profitable layoffs.

But it seems that the sentiment might be shifting, even among those predisposed to believe in “AI”, at least in part.

Because There’s No “AI” in “Failure”

My new favourite blog on Tumblr.

Rise of the Ghost Machines - The Millions

This thing that we’ve been doing collectively with our relentless blog posts and pokes and tweets and uploads and news story shares, all 30-odd years of fuck-all pointless human chatter, it’s their tuning fork. Like when a guitarist plays a chord on a guitar and compares the sound to a tuner, adjusts the pegs, plays the chord again; that’s what has happened here, that’s what all my words are, what all our words are, a thing to mimic, a mockingbird’s feast.

Every time you ask AI to create words, to generate an answer, it analyzes the words you input, compares those words to the trillions of relations and concepts it has already categorized, and then responds with words that match the most likely response. The chatbot is not thinking, but that doesn’t matter: in the moment, it feels like it’s responding to you. It feels like you’re not alone. But you are.

Fine-tuning Text Inputs

Garrett talks through some handy HTML attributes: spellcheck, autofocus, autocapitalize, autocomplete, and autocorrect:

While they feel like small details, when we set these attributes on inputs, we streamline things for visitors while also guiding the browser on when it should just get out of the way.
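
In practice these attributes tend to travel in bundles per field type. The presets below are a sketch of what that might look like; the field kinds and chosen values are my own illustrative assumptions, not taken from Garrett's article.

```javascript
// Per-field-type presets for the attributes discussed above.
// Values here are illustrative assumptions, not prescriptions.
const inputTuning = {
  name: {
    autocapitalize: "words", // capitalize each part of a name
    autocomplete: "name",    // let the browser offer autofill
    spellcheck: "false",     // names aren't dictionary words
    autocorrect: "off",      // Safari-specific: don't "fix" names
  },
  email: {
    autocapitalize: "none",  // email addresses are lowercase by convention
    autocomplete: "email",
    spellcheck: "false",
    autocorrect: "off",
  },
};

// Apply a preset to an <input> element (browser-only).
function tuneInput(input, kind) {
  const preset = inputTuning[kind] || {};
  for (const [attr, value] of Object.entries(preset)) {
    input.setAttribute(attr, value);
  }
  return input;
}
```

Setting these declaratively in the HTML works just as well; a helper like this is only useful when inputs are generated dynamically.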

The Danger Of Superhuman AI Is Not What You Think - NOEMA

Once you have reduced the concept of human intelligence to what the markets will pay for, then suddenly, all it takes to build an intelligent machine — even a superhuman one — is to make something that generates economically valuable outputs at a rate and average quality that exceeds your own economic output. Anything else is irrelevant.

By describing as superhuman a thing that is entirely insensible and unthinking, an object without desire or hope but relentlessly productive and adaptable to its assigned economically valuable tasks, we implicitly erase or devalue the concept of a “human” and all that a human can do and strive to become. Of course, attempts to erase and devalue the most humane parts of our existence are nothing new; AI is just a new excuse to do it.

Generative AI is for the idea guys

Generative AI is like the ultimate idea guy’s idea! Imagine… if all they needed to create a business, software or art was their great idea, and a computer. No need to engage (or pay) any of those annoying makers who keep talking about limitations, scope, standards, artistic integrity etc. etc.