CSS Intelligence: Speculating On The Future Of A Smarter Language — Smashing Magazine
This is a really thoughtful look at the evolution of CSS and the ever-present need to balance power with learnability.
Benjamín Labatut draws a line from the Vedas to George Boole and Claude Shannon onward to Geoffrey Hinton and Frank Herbert’s Butlerian Jihad.
In the coming years, as people armed with AI continue making the world faster, stranger, and more chaotic, we should do all we can to prevent these systems from giving more and more power to the few who can build them.
Once you have reduced the concept of human intelligence to what the markets will pay for, then suddenly, all it takes to build an intelligent machine — even a superhuman one — is to make something that generates economically valuable outputs at a rate and average quality that exceeds your own economic output. Anything else is irrelevant.
By describing as superhuman a thing that is entirely insensible and unthinking, an object without desire or hope but relentlessly productive and adaptable to its assigned economically valuable tasks, we implicitly erase or devalue the concept of a “human” and all that a human can do and strive to become. Of course, attempts to erase and devalue the most humane parts of our existence are nothing new; AI is just a new excuse to do it.
Abeba Birhane has written an excellent historical overview of the original Artificial Intelligence movement, including Weizenbaum’s about-face, and the current continuation of technological determinism.
Bosses have certain goals, but don’t want to be blamed for doing what’s necessary to achieve those goals; by hiring consultants, management can say that they were just following independent, expert advice. Even in its current rudimentary form, A.I. has become a way for a company to evade responsibility by saying that it’s just doing what “the algorithm” says, even though it was the company that commissioned the algorithm in the first place.
Once again, absolutely spot-on analysis from Ted Chiang.
I’m not very convinced by claims that A.I. poses a danger to humanity because it might develop goals of its own and prevent us from turning it off. However, I do think that A.I. is dangerous inasmuch as it increases the power of capitalism. The doomsday scenario is not a manufacturing A.I. transforming the entire planet into paper clips, as one famous thought experiment has imagined. It’s A.I.-supercharged corporations destroying the environment and the working class in their pursuit of shareholder value. Capitalism is the machine that will do whatever it takes to prevent us from turning it off, and the most successful weapon in its arsenal has been its campaign to prevent us from considering any alternatives.
I’ve mentioned before that I’m not a fan of initialisms and acronyms. They can be exclusionary.
It bothers me doubly when everyone is talking about AI.
First of all, the term is so vague as to be meaningless. Sometimes—though rarely—AI refers to general artificial intelligence. Sometimes AI refers to machine learning. Sometimes AI refers to large language models. Sometimes AI refers to a series of if/else statements. That’s quite a spectrum of meaning.
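To be clear, that last category is real. Here’s a deliberately silly sketch (every name and canned reply is invented purely for illustration) of the kind of keyword-matching if/else logic that sometimes gets marketed as “AI”:

```python
# A hypothetical "AI-powered" support bot: nothing but keyword checks.
# All of this is made up for illustration, not taken from any real product.
def support_bot(message: str) -> str:
    text = message.lower()
    if "refund" in text:
        return "I can help with that. What's your order number?"
    elif "password" in text:
        return "You can reset your password from your account settings."
    elif "human" in text:
        return "Connecting you to a member of our support team…"
    else:
        return "Sorry, I didn't catch that. Could you rephrase?"

print(support_bot("I forgot my password"))
# You can see why "a series of if/else statements" stretches the term.
```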
Secondly, there’s the assumption that everyone understands the abbreviation. I guess that’s generally a safe assumption, but sometimes AI could refer to something other than artificial intelligence.
In countries with plenty of pastoral agriculture, if someone works in AI, it usually means they’re going from farm to farm either extracting or injecting animal semen. AI stands for artificial insemination.
I think that abbreviation might work better for the kind of things currently described as using AI.
We were discussing this hot topic at work recently. Is AI coming for our jobs? The consensus was maybe, but only the parts of our jobs that we’re more than happy to have automated. Like summarising some findings. Or perhaps as a kind of lorem ipsum generator. Or for just getting the ball rolling with a design direction. As Terence puts it:
Midjourney is great for a first draft. If, like me, you struggle to give shape to your ideas then it is nothing short of magic. It gets you through the first 90% of the hard work. It’s then up to you to refine things.
That’s pretty much the conclusion we came to in our discussion at Clearleft. There’s no way that we’d use this technology to generate outputs for clients, but we certainly might use it to generate inputs. It’s like how we’d do a quick round of sketching to get a bunch of different ideas out into the open. Terence is spot on when he says:
Midjourney lets me quickly be wrong in an interesting direction.
To put it another way, using a large language model could be a way of artificially injecting some seeds of ideas. Artificial insemination.
So now when I hear people talk about using AI to create images or articles, I don’t get frustrated. Instead I think, “Using artificial insemination to create images or articles? Yes, that sounds about right.”
AI becomes a stand-in term for whatever technologies and techniques are new, shiny, and just beyond the grasp of our understanding. We use it to gesture at a future we cannot fully comprehend or currently realise. As soon as we do, it will no longer be “AI.”
In this piece published a year ago, Ted Chiang pours cold water on the idea of a bootstrapping singularity.
How much can you optimize for generality? To what extent can you simultaneously optimize a system for every possible situation, including situations never encountered before? Presumably, some improvement is possible, but the idea of an intelligence explosion implies that there is essentially no limit to the extent of optimization that can be achieved. This is a very strong claim. If someone is asserting that infinite optimization for generality is possible, I’d like to see some arguments besides citing examples of optimization for specialized tasks.
Whatever the merit of the scientific aspirations originally encompassed by the term “artificial intelligence,” it’s a phrase that now functions in the vernacular primarily to obfuscate, alienate, and glamorize.
Do “cloud” next!
Caleb Scharf:
Wait a minute. There is no real difference between the dataome—our externalized world of books and computers and machines and robots and cloud servers—and us. That means the dataome is a genuine alternative living system here on the planet. It’s dependent on us, but we’re dependent on it too. And for me that was nerve-wracking. You get to the point of looking at it and going, Wow, the alien world is here, and it’s right under our nose, and we’re interacting with it constantly.
I like this Long Now view of our dataome:
We are constantly exchanging information that enables us to build a library for survival on this planet. It’s proven an incredibly successful approach to survival. If I can remember what happened 1,000 years ago, that may inform me for success today.
Black Mirror meets Henrietta Lacks in this short story by Erik Hoel, who I had not heard of until today, when I came across his name here and also in a completely unrelated blog post by Peter Watts about the nature of dreams.
An excerpt from the book Rethinking Consciousness by Michael S. A. Graziano, which looks like an interesting companion piece to Peter Godfrey-Smith’s excellent Other Minds.
Also, can I just say how nice this reading experience is—the typography, the arresting image …I like it.
The televisual adaptation of Game of Thrones wrapped up a few weeks ago, so I hope I can safely share some spoiler-laden thoughts. That said, if you haven’t seen the final season, and you plan to, please read no further!
There has been much wailing and gnashing of teeth about the style of the final series or two. To many people, it felt weirdly …off. Zeynep’s superb article absolutely nails why the storytelling diverged from its previous style:
For Benioff and Weiss, trying to continue what Game of Thrones had set out to do, tell a compelling sociological story, would be like trying to eat melting ice cream with a fork. Hollywood mostly knows how to tell psychological, individualized stories. They do not have the right tools for sociological stories, nor do they even seem to understand the job.
Let’s leave aside the clumsiness of the execution for now and focus on the outcomes.
The story finishes with Bran as the “winner”, in that he now rules the seve— six kingdoms. I have to admit, I quite like the optics of replacing an iron throne with a wheelchair. Swords into ploughshares, and all that.
By this point, Bran is effectively a non-human character. He’s the Dr. Manhattan of the story. As the three-eyed raven, he has taken on the role of being an emotionless database of historical events. He is Big Data personified. Or, if you squint just right, he’s an Artificial Intelligence.
There’s another AI in the world of Game of Thrones. The commonly accepted reading of the Night King is that he represents climate change: an unstoppable force that’s going to dramatically impact human affairs, but everyone is too busy squabbling in their own politics to pay attention to it. I buy that. But there’s another interpretation. The Night King is rogue AI. He’s a paperclip maximiser.
Clearly, a world ruled by an Artificial Intelligence like that would be a nightmare scenario. But we’re also shown that a world ruled purely by human emotion would be just as bad. That would be the tyrannical reign of the mad queen Daenerys. Both extremes are undesirable.
So why is Bran any better? Well, technically, he isn’t ruling alone. He has a board of (very human) advisors. The emotionless logic of a pure AI is kept in check by a council of people. And the extremes of human nature are kept in check by the impartial AI. To put it another way, humanity is augmented by Artificial Intelligence: Man-computer symbiosis.
Whether it’s the game of chess or the game of thrones, a centaur is your best bet.
What would Wiener think of the current human use of human beings? He would be amazed by the power of computers and the internet. He would be happy that the early neural nets in which he played a role have spawned powerful deep-learning systems that exhibit the perceptual ability he demanded of them—although he might not be impressed that one of the most prominent examples of such computerized Gestalt is the ability to recognize photos of kittens on the World Wide Web.
Thorough (and grim) research from Chris.
A terrific six-part series of short articles looking at the people behind the history of Artificial Intelligence, from Babbage to Turing to JCR Licklider.
The history of AI is often told as the story of machines getting smarter over time. What’s lost is the human element in the narrative, how intelligent machines are designed, trained, and powered by human minds and bodies.
We hoped for a bicycle for the mind; we got a Lazy Boy recliner for the mind.
Nicky Case on how Douglas Engelbart’s vision for human-computer augmentation has taken a turn from creation to consumption.
When you create a Human+AI team, the hard part isn’t the “AI”. It isn’t even the “Human”.
It’s the “+”.
Spot-on take by Ted Chiang:
I used to find it odd that these hypothetical AIs were supposed to be smart enough to solve problems that no human could, yet they were incapable of doing something most every adult has done: taking a step back and asking whether their current course of action is really a good idea. Then I realized that we are already surrounded by machines that demonstrate a complete lack of insight, we just call them corporations.
Related: if you want to see the paperclip maximiser in action, just look at the humans destroying the planet by mining bitcoin.
Questions prompted by the Clearleft gathering in Norway to discuss AI.
I like Richard’s five reminders:
- Just because the technology feels magic, it doesn’t mean making it understandable requires magic.
- Designers are going to need to get familiar with new materials to make things make sense to people.
- We need to make sure people have an option to object when something isn’t right.
- We should not fall into the trap of assuming the way to make machine learning understandable should be purely individualistic.
- We also need to think about how we design regulators too.