dead framework theory | AI Focus
This is depressing.
- Be skeptical of PR hype
- Question the training data
- Evaluate the model
- Consider downstream harms
I love the small web, the clean web. I hate tech bloat.
And LLMs are the ultimate bloat.
So much truth in one story:
They built a machine to gentrify the English language.
They have built a machine that weaponizes mediocrity and sells it as perfection.
They are strip-mining your confidence to sell you back a synthetic version of it.
I suppose it’s not clear to me what a ‘good’ window into unreliable, systemically toxic systems accomplishes, or how it changes anything that matters for the better, or what that idea even means at all. I don’t understand how “ethical AI” isn’t just “clean coal” or “natural gas.” Such is the power of normalization: four generations were raised breathing low doses of aerosolized neurotoxins, and the alternative was called “unleaded” while the poison was just “regular gas”.
There’s a real technology here, somewhere. Stochastic pattern recognition seems like a powerful tool for solving some problems. But solving a problem starts at the problem, not working backwards from the tools.
Delivering total nonsense, with complete confidence.
My mind boggles at the thought of using a generative tool based on a large language model to do any kind of qualitative user research, so every single thing that Gregg says here makes complete sense to me.
Brian Eno on prototyping and fidelity.
A large language model is as neutral as an AK-47.
The best of the web is under continuous attack from the technology that powers your generative “AI” tools.
Naming things is hard, and sometimes harmful.
It’s almost as though humans prefer to reach for post-hoc justifications rather than act as rational actors.