I’m eagerly awaiting more studies on AI psychosis. Make sure to participate if you get the chance.
okamiueru@lemmy.world to Linux@programming.dev • Debian Project Leader Addresses New Age Verification Laws • 3 days ago
There is no law anywhere else that governs Linux development in this respect. There is only a law in CA that requires this functionality (one which would break any and all software infrastructure). Why would any maintainer of any Linux distribution, not actively dependent on following an untested law (from a legal PoV), even consider implementing it? This got a lot of headlines because it’s absurd and stupid.
If maintainers wanted to comply, what the fuck would it actually entail? 99% of an operating system doesn’t have any specific human users to identify. The only reasonable approach is to ignore it. If data centers in CA for Azure, AWS, GCP, or any others want to comply with this (which is impossible), they can spend some of that tax-free revenue to combat Meta’s suspected 2 billion USD effort to get these online ID laws pushed through.
okamiueru@lemmy.world to Linux@programming.dev • Debian Project Leader Addresses New Age Verification Laws • 3 days ago
So, what about an operating system is restricted material? That’s what this law requires.
Edit: wow, you’re all over the place here. Are you paid (perhaps run?) by Meta?
okamiueru@lemmy.world to Linux@programming.dev • Parrot Linux Takes Stand Against Age Verification • 4 days ago
The weird thing about this is that this wouldn’t be against any law anywhere, except the state of California… So, why wouldn’t this be adequately solved by not giving a fuck?
If they didn’t use the trademarked name, they’d probably avoid it all. Surely there is a way to do this without forcing the trademark owner to issue cease-and-desists?
okamiueru@lemmy.world to Fuck AI@lemmy.world • "Cognitive surrender" leads AI users to abandon logical thinking, research finds • 5 days ago
> I saw somebody at work upload a firewall config XML and start querying if stuff was blocked. I actually thought it was a pretty clever use of it.
I would find that somewhere between worrisome and you-should-lose-your-job, depending on how important that firewall is. This might seem exaggerated, but imagine your colleague had shown that config to a child and then asked it yes/no questions, a game in which the child happily participated. I would consider that exactly as reasonable, and exactly as responsible, as asking an LLM. Imagine someone doing this for an important firewall config… and taking the child’s answers at face value. It should be fair to think that this person is grossly unqualified and showing a dangerous lack of judgment.
And that’s just the issue I would have with using a bullshit generator as a source of truth. If the firewall config could be considered sensitive information, uploading it to a third party would be grounds for dismissal for entirely separate reasons.
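To make the point concrete: questions like “is port X blocked?” can be answered deterministically by parsing the config yourself, with no third-party upload and no guessing involved. A minimal sketch, assuming a hypothetical XML schema (real firewall exports such as pfSense or firewalld look different):

```python
# Sketch: answer "is this port blocked?" by reading the config directly,
# instead of asking an LLM. The <firewall>/<rule> schema here is invented
# for illustration only.
import xml.etree.ElementTree as ET

CONFIG = """
<firewall>
  <rule action="block" proto="tcp" port="23"/>
  <rule action="allow" proto="tcp" port="443"/>
</firewall>
"""

def is_blocked(config_xml: str, proto: str, port: int) -> bool:
    """Return True if a matching rule blocks the given proto/port."""
    root = ET.fromstring(config_xml)
    for rule in root.findall("rule"):
        if rule.get("proto") == proto and int(rule.get("port")) == port:
            return rule.get("action") == "block"
    return False  # no matching rule: default-allow in this toy schema

print(is_blocked(CONFIG, "tcp", 23))   # True
print(is_blocked(CONFIG, "tcp", 443))  # False
```

The answer is exact because it comes from the rules themselves; at most, an LLM could suggest *which* rules to look at, which you would then verify this way.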
okamiueru@lemmy.world to Fuck AI@lemmy.world • Microsoft says Copilot is for entertainment purposes only, not serious use — firm pushing AI hard to consumers and businesses tells users not to rely on it for important advice • 5 days ago
I’m just so dumbfounded that this isn’t obvious to everyone who has 1. average intelligence, and 2. a five-minute explanation of how it works.
You should trust it exactly as much as a magic 8-ball. Alternatively, replace all source references of “according to << favorite packaged LLM >> …” with “according to my 10-year-old nephew, who is playing a game of never-say-you-don’t-know…”.
Which isn’t to say that LLMs can’t be useful. But if you trust any fact-based output from such a text generator that you can’t (or don’t) verify yourself, you are exactly as dumb and liable as if you said “but… but… the magic 8-ball said it would be fine!”.
okamiueru@lemmy.world to Fuck AI@lemmy.world • CEO of America’s largest public hospital system says he’s ready to replace radiologists with AI • 6 days ago
American, I take it?
okamiueru@lemmy.world to Fuck AI@lemmy.world • anthropic spams open source projects with AI slob • 6 days ago
I hear you. I’m very much the same, both in trying not to pay too much attention, for the same reasons, and in being in the same trade, though perhaps not all that specialised.
Once the economic aspect of this reaches the conclusion we already know (it isn’t sustainable), I think we might start to see a more sensible approach to LLM usage.
The current status is as if people are asking LLMs if a mushroom they picked is safe to eat, and then serving the whole family. A more sensible approach would be to get a name suggestion from the LLM, then use that as an entry point to manually verify it.
The LLM user should always be the expert. I.e., don’t serve something potentially poisonous. Let it come with suggestions, by all means. But if you don’t know enough to verify the correctness of what it says, then you’ve already lost. Unfortunately, this is how most people use it now, followed by being shocked that “it lied”.
okamiueru@lemmy.world to Europe@feddit.org • Iran allows Spanish ships to use the Strait of Hormuz for free • 6 days ago
I was pleasantly surprised to learn how late this invention was. The stereotypical pirate with their spyglass seems not all that historically accurate.
okamiueru@lemmy.world to Fuck AI@lemmy.world • anthropic spams open source projects with AI slob • 6 days ago
Indeed.
Here is the article that led me to it: https://acko.net/blog/the-l-in-llm-stands-for-lying/
When I listen to apocalyptic predictions as a result of AI (transformer-based generative LLMs, to be specific), they’re all based on the assumption that it “adds value, but at a high energy cost”.
They don’t consider the destruction of human knowledge, where bullshit generators are “informing” decisions and “curating” insights. Just as all steel made after the invention of nuclear weapons is useless for certain applications, so do I find books written after the rise of LLMs.
If only it also didn’t come at the low cost of destroying the ability to reason (as numerous studies have shown). The silver lining is that it’s also absurdly energy-demanding, further pushing the climate past the point of no return. At the very least, we’re in for a hefty and long recession when the bubble pops. What’s not to like?
okamiueru@lemmy.world to Fuck AI@lemmy.world • anthropic spams open source projects with AI slob • 6 days ago
The issue has more to do with the burden of reviewing code versus the ease with which a poor contribution, one that isn’t worth reviewing, can be made. The signal-to-noise ratio becomes so bad that maintainers are, in many cases, out of necessity, rejecting contributions made with LLMs. Hiding LLM tell-tales, as the prompt in question here aims to do, compounds the unethical and arrogant take that the contributions would somehow become more useful. As if commit message structure, comments, or other discussion were the problem (when they suggest it’s by LLMs), and not the low quality of the code changes (as is, by and large, the case).
As you point out, that is a more general discussion, and not specific to Anthropic employees.
Your suggested solution leaves me wanting to sigh. That’s what many open source projects have needed to do: reject all external contributions. Modern software is extensively based on open source and the work done by millions of developers, for free. There is a good will here, and hard work, carried out under a sense of “furthering humanity”, where you just hope that you are able to contribute in some way. Spam wasn’t a problem before LLMs. The goal of spam is to pass filters in order to cause some kind of harm; this takes effort for humans, but is trivial for a bullshit generator. That is even worse than my take, which was that these contributions were well intended, but just delusional as to their usefulness. Though I’m sure the motivation to sabotage projects exists, I’m not sure how “active and deliberate sabotage” would paint a better picture of Anthropic employees. But it seems like you actually get why we might find it particularly repulsive?
In any case, assume best intentions, and that there can be value in contributions made by, or with the help of, LLMs. Lying about this in PRs is then both unethical and in contradiction with the altruistic mindset of open source development. Thinking your LLM-based contribution is special, as opposed to all the other slop, and thus doesn’t deserve to be put in some low-priority review queue, so you lie about it, and instruct your LLM to lie about it, etc., is exactly the kind of skill issue and arrogant delusion that pisses people off. And what a monumental disaster for humanity it is that what LLMs have managed to do is force many open source maintainers to reject not just contributions “by AI”, but all external contributions, since it is too costly to find valuable contributions in a sea of slop.
okamiueru@lemmy.world to Fuck AI@lemmy.world • anthropic spams open source projects with AI slob • 7 days ago
I’ve read some of your comments. You don’t seem to understand the underlying issue. Have a read through some of these: https://gist.github.com/bagder/07f7581f6e3d78ef37dfbfc81fd1d1cd — maybe that’ll help.
okamiueru@lemmy.world to Fuck AI@lemmy.world • CEO of America’s largest public hospital system says he’s ready to replace radiologists with AI • 7 days ago
The people in question in the US couldn’t give a single shit about things like children dying. Cuba is in a crisis entirely manufactured by the US, which is at the same time removing embargoes on Russian oil. Mission accomplished, right, Krasnov?
I’m not too worried about this. It sucks for the time being, and who knows how long the economy will suffer when the US fully collapses. The silver lining is that the actual cost of producing the hardware doesn’t match its inflated valuation. The drivers of this won’t be able to sustain the hoarding. Don’t get me wrong, it’s bad. Anyone with a time machine would probably choose to give Peter and Sam a visit.
okamiueru@lemmy.world to pics@lemmy.world • Iranian soccer team carries backpacks to protest the strikes on an elementary school in Iran • 9 days ago
The US is much more involved in the former, and the same goes for Israel in the latter. They aren’t exactly taking turns.
okamiueru@lemmy.world to Fuck AI@lemmy.world • How anyone thinks they are productive with llm tools is baffling • 12 days ago
It also says a lot about their inability to identify bullshit.
Is that what the LLM told you? I would like to know. Genuinely curious. Did you make it up on the spot? Is it motivated by malice or stupidity? What’s your deal?
Per capita household income:
Ref: https://en.wikipedia.org/wiki/List_of_U.S._states_and_territories_by_income