
  • 1 Post
  • 804 Comments
Joined 3 years ago
Cake day: June 18th, 2023




  • There is no law anywhere else that governs Linux development in this regard. There is only a law in CA that requires this functionality (which would break any and all software infrastructure). Why would any maintainer of any Linux distribution, not actively bound by a legally untested law, even consider implementing it? This got a lot of headlines because it’s absurd and stupid.

    If maintainers wanted to comply, what the fuck would it actually entail? 99% of operating systems don’t have any specific human users to identify. The only reasonable approach is to ignore it. If data centers in CA for Azure, AWS, GCP, or any other provider want to comply with this (which is impossible), they might as well spend some of that tax-free revenue combating Meta’s suspected 2-billion-USD effort to get these online ID laws pushed through.






  • I saw somebody at work upload a firewall config XML and start querying whether stuff was blocked. I actually thought it was a pretty clever use of it.

    I would find it somewhere between worrisome and you-should-lose-your-job, depending on how important that firewall is. This might seem exaggerated, but suppose your colleague had shown that config to a child and then asked them yes/no questions, a game in which the child happily participated. I would consider that exactly as reasonable, and exactly as responsible, as asking an LLM. Imagine someone doing this for an important firewall config… and taking the child’s answers at face value. It would be fair to conclude that this person is grossly unqualified and showing a dangerous lack of judgment.

    And those are just the issues I would have with using a bullshit generator as a source of truth. If the firewall config could be considered sensitive information, uploading it to a third party would be grounds for dismissal for entirely separate reasons. If you actually need to answer “is X blocked?”, that question is answerable deterministically, as sketched below.
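    A few lines of deterministic parsing are more trustworthy than any chat transcript. A minimal sketch in Python, assuming a hypothetical XML schema with <rule> elements holding <dest>, <port>, and <action> children (real formats like pfSense’s config.xml differ, but the idea is the same):

        # Hypothetical firewall XML schema; first matching rule wins.
        import xml.etree.ElementTree as ET

        def is_blocked(config_path: str, dest_ip: str, port: int) -> bool:
            tree = ET.parse(config_path)
            for rule in tree.getroot().iter("rule"):
                if rule.findtext("dest") in (dest_ip, "any") \
                        and rule.findtext("port") in (str(port), "any"):
                    return rule.findtext("action") == "deny"
            return False  # default-allow assumed; adjust for default-deny setups

        print(is_blocked("firewall.xml", "10.0.0.5", 443))

    The point isn’t these ten lines; it’s that the answer is checkable, unlike whatever gets confabulated from the same file.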




  • I hear you. I’m very much the same, both in trying not to pay too much attention, for the same reasons, and in being in the trade, though perhaps not quite as specialised.

    Once the economic side of this reaches the conclusion we already know (it isn’t sustainable), I think we might start to see a more sensible approach to LLM usage.

    The current state is as if people were asking an LLM whether a mushroom they picked is safe to eat, and then serving it to the whole family. A more sensible approach would be to get a name suggestion from the LLM, then use that as an entry point for manual verification.

    The LLM user should always be the expert; i.e., don’t serve something potentially poisonous. Let it come with suggestions, by all means. But if you don’t know enough to verify the correctness of what it says, then you’ve already lost. Unfortunately, this is how most people use it now, followed by shock that “it lied”. The saner loop looks like the sketch below.
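    A minimal sketch of that workflow, in Python. Everything here is hypothetical: llm_suggest() stands in for an LLM call, and FIELD_GUIDE for the authoritative reference the expert checks against themselves:

        # The LLM only proposes a candidate; verification happens against
        # an authoritative reference, never against the LLM itself.
        FIELD_GUIDE = {
            "chanterelle": {"gills": "false gills", "spore_print": "pale yellow"},
            "jack-o'-lantern": {"gills": "true gills", "spore_print": "white"},
        }

        def llm_suggest(description: str) -> str:
            return "chanterelle"  # stand-in for a possibly-wrong LLM answer

        def identify(description: str, observed: dict) -> str | None:
            candidate = llm_suggest(description)    # entry point only
            reference = FIELD_GUIDE.get(candidate)  # authoritative lookup
            if reference == observed:
                return candidate                    # independently verified
            return None                             # can't verify: don't eat it

        print(identify("orange, funnel-shaped",
                       {"gills": "false gills", "spore_print": "pale yellow"}))

    The design point is that the LLM’s output never reaches the decision directly; it only narrows the search space.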



  • Indeed.

    Here is the article that led me to it: https://acko.net/blog/the-l-in-llm-stands-for-lying/

    When I listen to apocalyptic predictions about AI (transformer-based generative LLMs, to be specific), they’re all based on the assumption that it “adds value, but at a high energy cost”.

    They don’t consider the destruction of human knowledge, where bullshit generators are “informing” decisions and “curating” insights. Just as all steel made after the invention of nuclear weapons is contaminated and useless for certain applications, so I find books written after the rise of LLMs.

    If only it also didn’t come at the low cost of destroying the ability to reason (as numerous studies have shown). The silver lining is that it’s also absurdly energy-demanding, further pushing the climate past the point of no return. At the very least, we’re in for a hefty and long recession when the bubble pops. What’s not to like?


  • The issue has more to do with the burden of reviewing code versus the ease with which a poor contribution, not worth reviewing, can be made. The signal-to-noise ratio becomes so bad that maintainers are, in many cases out of necessity, rejecting contributions made with LLMs. Hiding LLM tell-tales, as the prompt in question here aims to do, compounds the unethical and arrogant assumption that the contributions would somehow become more useful, as if the commit-message structure, comments, or discussion (the things that suggest LLM involvement) were the problem, and not the low quality of the code changes (as is, by and large, the case).

    As you point out, that is a more general discussion, and not specific to Anthropic employees.

    Your suggested solution leaves me wanting to sigh, because that’s what many open source projects have needed to do: reject all external contributions. Modern software is extensively built on open source and the work done by millions of developers, for free. There is good will here, and hard work, carried out under a sense of “furthering humanity”, where you just hope you are able to contribute in some way. Spam wasn’t a problem before LLMs. The goal of spam is to pass filters in order to cause some kind of harm; that takes effort for humans, but is trivial for a bullshit generator. Which is even worse than my take, which was that these contributions were well intended, just delusional as to their usefulness. Though I’m sure the motivation to sabotage projects exists. I’m not sure how “active and deliberate sabotage” would paint a better picture of Anthropic employees, but it seems like you actually get why we might find it particularly repulsive?

    In any case, suppose we assume the best intentions, and that there can be value in contributions made by, or with the help of, LLMs. Lying about this in PRs is still both unethical and in contradiction with the altruistic mindset of open source development. Thinking your LLM-based contribution is special, as opposed to all the other slop, and thus doesn’t deserve to be put in some low-priority review queue, so you lie about it, and instruct your LLM to lie about it, etc., is exactly the kind of skill issue and arrogant delusion that pisses people off. And what a monumental disaster for humanity it is that what LLMs have managed to do is force many open source maintainers to reject external contributions: not just those “by AI”, but all external contributions, since it is too costly to find the valuable ones in a sea of slop.




  • I’m not too worried about this. It sucks for the time being, and who knows how long the economy will suffer when the US fully collapses. The silver lining is that the actual cost of producing the hardware doesn’t match its inflated valuation. The drivers of this won’t be able to sustain the hoarding. Don’t get me wrong, it’s bad. Anyone with a time machine would probably choose to give Peter and Sam a visit.