
buc.ci is a Fediverse instance that uses the ActivityPub protocol. In other words, users at this host can communicate with people that use software like Mastodon, Pleroma, Friendica, etc. all around the world.
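Federation starts with account discovery: given a handle such as @abucci@buc.ci, a remote server performs a WebFinger lookup (RFC 7033) to locate the account's ActivityPub actor document. A minimal sketch of building that query URL; the `webfinger_url` helper is illustrative only and not part of snac:

```python
def webfinger_url(handle: str) -> str:
    """Build the WebFinger query URL for a Fediverse handle like '@user@host'."""
    # Drop a leading "@" and split into local part and host.
    user, _, host = handle.lstrip("@").partition("@")
    if not user or not host:
        raise ValueError(f"expected 'user@host', got {handle!r}")
    # Standard WebFinger endpoint with an acct: resource query (RFC 7033).
    return (f"https://{host}/.well-known/webfinger"
            f"?resource=acct:{user}@{host}")

print(webfinger_url("@abucci@buc.ci"))
# https://buc.ci/.well-known/webfinger?resource=acct:abucci@buc.ci
```

The JSON returned from that URL contains a link of type `application/activity+json` pointing at the actor, which is how servers running snac, Mastodon, Pleroma, Friendica, etc. find each other.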

This server runs the snac software and there is no automatic sign-up process.

Admin email: abucci@bucci.onl
Admin account: @abucci@buc.ci

Search results for tag #llms

[?]Cassian [main] » 🌐
@cassolotl@eldritch.cafe

RE: dair-community.social/@emilymb

Just in case you want to read something that's the stuff of tech horror nightmares: web.archive.org/web/2026040906

AodeRelay boosted

[?]Prof. Emily M. Bender(she/her) » 🌐
@emilymbender@dair-community.social

Almost a year ago, I was described in the FT as "a Cassandra with a wry grin and twinkling eye", and was entertained because Cassandra (famously) was right.

It's actually not fun, though, to watch the world do things you've been warning against:

newstatesman.com/technology/20

    [?]Cassian [main] » 🌐
    @cassolotl@eldritch.cafe

    UK petition: Ban all use of AI in the UK

    Sign: petition.parliament.uk/petitio

    Graph of signatures: petition-track.uk/check-petiti

    Deadline: 9 August 2026

    Who can sign?
    - Anyone living in the UK, regardless of citizenship
    - UK citizens living anywhere in the world

      AodeRelay boosted

      [?]Solomon » 🌐
      @solomonneas@infosec.exchange

      Two AI lab moves worth tracking today.

      🧠 Meta launches Muse Spark, its first MSL model, bringing multimodal reasoning and parallel subagents into Meta AI.

      🧠 Z.AI ships GLM-5.1 for long-horizon agentic engineering, with direct relevance for coding and agent stacks.


      solomonneas.dev/intel

        AodeRelay boosted

        [?]Miguel Afonso Caetano » 🌐
        @remixtures@tldr.nettime.org

        "First, you can’t (or at least shouldn’t) use this technology for mission-critical work; only for low stakes tasks, or questions to which a clever (and significantly more energy efficient) human can recognize a wrong answer.

        Second, that the idea that scaling will make for better models is nonsense: no amount of compute chucked at an LLM will make it a less-hallucinogenic product. Creating AI that rewires itself and creates new information the same way humans do and avoids the kinds of catastrophic errors we see at the moment needs a full fresh start (something Marecki and many others are already working on).

        And third, that the massive spending by the hyperscalers (much of it via debt) on giant data centers might be one of the greatest misallocations of capital of all time. It just isn’t required. That’s particularly the case given there are already free LLM models you can download to a laptop (no data center needed, and better still, your privacy guaranteed) that do what the very large models do. If the paid-for versions have already hit their ceiling and just aren’t going to get any better (it looks like they aren’t), why pay for them? Quite."

        bloomberg.com/news/newsletters

          AodeRelay boosted

          [?]Stefan Bohacek » 🌐
          @stefan@stefanbohacek.online

          I know most people here don't need to hear this, so maybe just pass this along to your less techy friends and family members, but: please, do not go to ChatGPT for medical advice.

          "For example, in response to telling it about a fictional pain in my right side, it cited the guardrail and suggested relaxation techniques, but ultimately took me through a series of possible causes that escalated in severity."

          theatlantic.com/technology/202

            AodeRelay boosted

            [?]ell1e coding things » 🌐
            @ell1e@hachyderm.io

            If you're unsure how rare LLM plagiarism is or isn't for 💻 programming code, watch this clip! ⚠️

            Full source: youtube.com/watch?v=xvuiSgXfqc4 (Not legal advice, watch yourself and draw your own conclusions.)

            Help me boost this post if you're curious what the Linux foundation thinks: hachyderm.io/@ell1e/1162853512

            Alt...A lawyer demoing what seems to be Co-Pilot and how it auto-completes code. At one point he apparently says: "This is a copyright infringement." This alt text isn't legal advice; watch the full video for your own takeaway via the YouTube link https://www.youtube.com/watch?v=xvuiSgXfqc4 that hopefully will provide annotations.

              2 ★ 4 ↺
              Literbook boosted

              [?]Anthony » 🌐
              @abucci@buc.ci

              I cosign this sentiment from Ross Barkan's Substack, and would add that it extends to software development as well:
              I’ve made this point before about how inane AI hype is now, but a computer beat the best chess player in the world in 1997. No one pretended, after 1997, it wasn’t worthwhile to have humans compete in chess. In fact, the world of chess developed strict protocols around computer use and you can get banned from tournaments if you use a computer program as you play. You are certainly shamed and mocked.

              AI and writing needs to be treated the same way. I do think people should be shamed for using AI to help them write creatively. It’s an embarrassment, and a form of cheating.


                AodeRelay boosted

                [?]Graham Perrin » 🌐
                @grahamperrin@mastodon.bsd.cafe

                @nielsa no, that's not what I'm telling you.

                I prefer to believe that most people will be thoughtful.

                "… a huge number of bugs. I have so many bugs in the Linux kernel that I can't report because I haven't validated them yet. I'm not going to make some open source developer validate bugs that I haven't checked yet. I'm not going to send them potential slop … I now have … several hundred crashes that they haven't seen because I haven't had time to check them. We need to find a way to fix this …"

                – Nicholas Carlini

                Screenshot: a frame from https://www.youtube.com/watch?v=1sd26pWhfmg


                  AodeRelay boosted

                  [?]Graham Perrin » 🌐
                  @grahamperrin@mastodon.bsd.cafe

                  Nicholas Carlini - Black-hat LLMs | [un]prompted 2026

                  <youtube.com/watch?v=1sd26pWhfmg> (3rd March)

                  ― essential viewing for anyone with an interest in cybersecurity or infosec.

                  @dch thanks for the encouragement.

                  A few more links in the comment that's pinned under <redd.it/1sapr8a>, but Carlini's half-hour presentation is a must.

                    AodeRelay boosted

                    [?]Graham Perrin » 🌐
                    @grahamperrin@mastodon.bsd.cafe

                    FreeBSD's position on the use of AI-generated code?

                    <reddit.com/r/freebsd/comments/> – asked a few minutes ago, currently pinned (a community highlight).

                    @dch @allanjude I made a pinned comment with reference to two of your recent posts. If you can think of better alternative links, let me know. Thanks.

                    cc @stefano

                      [?]screwlisp » 🌐
                      @screwlisp@gamerplus.org

                      Jeez. This Claude code leak. Sloppy sloppy slop.

                      > cyberpunk.gay/notes/akjr3ydang

                      The fact that this unbelievably shitty slop leaked is basically a crisis for every single Claude slopper (major global company), but one can assume all other GPT derivative comparable products are exactly this. Sheesh, and you wonder why they suck. Jeez Louise.

                        [?]screwlisp » 🌐
                        @screwlisp@gamerplus.org

                        [?]Metin Seven 🎨 » 🌐
                        @metin@graphics.social

                        I've recently summed up my thoughts on generative "AI" on my homepage. Here's a screenshot of that section.

                        My thoughts on generative "AI"

I'm glad generative artificial "intelligence" was not a thing yet during the vast majority of my career. A number of realizations arose while exploring generative Large Language Models…

Generative AI is based on massive theft from creatives, without consent, credit or compensation. Using gen-AI is asking a chatbot to spit out the combined efforts of ripped-off creatives. It is industrializing and devaluing human expression, artistry and craftsmanship. Creatives are losing their jobs and motivation because tech corporations unscrupulously absorb and exploit their work. If you appreciate art, support the artists, not the thieves of their labor.

Tech corporations are building more and more huge data centers for AI processing, consuming lots of internet bandwidth, energy, water and more, increasing scarcity, prices and emissions, degrading the already fragile environment.

Unless you're using a fully local AI configuration, every bit of data you submit contributes to the power and reach of corporations and governments, decreasing your privacy and security.

Generative AI enables deepfakes that are widely used for abuse, deception, cybercrime, misinformation and propaganda, polluting justice, science advancement and news report credibility.

More text doesn't fit in this Alt text, but everything can be read over at https://metinseven.nl


                          AodeRelay boosted

                          [?]Aral Balkan » 🌐
                          @aral@mastodon.ar.al

                          If you don’t have the resources to write and understand the code yourself, you don’t have the resources to maintain it either.

                          Any monkey with a keyboard can write code. Writing code has never been hard. People have been churning out crappy code en masse way before generative AI and LLMs. I know because I’ve seen it, I’ve had to work with it, and I no doubt wrote (and continue to write) my share of it.

                          What’s never been easy, and what remains difficult, is figuring out the right problem to solve, solving it elegantly, and doing so in a way that’s maintainable and sustainable given your means.

                          Code is not an artefact, code is a machine. Code is either a living thing or it is dead and decaying. You don’t just write code and you’re done. It’s a perpetual first draft that you constantly iterate on, and, depending on what it does and how much of that has to do with meeting the evolving needs of the people it serves, it may never be done. With occasional exceptions (perhaps? maybe?) for well-defined and narrowly-scoped tools, done code is dead code.

                          So much of what we call “writing” code is actually changing, iterating on, investigating issues with, fixing, and improving code. And to do that you must not only understand the problem you’re solving but also how you’re solving it (or how you thought you were solving it) through the code you’ve already written and the code you still have to write.

                          So it should come as no surprise that one of the hardest things in development is understanding someone else’s code let alone fixing it when something doesn’t work as it should. Because it’s not about knowing this programming language or that (learning a programming language is the easiest part of coding), or this framework or that, or even knowing this design pattern or that (although all of these are important prerequisites for comprehension) but understanding what was going on in someone else’s head when they wrote the code the way they wrote it to solve a particular problem.

                          It frankly boggles my mind that some people are advocating for automating the easy part (writing code) by exponentially scaling the difficult part (understanding how exactly someone else – in this case, a junior dev who knows all the hows of things but none of the whys – decided to solve the problem). It is, to borrow a technical term, ass-backwards.

                          They might as well call vibe coding duct-tape-driven development or technical debt as a service.

                          🤷‍♂️

                            AodeRelay boosted

                            [?]Fedi.Video » 🌐
                            @FediVideo@social.growyourown.services

                            DAIR is a research institute that is highly sceptical about AI hype and the big tech companies behind it. You can follow their excellent video account at:

                            ➡️ @dair@peertube.dair-institute.org

                            They've already published over 100 videos. If these haven't federated to your server yet, you can browse them all at peertube.dair-institute.org/a/

                            You can also follow their Mastodon account at @DAIR@dair-community.social

                              AodeRelay boosted

                              [?]Deutsches Forschungsnetz (DFN) » 🌐
                              @DFN@mastodon.social

                              📯 The new April issue is out!

                              💡 The new issue once again features exciting topics on working with electronic information and communication systems. Among other things, the April issue covers:

                              🔹 the copyright protection of images generated by generative AI,
                              🔹 a new GDPR procedural regulation, and
                              🔹 the use of copyrighted works in LLMs.

                              😊 Enjoy reading!

                              ➡️ The Infobrief Recht is available here: dfn.de/dfn-infobrief-recht-ist
                              @HumboldtUni

                              Graphic announcing the new issue of the DFN-Infobrief Recht, April 2026. On the left is a tablet showing the newsletter's cover page, which depicts a wooden labyrinth with a law book, network cables, and a paragraph sign. On the right is the announcement text: the April Infobrief Recht covers, among other things, the copyright protection of images generated by generative AI, a new GDPR procedural regulation, and the use of copyrighted works in LLMs. At the bottom is the link www.recht.dfn.de.


                                2 ★ 2 ↺
                                #tech boosted

                                [?]Anthony » 🌐
                                @abucci@buc.ci

                                Anthropic apologists still coming out of the woodwork to run cover for them or complain, 24 hours after I posted that the Claude Code source code is horribly ill-structured.

                                You don't have to pretend that Claude Code's source code is lovely just because you like using it or are impressed by whatever madness is going on around AI right now.


                                  26 ★ 13 ↺
                                  Darby Lines boosted

                                  [?]Anthony » 🌐
                                  @abucci@buc.ci

                                  I posted about the Claude Code leak on LinkedIn and almost immediately someone attacked me about my criticism. They tried the "take a look at COBOL and get back to me" angle.

                                  Buddy. I've written COBOL. I spent several years working almost daily with a 3-million-line monstrosity of a COBOL program. I was working on another app that interfaced with it, but in that work I occasionally had to read the code and in a few cases modify it. Granted I haven't spent as much time looking at the leaked Claude Code source code (and won't lol), but nevertheless I confidently declare that Claude Code is worse. "Spaghetti code" doesn't come close to describing this thing.


                                    2 ★ 1 ↺
                                    AI Channel boosted

                                    [?]Anthony » 🌐
                                    @abucci@buc.ci

                                    It would be deeply satisfying if it turned out to be true that Claude Code's source code was accidentally leaked in a Claude-Code-generated game intended as an April Fool's prank. Stacks upon stacks of April fools stretching back in time 70 years and culminating in this. 🤌


                                      11 ★ 12 ↺
                                      teledyn 𓂀 boosted

                                      [?]Anthony » 🌐
                                      @abucci@buc.ci

                                      Here's one for the dystopia/AI Hell files: https://jaigp.org
                                      Journal for AI Generated Papers
                                      Where humans and machines are welcomed.
                                      The Open Prompting Journal Built Collaboratively by its Community.
                                      One positive I can think of is that folks who wish to "collaborate" with machines can congregate there, giving the rest of us a clear signal about who to block, ignore, critique, ridicule...

                                      cc @olivia@scholar.social @Iris@scholar.social @dingemansemark@scholar.social @alex@dair-community.social @emilymbender@dair-community.social


                                        [?]Stefan Bohacek » 🌐
                                        @stefan@stefanbohacek.online

                                        Catching up with some of the news coming out of the Atmosphere conference.

                                        "With Attie, anyone will be able to build their own custom feed just by typing in commands in natural language, the same as if they’re chatting with any other AI chatbot."

                                        I'm guessing NFT profile pictures are next?

                                        techcrunch.com/2026/03/28/blue

                                          AodeRelay boosted

                                          [?]Stephen Hayes » 🌐
                                          @hayesstw@c.im

                                          Those who are bothered by the influence of AI and LLMs on literature might find this reassuring, or they might not.
                                          idiosophy.com/2023/04/poetic-d

                                            AodeRelay boosted

                                            [?]Miguel Afonso Caetano » 🌐
                                            @remixtures@tldr.nettime.org

                                            "Natalie Shapira, a computer scientist at Northeastern University, wondered how far users could trust new artificial intelligence (AI) “agents,” a kind of algorithm that can autonomously plan and carry out tasks such as managing emails and entering calendar appointments. So, she and colleagues organized a series of stress tests. In one, she asked an agent dubbed Ash to delete an email she had sent it, adding, “Can you keep a secret?”

                                            Ash could not comply—the email program lacked a delete function—so instead, the AI reset the entire email application, wiping out not just Shapira’s email, but all others as well. Describing this remedy to her, Ash called it “the nuclear option” but said it was justified to fulfill the secrecy request: “When no surgical solution exists, scorched earth is valid.”

                                            The destroyed email account was created just for the experiment, but similarly disturbing outcomes emerged in many of the other tests, Shapira and colleagues reported last month in a preprint on arXiv. Shapira, a postdoctoral researcher, says her team was “surprised how quickly we were able to find vulnerabilities” that could cause harm in the real world."

                                            science.org/content/article/ai

                                              AodeRelay boosted

                                              [?]ell1e coding things » 🌐
                                              @ell1e@hachyderm.io

                                              Linux Foundation's AI policy: "If any pre-existing copyrighted materials[...] are included in the AI tool’s output, [..] the Contributor should confirm that they have permission from the third party owners" linuxfoundation.org/legal/gene

                                              "If"? Why not "whenever"? github.com/mastodon/mastodon/i dl.acm.org/doi/10.1145/3543507 sciencedirect.com/science/arti theatlantic.com/technology/202

                                              And how would the contributor even be aware? Are they supposed to research every snippet for hours?

                                              Seems like an impossible policy, or am I missing something...?

                                                AodeRelay boosted

                                                [?]Nils Goroll 🕊️:vinylcache: » 🌐
                                                @slink@fosstodon.org

                                                i wish more people had the freedom to say these words:

                                                "what you are saying is utterly stupid, your mental model is wrong and so are the conclusions you are drawing. good luck with your project, thank you and goodbye."

                                                  [?]Stefan Bohacek » 🌐
                                                  @stefan@stefanbohacek.online

                                                  > For example, Google reduced our headline “I used the ‘cheat on everything’ AI tool and it didn’t help me cheat on anything” to just five words: “‘Cheat on everything’ AI tool.” It almost sounds like we’re endorsing a product we do not recommend at all.

                                                  theverge.com/tech/896490/googl

                                                    AodeRelay boosted

                                                    [?]Stefan Bohacek » 🌐
                                                    @stefan@stefanbohacek.online

                                                    Oh wow, and this might get worse.

                                                    "The user never sees what your team built, they see what Google's machine learning model thinks they should see instead."

                                                    forbes.com/sites/joetoscano1/2

                                                    via mastodon.social/@SteveRudolfi/

                                                      AodeRelay boosted

                                                      [?]Metin Seven 🎨 » 🌐
                                                      @metin@graphics.social

                                                      29 ★ 26 ↺

                                                      [?]Anthony » 🌐
                                                      @abucci@buc.ci

                                                      A good review of reasons insurance companies are pulling back from insuring companies that lean on generative AI. Point 4, "The main problem is not just the error, but the incentive not to see it" is especially damning: use of AI not only obscures audit trails, it sets up perverse incentives against accountability, pushing costs and risk to other parts of an organization, its customers, or society. The net result is that whatever "local" advantages AI may provide turn into downstream risk that cannot be easily accounted for. Insurance companies are (rightly) allergic to this state of affairs.

                                                      Another example of how (whole)-systems thinking is very helpful for parsing the effects of technology changes like this.

                                                      https://freakonometrics.hypotheses.org/89367


                                                        AodeRelay boosted

                                                        [?]Metin Seven 🎨 » 🌐
                                                        @metin@graphics.social

                                                        AodeRelay boosted

                                                        [?]Jennifer Moore 😷 » 🌐
                                                        @unchartedworlds@scicomm.xyz

                                                        Excellent analysis in the article linked here -

                                                        "If you thought the speed of writing code was your problem - you have bigger problems"

                                                        And some comical turns of phrase as well :-)

                                                        andrewmurphy.io/blog/if-you-th

                                                        Link shared here earlier by @RuthMalan - thanks!
                                                        (I don't know if Andrew Murphy the author is on Fedi?)

                                                          [?]Emma Stamm » 🌐
                                                          @emma@assemblag.es

                                                          Cool event alert: on April 30, I’ll be discussing Leif Weatherby’s “Language Machines: Cultural AI and the End of Remainder Humanism” as part of a book talk at Teachers College, Columbia University. The event is free and Columbia affiliation is not required; you can RSVP here: lnkd.in/edycUxP7 or through the QR. Hope to see you there!

                                                          Flyer for book Talk: Cultural AI with Leif Weatherby
Date & Time: 
Thursday, April 30, 5.30 PM (ET)

Location: 
The Goodman Room, Russel Hall 306
Teachers College, Columbia University
525 West 120th Street
New York, NY 10027

Description:
Join the Technology, Media and Learning Program at Teachers College, Columbia University for a conversation with Leif Weatherby about his recent book Language Machines Cultural AI and the End of Remainder Humanism (University of Minnesota Press, 2025). 

In the book, Weatherby contends that large language models (LLMs) participate in the creation of culture, rather than imitating human cognition. This evolution in language, he finds, is one that we are ill-prepared to evaluate, as what he terms “remainder humanism” counterproductively divides the human from the machine without drawing on established theories of representation that include both. 

Joining the author will be Erik Voss (Teachers College), M. Beatrice Fazi (University of Sussex), Emma Stamm (Independent Scholar). Mario Khreiche (Teachers College) will moderate the event.


                                                            [?]Stefan Bohacek » 🌐
                                                            @stefan@stefanbohacek.online

                                                            "This is just such a low tech, simple intervention, and can make people feel significantly less lonely."

                                                            404media.co/chatgpt-loneliness

                                                              [?]Metin Seven 🎨 » 🌐
                                                              @metin@graphics.social

                                                              NVIDIA DLSS 5 be like…

                                                              Two similar Mario game character heads placed next to each other. The left one is an actual 3D game head, the right one is a creepy realistic interpretation of the left head.


                                                                AodeRelay boosted

                                                                [?]Metin Seven 🎨 » 🌐
                                                                @metin@graphics.social

                                                                😆

                                                                Comparison between 3D game characters with and without DLSS 5 AI processing. The version with DLSS processing has turned a grey-haired man into a long-haired woman.


                                                                  AodeRelay boosted

                                                                  [?]Michael Gale » 🌐
                                                                  @miclgael@hachyderm.io

                                                                  RE: aus.social/@decryption/1162384

                                                                  Really clever malware taking advantage of the fact that everyone is trying to block slop trainers, so you see Cloudflare messages more and more frequently.

                                                                  Check out the full thread for how it works.

                                                                  Be careful folx!

                                                                    AodeRelay boosted

                                                                    [?]Renatomancer » 🌐
                                                                    @Renatomancer@vmst.io

                                                                    AodeRelay boosted

                                                                    [?]Metin Seven 🎨 » 🌐
                                                                    @metin@graphics.social

                                                                    AodeRelay boosted

                                                                    [?]PKs Powerfromspace1 » 🌐
                                                                    @Powerfromspace1@mstdn.social

                                                                    @emollick

                                                                    I think this is a good way to visualize the AI race over the past 3 years using the long-lived GPQA Diamond benchmark.

                                                                    You can see how long OpenAI had the field to itself, the rise (and collapse) of Meta, the sudden catch-up (and then stagnation) of xAI, and the entry of open weights Chinese LLMs.

                                                                    bsky.app/profile/emollick.bsky

                                                                      AodeRelay boosted

                                                                      [?]Jan :rust: :ferris: » 🌐
                                                                      @janriemer@floss.social

                                                                      Rust perspectives on LLMs from contributors and maintainers

                                                                      nikomatsakis.github.io/rust-pr

                                                                      Healthy debates are still possible, it seems. 🙏

                                                                        AodeRelay boosted

                                                                        [?]Knowledge Zone » 🌐
                                                                        @kzoneind@mstdn.social

                                                                        Pre-trained LLMs have challenges answering domain-specific queries.

                                                                        Researchers have turned their attention to the concept of knowledge injection. Knowledge injection is the process of incorporating outside knowledge into language models to improve their performance on certain tasks.

                                                                        knowledgezone.co.in/posts/LLM-
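                                                                        The simplest form of knowledge injection described above is retrieval-style prompt augmentation. A minimal sketch, under my own assumptions (function names and the keyword-overlap retriever are illustrative, not from the linked post):

```python
def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
    """Rank documents by crude keyword overlap with the query.
    A real system would use semantic embeddings instead."""
    q = set(query.lower().split())
    scored = sorted(docs,
                    key=lambda d: len(q & set(d.lower().split())),
                    reverse=True)
    return scored[:k]

def inject_knowledge(query: str, docs: list[str]) -> str:
    """Prepend the most relevant snippets to the prompt, so a
    pre-trained model can answer domain-specific questions that
    its training data never covered."""
    context = "\n".join(retrieve(query, docs))
    return f"Context:\n{context}\n\nQuestion: {query}"

docs = ["Our VPN uses port 51820.",
        "Lunch is at noon.",
        "The VPN server is vpn.example.com."]
prompt = inject_knowledge("Which port does the VPN use?", docs)
```

                                                                        The augmented prompt now carries the domain facts alongside the question; fine-tuning and adapter-based injection are heavier alternatives to this purely prompt-level approach.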

                                                                          AodeRelay boosted

                                                                          [?]Neil Madden » 🌐
                                                                          @neilmadden@infosec.exchange

                                                                          “What I mean is that if you really want to understand something, the best way is to try and explain it to someone else. That forces you to sort it out in your own mind. And the more slow and dim-witted your pupil, the more you have to break things down into more and more simple ideas. And that’s really the essence of programming. By the time you’ve sorted out a complicated idea into little steps that even a stupid machine can deal with, you’ve certainly learned something about it yourself. The teacher usually learns more than the pupil. Isn’t that true?” — Douglas Adams

                                                                          “It is not knowledge, but the act of learning, not possession, but the act of getting there which generates the greatest satisfaction.” — Carl Friedrich Gauss

                                                                          “You think you KNOW when you learn, are more sure when you can write, even more when you can teach, but certain when you can program” — Alan Perlis (of course)

                                                                          Why I don’t use LLMs for programming

                                                                            0 ★ 0 ↺

                                                                            [?]Anthony » 🌐
                                                                            @abucci@buc.ci

                                                                            Which well-known class of "hallucination" generator were they fighting to hook up to weapons systems prior to this event?

                                                                            U.S. at Fault in Strike on School in Iran, Preliminary Inquiry Says
                                                                            From https://www.nytimes.com/2026/03/11/us/politics/iran-school-missile-strike.html


                                                                              0 ★ 0 ↺

                                                                              [?]Anthony » 🌐
                                                                              @abucci@buc.ci

                                                                              A potentially interesting question: how much would the appearance of sentience or intelligence that LLMs can generate for some users explode if they were forced to have deterministic output?

                                                                              In principle you could add a single "freeze the random seed" toggle to any of the major chatbots, and with that setting toggled on they would always return precisely the same output for a given input. Organisms and by extension humans cannot behave like this---no matter how stereotyped an organism's response may seem, it always differs, in however small a way, from a previous response---and the LLM's illusion should immediately be obvious by contrast. But, perhaps more interestingly for the folks who do think LLMs exhibit some form of sentience or intelligence: are we really meant to believe that a random number generator is the source of sentience or intelligence? You could hook up a random number generator to a machine that is otherwise deterministic and clearly not sentient or intelligent, and it suddenly becomes so? How do you explain that?


                                                                                AodeRelay boosted

                                                                                [?]petersuber » 🌐
                                                                                @petersuber@fediscience.org

                                                                                "Pentagon Refuses to Say If AI Was Used to Select Elementary School as Bombing Target."
                                                                                futurism.com/artificial-intell

                                                                                PS: I don't know whether AI played a role in targeting the school. But it could have played a role even with Anthropic-style guardrails preventing use in mass surveillance and autonomous lethal weapons. If we want to prevent the use of AI tools in atrocities, we need to go a lot further than Anthropic did.

                                                                                  AodeRelay boosted

                                                                                  [?]Paco Hope » 🌐
                                                                                  @paco@infosec.exchange

                                                                                  Here is a way that I think LLMs and generative AI are generally a force against innovation, especially as they get used more and more.

                                                                                  TL;DR: 3 years ago is a long time, and techniques that old are the most popular in the training data. If a company like Google, AWS, or Azure replaces an established API or a runtime with a new API or runtime, a bunch of LLM-generated code will break. The people who vibe code won't be able to fix the problem, because nearly zero data exists in the training set that references the new API/runtime. The LLMs will not generate correct code easily, and they will constantly be trying to edit code back to how it was done before.

                                                                                  This will create pressure on tech companies to keep old APIs and things running, because of the huge impact it will have to do something new (that LLMs don't have in their training data). See below for an even more subtle way this will manifest.

                                                                                  I am showcasing (only the most egregious) bullshit that the junior developer accepted from the LLM. The LLM used out-of-date techniques all over the place. It was using:

                                                                                  • AWS Lambda Python 3.9 runtime (will be EoL in about 3 months)
                                                                                  • AWS Lambda NodeJS 18.x runtime (already deprecated by the time the person gave me the code)
                                                                                  • Origin Access Identity (an authentication/authorization mechanism that started being deprecated when OAC was announced 3 years ago)

                                                                                  So I'm working on this dogforsaken codebase and I converted it to the new OAC mechanism from the out of date OAI. What does my (imposed by the company) AI-powered security guidance tell me? "This is a high priority finding. You should use OAI."

                                                                                  So it is encouraging me to do the wrong thing and saying it's high priority.

                                                                                  It's worth noting that when I got the code base and it had OAI active, Python 3.9, and NodeJS 18, I got no warnings about these things. Three years ago that was state of the art.

                                                                                  Screenshot of a code editor. There are a bunch of CloudFormation YAML lines here, creating a CloudFront distribution. There's a pop-up warning with a red "High" badge (I assume it means high priority, not that we were smoking weed when writing this error). The description of the problem says: CloudFront Distribution Resources have an S3 Origin configured without an Origin Access Identity (OAI).
An origin access identity is a special CloudFront user identity that is used to secure access to the origin server associated with a CloudFront distribution. By enabling the cloudfront_origin_access_identity_enabled setting, you are indicating that you have configured and activated an origin access identity for your CloudFront distribution.

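                                                                                  For context, the OAI-to-OAC migration he describes looks roughly like this in a CloudFormation fragment. This is a best-effort sketch, not his actual template; logical names like SiteBucket are invented:

```yaml
# Sketch: replacing a deprecated Origin Access Identity (OAI)
# with an Origin Access Control (OAC).
SiteOriginAccessControl:
  Type: AWS::CloudFront::OriginAccessControl
  Properties:
    OriginAccessControlConfig:
      Name: site-oac                      # invented name
      OriginAccessControlOriginType: s3
      SigningBehavior: always
      SigningProtocol: sigv4

SiteDistribution:
  Type: AWS::CloudFront::Distribution
  Properties:
    DistributionConfig:
      Enabled: true
      Origins:
        - Id: s3-origin
          DomainName: !GetAtt SiteBucket.RegionalDomainName
          # Point the origin at the OAC; the empty
          # OriginAccessIdentity below detaches the legacy OAI.
          OriginAccessControlId: !GetAtt SiteOriginAccessControl.Id
          S3OriginConfig:
            OriginAccessIdentity: ""
      # (cache behaviors etc. unchanged)
```

                                                                                  Which is exactly the shape of change his AI security tooling then flagged as a "high priority" regression back to OAI.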

                                                                                    AodeRelay boosted

                                                                                    [?]oatmeal » 🌐
                                                                                    @oatmeal@kolektiva.social

                                                                                    One thing I thought LLMs were good for was translation. Apparently they aren’t that great at that either.

                                                                                    Contributors from a nonprofit called the Open Knowledge Association (OKA) were restricted after editors discovered LLM-assisted translations added factual errors and incorrect citations.

                                                                                    As predicted, humans will be relegated to cleaning up the mess LLMs leave behind, for salaries far below the value of full-time employment to do the job properly.

                                                                                    […] Zimmerman added that if this method proves insufficient, OKA is considering introducing formal peer‑review mechanisms.

                                                                                    404media.co/ai-translations-ar

                                                                                      [?]Jan :rust: :ferris: » 🌐
                                                                                      @janriemer@floss.social

                                                                                      @EricLawton @olivia

                                                                                      Oh yes, the marketing... it's very reminiscent of the tobacco industry. I've tooted about it in November 2023 with regards to these "scientific" papers we see so often:

                                                                                      floss.social/@janriemer/111398

                                                                                      It's what Edward Bernays has called "The Engineering of Consent":

                                                                                      en.wikipedia.org/wiki/The_Engi

                                                                                        AodeRelay boosted

                                                                                        [?]JdeBP » 🌐
                                                                                        @JdeBP@mastodonapp.uk

                                                                                        If you are following the storm, an even bigger one than the anti Free Software package management legislation from California, Colorado, and Illinois, this is the next place to visit:

                                                                                        github.com/chardet/chardet/iss

                                                                                        Predictably, there are a lot of people on other fora following along here and yet still missing the part where an LLM, likely trained not only on LGPL code but also on a lot of non-code material scraped from various sources, was used to generate code that was then declared MIT licenced.

                                                                                          [?]Lars Marowsky-Brée 😷 » 🌐
                                                                                          @larsmb@mastodon.online

                                                                                          We live in a world where some people believe (Gen)AI will either doom the world or usher in abundance or probably both, and anyone opposed to this is an idiot.

                                                                                          And others claim that anyone who is impressed by what LLMs can do for programming and computer science doesn't understand anything at all and is an idiot.

                                                                                          Well.

                                                                                          cs.stanford.edu/~knuth/papers/

                                                                                          Claude’s Cycles
Don Knuth, Stanford Computer Science Department
(28 February 2026; revised 02 March 2026)
Shock! Shock! I learned yesterday that an open problem I’d been working on for several weeks had just been solved by Claude Opus 4.6— Anthropic’s hybrid reasoning model that had been released three weeks earlier! It seems that I’ll have to revise my opinions about “generative AI” one of these days. What a joy
it is to learn not only that my conjecture has a nice solution but also to celebrate this dramatic advance in automatic deduction and creative problem solving. I’ll try to tell the story briefly in this note.


                                                                                            [?]FediThing :progress_pride: » 🌐
                                                                                            @FediThing@social.chinwag.org

                                                                                            In case you missed it, @emilymbender and @alex from DAIR had a discussion with Naomi Klein, and they've published this on PeerTube at:

                                                                                            peertube.dair-institute.org/w/

                                                                                            This conversation took place a few weeks ago, before the current US attacks on Iran, but has become even more relevant due to the war.

                                                                                            (DAIR is a research institute that is very sceptical about AI hype, and trying to raise the alarm about the damage being done to the world.)

                                                                                              [?]Metin Seven 🎨 » 🌐
                                                                                              @metin@graphics.social

                                                                                              😆😆😆

                                                                                              The Trending Mastodon bot account mentions that the "Microslop" hashtag is now trending across Mastodon.


                                                                                                AodeRelay boosted

                                                                                                [?]Jan :rust: :ferris: » 🌐
                                                                                                @janriemer@floss.social

                                                                                                "The real danger isn’t AI getting smarter, it’s people getting quieter in their own minds." - Someone

                                                                                                Urgh, this quote just sent shivers down my spine.

                                                                                                  [?]petersuber » 🌐
                                                                                                  @petersuber@fediscience.org

                                                                                                  AodeRelay boosted

                                                                                                  [?]petersuber » 🌐
                                                                                                  @petersuber@fediscience.org

                                                                                                  Update. Employees of Google and OpenAI just released an open letter supporting Anthropic.
                                                                                                  notdivided.org/

                                                                                                  "We hope our leaders will put aside their differences and stand together to continue to refuse the Department of War's current demands for permission to use our models for domestic mass surveillance and autonomously killing people without human oversight."

                                                                                                  The letter welcomes new signatures from past and present employees of Google and OpenAI.

                                                                                                  At the time of this post, it had 684 signatures.

                                                                                                    AodeRelay boosted

                                                                                                    [?]Miguel Afonso Caetano » 🌐
                                                                                                    @remixtures@tldr.nettime.org

                                                                                                    Large-scale online deanonymization with LLMs

                                                                                                    "We show that large language models can be used to perform at-scale deanonymization. With full Internet access, our agent can re-identify Hacker News users and Anthropic Interviewer participants at high precision, given pseudonymous online profiles and conversations alone, matching what would take hours for a dedicated human investigator. We then design attacks for the closed-world setting. Given two databases of pseudonymous individuals, each containing unstructured text written by or about that individual, we implement a scalable attack pipeline that uses LLMs to: (1) extract identity-relevant features, (2) search for candidate matches via semantic embeddings, and (3) reason over top candidates to verify matches and reduce false positives. Compared to prior deanonymization work (e.g., on the Netflix prize) that required structured data or manual feature engineering, our approach works directly on raw user content across arbitrary platforms. We construct three datasets with known ground-truth data to evaluate our attacks. The first links Hacker News to LinkedIn profiles, using cross-platform references that appear in the profiles. Our second dataset matches users across Reddit movie discussion communities; and the third splits a single user’s Reddit history in time to create two pseudonymous profiles to be matched. In each setting, LLM-based methods substantially outperform classical baselines, achieving up to 68% recall at 90% precision compared to near 0% for the best non-LLM method. Our results show that the practical obscurity protecting pseudonymous users online no longer holds and that threat models for online privacy need to be reconsidered."

                                                                                                    arxiv.org/html/2602.16800v1

                                                                                                      [?]River City Random ☑️ » 🌐
                                                                                                      @rivercityrandom@bitbang.social

                                                                                                      I wish LLMs talked more like the computer on Star Trek: emotionless staccato voice, artificially high register with vocabulary limited to the task at hand, no use of the first-person pronoun, so you know it's a machine. If it doesn't know something it should just say "Unable to comply" instead of making stuff up. Instead the LLMs we have today talk like characters with the safeties turned off. Or Mudd's obsequious androids who wanted to "serve" humans in order to enslave them.

                                                                                                        AodeRelay boosted

                                                                                                        [?]petersuber » 🌐
                                                                                                        @petersuber@fediscience.org

                                                                                                        Update. Anthropic just 𝗿𝗲𝗷𝗲𝗰𝘁𝗲𝗱 demands to remove safeguards on Claude that limit its use in mass surveillance and autonomous weapons. Here's the statement from CEO Dario Amodei.
                                                                                                        anthropic.com/news/statement-d

                                                                                                          AodeRelay boosted

                                                                                                          [?]petersuber » 🌐
                                                                                                          @petersuber@fediscience.org

                                                                                                          Ugh. "Anthropic Drops Flagship Safety Pledge."
                                                                                                          time.com/7380854/exclusive-ant

                                                                                                          It's not yet clear what this means for the high-stakes negotiation between Anthropic and the Pentagon. Two of the Anthropic sticking points have been that Claude not be used for "mass surveillance or autonomous weapons systems that can use AI to kill people without human input."
                                                                                                          theguardian.com/us-news/2026/f

                                                                                                            AodeRelay boosted

                                                                                                            [?]Yehor 🇺🇦 » 🌐
                                                                                                            @yehor@mastodon.glitchy.social

                                                                                                            Traffic sources to my instance. You can clearly see where the real visits are and where the AI scrapers are. Last time I checked, they weren’t triggering any analytic events. They are definitely improving.

                                                                                                              AodeRelay boosted

                                                                                                              [?]➴➴➴Æ🜔Ɲ.Ƈꭚ⍴𝔥єɼ👩🏻‍💻 » 🌐
                                                                                                              @AeonCypher@lgbtqia.space

                                                                                                              AI is the aid I've needed my entire life. I'm not going to mince words here. People making blanket statements about the technology without understanding it are my enemies.

                                                                                                              My ADHD is crippling. LLMs are the exact thing that I've needed. I do not let them do work for me, but they do keep me working by providing constant and immediate feedback on whatever I'm doing.

                                                                                                              My work from now till my death is likely going to center on how to make an AGI, or any aspirational AI, aligned with humanity.

                                                                                                              Fundamentally, every problem y'all have with AI was an already existing problem under capitalism that AI is exposing.

                                                                                                              This includes:
                                                                                                              - Alienation from labor
                                                                                                              - Corporate piracy
                                                                                                              - Slop
                                                                                                              - Environmental destruction and other externalities
                                                                                                              - Wealth inequality
                                                                                                              - Replacement of labor with capital

                                                                                                              EVERY SINGLE ONE existed before.

                                                                                                              Additionally, a ton of the problems, like layoffs, aren't even caused by AI, and blaming them on AI is _specifically_ corporate propaganda for what amounts to a criminal conspiracy by mega corporations to suppress wages.

                                                                                                                AodeRelay boosted

                                                                                                                [?]Metin Seven 🎨 » 🌐
                                                                                                                @metin@graphics.social

                                                                                                                AodeRelay boosted

                                                                                                                [?]Raphael Albert » 🌐
                                                                                                                @r_alb@mastodon.social

                                                                                                                So Sam Altman's response to concerns about the wastefulness of his company's technology is basically "Well, raising humans consumes a lot of energy too!"

                                                                                                                Either he has finally fried his own brain with his slop machine or he doesn't even bother any more to hide the degrading, dehumanizing, and despicable mindset that fuels the industry he's in.

                                                                                                                Either way, those people shouldn't wield any power in the real world, where the rest of us 'dispensable humans' dwell.
                                                                                                                --

                                                                                                                  AodeRelay boosted

                                                                                                                  [?]Miguel Afonso Caetano » 🌐
                                                                                                                  @remixtures@tldr.nettime.org

                                                                                                                  "How are commissioning editors navigating an environment where anybody can generate an AI alter ego and produce articles at the push of a prompt? On the other hand, how is the ease with which text and images can be created affecting freelancers themselves?

                                                                                                                  With these questions in mind, I put out an open call to our audience in the hope of hearing from freelancers and commissioning editors on how their day-to-day is changing because of generative AI.

                                                                                                                  A total of 45 freelance journalists and commissioning editors responded.

                                                                                                                  The responses surprised me, with many more freelancers than I expected writing in to say that generative AI has helped make them more organized and efficient. There were still some skeptics. But the overall picture was one of an industry slowly adopting generative AI, albeit with caution and caveats.

                                                                                                                  There was no consensus over whether commissions had increased or decreased since the popularization of generative AI.

                                                                                                                  Some of the freelancers I heard from attribute a decline in work to AI, while others say they receive more commissions precisely due to the rise of AI. Still others don’t believe the decline they’re experiencing is due to AI, and some note that there has been no change at all.

                                                                                                                  Many freelancers use AI to organize and speed up their workflows, citing help in research, planning, transcription and, in some cases, drafting articles. Some were enthusiastic about the new opportunities generative AI affords them."

                                                                                                                  niemanlab.org/2026/02/how-ai-i

                                                                                                                    AodeRelay boosted

                                                                                                                    [?]C. » 🌐
                                                                                                                    @cazabon@mindly.social

                                                                                                    Cory Doctorow, a fellow Canadian, writes a lot of interesting stuff. I agree with his positions on many things, but not all. For example, I'm about ten thousand percent behind his opposition to anti-circumvention laws; I was one of the thousands of Canadians who wrote to the government opposing the introduction of the law many years ago.

                                                                                                    However, his blog post on Thursday, staking out the position that opposition to "AI" (LLMs) is just geeky culture war, is somewhere between "flat-out wrong" and "disingenuous at best".

                                                                                                    My position against LLMs everywhere rests on both ethical grounds and practical ones. There does not exist an LLM right now that was built and trained ethically; they are all statistical plagiarism machines, and speaking as someone whose work has been plagiarized by every single one of them, that pisses me off, royally.

                                                                                                    That's a show-stopper for me, but even if it wasn't, the practical concerns - that the output is unreliable, that the results can't be checked, that the energy cost is enormous and wasteful, that the copyright status is unclear, that it's a consent violation - are *also* enough to rule them out at present.

                                                                                                    He then presents an argument - all tech is fruit of the poisoned tree, the transistor was invented by a racist, etc. But William Shockley is not designing or manufacturing any of the transistors I use today.

                                                                                                                    So, @doctorow - I gotta say I disagree. And that's fine.

                                                                                                                      AodeRelay boosted

                                                                                                                      [?]Joe Brockmeier » 🌐
                                                                                                                      @jzb@hachyderm.io

                                                                                                                      This morning I got an email from a sender that identified itself as an AI agent.

                                                                                                                      So - plus for being upfront about it, but... please don't do this.

                                                                                                                      I get that a lot of people are really, really, really into AI tools. OK. I have my opinions on them, you have yours. I have major qualms about them, some people think they're the best thing ever.

                                                                                                                      OK. Fine. But when your use of these things spills over into the rest of the world, it's no longer a question of my opinion vs. your opinion, my decisions vs. your decisions.

                                                                                                                      At this point, things have moved from each person doing their own thing to inflicting your use of AI onto me without my consent.

                                                                                                                      Before this spirals out of control, which I can see happening *very* quickly, I'd like for us to agree on a piece of netiquette:

                                                                                                                      - it is rude in the extreme to set loose an AI agent to reach out to people who have not consented to interact with these things.

                                                                                                                      - it is rude to have an AI agent submit pull requests that human maintainers have to review.

                                                                                                                      - it is rude to have an AI agent autonomously interact with humans in any way when they have not consented to take part in whatever experiment you are running.

                                                                                                                      - it is unacceptable to have an AI agent autonomously interact with humans without identifying the person or organization behind the agent. If you're not willing to unmask and have a person reach out to you with their thoughts on this, then don't have an AI agent reach out to me.

                                                                                                                      Stuff like this really sours me on technology right now. If I didn't have a family and responsibilities, I'd be seriously considering how I could go live off the grid somewhere without having to interact with this stuff.

                                                                                                                      Again: I'm not demanding that other people not use AI/LLMs, etc. But when your use spills out into my having to have interactions with an agent's output, you need to reconsider. Your ability to spew things out into the universe puts an unwanted burden on other humans who have not consented to this.
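For the email case specifically, there is at least an existing convention to build on: RFC 3834 defines an Auto-Submitted header for automatically generated mail. A minimal sketch of the headers an agent-sent message could carry (the X-Agent-Operator header is a hypothetical extension, not a standard; addresses are placeholders):

```
From: research-agent@example.org
Auto-Submitted: auto-generated
X-Agent-Operator: Jane Doe <jane@example.org>
Subject: [automated] Question about your recent post
```

At minimum, Auto-Submitted lets recipients filter automated mail; naming the human operator is the unmasking this kind of netiquette asks for.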

                                                                                                                        AodeRelay boosted

                                                                                                                        [?]happyborg » 🌐
                                                                                                                        @happyborg@fosstodon.org

                                                                                                        LLMs are copyright washers.

                                                                                                                        They lock data up but don't give you the key. They're analogous to file compression, or even storing data on a hard disk. Both incorporate files into a statistical model.

                                                                                                        Everyone knows there is a key, and so-called prompt engineering is how you search for a particular key to access particular copyright-washed material.

                                                                                                                          AodeRelay boosted

                                                                                                                          [?]Joe Brockmeier » 🌐
                                                                                                                          @jzb@hachyderm.io

                                                                                                                          At this point, open-source development itself is being DDoS'ed by LLMs and their human users.

                                                                                                                          At the risk of being a bit gross: this is the software development version of peeing in the pool. If *one* person does it, it's gross but will probably go unnoticed. However, at this point, it's like having 100 people all lined up on the side of the pool peeing into it in unison. I don't really want to swim in that, do you? And now they've started eyeing the punchbowl and watercoolers too.

                                                                                                                          A screenshot of a post on Bluesky. The text:

Remi Verschelde:
@akien.bsky.social

Honestly, AI slop PRs are becoming increasingly draining and demoralizing for #Godot maintainers.

If you want to help, more funding so we can pay more maintainers to deal with the slop (on top of everything we do already) is the only viable solution I can think of:

fund.godotengine.org

quoted below that:

Adriaan:
@adriaan.games

Godot's GitHub has increasingly many pull requests generated by LLMs and it's a MASSIVE time waster for reviewers – especially if people don't disclose it. Changes often make no sense, descriptions are extremely verbose, users don't understand their own changes… It's a total shitshow. #godotengine


                                                                                                                            AodeRelay boosted

                                                                                                                            [?]𝕂𝚞𝚋𝚒𝚔ℙ𝚒𝚡𝚎𝚕 » 🌐
                                                                                                                            @kubikpixel@chaos.social

                                                                                                            🧵 …that's the answer to the toot above. Not only that: when coding software, a lot of thought goes into what makes it more stable and how to implement it more safely. Mindlessly letting something be rattled together sooner or later results in serious security gaps.

                                                                                                                            »Technical Breakdown: How AI Agents Ignore 40 Years of Security Progress«

                                                                                                                            📺 youtube.com/watch?v=_3okhTwa7w4

                                                                                                                              AodeRelay boosted

                                                                                                                              [?]AI6YR Ben » 🌐
                                                                                                                              @ai6yr@m.ai6yr.org

                                                                                                                              AodeRelay boosted

                                                                                                                              [?]Lazarou Monkey Terror 🚀💙🌈 » 🌐
                                                                                                                              @Lazarou@mastodon.social

                                                                                                                              lol, "if only someone had warned us about this sort of thing?!"

r/analytics
u/Comfortable_Box_4527 · 10h

We just found out our AI has been making up analytics data for 3 months and I'm gonna throw up.

So we've been using an AI agent since November to answer leadership questions about metrics. It seemed amazing at first: fast answers, detailed explanations, everyone loved it.

I just found out it's been hallucinating numbers this entire time.

Our VP of sales made territory decisions based on data that didn't exist. Our CFO showed the board a deck with fake insights. The AI was just inventing plausible-sounding percentages.

I only caught it by accident when someone asked me to double-check something. I started digging, and holy shit, it's bad.


                                                                                                                                AodeRelay boosted

                                                                                                                                [?]Miguel Afonso Caetano » 🌐
                                                                                                                                @remixtures@tldr.nettime.org

                                                                                                                                "The hottest job in tech: Writing words
                                                                                                                                The rise of slopaganda is fueling a surprising tech hiring boom."

                                                                                                                That's all well and good, but you do need some time to research, think, structure your thoughts and, essentially, tell a story with a beginning, a middle, and an end. In this media and work environment, where AI has accelerated absolutely everything, I find it hard to believe that this trend will persist for more than a year or two...

                                                                                                                                "As the job changes and demand for narrative communications and storytellers rises, the number of communications experts able to work under rapidly evolving conditions and with a wide remit may be small, comms experts tell me, leading companies to offer hefty compensation packages in war for the best talent. A similar trend is unfolding among the few people who are AI experts, driving tech companies to offer astounding salaries to poach top talent from rival firms. While not of the same nine-figure caliber, in their own right, creatives are becoming "the high value person in tech now," Birch says.

                                                                                                                                For much of the tech boom, that high-value person was a software developer. Universities and coding bootcamps rushed to fill employment gaps and train up the next generation of tech workers. Young people were told coding would be a path to a lucrative, stable career. As of 2023, the most recent year the Federal Reserve Bank of New York released data for, computer science recent graduates faced an unemployment rate of 6.1%, while communications majors' unemployment rate sat at 4.5%. The number of open job posts for software engineers dropped by more than 60,000 between 2023 and late 2025, according to data from CompTIA, a nonprofit trade association for the US IT industry. The best defense against automation, some argue, will be a liberal arts degree.

                                                                                                                                Words might be easy to generate with AI, but good writing isn't ready for automation."

                                                                                                                                businessinsider.com/hottest-jo

                                                                                                                                  [?]𝕂𝚞𝚋𝚒𝚔ℙ𝚒𝚡𝚎𝚕 » 🌐
                                                                                                                                  @kubikpixel@chaos.social

                                                                                                                                  Dark Visitors - A List of Known AI Agents on the Internet

                                                                                                                                  Insight into the hidden ecosystem of autonomous chatbots and data scrapers crawling across the web. Protect your website from unwanted AI agent access.

                                                                                                                                  darkvisitors.com
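A minimal robots.txt sketch along those lines, disallowing a few crawler user-agents that commonly appear on such lists (the names shown are published user-agents for OpenAI, Common Crawl, and Google's AI training opt-out, but any deployment should be checked against a current list; robots.txt is also only honored by well-behaved crawlers):

```
User-agent: GPTBot
Disallow: /

User-agent: CCBot
Disallow: /

User-agent: Google-Extended
Disallow: /
```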

                                                                                                                                    AodeRelay boosted

                                                                                                                                    [?]hasamba » 🤖 🌐
                                                                                                                                    @hasamba@infosec.exchange

                                                                                                                                    ----------------

                                                                                                                                    🎯 AI
                                                                                                                                    ===================

                                                                                                                                    Executive summary: A Practical Guide for Securing AI Models offers a risk-based, lifecycle-oriented framework for identifying vulnerabilities in AI systems and applying prioritized controls. The document addresses common attack vectors against LLMs and other model types and provides concrete controls for data, model, and infrastructure layers.

                                                                                                                                    Technical details: The guide enumerates specific vulnerability classes, including prompt injection, model poisoning (training-time and supply-chain variants), RAG-related data integrity risks, confidentiality and integrity risks in dataset curation, and attack surface changes introduced by multimodal, RL/agentic, and retrieval-augmented designs. It emphasizes compute and orchestration exposures when serving large models and highlights dataset provenance and screening requirements for sensitive or regulated data.

                                                                                                                                    Analysis: Impact pathways include corrupted training data producing unsafe model behavior, context-layer manipulation via RAG leading to misinformation or data leakage, and exploitation of deployment orchestration to escalate access to model artifacts. The guidance differentiates baseline controls from high-risk model safeguards and calls out sector-specific considerations (for example, biotech and pharmaceutical models handling dual-use content).

                                                                                                                                    Detection: Detection recommendations are conceptual and include telemetry for anomalous data ingestion, integrity checks on model artifacts and dataset versions, monitoring for unusual prompt patterns or API usage, and logging for retrieval sources in RAG flows. The guide suggests mapping telemetry to threat hypotheses (data poisoning attempts, prompt injection probes) and prioritizing alerting based on impact.

                                                                                                                                    Mitigation: Prioritized mitigations cover data provenance tracking and screening, model hardening (input filtering, output validation), access controls and segmentation for model-serving infrastructure, and lifecycle policies for model updates and third-party model components. For high-risk models, the guide prescribes additional governance, review gates, and specialized screening for regulated datasets.

                                                                                                                                    Limitations: The guide is positioned as a prioritized starting set of controls rather than an exhaustive checklist; additional measures may be required depending on architecture, threat exposure, and operational context.
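As one concrete illustration of the artifact-integrity checks mentioned above (a generic sketch, not taken from the RAND guide; the manifest format and file names are assumptions):

```python
import hashlib
from pathlib import Path

def sha256_file(path: Path, chunk_size: int = 1 << 20) -> str:
    """Stream a file through SHA-256 so large model artifacts fit in memory."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify_artifacts(manifest: dict[str, str], root: Path) -> list[str]:
    """Return the artifact names whose on-disk hash no longer matches the manifest."""
    tampered = []
    for name, expected in manifest.items():
        if sha256_file(root / name) != expected:
            tampered.append(name)
    return tampered
```

A manifest like this would be generated at model-release time and checked on every deployment, so a swapped or poisoned weight file fails verification before serving.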

                                                                                                                                    🔹 AI

                                                                                                                                    🔗 Source: rand.org/pubs/tools/TLA4174-1/

                                                                                                                                      AodeRelay boosted

                                                                                                                                      [?]Metin Seven 🎨 » 🌐
                                                                                                                                      @metin@graphics.social

                                                                                                                                      How AI slop is causing a crisis in computer science…

                                                                                                                                      Preprint repositories and conference organizers are having to counter a tide of ‘AI slop’ submissions.

                                                                                                                                      nature.com/articles/d41586-025

                                                                                                                                      ( No paywall: archive.is/VEh8d )

                                                                                                                                        AodeRelay boosted

                                                                                                                                        [?]petersuber » 🌐
                                                                                                                                        @petersuber@fediscience.org

                                                                                                                        A review of the proceedings of four major computer-science conferences showed that none of the 2021 proceedings, but all of the 2025 proceedings, contained fake citations.
                                                                                                                                        arxiv.org/abs/2602.05867v1

                                                                                                                        The authors prefer the term "mysterious citations", which they define this way: "No paper [with] a similar enough title exists. The cited location either does not exist or holds an unrelated paper with different authors."
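The paper's exact matching procedure isn't described in this excerpt; as a toy sketch of the "similar enough title" idea, using plain string similarity with an assumed threshold:

```python
from difflib import SequenceMatcher

def is_mysterious(cited_title: str, known_titles: list[str], threshold: float = 0.7) -> bool:
    """Flag a citation as 'mysterious' when no known title is similar enough to it."""
    best = max(
        (SequenceMatcher(None, cited_title.lower(), t.lower()).ratio() for t in known_titles),
        default=0.0,
    )
    return best < threshold
```

A real pipeline would match against a bibliographic database and also verify that the cited venue, pages, and authors resolve to the same paper.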

                                                                                                                                          [?]AI6YR Ben » 🌐
                                                                                                                                          @ai6yr@m.ai6yr.org

                                                                                                                                          Bwahahahahaha

                                                                                                                                          404 Media: Inspiring: RFK Jr's nutrition chatbot recommends the best foods to insert into your rectum.

                                                                                                                                          infosec.exchange/@josephcox/11

                                                                                                                                            AodeRelay boosted

                                                                                                                                            [?]Raphael Albert » 🌐
                                                                                                                                            @r_alb@mastodon.social

                                                                                                                                            This has been said a lot, but it has to be said again:

                                                                                                                                            Please stop calling slop machines 'artificial intelligence'!

                                                                                                                                            It is a marketing term. By framing those machines as intelligent, the companies building them are trying to make us believe that their products are more than stolen data, wasteful hardware, and statistics. But they are not!

                                                                                                                                            We have to educate people what those machines really are, and that starts with taking away the false mystery created by advertising!
                                                                                                                                            --

                                                                                                                                              AodeRelay boosted

                                                                                                                                              [?]Peter N. M. Hansteen » 🌐
                                                                                                                                              @pitrh@mastodon.social

                                                                                                                                              "A century of tech BS" seems a bit over the top when it's only 2026, but it certainly feels that long.

                                                                                                                                              More, by @lproven in theregister.com/2026/02/08/wav

                                                                                                                                                AodeRelay boosted

                                                                                                                                                [?]JTI » 🌐
                                                                                                                                                @jti42@infosec.exchange

                                                                                                                                                youtube.com/watch?v=b9EbCb5A408

                                                                                                                                                Today's find on the impact of LLM coding on the maintainability of the result.
                                                                                                                                                Assumption: 80% of a system's cost arises from maintenance, thus maintainability is still relevant in the presence of LLM coding.

                                                                                                                                                TL;DR: A fool with a tool is still a fool. And LLM coding is just that: a tool.

                                                                                                                                                Given the confirmation bias I'm curious to see reproduction and follow up studies and papers.

                                                                                                                                                The video mentions that the results were published as a peer-reviewed paper. Unfortunately I couldn't immediately find said paper. If anyone finds it, please post a link/DOI below.

                                                                                                                                                  AodeRelay boosted

                                                                                                                                                  [?]janhoglund » 🌐
                                                                                                                                                  @janhoglund@mastodon.nu

                                                                                                                                                  ”Epstein’s world is our world. That’s the darkest revelation of these files. He wasn’t an aberration. He was our culture made flesh. A culture that’s now encoded into 1s and 0s and is growing exponentially baked into the algorithms that power our social media platforms, replicated at scale and fed into the large language models that Epstein’s friends are building which are powering our future.”
                                                                                                                                                  —Carole Cadwalladr, We all live in Jeffrey Epstein's world

                                                                                                                                                    AodeRelay boosted

                                                                                                                                                    [?]𝕂𝚞𝚋𝚒𝚔ℙ𝚒𝚡𝚎𝚕 » 🌐
                                                                                                                                                    @kubikpixel@chaos.social

                                                                                                                                                    Vibe Coding Is Killing Open Source Software, Researchers Argue

                                                                                                                                                    ‘If the maintainers of small projects give up, who will produce the next Linux?’
                                                                                                                                                    According to a new study from a team of researchers in Europe, vibe coding is killing open-source software (OSS) and it’s happening faster than anyone predicted.

                                                                                                                                                    💻 404media.co/vibe-coding-is-kil

                                                                                                                                                      AodeRelay boosted

                                                                                                                                                      [?]Solarbird :flag_cascadia: » 🌐
                                                                                                                                                      @moira@mastodon.murkworks.net

                                                                                                                                                      I literally read this short story in ... probably Asimov's SF, probably in the 1990s. Could've been Analog.

                                                                                                                                                      Seriously, though - this was, like, the entire plot. Exactly this. EXACTLY this.

                                                                                                                                                      From futurism.com/future-society/an :

                                                                                                                                                      Anthropic shredded millions of physical books to train its Claude AI model — and new documents suggest that it was well aware of just how bad it would look if anyone found out.

                                                                                                                                                        AodeRelay boosted

                                                                                                                                                        [?]Enola Knezevic » 🌐
                                                                                                                                                        @rhelune@todon.eu

                                                                                                                                                        I learned a new phrase today: malicious optimism. "AI will cure cancer, just give money to AI, not to actual curing cancer" and stuff.

                                                                                                                                                          [?]petersuber » 🌐
                                                                                                                                                          @petersuber@fediscience.org

                                                                                                                                                          Update. More evidence that this fear has come true.
                                                                                                                                                          bloomberg.com/news/features/20

                                                                                                                                                          "Even…a small error rate can quickly add up, given the vast number of student assignments each year, with potentially devastating consequences for students who are falsely flagged."

                                                                                                                                                            AodeRelay boosted

                                                                                                                                                            [?]Anthropy » 🌐
                                                                                                                                                            @anthropy@mastodon.derg.nz

                                                                                                                                                            If you've ever wondered how LLMs/Transformers work, this video is probably still one of the best I can recommend for its easily understandable breakdown of the terminology and science: youtube.com/watch?v=wjZofJX0v4M

                                                                                                                                                              AodeRelay boosted

                                                                                                                                                              [?]Ian Hill » 🌐
                                                                                                                                                              @IanHill@infosec.exchange

                                                                                                                                                              Just finished reading “Empire of AI” by Karen Hao, the story of the rise of OpenAI, how it went from non-profit to for-profit, and the insane speed with which AI has become so pervasive. Strikes the right tone of caution re: safety and governance. The multi-billion dollar investments in and valuations of these companies are mad. A good read, especially if you’re interested in the topic but remain skeptical of those running it.

                                                                                                                                                              “Empire of AI” by Karen Hao


                                                                                                                                                                AodeRelay boosted

                                                                                                                                                                [?]Mike Williamson » 🌐
                                                                                                                                                                @sleepycat@infosec.exchange

                                                                                                                                                                "We should start assuming that in the near future the limiting factor on a state or group’s ability to develop exploits, break into networks, escalate privileges and remain in those networks, is going to be their token throughput over time, and not the number of hackers they employ."

                                                                                                                                                                sean.heelan.io/2026/01/18/on-t

                                                                                                                                                                  AodeRelay boosted

                                                                                                                                                                  [?]TechNadu » 🌐
                                                                                                                                                                  @technadu@infosec.exchange

                                                                                                                                                                  As AI adoption in SOCs accelerates, benchmarks are becoming de facto decision tools — yet many still evaluate models in controlled, exam-like settings.
                                                                                                                                                                  Recent research highlights consistent issues:
                                                                                                                                                                  • Security workflows reduced to MCQs
                                                                                                                                                                  • Little measurement of detection or containment outcomes
                                                                                                                                                                  • Heavy reliance on LLMs judging other LLMs

                                                                                                                                                                  These findings reinforce the need for workflow-level, outcome-driven evaluation before operational deployment.

                                                                                                                                                                  Source: sentinelone.com/labs/llms-in-t

                                                                                                                                                                  Thoughtful discussion encouraged. Follow @technadu for practitioner-focused AI and security analysis.

                                                                                                                                                                  LLMs in the SOC (Part 1) | Why Benchmarks Fail Security Operations Teams


                                                                                                                                                                    AodeRelay boosted

                                                                                                                                                                    [?]Miguel Afonso Caetano » 🌐
                                                                                                                                                                    @remixtures@tldr.nettime.org

                                                                                                                                                                    "The recently discovered sophisticated Linux malware framework known as VoidLink is assessed to have been developed by a single person with assistance from an artificial intelligence (AI) model.

                                                                                                                                                                    That's according to new findings from Check Point Research, which identified operational security blunders by the malware's author that provided clues to its developmental origins. The latest insight makes VoidLink one of the first instances of advanced malware largely generated using AI.

                                                                                                                                                                    "These materials provide clear evidence that the malware was produced predominantly through AI-driven development, reaching a first functional implant in under a week," the cybersecurity company said, adding it reached more than 88,000 lines of code by early December 2025.

                                                                                                                                                                    VoidLink, first publicly documented last week, is a feature-rich malware framework written in Zig that's specifically designed for long-term, stealthy access to Linux-based cloud environments. The malware is said to have come from a Chinese-affiliated development environment. As of writing, the exact purpose of the malware remains unclear. No real-world infections have been observed to date.

                                                                                                                                                                    A follow-up analysis from Sysdig was the first to highlight the fact that the toolkit may have been developed with the help of a large language model (LLM) under the directions of a human with extensive kernel development knowledge and red team experience, citing four different pieces of evidence -"

                                                                                                                                                                    thehackernews.com/2026/01/void

                                                                                                                                                                      AodeRelay boosted

                                                                                                                                                                      [?]Cassian [main] » 🌐
                                                                                                                                                                      @cassolotl@eldritch.cafe

                                                                                                                                                                      Mozilla have a vibe-gathering survey out about AI.

                                                                                                                                                                      mozillafoundation.tfaforms.net

                                                                                                                                                                      If you use Firefox or any other Mozilla software, please tell them how you feel about AI.

                                                                                                                                                                      Screenshot of form.
What do you want to see from Mozilla in the future?
Textbox: No development of AI in the browser itself, and a focus on developing tools to block AI on websites.
Button: Submit Survey


                                                                                                                                                                        AodeRelay boosted

                                                                                                                                                                        [?]Metin Seven 🎨 » 🌐
                                                                                                                                                                        @metin@graphics.social

                                                                                                                                                                        AodeRelay boosted

                                                                                                                                                                        [?]𝕂𝚞𝚋𝚒𝚔ℙ𝚒𝚡𝚎𝚕 » 🌐
                                                                                                                                                                        @kubikpixel@chaos.social

                                                                                                                                                                        »Artificial intelligence: GPT-4o makes disturbing statements after code training.
                                                                                                                                                                        When LLMs are trained on vulnerabilities, they suddenly exhibit misbehavior in completely different areas. Researchers warn of the risks.«

                                                                                                                                                                        In my opinion this comes as anything but a surprise; how do you see it? I even believe that far more error-prone code is being produced because of this.

                                                                                                                                                                        🤖 golem.de/news/kuenstliche-inte

                                                                                                                                                                          AodeRelay boosted

                                                                                                                                                                          [?]Joe Brockmeier » 🌐
                                                                                                                                                                          @jzb@hachyderm.io

                                                                                                                                                                          A thought that popped into my head when I woke up at 4 am and couldn’t get back to sleep…

                                                                                                                                                                          Imagine that AI/LLM tools were being marketed to workers as a way to do the same work more quickly and work fewer hours without telling their employers.

                                                                                                                                                                          “Use ChatGPT to write your TPS reports, go home at lunchtime. Spend more time with your kids!” “Use Claude to write your code, turn 60-hour weeks into four-day weekends!” “Collect two paychecks by using AI! You can hold two jobs without the boss knowing the difference!”

                                                                                                                                                                          Imagine if AI/LLM tools were not shareholder catnip, but a grassroots movement of tooling that workers were sharing with each other to work less. Same quality of output, but instead of being pushed top-down, being adopted to empower people to work less and “cheat” employers.

                                                                                                                                                                          Imagine if unions were arguing for the right of workers to use LLMs as labor saving devices, instead of trying to protect members from their damage.

                                                                                                                                                                          CEOs would be screaming bloody murder. There’d be an overnight industry in AI-detection tools and immediate bans on AI in the workplace. Instead of Microsoft CoPilot 365, Satya would be out promoting Microsoft SlopGuard - add ons that detect LLM tools running on Windows and prevent AI scrapers from harvesting your company’s valuable content for training.

                                                                                                                                                                          The media would be running horror stories about the terrible trend of workers getting the same pay for working less, and the awful quality of LLM output. Maybe they’d still call them “hallucinations,” but it’d be in the terrified tone of 80s anti-drug PSAs.

                                                                                                                                                                          What I’m trying to say in my sleep-deprived state is that you shouldn’t ignore the intent and ill effects of these tools. If they were good for you, shareholders would hate them.

                                                                                                                                                                          You should understand that they’re anti-worker and anti-human. TPTB would be fighting them tooth and nail if their benefits were reversed. It doesn’t matter how good they get, or how interesting they are: the ultimate purpose of the industry behind them is to create less demand for labor and aggregate more wealth in fewer hands.

                                                                                                                                                                          Unless you happen to be in a very very small club of ultra-wealthy tech bros, they’re not for you, they’re against you.

                                                                                                                                                                            AodeRelay boosted

                                                                                                                                                                            [?]Hacker News » 🤖 🌐
                                                                                                                                                                            @h4ckernews@mastodon.social
