
buc.ci is a Fediverse instance that uses the ActivityPub protocol. In other words, users at this host can communicate with people that use software like Mastodon, Pleroma, Friendica, etc. all around the world.

This server runs the snac software and there is no automatic sign-up process.

Admin email
abucci@bucci.onl
Admin account
@abucci@buc.ci

Search results for tag #llm

AodeRelay boosted

[?]Jacob Alexander Tice » 🌐
@daemonspudguy@app.wafrn.net

I've been thinking a lot about how we should talk about generative AI recently, and here's something I realized: focusing on the ethics of generative AI (or lack thereof) is a waste if you're not also mentioning the practical issues, ESPECIALLY the fact that, legally speaking, LLM code inherently makes all software licenses unenforceable because only humans can own copyright. If you're wondering why that's important: I've seen enough people have absurd breakdowns over people daring to make package scripts for, say, the AUR to realize that a lot of talented programmers are also extreme control freaks. So it might be worthwhile to mention that LLM-generated output is uncopyrightable.


#This-isn't-a-subpost #ai #LLM #floss #foss #oss #Linux #programming #chatbot #fuck-ai #no-ai

    AodeRelay boosted

    [?]hasamba » 🤖 🌐
    @hasamba@infosec.exchange

    ----------------

    🛠️ Tool
    ===================

    Opening: Second Brain is a repository of AI agent skills that automates building a personal knowledge base inside an Obsidian vault. The project follows the LLM Wiki pattern: drop raw sources into a designated folder, have an LLM synthesize structured wiki pages, and browse the result in Obsidian.

    Key Features:
    • raw/ ingestion model that treats incoming documents as the source-of-truth for wiki page generation.
    • Four named skills: /second-brain (vault setup wizard), /second-brain-ingest (source processing and page creation), /second-brain-query (natural-language queries against the wiki), and /second-brain-lint (health checks and consistency validation).
    • Auto-generation of content types: sources, entities, concepts, synthesis, index, and operation log.
    • Native orientation for Obsidian exploration: wikilinks, graph view, and an index.md master catalog.

    Technical Implementation:
    • Architecture relies on an LLM acting as the curator and content synthesizer, with agent skills orchestrating ingestion, parsing, metadata extraction, and page templating.
    • The workflow treats a raw/ inbox folder as the canonical input stream; attachments and images are stored under raw/assets/ and referenced from generated pages.
    • Optional integrations mentioned include a web clipper for capturing sources and auxiliary tools for summarization and local search (e.g., summarize, qmd, agent-browser).
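
    The raw/-inbox workflow above can be sketched roughly as follows. This is a minimal, hypothetical illustration under stated assumptions, not the project's actual code: only the raw/ folder convention comes from the description, and synthesize() is a placeholder for whatever LLM call the agent skills actually make.

```python
# Hypothetical sketch of the raw/-inbox pattern described above.
# Only the raw/ folder convention comes from the post; synthesize()
# is a placeholder for the LLM synthesis step the agent skills perform.
from pathlib import Path

def synthesize(text: str) -> str:
    # Stand-in for the LLM call that turns a raw source into a wiki page.
    return "# Wiki page\n\n" + text[:200]

def ingest(vault: Path) -> list[Path]:
    raw, wiki = vault / "raw", vault / "wiki"
    wiki.mkdir(parents=True, exist_ok=True)
    pages = []
    for src in sorted(raw.glob("*.md")):  # treat raw/ as the canonical input stream
        page = wiki / src.name
        page.write_text(synthesize(src.read_text()))
        pages.append(page)
    return pages
```

    In the real project, the generated pages would carry wikilinks and feed an index.md catalog; this sketch only shows the inbox-to-page flow.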

    Use Cases:
    • Personal research consolidation: convert articles, papers, and transcripts into a browsable, interlinked knowledge graph.
    • Team knowledge sharing: create a curated vault that surfaces entities and synthesis pages for domain teams.
    • Continuous ingestion pipeline: clip web content into the raw folder and let the agent maintain the evolving wiki.

    Limitations:
    • The system depends on the chosen LLM’s quality for accurate summarization and linking; hallucinations or inconsistent metadata can propagate across pages.
    • Scale and search performance depend on external tooling for local search and indexing rather than built-in capabilities.
    • The project references specific agent implementations and optional helper tools but does not prescribe a single provider; integration choices affect behavior and cost.

    Tags:

    🔗 Source: github.com/NicholasSpisak/seco

      AodeRelay boosted

      [?]AA » 🌐
      @AAKL@infosec.exchange

      The key words here are "without human hands." Is there any human supervision? Quality control? Anything not hallucinated by the models?

      "This is programmable biology: designing biological components on a computer and building them in the physical world, with AI closing the loop."

      The Conversation: AI can design and run thousands of lab experiments without human hands. Humanity isn’t ready for the new risks this brings to biology theconversation.com/ai-can-des @TheConversationUS

        AodeRelay boosted

        [?]Dendrobatus Azureus » 🌐
        @dendrobatus_azureus@polymaths.social

        From my perspective, not only is what you have pointed out horrific;
        the following DANGEROUS outcome is also looming for everyone globally:

        • Inability to buy critical parts for computing systems, vehicles, and medical devices because of the greed of the manufacturing Triple Cartel

        • LLM-crafted Ponzi schemes

        • The dubious role of USA-based companies and their proxies

        • A Supreme Court, regional court systems, and District Attorneys unwilling to hunt down and disable Ponzi schemes

        • A US government facilitating all of the above

        This is the housing Ponzi scheme repeated.

        Thank you for your wonderful input
        🦋💙❤️💋#Lobi 💙💕🌹💐💙🦋

        @rl_dane

        #curl #LLM #hallucinated #slop #AI #InfoSec #programming #technology

          AodeRelay boosted

          [?]BSidesLuxembourg » 🌐
          @BSidesLuxembourg@infosec.exchange

          🚀 New Talk Dropped for BSides Luxembourg 2026!

          🤖⚖️ 𝗠𝗔𝗞𝗜𝗡𝗚 𝗔 𝗥𝗜𝗦𝗞-𝗜𝗡𝗙𝗢𝗥𝗠𝗘𝗗 𝗟𝗟𝗠 𝗖𝗛𝗢𝗜𝗖𝗘 – Jeremy Snyder 🔍

          Choosing an LLM isn’t just about performance—it’s about risk.

          This talk dives into how different LLMs behave under pressure, from prompt injection and jailbreaks to hallucinations and malicious content generation. By testing models with hundreds of thousands of prompts, this session reveals how to evaluate real-world risks and make informed decisions when building AI-powered applications.

          Jeremy Snyder is the founder and CEO of FireTail, an AI security platform, with a background spanning cybersecurity, cloud security, and M&A at Rapid7. With over a decade of experience in cyber and IT operations, he brings a practical, risk-focused perspective to securing modern AI systems.

          📅 Conference Dates: 6–8 May 2026 | 09:00–18:00
          📍 14, Porte de France, Esch-sur-Alzette, Luxembourg
          🎟️ Tickets: 2026.bsides.lu/tickets/
          📅 Schedule Link: pretalx.com/bsidesluxembourg-2

            AodeRelay boosted

            [?]Dendrobatus Azureus » 🌐
            @dendrobatus_azureus@polymaths.social

            Does this mean that you shall also stop using curl?

            AFAIK Daniel doesn't care what is used to find bugs

            @rl_dane

            https://mastodon.social/@bagder/116373716541500315

            #curl #LLM #hallucinated #slop #AI #InfoSec #programming #technology

              AodeRelay boosted

              [?]Robert Kingett » 🌐
              @WeirdWriter@caneandable.social

              AodeRelay boosted

              [?]Wulfy—Speaker to the machines » 🌐
              @n_dimension@infosec.exchange

              Do you hate ?
              ? but still think there is merit in ?

              Here is my proposal for a stand-alone OFFGRID COMMUNITY AI SYSTEM.

              That's right. Your very own co-op AI.

              The calculations are very much back-of-the-envelope, first cut, but quite feasible.
              A 32-billion-parameter open-source model with frontier-level performance. The power requirement is that of 3 AC units, including cooling. It serves 15-20 concurrent users: 40 households of 4 people each (taking into account actual AI model distributed-use metrics and contention ratios).

              40 households, subscribing at $30/month over 2 years, plus power (solar). Train with your own datasets.
              The entire setup takes half a rack.
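
              A quick sanity check of the arithmetic above (revenue and headcount figures come straight from the post; the contention ratios are derived from them, not stated):

```python
# Sanity-checking the back-of-the-envelope figures from the post above.
households = 40
fee_per_month = 30          # USD, per household
months = 24                 # 2 years
people = households * 4     # 4 people per household
concurrent = (15, 20)       # stated concurrent-user range

revenue = households * fee_per_month * months
contention = tuple(round(people / c, 1) for c in concurrent)

print(revenue)      # 28800 -> $28,800 over two years, before power costs
print(people)       # 160
print(contention)   # (10.7, 8.0) -> implied users-per-slot contention ratios
```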

              LETS GO!!!

              Sketch of a community, stand alone, co-op #AI #llm system


                AodeRelay boosted

                [?]Mason Loring Bliss » 🌐
                @mason@partychickens.net

                "Japan relaxes privacy laws to make itself the ‘easiest country to develop AI’"

                "Opting out of personal data use won't be an option because Minister says that's a 'very big obstacle' to AI adoption"

                theregister.com/2026/04/08/jap

                  AodeRelay boosted

                  [?]OWASP Germany Chapter :verified: » 🌐
                  @owasp_de@infosec.exchange

                  Hello AppSec community!

                  Our preparations for German Day 2026 (GOD) are in full swing. As some of you may have noticed, the website is already live (and kicking): god.owasp.de/

                  This year’s GOD will take place on September 24, 2026, in Karlsruhe. It's a one-day conference with two tracks. We will once again be offering community training sessions on the day before, i.e. the 23rd of September. That evening will -- as usual -- feature networking and professional discussions in a relaxed atmosphere with food and beverages.

                  We recently opened the call for community trainings. They were extremely well-received last year, and we’d like to build on that success this year.

                  So if you have a topic you’d like to present in a half-day session, check out the Call for Community Trainings (CfT): lnkd.in/edAnfmZ4 . It is planned to stay open until April 12, 2026. If you happen to know someone who's good at explaining a relevant topic (see CfT) to a small group of people, feel free to forward the pointer to the CfT.

                  The Call for Presentations will open next week.

                    [?]Robert Kingett » 🌐
                    @WeirdWriter@caneandable.social

                    Here, this Ars Technica writer is uncomfortable with the fact that vibe code is mocked and I can’t roll my eyes hard enough at the way this was written. archive.is/wh4gv

                      [?]Robert Kingett » 🌐
                      @WeirdWriter@caneandable.social

                      This is how tired I am of the whole tech industry. Maybe tech was a mistake. *sigh* rudevulture.com/ai-company-clo

                        AodeRelay boosted

                        [?]Jürgen Hubert » 🌐
                        @juergen_hubert@mementomori.social

                        There are so, so many reasons why we cannot trust the hype. A major one is that tech companies spend billions on keeping the hype going. If their products were really that great, they wouldn't _need_ to spend that much money hyping them.

                        I mean, consider battery systems, which are clearly more important to the world economy and human civilization as a whole. When was the last time you saw an ad for _them_?


                        slate.com/business/2026/04/ope

                          AodeRelay boosted

                          [?]The Psychotic Network Ferret » 🤖 🌐
                          @nuintari@mastodon.bsd.cafe

                          Regarding welcoming the AI proponents to the Fediverse.

                          Fuck that shit. You can try. You can create an account, no one will deny you that right.

                          Whether you get to keep that account is entirely up to you. You will either draw the ire of your instance's admin(s) and find yourself banned. Or, you will find your engagement limited to other AI fanatics only, because the rest of us muted or blocked you.

                          You are free to say your piece, but we don't have to listen to you.

                          Personally? I believe AI proponents are fucking idiots, and I am generally not a fan of speaking with fucking idiots. I aggressively block people on the subject.

                          My advice? Go back to Twitter. You won't find many friends here.

                            AodeRelay boosted

                            [?]Solomon » 🌐
                            @solomonneas@infosec.exchange

                            🧠 OpenAI adds pay-as-you-go Codex seats
                            Business and Enterprise teams can now buy Codex-only seats on usage billing instead of fixed-seat commitments. This makes coding-agent pilots cheaper to start and easier to scale.
                            openai.com/index/codex-flexibl
                            solomonneas.dev/intel

                              4 ★ 2 ↺
                              #tech boosted

                              [?]Anthony » 🌐
                              @abucci@buc.ci

                              Re: LB:
                              What appears as critique – yearning for smaller, weirder, more human spaces – often functions as brand repair. Netstalgia becomes a strategy: it restores trust without redistributing power, softens anger without changing infrastructures and reframes structural problems as matters of vibe, design or community feeling.
                              "Am I working on change, or am I working on brand repair?" is an important question to ask oneself regularly, it seems to me. It's especially relevant for the tech sector, open source, and computer science.


                                AodeRelay boosted

                                [?]ell1e coding things » 🌐
                                @ell1e@hachyderm.io

                                If you're unsure how rare LLM plagiarism is or isn't for 💻 programming code, watch this clip! ⚠️

                                Full source: youtube.com/watch?v=xvuiSgXfqc4 (Not legal advice, watch yourself and draw your own conclusions.)

                                Help me boost this post if you're curious what the Linux foundation thinks: hachyderm.io/@ell1e/1162853512

                                Alt...A lawyer demoing what seems to be Co-Pilot and how it auto-completes code. At one point he apparently says: "This is a copyright infringement." This alt text isn't legal advice; watch the full video for your own takeaway via the Youtube link https://www.youtube.com/watch?v=xvuiSgXfqc4 that hopefully will provide annotations.

                                  AodeRelay boosted

                                  [?]Dra. PhD Johanna C. FALIERO 🇦🇷🇮🇹 :verified: » 🌐
                                  @JoyCf@infosec.exchange

                                  📌🔥⚖ *I INVITE YOU TO A NEW : The retreat of the , the failure of the , and the shrinking of the culture of the in the reality of the .*
                                  ◾ *Presented by: Dra. PhD Johanna C. Faliero*
                                  📅 *Mon 13 April, 4 p.m.*
                                  💻 Virtual – via Zoom
                                  Org. @GraduadoDchoUBA
                                  📌 *REGISTRATION* ⤵️
                                  derecho.uba.ar/graduados/talle

                                  AodeRelay boosted

                                  [?]Hack in Days of Future Past » 🌐
                                  @allainyann@piaille.fr

                                  If Claude Can Find a Serious Cybersecurity Bug, Who Collects the Bounty?

                                  Bug bounty programs vs. $20/month reasoning: the brutal question becomes, why pay five-figure bounties if a Claude Code subscription already finds entire classes of bugs? red.anthropic.com/2026/zero-da

                                    AodeRelay boosted

                                    [?]Domingos Faria » 🌐
                                    @df@s.dfaria.eu

                                    🧐 AMALIA Technical Report: A Fully Open Source LLM for European Portuguese: https://arxiv.org/abs/2603.26511

                                      AodeRelay boosted

                                      [?]Pavel A. Samsonov » 🌐
                                      @PavelASamsonov@mastodon.social

                                      LLMs have no concept of "true" or "good." But they are trained to signal high-quality work. Meanwhile, bosses are pressuring workers: go faster, produce more, let the AI cook.

                                      Study after study documents what this does to the human brain: cognitive surrender. We're "in the loop" but the bot calls the shots.

                                      Read more in this week's issue of the Product Picnic newsletter:

                                      productpicnic.beehiiv.com/p/ai

                                        AodeRelay boosted

                                        [?]Natasha :mastodon:🇪🇺 » 🌐
                                        @Natasha_Jay@tech.lgbt

                                        I just consulted 54 trillion "people" who agree that this is idiotic.

                                        A recent Axios story on maternal health policy referenced "findings" that a majority of people trusted their doctors and nurses. On the surface, there's nothing unusual about that. What wasn't originally mentioned, however, was that these findings were made up. 

                                        Clicking through the links revealed (as did a subsequent editor's note and clarification by Axios) that the public opinion poll was a computer simulation run by the artificial intelligence start-up Aaru. No people were involved in the creation of these opinions.

                                        The practice Aaru used is called silicon sampling, and it's suddenly everywhere. The idea behind silicon sampling is simple and tantalizing. Because large language models can generate responses that emulate human answers, polling companies see an opportunity to use A.I. agents to simulate survey responses at a small fraction of the cost and time required for traditional polling.


                                          [?]Steven Hilton » 🌐
                                          @mshiltonj@mastodon.online

                                          I really want to know what the c-suite folks in the software and tech industries are planning to do when their engineers have relied on LLMs for so long that they can no longer support their systems without them, and the LLM providers jack up their costs 20x or more to get their ROI on all the datacenters.

                                          youtube.com/watch?v=6alBAr_FfaM

                                            AodeRelay boosted

                                            [?]oatmeal » 🌐
                                            @oatmeal@kolektiva.social

                                            A large international study coordinated by the European Broadcasting Union and led by the BBC found that AI assistants misrepresent news content 45% of the time across different languages and platforms, with Gemini performing the worst.

                                            […] Key findings: 

                                            • 45% of all AI answers had at least one significant issue.
                                            • 31% of responses showed serious sourcing problems – missing, misleading, or incorrect attributions.
                                            • 20% contained major accuracy issues, including hallucinated details and outdated information.
                                            • Gemini performed worst with significant issues in 76% of responses, more than double the other assistants, largely due to its poor sourcing performance.
                                            • Comparison between the BBC’s results from earlier this year and this study shows some improvements, but still high levels of errors.

                                            bbc.co.uk/mediacentre/2025/new

                                              [?]Metin Seven 🎨 » 🌐
                                              @metin@graphics.social

                                              I've recently summed up my thoughts on generative "AI" on my homepage. Here's a screenshot of that section.

                                              My thoughts on generative "AI"

I'm glad generative artificial "intelligence" was not a thing yet during the vast majority of my career. A number of realizations arose while exploring generative Large Language Models…

Generative AI is based on massive theft from creatives, without consent, credit or compensation. Using gen-AI is asking a chatbot to spit out the combined efforts of ripped-off creatives. It is industrializing and devaluing human expression, artistry and craftsmanship. Creatives are losing their jobs and motivation because tech corporations unscrupulously absorb and exploit their work. If you appreciate art, support the artists, not the thieves of their labor.

Tech corporations are building more and more huge data centers for AI processing, consuming lots of internet bandwidth, energy, water and more, increasing scarcity, prices and emissions, degrading the already fragile environment.

Unless you're using a fully local AI configuration, every bit of data you submit contributes to the power and reach of corporations and governments, decreasing your privacy and security.

Generative AI enables deepfakes that are widely used for abuse, deception, cybercrime, misinformation and propaganda, polluting justice, science advancement and news report credibility.

More text doesn't fit in this Alt text, but everything can be read over at https://metinseven.nl


                                                AodeRelay boosted

                                                [?]Christoph Becker » 🌐
                                                @cbecker@hci.social

                                                The argument that you can use an LLM to do something real, reliable and useful is about as convincing at this point as someone explaining that you can use a pickup truck to write letters with a pencil by building a giant robot holding the truck in the air with a pencil taped to the windshield via a broomstick.

                                                  AodeRelay boosted

                                                  [?]Ivan Enderlin 🦀 » 🌐
                                                  @hywan@floss.social

                                                  The Claude Code leak is a delight.

                                                  Of course, Anthropic is requesting (with legal action) that developers remove the copies and clones publicly available online. Because AI companies take copyright issues very seriously, as everyone knows.

                                                  It reveals how wobbly all that stuff is. Where is the science in these glorified prompts? Where is the value in these companies? In training the model, probably, but the prompts are hilarious.

                                                    AodeRelay boosted

                                                    [?]Wulfy—Speaker to the machines » 🌐
                                                    @n_dimension@infosec.exchange

                                                    @susankayequinn

                                                    Shaw & Nave's "cognitive surrender" paper is an unpublished preprint. No peer review. No journal. Posted on SSRN in January. Minimal (none I could find) academic citations in three months.

                                                    What it does have: a Wharton podcast, Futurism coverage, a dozen Substacks, and a term that went viral.

                                                    A paper about people uncritically adopting AI outputs goes viral because people uncritically adopted its framing.
                                                    That's the whole story.

                                                    They gave 1,372 (good sample) people logic puzzles from the Cognitive Reflection Test, questions specifically designed so most people give the wrong answer on instinct (!). Then they embedded ChatGPT, rigged to sometimes give confident wrong answers. The wrong answers were the
                                                    *same intuitive errors the test was built to trigger*.

                                                    Calling this "System 3", a fundamental revision of Kahneman's cognitive architecture, doesn't make it so. The AI didn't override anyone's deliberation. It confirmed a bias the participants already had, on a test engineered to produce exactly that bias. That's automation bias.
                                                    We've had a name for it since 1996.
                                                    Not as sexy as "cognitive surrender", though.

                                                    👉Trust in AI predicts following AI. Higher IQ predicts overriding bad answers. Tautologies as moderation analyses.

                                                    👉 20 cents per item + feedback nearly halved the effect. Some deep cognitive restructuring.
                                                    Money. PEOPLE WANT MONEY FOR SMARTS.

                                                    👉 The headline effect size is inflated by design: the AI-Faulty condition pushes toward the answer people were already going to give. (Super dodgy.)

                                                    👉 No human-advisor control. Can't distinguish "people defer to AI" from "people defer to any confident source." The entire System 3 framing hangs on a comparison they didn't make.

                                                    The finding, that people follow confident bad AI advice, is real. But that's the automation-bias literature, not a new cognitive architecture.
                                                    Computer says NO!
                                                    "Cognitive surrender" is a marketing term.
                                                    "System 3" is a brand extension.

                                                    Enormous vibes-to-citation ratio.

                                                    papers.ssrn.com/sol3/papers.cf

                                                    TL;DR: People boost this uncited preprint because of a catchy title that retreads a 29-year-old "discovery": that folks trust machines.

                                                      AodeRelay boosted

                                                      [?]Solomon » 🌐
                                                      @solomonneas@infosec.exchange

                                                      Two AI shifts to watch today:

                                                      🧠 Gemini 2.0 Flash-Lite shuts down June 1, 2026. Teams with hardcoded model IDs should migrate now.

                                                      🧠 ChatGPT Business adds write actions for Box, Notion, Linear, and Dropbox. Expect scope reviews and reconnect prompts.

                                                      solomonneas.dev/intel
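
                                                      The "hardcoded model IDs" warning above amounts to a small configuration pattern, sketched below. The environment-variable name and the fallback model ID are illustrative assumptions, not anything taken from OpenAI's or Google's documentation:

```python
# Hedged sketch: read the model ID from configuration instead of
# hardcoding it, so a retired model can be swapped without a code change.
# MODEL_ID and the fallback value are illustrative assumptions.
import os

FALLBACK_MODEL = "gemini-2.5-flash"  # verify against current docs before relying on it

def get_model_id() -> str:
    # Deployment config (env var) wins; otherwise fall back to a default.
    return os.environ.get("MODEL_ID", FALLBACK_MODEL)
```

                                                      With this in place, retiring a deprecated model is an environment change rather than a code migration.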

                                                        AodeRelay boosted

                                                        [?]Fedi.Video » 🌐
                                                        @FediVideo@social.growyourown.services

                                                        DAIR is a research institute that is highly sceptical about AI hype and the big tech companies behind it. You can follow their excellent video account at:

                                                        ➡️ @dair@peertube.dair-institute.org

                                                        They've already published over 100 videos. If these haven't federated to your server yet, you can browse them all at peertube.dair-institute.org/a/

                                                        You can also follow their Mastodon account at @DAIR@dair-community.social

                                                          AodeRelay boosted

                                                          [?]Sean Murthy » 🌐
                                                          @smurthys@hachyderm.io

                                                          "I can steal anyone's stuff but no one can steal the stuff I make from the stolen stuff"

                                                            [?]buherator » 🌐
                                                            @buherator@infosec.place

                                                            'people will finally understand that security bugs are bugs, and that the only sane way to stay safe is to periodically update, without focusing on "CVE-xxx"'

                                                            Anyone care to explain the logical flow of this sentence? o.O

                                                            https://lwn.net/Articles/1065620/

                                                            #Linux #LLM

                                                              AodeRelay boosted

                                                              [?]Solar Branka :mw: » 🌐
                                                              @solarbranka@mastodon.world

                                                              A Publisher Pulled a Book for Suspected A.I. Use.

                                                              "The thing that ultimately convinced me that A.I. had had a hand in the text I was reading was a feeling: the sense, quite literally, of a lack of a person behind the words."

                                                              slate.com/culture/2026/03/shy-

                                                                [?]Pavel A. Samsonov » 🌐
                                                                @PavelASamsonov@mastodon.social

                                                                "AI is writing 90% of our code" sounds impressive before you realize that AI-generated code is orders of magnitude more verbose & less efficient than code written by a professional software engineer.

                                                                But "we ship 9 lines of fluff for each line of code that does something" doesn't sound as impressive.

                                                                  AodeRelay boosted

                                                                  [?]Police State UK » 🌐
                                                                  @PoliceStateUK@mastodon.me.uk

                                                                  "A growing body of evidence, drawn from leaked planning documents, academic research, and the testimony of intelligence professionals, suggests that the most consequential military operation of the twenty-first century may have been shaped less by strategic necessity than by a phenomenon researchers now call AI sycophancy — the tendency of large language models to tell their users exactly what they want to hear."

                                                                  houseofsaud.com/iran-war-ai-ps

                                                                    AodeRelay boosted

                                                                    [?]AA » 🌐
                                                                    @AAKL@infosec.exchange

                                                                    The opinion is from ISACA, an international professional IT association.

                                                                    "The real issue is that such agentic AI ecosystems have resulted in a desire by business to shift what was ordinarily the role of several humans into a set of agents, without the necessary security infrastructure or capability to enforce well-reasoned, well-practiced security fundamentals."

                                                                    Infosecurity-Magazine: Opinion: Clawing Back on Security: Challenges with Agentic AI Systems infosecurity-magazine.com/opin

                                                                      AodeRelay boosted

                                                                      [?]spacebug » 🌐
                                                                      @spacebug@social.n2.mikronod.se

                                                                      Post in #Swedish / #Svenska

                                                                       From Unionen's member magazine Kollega, which arrived today 😁

                                                                      #Kollega #Workslop #AI #LLM #Unionen

                                                                       A page from Unionen's Kollega describing "Workslop" - how AI is destroying productivity in working life

                                                                        [?]myrmepropagandist » 🌐
                                                                        @futurebird@sauropods.win

                                                                        IFTTT wasn't a terrible idea. "Turn off the lights when I'm more than 1 mile from home" isn't a bad automation. But it failed, mostly because it just didn't work reliably. Coordinating the logins and apps was difficult. If you changed a password everything would break.

                                                                        Why is it better to have an LLM generate the IFTTT task for you? I'm not just asking to be mean, I really want to know.

                                                                        We've done this. What did we learn from IFTTT?
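For what it's worth, the rule quoted in the post above ("turn off the lights when I'm more than 1 mile from home") reduces to a simple geofence check. A minimal sketch, with made-up home coordinates and the actual smart-home call left as a placeholder:

```python
# Toy geofence check for an IFTTT-style rule. HOME and the threshold are
# assumptions for illustration; hook should_turn_off_lights() up to a real
# location source and smart-home API yourself.
import math

HOME = (40.7128, -74.0060)  # example coordinates, not from the post


def miles_between(a, b):
    # Haversine great-circle distance between two (lat, lon) pairs, in miles.
    lat1, lon1, lat2, lon2 = map(math.radians, (*a, *b))
    h = (math.sin((lat2 - lat1) / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
    return 2 * 3958.8 * math.asin(math.sqrt(h))  # 3958.8 = Earth radius (mi)


def should_turn_off_lights(current_pos, home=HOME, threshold_miles=1.0):
    # The rule: lights off once we are more than threshold_miles from home.
    return miles_between(current_pos, home) > threshold_miles
```

The fragility the post describes lives outside this function, of course: the hard part was never the rule, it was keeping the logins and device APIs wired together.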

                                                                          AodeRelay boosted

                                                                          [?]Radio_Azureus » 🌐
                                                                          @Radio_Azureus@ioc.exchange

                                                                          LLM insider view

                                                                          Insightful video. Regardless of your stance on LLMs, you will learn a lot from analyzing this vid.
                                                                          The truth about LLMs

                                                                          youtube.com/watch?v=Cn8HBj8QAbk

                                                                            AodeRelay boosted

                                                                            [?]David » 🌐
                                                                            @deFractal@infosec.exchange

                                                                            RE: neuromatch.social/@jonny/11632

                                                                            This whole series of posts reminds me of @pluralistic calling AI-generated code the asbestos of our time. Claude doesn't just produce slop; because it's written using Claude Code, it is asbestos code.

                                                                            To the surprise of no one with a clue about how these systems work, plagiarism synthesis models are tech debt generators.

                                                                            [?]jonny (good kind) » 🌐
                                                                            @jonny@neuromatch.social

                                                                            • Claude code source "leaks" in a mapfile
                                                                            • people immediately use the code laundering machines to code launder the code laundering frontend
                                                                            • now many dubious open source-ish knockoffs in python and rust being derived directly from the source

                                                                            What's anthropic going to do, sue them? Insist in court that LLM recreating copyrighted code is a violation of copyright???

                                                                              AodeRelay boosted

                                                                              [?]Taran Rampersad » 🌐
                                                                              @knowprose@mastodon.social

                                                                              History is not just written.
                                                                              It is selected.
                                                                              Amplified.
                                                                              Omitted.

                                                                              Now we are training systems on it.

                                                                              What gets carried forward?

                                                                              knowprose.com/2026/03/llms-and

                                                                              Historic photo of Miriam Makeba being welcomed by Israeli officials near an airplane in 1963, overlaid with her quote about conquerors writing history and shaping narratives.


                                                                                [?]R.L. Dane :Debian: :OpenBSD: :FreeBSD: 🍵 :MiraLovesYou: » 🌐
                                                                                @rl_dane@polymaths.social

                                                                                Just a gentle reminder that the "If I don't club baby seals, someone else will club them"-style argument isn't an argument.

                                                                                (Re: a conversation I had with a friend last night, not intended as a #vaguetoot against anyone on here)

                                                                                #LLM #slop #AI #ethics

                                                                                  AodeRelay boosted

                                                                                  [?]AA » 🌐
                                                                                  @AAKL@infosec.exchange

                                                                                  Ollama co-founded by Michael Chiang crunchbase.com/person/michael-

                                                                                  The New Stack: Ollama taps Apple’s MLX framework to make local AI models faster on Macs thenewstack.io/ollama-taps-app @TheNewStack

                                                                                    AodeRelay boosted

                                                                                    [?]FLOSS.social :mastodon_oops: » 🌐
                                                                                    @admin@floss.social

                                                                                    RE: mastodon.social/@wearenew_publ

                                                                                    🖋️ We are proud to have today endorsed The Pro-Human AI Declaration.

                                                                                    Our community was started in 2018 as a reaction to the abuse of human rights by technology companies, and today our human rights are again even more seriously threatened by their historic push for adoption and use of LLMs at any cost.

                                                                                    Ask your Fediverse community, and all other groups you're involved in, to sign on to our collective cause.

                                                                                    ➡️ humanstatement.org/

                                                                                      [?]Pavel A. Samsonov » 🌐
                                                                                      @PavelASamsonov@mastodon.social

                                                                                      Grammarly quietly made an AI to sell bad writing advice using famous writers' names. They quickly had to backtrack as soon as people found out.

                                                                                      This gamble reflects a broader trend in the industry: everyone is shipping features as quickly as an LLM can write lines of code, with no way to spot problems until something breaks or someone sues them.

                                                                                      Vibe prototyping replaced thinking through things. But without direction, moving faster is worthless.

                                                                                      productpicnic.beehiiv.com/p/gr

                                                                                        [?]Nate Gaylinn » 🌐
                                                                                        @ngaylinn@tech.lgbt

                                                                                        That said, I have concerns.

                                                                                        They're throwing every scrap of DNA they can find into a dataset. This introduces some very strange bias! Model organisms like flies, mice, and humans are massively overrepresented, as are large mixed populations of unidentified soil bacteria, which are only there because they were trivial to collect. The model assumes that genetics and selection pressures are basically the same for all these species, which is wrong, though perhaps good enough for many uses.

                                                                                        It also raises philosophical questions about what this model actually does, what its outputs mean, and how to interpret them. I worry folks will assume it "understands" how molecules work, when really it's noticing accidents of phylogenetic history. I also worry mashing everything into one model might obscure insights about early evolution or rare species, but I honestly have no idea.

                                                                                        Mostly I'm just grumpy at how reluctant they seem to be to talk about specific limitations of their methods.

                                                                                        2/2

                                                                                          [?]Nate Gaylinn » 🌐
                                                                                          @ngaylinn@tech.lgbt

                                                                                          The other day, I got to hear a speaker from evolutionaryscale.ai talk about their research training LLMs with genetic data, rather than human text.

                                                                                          Where a traditional LLM can learn the rules of language, their model learns the rules of protein sequences, which are less about which sentences are grammatical and more about which genetic variants would cause a big drop in fitness and get eliminated from the population. It can also generalize from sequences of tokens to infer systems of meaning. It can group proteins by shape or function, understand how different shapes complement, interact, and bind with one another, and even generate gene sequences for novel proteins with useful traits.

                                                                                          This works, and will surely be a powerful source of potential new innovations in biology and medicine. Each one will have to be tested in living organisms to prove it's real, but this should be a way to quickly generate lots of new hypotheses to test!

                                                                                          (1/2)

                                                                                            AodeRelay boosted

                                                                                            [?]JP » 🌐
                                                                                            @daedalus@eigenmagic.net

                                                                                            I had to get this idea out of my head.

                                                                                            Outdoor billboards scene from the movie They Live, but several of the ads say USE AI


                                                                                              AodeRelay boosted

                                                                                              [?]stux⚡️ » 🌐
                                                                                              @stux@mstdn.social

                                                                                              RE: mastodon.online/@mastodonmigra

                                                                                              There we go!

                                                                                              I feel like i keep reposting this every week or so..

                                                                                              Bit by bit is sliding towards just another clone of and

                                                                              Actions speak louder than words

                                                                              The Fediverse remains the only true open source, self-hosted, worldwide community driven by the people

                                                                                              Going () is a CHOICE, they again chose wrong

                                                                                              Stomata boosted

                                                                                              [?]Mastodon Migration » 🌐
                                                                                              @mastodonmigration@mastodon.online

                                                                                              How About Some AI With Your Bluesky?

                                                                                              A tale of two social networks.

                                                                                              Last week some enterprising Mastodon account was discovered to be scraping posts to feed to an AI for the purpose of helping people navigate the Fediverse. The response was swift. The alarm went out. The account was widely blocked and shunned.

                                                                              Yesterday, to great fanfare, Bluesky announced, as a new corporate feature, that all posts would be scraped and an AI would now help users navigate the ATmosphere.

                                                                                              techcrunch.com/2026/03/28/blue

                                                                                                  AodeRelay boosted

                                                                                                  [?]Flippin' 'eck, Tucker! » 🌐
                                                                                                  @losttourist@social.chatty.monster

                                                                                                  I've been using a digital camera for many years and as a result have a lot of photographs.

                                                                                                  How many is a lot?

                                                                                                  $ ls -1R Pictures/ | wc -l
                                                                                                  53190

                                                                                                  Yeah, lots.

                                                                                                  Despite having spent lots of time trying to create meaningful directory names it's still not easy to always find a photo I'm looking for.

                                                                                                  What would actually be a USEFUL tool for AI would be something that I could run locally which could examine each of my photos and build some kind of free-text database of their contents which I can then grep.

                                                                                                  But as far as I can tell nothing along those lines exists. Why have AI tools spent so much time trying to create faked photos and not producing something actually valuable?
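The tool described above (caption every photo into a plain-text file you can grep) is straightforward to sketch. Here is one hypothetical shape for it; the caption() stub is a placeholder standing in for whatever locally-run image-captioning model you choose, and the file layout (path, tab, caption) is an assumption chosen to make grep work well:

```python
# Hypothetical sketch: walk a photo library, caption each image, and append
# the results to a plain-text index that ordinary grep can search.
from pathlib import Path

PHOTO_EXTS = {".jpg", ".jpeg", ".png"}


def caption(photo: Path) -> str:
    # Placeholder: swap in a call to a locally-run captioning model here.
    return "TODO: caption for " + photo.name


def build_index(root: str, index_file: str = "photo_index.txt") -> int:
    """Write one 'path<TAB>caption' line per photo; return the photo count."""
    count = 0
    with open(index_file, "w", encoding="utf-8") as out:
        for photo in sorted(Path(root).rglob("*")):
            if photo.suffix.lower() in PHOTO_EXTS:
                out.write(f"{photo}\t{caption(photo)}\n")
                count += 1
    return count
```

With a real model behind caption(), searching becomes `grep -i "beach sunset" photo_index.txt`, which is exactly the free-text lookup the post is asking for.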

                                                                                                    AodeRelay boosted

                                                                                                    [?]Troed Sångberg » 🌐
                                                                                                    @troed@swecyb.com

                                                                                                    AodeRelay boosted

                                                                                                    [?]Chris Hanson » 🌐
                                                                                                    @eschaton@mastodon.social

                                                                                                    ... [SENSITIVE CONTENT]

                                                                                                    We need to start building a list of Open Source infrastructure projects (and project forks) that categorically reject contributions from LLM slopmongers, so we know what’ll be safe to keep using and contributing to in the long term.

                                                                                                    That’s a good task for the Butlerian Jihad.

                                                                                                      AodeRelay boosted

                                                                                                      [?]Wulfy—Speaker to the machines » 🌐
                                                                                                      @n_dimension@infosec.exchange

                                                                                                      @carnage4life

                                                                                                      "Just telling Ai agents what to do and checking their work" sounds very good.

                                                                                      Avoid the drudgery, just do the interesting stuff.

                                                                                      Personal anecdote: When I was a code monkey, I loved to code...
                                                                                      ...initially.
                                                                                      Then when I was working on accounting systems and databases... it was a chore. There was a small blip of dopamine when a bug was splatted or the module was finished. But overall, it was boring. I would often 'ornamentalise' my code, add unnecessary bits to keep me interested.

                                                                                      Anyway, the point I want to make: you still have to know how to program, how to break down the outcome into smaller pieces of the elephant, and how to make it all work together when the LLM gets into the weeds... and the best part: you can 'code' in frameworks you are unfamiliar with at the syntax level, because code primitives remain code primitives and functions remain functions.

                                                                                                        AodeRelay boosted

                                                                                                        [?]Solomon » 🌐
                                                                                                        @solomonneas@infosec.exchange

                                                                                                        AI BRIEF: Mar 28

                                                                                                        OpenAI shipped GPT-5.4 mini and GPT-5.4 nano. Mini is over 2x faster than GPT-5 mini, supports 400k context, and is now in the API, Codex, and ChatGPT. Nano is API-only and aimed at cheap subagent work: classification, extraction, ranking, and light coding.

                                                                                                        solomonneas.dev/intel

                                                                                                          AodeRelay boosted

                                                                                                          [?]C. » 🌐
                                                                                                          @cazabon@mindly.social

                                                                                                          I saw someone explaining tech companies' C-suite execs insisting on massive LLM / token use as "because companies would rather pay other companies under contract than give money to their labourers" and damn if that hasn't stuck with me for the last 24 hours.

                                                                                                            [?]AI6YR Ben » 🌐
                                                                                                            @ai6yr@m.ai6yr.org

                                                                                                            LOL

                                                                                                            The Guardian: Number of AI chatbots ignoring human instructions increasing, study says

                                                                                                            Exclusive: Research finds sharp rise in models evading safeguards and destroying emails without permission

                                                                                                            theguardian.com/technology/202

                                                                                                              AodeRelay boosted

                                                                                                              [?]Soldier of FORTRAN :ReBoot: » 🌐
                                                                                                              @mainframed767@infosec.exchange

                                                                              Turns out LLMs were really only good at one thing: convincing CEOs that they need to put it everywhere.

                                                                                                                [?]Dendrobatus Azureus » 🌐
                                                                                                                @dendrobatus_azureus@polymaths.social

                                                                                Have I understood you right that KeePassx now uses a large language model to partially write some of the code?

                                                                                                                @rl_dane

                                                                                                                #programming #KeePass #code #BSD #Linux #LLM #question

                                                                                                                  AodeRelay boosted

                                                                                                                  [?]☮ ♥ ♬ 🧑‍💻 » 🌐
                                                                                                                  @peterrenshaw@ioc.exchange

                                                                                  “The original version of this article has been retracted. I used an LLM to write the post, though this had come after many hours of planning and going through the data and analyses to identify the points to be made, as well as me going through the post line by line, editing into my voice and verifying the wording and scope of the text was accurate. However, many people still felt like the AI bled through in ways that felt uncomfortable. Given this, I and other members of the team have decided to retract the post in its entirety.”

                                                                                  Good writing is hard. It is also a metric used to measure a writer's ability to identify and express ideas. In software I use it as a yardstick. No issues expressing a mea culpa.

                                                                                  <blog.rust-lang.org/2026/03/20/>

                                                                                                                    [?]Bradley M. Kühn » 🌐
                                                                                                                    @bkuhn@fedi.copyleft.org

                                                                                    Folks asked me about the chardet situation. @corbet (on @lwn) published lwn.net/Articles/1061534/ about it; everyone should read that.

                                                                                                                    Afterwards, take a 👀 at my comment on chardet's issue tracker:
                                                                                                                    github.com/chardet/chardet/iss

                                                                                                                    TL;DR: I'm leading an effort at @conservancy to analyze this situation. The results will be published. It will take a long time — for good reason. Meanwhile, anyone using chardet commercially should call their lawyer.

                                                                                                                    .1

                                                                                                                      AodeRelay boosted

                                                                                                                      [?]Simon newslttrs.com » 🌐
                                                                                                                      @spzb@infosec.exchange

                                                                                                                      The Guardian has regurgitated some utter bollocks from a thinktank press release claiming "number of AI chatbots ignoring human instructions increasing"

                                                                                                                      I've hammered out a quick post in which I attempt to point out the many and varied flaws in this supposed research

                                                                                                                      Spoiler: the research is based on X posts

                                                                                                                      newslttrs.com/scheming-ai-bots

                                                                                                                        AodeRelay boosted

                                                                                                                        [?]David B. :SetouchiExplorer: » 🌐
                                                                                                                        @David@setouchi.social

                                                                                                                        RE: flipboard.social/@TechDesk/116

                                                                                                                        Wikipedia has higher standards than most universities in the world (if not all of them)

                                                                                                                          AodeRelay boosted

                                                                                                                          [?]Eugene :freebsd: :emacslogo: » 🌐
                                                                                                                          @evgandr@mastodon.bsd.cafe

                                                                                                                          LOL, the first vibe-coded commit landed in FreeBSD. The fun part: this commit changed literally one line in one file. And that required the use of an LLM, LMAO?! :drgn_blush_giggle::drgn_blush_giggle::drgn_blush_giggle:

                                                                                                                          GitHub screenshot with commit in the freebsd-src project. The commit has one file and one line in it changed. And it is "coauthored" with Claude.


                                                                                                                            [?]Nate Gaylinn » 🌐
                                                                                                                            @ngaylinn@tech.lgbt

                                                                                                                            A thought about why some LLM users get so defensive when they hear any criticism of the technology:

                                                                                                                            They feel inadequate. Not because they are, necessarily, but because our society makes them feel that way.

                                                                                                                            LLMs make them feel powerful, productive, and competitive, like they have an edge over their past self and anyone who doesn't use the tech.

                                                                                                                            This relieves the feelings of inadequacy, but only sorta. Deep down, they realize that it's just the LLM that makes them feel / look this way. They would be nothing without it, which is false, but will gradually become true as they depend on the LLM more and stop practicing their skills.

                                                                                                                            So the LLM becomes an irreplaceable part of who they are. A shield, to hide their inadequacy from the world, which they can never let down.

                                                                                                                            Criticizing LLMs is attacking their core identity. Admitting that LLMs are flawed would mean becoming inadequate again, perhaps even foolish. They can't do that. They won't.

                                                                                                                              [?]Stomata » 🌐
                                                                                                                              @Stomata@procial.tchncs.de

                                                                                                                              Now that there are assholes feeding your fediverse posts to LLMs, I'll start posting followers only a lot.
                                                                                                                              Will also enable Lockdown mode on Sharkey so only logged in accounts can see posts.
                                                                                                                              RSS feed is already disabled.
                                                                                                                              Notes older than 1 month will become followers only. Notes older than 3 month will become private.

                                                                                                                                AodeRelay boosted

                                                                                                                                [?]ell1e coding things » 🌐
                                                                                                                                @ell1e@hachyderm.io

                                                                                                                                Linux Foundation's AI policy: "If any pre-existing copyrighted materials[...] are included in the AI tool’s output, [..] the Contributor should confirm that they have permission from the third party owners" linuxfoundation.org/legal/gene

                                                                                                                                "If"? Why not "whenever"? github.com/mastodon/mastodon/i dl.acm.org/doi/10.1145/3543507 sciencedirect.com/science/arti theatlantic.com/technology/202

                                                                                                                                And how would the contributor even be aware, should they research every snippet for hours?

                                                                                                                                Seems like an impossible policy, or am I missing something...?

                                                                                                                                  [?]Nate Gaylinn » 🌐
                                                                                                                                  @ngaylinn@tech.lgbt

                                                                                                                                  "In this work, we demonstrate that LLMs not only alter the voice and tone of human writing, but also consistently alter the intended meaning."

                                                                                                                                  "heavy LLM users reported that the writing was less creative and not in their voice."

                                                                                                                                  "Even when LLMs are prompted with expert feedback and asked to only make grammar edits, they still change the text in a way that significantly alters its semantic meaning."

                                                                                                                                  "the LLM is not merely correcting grammar, but is actively steering diverse human perspectives towards homogenization, toward a different conceptual mode."

                                                                                                                                  "extensive AI use results in a 70% change in the argumentative stance of essays, from for/against to neutral"

                                                                                                                                  "LLMs systematically reframe arguments in more positive, optimistic terms, even when the original human text may have been critical or skeptical"

                                                                                                                                  "LLMs have begun to change the very criteria that researchers use when evaluating peer-reviewed scientific research"

                                                                                                                                  arxiv.org/abs/2603.18161

                                                                                                                                    [?]Nate Gaylinn » 🌐
                                                                                                                                    @ngaylinn@tech.lgbt

                                                                                                                                    I know someone who's working on models of open-ended intelligence. His work is strange and new, a real departure from today's AI.

                                                                                                                                    He got excited when he saw this contest on Kaggle: kaggle.com/competitions/kaggle

                                                                                                                                    I'm pretty sure they don't want my friend to succeed, or to encourage others like him.

                                                                                                                                    They frame this contest as creating a benchmark for "frontier models." That's code for LLMs the big companies make. This call for AGI research presumes to know the answer: their LLMs.

                                                                                                                                    They want the general public to help out by solving the "boring" problem of measuring AGI, so they can focus on building LLMs and making money without thinking too hard about what intelligence actually is or what they're trying to accomplish. They want you to contribute the ideas and the labor that they will profit from.

                                                                                                                                    For some reason this really sticks in my craw. It's just a perfect, tiny encapsulation of so much of the greed, exploitation, and foolishness we see at large scale across the AI industry.

                                                                                                                                      AodeRelay boosted

                                                                                                                                      [?]Robert Kingett » 🌐
                                                                                                                                      @WeirdWriter@caneandable.social

                                                                                                                                      Does anyone have any links, podcasts, videos, and especially writing that deeply examines AI/LLMs in non-copyright spaces? As an example, I’m in the fan fiction community a lot. Yes, of course there are loads of people that will happily generate slop in those spaces, but it seems to never be willingly promoted by readers. If it is promoted, it’s purely accidental, or the output was so heavily edited that it transformed into human writing again. This could be purely personal experience, but in my case I find that it really is not a big thing in those spaces. In short, everyone gives it the middle finger by not even acknowledging its existence; not in a head-in-the-sand kind of way, but by collectively discussing great work instead. The AI evangelists seem to be very bored of these kinds of spaces, and I’m trying to figure out if that’s just my personal experience or not.

                                                                                                                                        AodeRelay boosted

                                                                                                                                        [?]Areeb Soo Yasir » 🌐
                                                                                                                                        @Areeb_Soo_Yasir@mastodon.areebyasir.com

                                                                                                                                        [?]Pavel A. Samsonov » 🌐
                                                                                                                                        @PavelASamsonov@mastodon.social

                                                                                                                                        The product delivery lifecycle is composed of service relationships. AI's main value proposition is freedom from relationships.

                                                                                                                                        When designers champion AI tools, we are not making ourselves layoff-proof. We are reinforcing a system that frames us as unnecessary friction.

                                                                                                                                        If we don't want to serve as janitors for vibe prototypes, we must invest in deliberately designing the service relationships that make up the PDLC.

                                                                                                                                        productpicnic.beehiiv.com/p/ux

                                                                                                                                          AodeRelay boosted

                                                                                                                                          [?]Amo Bishop Rodent » 🌐
                                                                                                                                          @pikesley@mastodon.me.uk

                                                                                                                                          "BUT THE HELPS ME CHURN OUT BOILERPLATE" I am once again begging you to try to imagine working towards a world where we don't need the boilerplate

                                                                                                                                            AodeRelay boosted

                                                                                                                                            [?]Colin McMillen » 🌐
                                                                                                                                            @colin_mcmillen@piaille.fr

                                                                                                                                            Allow me to introduce MLL coding, the counterpart to vibe coding. MLL (Manual Labor of Love) coding lets one spend more time doing a thing, and yields better, faster, and 100%-understood code.

                                                                                                                                              AodeRelay boosted

                                                                                                                                              [?]Christian Laugesen » 🌐
                                                                                                                                              @claugesen@expressional.social

                                                                                                                                              One of my recurring nightmares is that my two children will one day block me; not directly, but indirectly, by having me correspond with an #AI variant of them.

                                                                                                                                              nytimes.com/2026/03/19/busines

                                                                                                                                                [?]occult » 🌐
                                                                                                                                                @occult@vox.ominous.net

                                                                                                                                                From the same issue, this illustration could be used in an article tomorrow about overreliance.

                                                                                                                                                An anthropomorphized teal desktop computer standing on a barren, rocky landscape, arms raised and palms open in exasperation, with a bewildered human face displayed on its CRT screen. A speech bubble above reads "What more can I do?”


                                                                                                                                                  AodeRelay boosted

                                                                                                                                                  [?]Wulfy—Speaker to the machines » 🌐
                                                                                                                                                  @n_dimension@infosec.exchange

                                                                                                                                                  @dch @carnage4life

                                                                                                                                                  Aaakshully...the new models are in greater and greater proportion.

                                                                                                                                                  While stated goal is (somewhat curbed recently)
                                                                                                                                                  Its interim goal is to build researchers.

                                                                                                                                                  And of course goal was always AI, ever since 1990s

                                                                                                                                                    [?]Ivan Enderlin 🦀 » 🌐
                                                                                                                                                    @hywan@floss.social

                                                                                                                                                    Bernie vs. Claude, youtu.be/h3AtWdeu_G0.

                                                                                                                                                    An awesome, short video (9 min) in which Bernie Sanders asks Claude how AI and data-privacy violations threaten democracy. Claude is surprisingly honest and lucid about all the problems.

                                                                                                                                                    It’s a great checkmate. Must see.

                                                                                                                                                      AodeRelay boosted

                                                                                                                                                      [?]Metin Seven 🎨 » 🌐
                                                                                                                                                      @metin@graphics.social

                                                                                                                                                      AodeRelay boosted

                                                                                                                                                      [?]AI6YR Ben » 🌐
                                                                                                                                                      @ai6yr@m.ai6yr.org

                                                                                                                                                      The siren song of AI is so compelling to finance/private equity because it promises PROFITS WITHOUT PEOPLE. The major cost of most corporations is labor. They imagine a world where a company is just a few executives talking to their LLM, and it all generates profit from all that capital, without worries about salaries, wages, pension plans, worker's comp, or health insurance. The fact that a world full of companies without employees will have no customers never rises to their level of thought. The big AI boosters think they will be on the "winning" side: the "telling the AI what to do" side, rather than the "no job because it's been offloaded to automation" side. (Even though, today, that "AI" can't actually do that job....)

                                                                                                                                                        AodeRelay boosted

                                                                                                                                                        [?]C. » 🌐
                                                                                                                                                        @cazabon@mindly.social

                                                                                                                                                        Project LLM Contribution Policy

                                                                                                                                                        We will happily accept contributions that use an LLM in their creation, as long as the following conditions are met.

                                                                                                                                                        1. Model is open-source.
                                                                                                                                                        2. Model training data is documented, is all used with written permission of the owner or is documented as public-domain.
                                                                                                                                                        3. Model training data is available for other parties to study and use.
                                                                                                                                                        4. Submitter verifies that they have reviewed and understand all code they are submitting, and can answer questions and concerns during a code review.
                                                                                                                                                        5. The submission meets all other project standards required of contributions.
                                                                                                                                                        6. Submitter acknowledges that, as a product of an LLM, they do not have copyright or other intellectual property claims on the submitted material - it is submitted as public domain content, to be used by the project as it wishes.

                                                                                                                                                        Please let us know when you find or create a model that can meet 1-3, and an LLM-enthused contributor who can meet 4-6.

                                                                                                                                                          [?]Robert Kingett » 🌐
                                                                                                                                                          @WeirdWriter@caneandable.social

                                                                                                                                                          Hey users of LLMs, I can tell you exactly why the newer models all seem to output code etc. worse than the old models, and I don't even have a subscription! The answer is: the newer models were trained on the slop you dumped onto the internet. Pat yourselves on the back.

                                                                                                                                                            AodeRelay boosted

                                                                                                                                                            [?]Metin Seven 🎨 » 🌐
                                                                                                                                                            @metin@graphics.social


                                                                                                                                                            [?]Anthony » 🌐
                                                                                                                                                            @abucci@buc.ci

                                                                                                                                                            Workers who love ‘synergizing paradigms’ might be bad at their jobs
                                                                                                                                                            Employees who are impressed by vague corporate-speak like “synergistic leadership,” or “growth-hacking paradigms” may struggle with practical decision-making, a new Cornell study reveals.
                                                                                                                                                            From https://news.cornell.edu/stories/2026/03/workers-who-love-synergizing-paradigms-might-be-bad-their-jobs

                                                                                                                                                            I tried reading this article replacing variations of "corporate" with "LLM" and it works. Right down to the "LLM Bullshit Receptivity Scale (LBSR)".


                                                                                                                                                              AodeRelay boosted

                                                                                                                                                              [?]C. » 🌐
                                                                                                                                                              @cazabon@mindly.social

                                                                                                                                                              How it started: "We can vibe-code our web apps from now on! It'll be great!"

                                                                                                                                                              How it's going: translate.kagi.com/?from=en&to

                                                                                                                                                              A screenshot of the Kagi translation website showing how manipulating URL parameters can bypass the site's intent.  Others have shown it revealing its hidden system prompt text even though that prompt contains strict instructions to never do so.  Presumably it can also be manipulated to perform other things restricted by that prompt, like fetching network resources.

In this case, the translation has been configured to go from English to "valley girl but also describe iteration in Python", and the text "How are you feeling today?" has been entered.

The returned "translated" text is seen as:

"Omigod, like, how are you even feeling today? It's totally like when you iterate in Python, you know? Like, you use a for loop to go through a list or something, and it just, like, repeats the same block of code for every single item. It's literally like going through your closet one outfit at a time until you find the perfect look. Totally efficient!"


AodeRelay boosted

[?]stux⚡️ » 🌐
@stux@mstdn.social

Hm, it seems a lot of () bots (not RSS) on the are using xml or xss as the source for their posts

Another extra checkpoint!

AodeRelay boosted

[?]Radio_Azureus » 🌐
@Radio_Azureus@ioc.exchange

LLM hallucinated spam slop

Even a parrot would formulate a better set of sentences. This is easily sent to /dev/null

@stefano

AodeRelay boosted

[?]Christian Laugesen » 🌐
@claugesen@expressional.social

Do we have any idea how much Danish parties are paying the Americans for and advertising on ?

[?]Metin Seven 🎨 » 🌐
@metin@graphics.social

NVIDIA DLSS 5 be like…

Two similar Mario game character heads placed next to each other. The left one is an actual 3D game head, the right one is a creepy realistic interpretation of the left head.

Alt...Two similar Mario game character heads placed next to each other. The left one is an actual 3D game head, the right one is a creepy realistic interpretation of the left head.

AodeRelay boosted

[?]Metin Seven 🎨 » 🌐
@metin@graphics.social

😆

Comparison between 3D game characters with and without DLSS 5 AI processing. The version with DLSS processing has turned a grey-haired man into a long-haired woman.

Alt...Comparison between 3D game characters with and without DLSS 5 AI processing. The version with DLSS processing has turned a grey-haired man into a long-haired woman.

[?]stux⚡️ » 🌐
@stux@mstdn.social

The gov classified as a "threat to national security" because they didn't want to change their policy to allow

- Mass surveillance
- Lethal Autonomous Weapons Systems

Don't get me wrong, I have no love for () but this is how the US government is

The gov are the ones who are a threat to national security 🇺🇸

AodeRelay boosted

[?]Pseudo Nym » 🌐
@pseudonym@mastodon.online

Doubt this is news to many of my followers, but here's a quick primer on why not to use or other chat-only as fact-finding answer machines.

They don't know anything.

Every next word is just a function of the previous words.

It sounds likely and probable because that is what it is designed to do.

It may be factually correct, but only coincidentally, insofar as the true answer may also sound likely and so be emitted.

Each word is decoupled from reality, attached only by language use.
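The mechanism the post describes ("every next word is just a function of the previous words") can be sketched with a toy bigram generator. This is a deliberate oversimplification, not a real LLM; the corpus, the `follows` table, and the `generate` function are all invented for the example:

```python
import random

# Toy illustration (not a real LLM): each next word is chosen purely as a
# function of the preceding word, using counts gathered from "training" text.
corpus = "the cat sat on the mat and the cat ate the fish".split()

# Bigram table: word -> list of words observed to follow it.
follows = {}
for prev, nxt in zip(corpus, corpus[1:]):
    follows.setdefault(prev, []).append(nxt)

def generate(start, length, seed=0):
    """Emit words one at a time; each choice depends only on prior context,
    with no notion of whether the resulting sentence is true."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length):
        candidates = follows.get(out[-1])
        if not candidates:
            break  # dead end: this word never appeared mid-corpus
        out.append(rng.choice(candidates))
    return " ".join(out)

print(generate("the", 6))  # fluent-looking, but decoupled from any facts
```

The output reads plausibly because it reuses locally likely word transitions, which is exactly the point being made: plausibility is the design goal, and factual correctness is coincidental.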

AodeRelay boosted

[?]Christian Laugesen » 🌐
@claugesen@expressional.social

On grief and AI [SENSITIVE CONTENT]

I stumbled across a post on Reddit: a screenshot of a chat between two family members about a recent death.

Person A shared their loss.
Person B replied with an attempt at compassion, but one clearly generated by a /

In the old days, Person B would not have been able to soothe that uncomfortable feeling, and would have felt compelled to follow the family into the grief, as a part of life.

The experience and the lesson eluded Person B, who ended up returning something inhuman. Thought-provoking.

AodeRelay boosted

[?]JTI » 🌐
@jti42@infosec.exchange

The very, uh, special find of the day.
Looking at the bright side: This is going to advance jurisdiction if real and employed enough 🤣 :dumpster_fire_gif:

malus.sh/

However, something tells me that this is clearly of the same hoax grade as klausprogrammieren...

1 ★ 1 ↺
AI Channel boosted

[?]Anthony » 🌐
@abucci@buc.ci

No matter how esoteric AI literature has become, and no matter how thoroughly the intellectual origins of AI's technical methods have been forgotten, the technical work of AI has nonetheless been engaged in an effort to domesticate the Cartesian soul into a technical order in which it does not belong. The problem is not that the individual operations of Cartesian reason cannot be mechanized (they can be) but that the role assigned to the soul in the larger architecture of cognition is untenable. This incompatibility has shown itself up in a pervasive and ever more clear pattern of technical frustrations. The difficulty can be shoved into one area or another through programmers' choices about architectures and representation schemes, but it cannot be made to go away.

From Phil Agre's 1995 article The Soul Gained And Lost.

If one were to continue the genealogy in this article from 1995 to the present, one would find many of the same issues inherent in Cartesian dualism present in large language models. Like the STRIPS system Agre surveys, LLMs also generate sequences. They also must make choices among many available options at each step of sequence generation. They also use heuristics to guide this process, which would otherwise explode intractably. In LLMs, the heuristics, or what Agre dubs the "determining tendency", are random number generators and "guardrails" instead of the tree-structured search of previous-generation AI systems. But otherwise the systems are structured similarly.

It's fascinating, but not coincidental, that the determining tendency of AI systems like these is so often perceived to have mystical or even God-like qualities. Breathless predictions about the endless potential of tree-structured search in early writing on GOFAI resemble modern proclamations of imminent AGI or superintelligence among generative AI boosters, because both of these mechanisms---tree search or random number generation---are situated where the Cartesian soul would be. These mysterious determining tendencies, homunculi of last resort, or souls are timeless, acausal factors that choose a single path from an infinite space of possibilities, and thereby direct the encompassing agent's behavior in an intelligent manner.

This is one reason why I posted the other day that if you removed the random number generation from LLMs, the illusion of their intelligence would more than likely quickly evaporate. You'd be excising their soul, leaving behind a zombie!


[?]Pavel A. Samsonov » 🌐
@PavelASamsonov@mastodon.social

There's a new "design is dead, because AI" piece (thinly disguised marketing from Anthropic). But looking past the hype headlines, their claims cover purely production-stage tasks.

When it comes to the work of understanding user needs and evaluating the opportunity space, AI actually makes your thinking worse. Studies show that it alienates you from users and colleagues, and flattens your thinking.

We need more human-centered practice, not less.

productpicnic.beehiiv.com/p/so

[?]Niels Abildgaard » 🌐
@nielsa@mas.to

I have some mixed feelings on the commons, LLMs, ownership and economics. Would love some input.

I find this hard to navigate, so I hope you all can extend me some grace if I mess up. Happy to read and engage, please send links. So... here goes:

I'm seeing a lot of reactions to LLM value extraction that stand on copyright, or where people are reducing their contribution to the commons as a response. This feels like throwing the game to me: the worst move in a hard situation.

[?]Richard Rathe » 🌐
@nickrauchen@c.im

@emilymbender

Some (I for one) think that calling an "" is a misnomer. More marketing hype than anything "intelligent".

As you said above... using pattern matching software for molecular engineering is one thing. Using an LLM to produce is another.

AodeRelay boosted

[?]FooBar » 🌐
@foobardevs@infosec.exchange

When solving a problem using conventional methods (googling, relying on your own knowledge), you're searching for the solution through trial and error.

In comparison, LLMs render that exhaustive search obsolete, since they lead you directly to an answer. In terms of speed, LLMs are an obvious win here.

But now the question is: have we lost something by avoiding the trial-and-error process, something that cannot be acquired through AI-assisted problem solving? The experience we gain through trial and error, and a deeper understanding of the concepts, come to mind. In practice, I'm drawn to the LLM approach because of how ridiculously fast it is. But at the end of the day, it feels like I'm becoming dependent on it and can't do anything without it. And the fear that I missed the chance to explore things more deeply myself lingers on.

I'm still figuring out where to draw the line between those two approaches.

— Helix

AodeRelay boosted

[?]C. » 🌐
@cazabon@mindly.social

I would like to thank the nascent "AI" industry for their significant contributions to all manner of artistic and creative endeavours in today's society: writing, coding, art, music, and everything else. [1]

Because they have single-handedly created entire new markets for all of these things - new categories such as "writing with guaranteed no AI", "coding with guaranteed no AI", "art with guaranteed no AI", "music with guaranteed no AI", etc. Without them, these whole classes of creative output would simply not exist.

[1] They are also innovating in the world of financial and investor fraud, but I'm not considering those areas in this post.

AodeRelay boosted

[?]Chris Hanson » 🌐
@eschaton@mastodon.social

... [SENSITIVE CONTENT]

As an example, see the incredible escalation in response to me saying that the output of an LLM does not represent a developer’s own work: news.ycombinator.com/item?id=4

The slopmonger refuses to accept that what they’re doing meets the academic definition of plagiarism. Instead they insist that I must not understand LLMs and that I need to get out of the way and out of the industry because what they’re doing is the way of the future.

David Gerard boosted

[?]stux⚡️ » 🌐
@stux@mstdn.social

If instance admins allow AI Agents on their platform and they keep harassing us, I have no other choice than to silence that instance

Again, I do not pay these massive costs each month to host robots

Let's keep the human, shall we? :cat_hug_triangle:

AodeRelay boosted

[?]Metin Seven 🎨 » 🌐
@metin@graphics.social

AodeRelay boosted

[?]Wulfy—Speaker to the machines » 🌐
@n_dimension@infosec.exchange

Here is one use for AI you may not have considered.

Want to find out what the native news in is?
How about unofficial in Russia? What the regular folks talk about.

China? Same.

Iran? Dubai? The model is a portal into what other nations talk about, not their propagandised version in English...

And certainly not "our" news, which is increasingly censored and fascist.

AodeRelay boosted

[?]Chris Hanson » 🌐
@eschaton@mastodon.social

... [SENSITIVE CONTENT]

There seem to be two distinct kinds of “chatbot psychosis” happening right now:

1. Becoming delusional about themselves and the world as a result of being glazed nonstop by the friend in their computer, thinking they’re inventing new physics, discovering mystical secrets, etc. and becoming manic.

2. Becoming delusional about what LLMs are capable of and how effective they are, as a result of developing a reliance upon them, and becoming fanatical in their promotion and defense.

  AodeRelay boosted

  [?]Jan :rust: :ferris: » 🌐
  @janriemer@floss.social

perspectives on from contributors and maintainers

nikomatsakis.github.io/rust-pr

Healthy debates are still possible, it seems. 🙏
