
    • Jhex@lemmy.world
      1 day ago

it’s worse than you probably think… this is the claim from the garbage company Axios hired for this:

      Our simulations go beyond predicting outcomes — they shape them.

      sauce

      So it’s basically, tell me what you want the survey results to be

    • Hazor@lemmy.world
      2 days ago

      Right, like, … I can imagine how some of the sociopathic fools who tend to find themselves in executive positions could be fooled into thinking this was a sensible cost-saving measure… But anyone who’s capable of an ounce of reasoning, or who has any basic understanding of generative AI or statistics, shouldn’t need more than a few seconds to realize why this could not ever provide output that would reliably emulate a survey of actual humans.

  • merc@sh.itjust.works
    2 days ago

    Axios updated the story:

    Editor’s note: This story has been updated to note that Aaru is an AI simulation research firm.

    But still stands by their claim:

    New findings by Aaru, an AI simulation research firm, for Heartland Forward show that a majority of people trust their own doctors and nurses

    What kind of bullshit “fact checking” is this?

    “New findings by Smegma, an Xbox chatroom research firm, show that your mother is a woman of loose morals who has had sexual intercourse with dozens of Xbox gamers.”

    • Phoenixz@lemmy.ca
      2 days ago

      Pretty much this

Also, expect much more of this, if not the vast majority of opinion polls ending up like this

  • DragonTypeWyvern@midwest.social
    2 days ago

    “the idea is tantalizing”

    No the fuck it isn’t, and that’s not even a Fuck AI type opinion just basic fucking scientific principles

    • WhatAmLemmy@lemmy.world
      2 days ago

      Lying, cheating, stealing, exploitation and propaganda all sound “tantalizing” when you’re a criminally corrupt sociopath.

      We’re just lucky capitalism doesn’t reward sociopaths with wealth and power /s

  • Clent@lemmy.dbzer0.com
    2 days ago

In related news, a recent study concluded that I am not just the smartest person in the universe, but also the smartest that has ever been or will ever be.

  • hansolo@lemmy.today
    2 days ago

Yes, but how much of the training data is synthetic data? Because I expect this startup has no idea. Microsoft uses ML to crawl files on OneDrive to build aggregate models of document types, then uses that for LLM training.

    It’s just all slop all the way down, huh? Just a fuzzy picture of a fuzzy picture hit with the “sharpen” filter 20 times?

  • I Cast Fist@programming.dev
    2 days ago

    It’s ironic that the survey companies, who I thought wanted to avoid noise and bullshit, would pay for noise and bullshit that any RNG could fill.

  • Jankatarch@lemmy.worldOP
    2 days ago

    Alt text.

    A recent Axios story on maternal health policy referenced “findings” that a majority of people trusted their doctors and nurses. On the surface, there’s nothing unusual about that. What wasn’t originally mentioned, however, was that these findings were made up.

    Clicking through the links revealed (as did a subsequent editor’s note and clarification by Axios) that the public opinion poll was a computer simulation run by the artificial intelligence start-up Aaru. No people were involved in the creation of these opinions.

    The practice Aaru used is called silicon sampling, and it’s suddenly everywhere. The idea behind silicon sampling is simple and tantalizing. Because large language models can generate responses that emulate human answers, polling companies see an opportunity to use A.I. agents to simulate survey responses at a small fraction of the cost and time required for traditional polling.
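The idea is easy enough to sketch: prompt a model "as" each demographic persona and tally the answers as if they were respondents. Everything below is illustrative, not Aaru's actual method; the persona fields are made up, and a seeded random choice stands in for the LLM call, which is exactly the critics' point: the "respondents" answer however the generator is biased to answer.

```python
import random

# Hypothetical persona fields; a real "silicon sampling" run would feed
# these into an LLM prompt and parse the model's reply.
PERSONAS = [
    {"age": 34, "region": "Midwest", "occupation": "nurse"},
    {"age": 61, "region": "South", "occupation": "retiree"},
    {"age": 27, "region": "Northeast", "occupation": "student"},
]

QUESTION = "Do you trust your own doctor? (yes/no)"

def build_prompt(persona: dict, question: str) -> str:
    """Render a persona-conditioned survey prompt (the core of the technique)."""
    return (
        f"You are a {persona['age']}-year-old {persona['occupation']} "
        f"from the {persona['region']}. Answer the survey question "
        f"in one word.\n{question}"
    )

def fake_llm(prompt: str, rng: random.Random) -> str:
    """Stand-in for the model call; deliberately biased toward 'yes' to show
    that the 'poll result' is just the generator's bias read back out."""
    return rng.choices(["yes", "no"], weights=[0.8, 0.2])[0]

def run_poll(seed: int = 0) -> float:
    """Simulate the survey and return the share of 'yes' answers."""
    rng = random.Random(seed)
    answers = [fake_llm(build_prompt(p, QUESTION), rng) for p in PERSONAS]
    return answers.count("yes") / len(answers)

if __name__ == "__main__":
    print(f"simulated 'trust' share: {run_poll():.0%}")
```

Whatever weights you bake into the generator come straight back out as the "finding", which is why no amount of persona detail turns this into a measurement of actual humans.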

  • Tarogar@feddit.org
    2 days ago

    They were so busy thinking about the fact that they could that they didn’t stop to think if they should. How much of an idiot can you be?

    • Burninator05@lemmy.world
      22 hours ago

I don’t know that Axios was ever the most trustworthy source out there, but if they’re doing this, then less trustworthy sources are also doing it.

  • dadarobot@lemmy.ml
    2 days ago

Wasn’t it Axios that had that controversy recently where some GitHub admin ended up in a flame war with an AI, and Axios made up quotes?

    Or was that someone else?

  • Retail4068@lemmy.world
    2 days ago

I’m still convinced Axios made up the “truck owners don’t use their shit right” claim back in 2018, and it caused 75% of the internet hate for trucks. To this day, after asking repeatedly, I still have not found a single lick of evidence outside one of their hit pieces.

      • Retail4068@lemmy.world
        2 days ago

Cite one source. I bet when you Google whatever random website to support your already-formed view, it leads back to nowhere or Axios.

        • michaelmrose@lemmy.world
          18 hours ago

I see very little hauling, TONS of trucks in the city, and trucks parked at people’s office jobs. It’s kind of painfully obvious.

          • Retail4068@lemmy.world
            39 minutes ago

            I thought today was the day somebody might have an ounce of data instead of regurgitating retarded observations with biases and not a metric in sight. Not today I guess.