
AI has made it vastly easier for malicious hackers to identify anonymous social media accounts, a new study has warned.

In most test scenarios, large language models (LLMs) – the technology behind platforms such as ChatGPT – successfully matched anonymous online users with their actual identities on other platforms, based on the information they posted.

The AI researchers Simon Lermen and Daniel Paleka said LLMs make it cost effective to perform sophisticated privacy attacks, forcing a “fundamental reassessment of what can be considered private online”.

In their experiment, the researchers fed anonymous accounts into an AI and had it scrape all the information it could. They gave a hypothetical example of a user talking about struggling at school and walking their dog Biscuit through Dolores Park.

In that hypothetical case, the AI then searched elsewhere for those details and matched @anon_user42 to the known identity with a high degree of confidence.
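The linking step described above can be sketched in a few lines. This is not the authors' code; it is a minimal illustration, using made-up posts and a crude word-overlap score, of how distinctive details (a pet's name, a specific park) let an attacker rank candidate public profiles against an anonymous account. Real LLM-based attacks are far more capable, since they reason over context rather than matching tokens.

```python
import re

# Very common words to ignore when hunting for identifying details.
# (Illustrative stoplist only; a real system would be far more thorough.)
COMMON = {"the", "and", "was", "with", "have", "that", "this", "about",
          "just", "like", "really", "today", "park"}

def distinctive_details(posts):
    """Collect rare-looking tokens (names, places, etc.) from a list of posts."""
    tokens = re.findall(r"[A-Za-z]+", " ".join(posts).lower())
    return {t for t in tokens if len(t) > 3 and t not in COMMON}

def link_score(anon_posts, candidate_posts):
    """Fraction of the anonymous account's details found in a candidate's posts."""
    anon = distinctive_details(anon_posts)
    cand = distinctive_details(candidate_posts)
    return len(anon & cand) / max(len(anon), 1)

# Hypothetical example mirroring the one in the study write-up.
anon = ["Walked my dog Biscuit through Dolores Park again",
        "Struggling at school lately"]
candidate = ["Biscuit loved Dolores Park this morning!",
             "School has been rough this semester"]
print(link_score(anon, candidate))  # shared details: biscuit, dolores, school
```

Even this toy score spikes for the matching profile; the study's point is that LLMs automate the same cross-referencing at scale and with much richer reasoning, which is what makes the attack cheap.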

Study link - https://arxiv.org/abs/2602.16800

  • Sims@lemmy.ml · 7 points · 6 hours ago

    Ah, ‘The Guardian’ accidentally wrote “Hackers” instead of “US Oligarchy” or “Corporations”. Better to hide that fact by deferring the actor to "those pesky ‘hackers’ " - they are always anonymous.

    The Guardian totally removed information for the Oligarchs – as usual…

  • surewhynotlem@lemmy.world · 28 points · 9 hours ago

    My girlfriend used to spend weeks stalking through years old comments to unearth info about a person. Now she’s being replaced by AI.