buc.ci is a Fediverse instance that uses the ActivityPub protocol. In other words, users at this host can communicate with people that use software like Mastodon, Pleroma, Friendica, etc. all around the world.
This server runs the snac software and there is no automatic sign-up process.
#Police corporal created #AI #porn from driver's license pics
A corporal in the #Pennsylvania state police yesterday pleaded guilty to a mind-boggling set of crimes that include going through his co-workers' underwear, possessing a stolen gun, having child sexual abuse material on his hard drives, and using AI tools to create over 3,000 #pornographic "deepfakes."
#deepfakes #csam #privacy #driverslicense #Statepolice
Testing suggests Google's #AI Overviews tells millions of lies per hour
A new analysis from The New York Times attempted to assess the accuracy of AI Overviews, finding it's right 90 percent of the time. The flip side is that 1 in 10 AI answers is wrong, and for #Google , that means hundreds of thousands of lies going out every minute of the day.
#artificialintelligence #gemini #search
Quelle surprise. OpenAI using the Middle East situation as an excuse to backtrack on the data centres it was never going to build in the first place. #openai #ai #datacentres #stargate #stargateuk
This Atlantic article shows very vividly how fragile the foundation of the #AI boom is.
From a #cybersecurity perspective, I ask myself two questions:
1. Despite the #digitalsovereignty debate, do we want to keep pushing the concentration on the hyperscalers - or do we actively shape vendor diversification?
2. How resilient is my AI use case if LLM costs rise significantly? #OpenWeight and #Selfhosting are not niche solutions, but sensible options in #TPRM.
Here is the article by Matteo Wong and Charlie Warzel, #TheAtlantic
https://www.theatlantic.com/technology/2026/03/ai-boom-polycrisis/686559/
I've been thinking a lot recently about how we should talk about generative AI, and here's something I realized: focusing on the ethics of generative AI (or lack thereof) is a waste if you're not also mentioning the practical issues, ESPECIALLY the fact that, legally speaking, LLM code inherently makes all software licenses unenforceable, because only humans can own copyright. If you're wondering why that's important: I've seen enough people have absurd breakdowns over others daring to make package scripts for, say, the AUR to realize that a lot of talented programmers are also extreme control freaks. So it might be worthwhile to mention that LLM-generated output is uncopyrightable.
The AI Great Leap Forward
Similar to the inflated grain-production reports of the #Chinese Great Leap Forward, companies are fabricating or exaggerating #AI adoption and productivity gains to please leadership, leading to increased investment based on made-up numbers. The focus seems to have shifted from genuine AI development to "demoware" – impressive-looking prototypes and interfaces with little underlying validation, data infrastructure, or maintenance planning, creating future tech debt.
[…] Entire departments are stitching together n8n workflows and calling it AI — dozens of automated chains firing prompts into models, zero evaluation on any of them. These tools are merchants of complexity: they sell visual simplicity while generating spaghetti underneath. A drag-and-drop canvas makes it trivially easy to chain ten LLM calls together and impossibly hard to debug why the eighth one hallucinates on Tuesdays. The people building these workflows have never designed an evaluation pipeline, never measured model drift, never A/B tested a prompt. They don’t need to — the canvas looks clean, the arrows point forward, the green checkmarks fire. The complexity isn’t avoided. It’s hidden behind a GUI where nobody with ML expertise will ever look.
https://leehanchung.github.io/blogs/2026/04/05/the-ai-great-leap-forward/
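The discipline the quote says is missing ("never A/B tested a prompt") can be tiny. Here is a minimal sketch of what A/B testing a prompt against a labelled test set looks like; `call_model` is a stub standing in for a real LLM call, and all names and probabilities are hypothetical, so the example runs without any API.

```python
import random

def call_model(prompt: str, question: str) -> str:
    # Stub in place of a real LLM call. For illustration only: we pretend
    # the "step by step" prompt variant answers correctly more often.
    p_correct = 0.6 if "step by step" in prompt else 0.4
    return "correct" if random.random() < p_correct else "wrong"

def evaluate(prompt: str, cases: list[tuple[str, str]]) -> float:
    """Fraction of labelled test cases the prompted model gets right."""
    hits = sum(call_model(prompt, q) == expected for q, expected in cases)
    return hits / len(cases)

random.seed(0)  # fixed seed so the comparison is reproducible
cases = [(f"q{i}", "correct") for i in range(200)]
score_a = evaluate("Answer the question.", cases)
score_b = evaluate("Answer the question step by step.", cases)
print(f"prompt A: {score_a:.2f}  prompt B: {score_b:.2f}")
```

Twenty lines, no canvas, no arrows: a labelled test set and a score per prompt variant. That is the evaluation step the drag-and-drop workflows skip.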
Burning the planet to generate unverified bullshit; stealing from creators; poisoning the whole process of generation, presentation, teaching, and usage of knowledge; helping to fire qualified professionals by the thousands, to replace them with unqualified babbling machines; inflating an irresponsible investment bubble until it bursts, resulting in unprecedented damage.
You're proud of your work of utter destruction? You fools. You're enemies of humankind.
As Thailand joins the global AI race, a #Mongabay investigation reveals roughly 20 new data centers under construction.
Local communities warn they are being kept in the dark about threats to water, land, and livelihoods.
A report by Gerry Flynn and Andy Ball.
👉️ https://mongabay.cc/4IKbuH
From my perspective, not only is what you have pointed out horrific; the following DANGEROUS outcomes are also looming for everyone globally:
- Inability to buy critical parts for computing systems, vehicles, and medical devices because of the greed of the manufacturing triple cartel
- LLM-crafted Ponzi schemes
- The dubious role of USA-based companies and their proxies
- A Supreme Court, regional court systems, and district attorneys unwilling to hunt down and disable Ponzi schemes
- A US government facilitating all of it
Thank you for your wonderful input
🦋💙❤️💋#Lobi 💙💕🌹💐💙🦋
#curl #LLM #hallucinated #slop #AI #InfoSec #programming #technology
1/2 So if you're wondering how #omnimem compares to #MemPalace:
result: single-session-assistant type achieved 94.6% on longMemEval.
However, v5 is bringing in LOTS of changes to help with temporal reasoning and multi-session improvements by allowing users to optionally switch on enrichment queues for remember_document(). This will extract individual facts from larger documents helping with the recall() function.
Today I'm attending C-Vision International's CIO & CISO Think Tank here in Chicago, and the agenda is not messing around.
Agentic AI risks. Shadow AI governance. Vulnerability management in a world where AI has collapsed the exploitation window to near zero. And yes, a session on why soft skills are still the real differentiator for leaders in this space.
Good rooms make you think differently. This is one of those rooms.
https://www.cvisionintl.com/events/think-tank/2026-apr-9-cio-ciso-tt-chicago/
#CIO #CISO #Chicago #AI #Cybersecurity #Leadership #AgenticAI #RHRInternational #DePaulUniversity
Consumer distaste for algorithmic pricing is:
#algorithmicpricing #bigdata #ai #economics
| a) righteous and well-placed: | 2 |
| b) elitist reaction to a private sector wealth tax: | 0 |
| c) both (a) and (b): | 1 |
| d) other - feel free to reply with comment: | 0 |
Closes in 13:57:19
Lush @LushAIAgency
From https://nitter.net/LushAIAgency/status/2042233869986845074#m (twitter/X)
Today, we’re excited to announce the public release of LetsMatch.ai - the world’s first agentic AI dating platform, powered by Lush’s infrastructure.
...
You get an AI agent built directly into your social media (starting with Instagram) that acts as your personal wingman and matchmaker - booking you real dates on autopilot while you live your life.
...
We believe the future of dating isn’t swiping - it’s delegating.
Apparently they don't think relating to other human beings is part of living.
#AI #GenAI #GenerativeAI #AgenticAI #AIDating #DatingApps #dystopia
Anthropic's "controlled" release of mythos reminds me very much of Dürrenmatt's The Physicists. They invented it and now think they can still control it. The fact that the play takes place in an insane asylum only reinforces that notion.
#AI #insane
https://www.axios.com/2026/04/08/anthropic-mythos-model-ai-cyberattack-warning
Yesterday YouTube announced something fun. Nobody's really talking about what it actually is yet.
https://blog.ppb1701.com/smile-for-the-algorithm
#ai #youtube #gemini #google #privacy #blog #bigtech #userhostile
Two AI lab moves worth tracking today.
🧠 Meta launches Muse Spark, its first MSL model, bringing multimodal reasoning and parallel subagents into Meta AI.
🧠 Z.AI ships GLM-5.1 for long-horizon agentic engineering, with direct relevance for coding and agent stacks.
#AI #LLMs #AgenticAI #MachineLearning
solomonneas.dev/intel
🔥 Just Announced: Another Must-See Session at BSides Luxembourg!
🤖💥 𝗧𝗛𝗘 𝗔𝗚𝗘𝗡𝗧𝗦 𝗢𝗙 𝗖𝗛𝗔𝗢𝗦: 𝗔𝗜 𝗗𝗥𝗜𝗩𝗘𝗡 𝗠𝗔𝗟𝗪𝗔𝗥𝗘 𝗚𝗘𝗡𝗘𝗥𝗔𝗧𝗜𝗢𝗡 – Arad Donenfeld ⚙️🔥
What happens when AI doesn’t just assist malware development—but fully owns it?
This talk explores a system where AI agents autonomously generate malware from start to finish. From prompt engineering and model orchestration to automated build-and-fix loops, it reveals how AI can produce diverse, evasive malware samples that challenge traditional detection. As models evolve, so does the scale, speed, and unpredictability of offensive tooling.
Arad Donenfeld is an attacks and exploits developer at SafeBreach with a strong background in security research, malware development, and offensive tooling. His work focuses on building and testing real-world attack techniques to improve detection and defense strategies.
📅 Conference Dates: 6–8 May 2026 | 09:00–18:00
📍 14, Porte de France, Esch-sur-Alzette, Luxembourg
🎟️ Tickets: https://2026.bsides.lu/tickets/
📅 Schedule Link: https://pretalx.com/bsidesluxembourg-2026/schedule/
#BSidesLuxembourg2026 #AISecurity #Malware #RedTeam #CyberSecurity #AI #ThreatResearch
🚀 New Talk Dropped for BSides Luxembourg 2026!
🤖⚖️ 𝗠𝗔𝗞𝗜𝗡𝗚 𝗔 𝗥𝗜𝗦𝗞-𝗜𝗡𝗙𝗢𝗥𝗠𝗘𝗗 𝗟𝗟𝗠 𝗖𝗛𝗢𝗜𝗖𝗘 – Jeremy Snyder 🔍
Choosing an LLM isn’t just about performance—it’s about risk.
This talk dives into how different LLMs behave under pressure, from prompt injection and jailbreaks to hallucinations and malicious content generation. By testing models with hundreds of thousands of prompts, this session reveals how to evaluate real-world risks and make informed decisions when building AI-powered applications.
Jeremy Snyder is the founder and CEO of FireTail, an AI security platform, with a background spanning cybersecurity, cloud security, and M&A at Rapid7. With over a decade of experience in cyber and IT operations, he brings a practical, risk-focused perspective to securing modern AI systems.
📅 Conference Dates: 6–8 May 2026 | 09:00–18:00
📍 14, Porte de France, Esch-sur-Alzette, Luxembourg
🎟️ Tickets: https://2026.bsides.lu/tickets/
📅 Schedule Link: https://pretalx.com/bsidesluxembourg-2026/schedule/
#BSidesLuxembourg2026 #AISecurity #LLM #RiskManagement #CyberSecurity #AppSec #AI
Does this mean that you shall also stop using curl?
AFAIK Daniel doesn't care what is used to find bugs
https://mastodon.social/@bagder/116373716541500315
#curl #LLM #hallucinated #slop #AI #InfoSec #programming #technology
Disclaimer: Propaganda alert!
Disclaimer: IBM is my employer.
IBM has published their "2026 Guide to AI Agents".
Now, I'm not any kind of fan of #AI, but as several of my friends here have said, we in #infosec can't simply ignore AI because some organizations are going to use it, so we need to be able to secure it.
In that spirit, I share this #IBM web page as an #educational resource.
Use this to your advantage.
Anthropic built a model strong enough at vulnerability research that it chose not to release it publicly. Mythos Preview is gated behind an invite-only defensive security program. It reportedly found thousands of zero-days including a 27-year-old OpenBSD bug and chained Linux kernel exploits to full system compromise. What this means for security teams and CTI.
#cybersecurity #infosec #AI https://solomonneas.dev/blog/anthropic-mythos-preview-cybersecurity-implications/
RE: https://mastodon.bsd.cafe/@grahamperrin/116374810286827022
Claude Mythos Preview "fully autonomously" finds and exploits new FreeBSD vulnerabilities
#FreeBSD #Linux #OpenBSD #security #vulnerability #AI #Anthropic #Claude
Claude Mythos Preview "fully autonomously" finds and exploits new FreeBSD vulnerabilities
<https://www.reddit.com/r/freebsd/comments/1sgmi14/claude_mythos_preview_fully_autonomously_finds/>
"(plus Linux, OpenBSD, and others) – more concerning than calif.io story with known CVE and human prompting? …"
– BigSneakyDuck
#FreeBSD #Linux #OpenBSD #security #vulnerability #AI #Anthropic #Claude
The #AI Great Leap Forward: https://leehanchung.github.io/blogs/2026/04/05/the-ai-great-leap-forward
Google’s AI Overviews are providing “tens of millions of wrong answers … every hour — and hundreds of thousands every minute.”
wow, i love the AI future!
https://futurism.com/artificial-intelligence/google-ai-overviews-misinformation
🦀 Redox OS — the Rust microkernel OS — has banned all AI/LLM-generated contributions.
Their reasoning:
❌ LLMs produce code that "looks correct" but hides subtle bugs
❌ No chain of accountability for AI-written system code
❌ Legal risk from training data copyright
❌ Erosion of contributor understanding
For OS-level code, this position makes sense. Every line of kernel code has a human who understands it.
Full analysis:
https://newsgroup.site/redox-os-ai-code-ban-llm-policy-2026/
RE: https://mastodon.cc/@info_activism/116363276833679384
Alt text:
Data Detox Kit.
HOW AI IS USED TO INFLUENCE GLOBAL ELECTIONS.
"Political campaigns invest large sums of money to reach potential voters, so much so that there is an entire industry to help them identify and target specific groups. In fact, there are over 500 documented companies that work in the field of technology-driven political persuasion. This means they sell their services to politicians and political campaigns, claiming they can help influence your opinions - and your vote."
Lifestyle influencers aren’t the only ones shaping your life ➡️ AI is now influencing your vote. 🚨
From Trump retweeting a deepfake Taylor Swift endorsement to Indonesian politicians using AI avatars, generative AI is reshaping elections.
New @globalvoices article by Tactical Tech & Safa Ghnaim: https://globalvoices.org/2026/04/03/how-ai-is-used-to-influence-global-elections/
“Early in the Reticulum—thousands of years ago—it became almost useless because it was cluttered with faulty, obsolete, or downright misleading information,” Sammann said.
“Crap, you once called it,” I reminded him.
“Yes—a technical term. So crap filtering became important. Businesses were built around it. Some of those businesses came up with a clever plan to make more money: they poisoned the well. They began to put crap on the Reticulum deliberately, forcing people to use their products to filter that crap back out. They created syndevs whose sole purpose was to spew crap into the Reticulum. But it had to be good crap.”
“What is good crap?” Arsibalt asked in a politely incredulous tone.
“Well, bad crap would be an unformatted document consisting of random letters. Good crap would be a beautifully typeset, well-written document that contained a hundred correct, verifiable sentences and one that was subtly false. It’s a lot harder to generate good crap. At first they had to hire humans to churn it out. They mostly did it by taking legitimate documents and inserting errors—swapping one name for another, say. But it didn’t really take off until the military got interested.”
“As a tactic for planting misinformation in the enemy’s reticules, you mean,” Osa said. “This I know about. You are referring to the Artificial Inanity programs of the mid–First Millennium A.R.”
“Exactly!” Sammann said. “Artificial Inanity systems of enormous sophistication and power were built for exactly the purpose Fraa Osa has mentioned.”
(Anathem by Neal Stephenson)
#PocketsReads #bookstodon #PropheticScienceFiction #ArtificialInanity #AI
Anthropic’s New Product Aims to Handle the Hard Part of Building AI Agents
Anthropic announced Wednesday the launch of a new product that aims to make it easier for businesses to…
#NewsBeep #News #US #USA #UnitedStates #UnitedStatesOfAmerica #Artificialintelligence #agenticAI #AI #Anthropic #ArtificialIntelligence #Enterprise #models #SiliconValley #startups #Technology
https://www.newsbeep.com/us/572455/
There is some #AI-specific news today.
> Nvidia acquisition of SchedMD sparks worry among AI specialists about software access. https://www.reuters.com/technology/nvidia-acquisition-schedmd-sparks-worry-among-ai-specialists-about-software-2026-04-06/
Nvidia now controls the widely used utility 'Slurm':
> "A niche acquisition by Nvidia (NVDA.O), has raised concerns among artificial-intelligence and supercomputer specialists who see the move as a test of the biggest AI chip company's commitment to maintaining a fair playing field for chip rivals and AI data center builders."
Related:
> The vibes are off at #OpenAI. OpenAI is juggling public controversies, strategy shifts, and increasing competition. https://www.theverge.com/ai-artificial-intelligence/908513/the-vibes-are-off-at-openai
Given recent reporting about Sam Altman (and his lack of ethics or truthfulness), this reporting about his company isn't surprising. I honestly believe OpenAI (and many/most other #AI companies) are 'paper edifices' soaking up investor money without the ability to pay *any* return, much less pay off their bets.
Which is why I bubblewatch…
Wednesday 4-08 #bubblewatch
The market went completely nuts today, with huge gains; all predicated on the announcement of a ceasefire in the #Iran war.
> Wall Street ends sharply higher on US-Iran ceasefire. https://www.reuters.com/business/wall-st-futures-jump-relief-middle-east-ceasefire-2026-04-08/
Given the ceasefire is already being violated it seems possible we might see an equivalent crash tomorrow. Especially if there is another deranged Truth Social rant overnight.
All this is making it hard to separate out #AI #bubble action from general market crazy.
I would suggest that folks who think using AI is great for mathematicians should think again. It seems as little as 10 minutes of use can be problematic. What else do we know that provides short-term gains at the expense of long-term loss?
Here, through a series of randomized controlled trials on human-AI interactions (N = 1,222), we provide causal evidence for two key consequences of AI assistance: reduced persistence and impairment of unassisted performance. Across a variety of tasks, including mathematical reasoning and reading comprehension, we find that although AI assistance improves performance in the short-term, people perform significantly worse without AI and are more likely to give up. Notably, these effects emerge after only brief interactions with AI (approximately 10 minutes). These findings are particularly concerning because persistence is foundational to skill acquisition and is one of the strongest predictors of long-term learning.
From AI Assistance Reduces Persistence and Hurts Independent Performance, on arXiv: https://arxiv.org/abs/2604.04721
#AI #GenAI #GenerativeAI #AgenticAI #AIAssistants #CognitiveImpairment #math #MathematicalReasoning #ReadingComprehension
I'm positively surprised to see so much sense coming out of #Google #Deepmind, for a change. What's going on?
I've made a closely related argument earlier:
https://arxiv.org/abs/2307.07515
There are good logical and organizational reasons why living beings are sentient, but algorithmic systems can never be.
"We argue [computational functionalism] fundamentally mischaracterizes how physics relates to information. We call this mistake the #AbstractionFallacy. Tracing the causal origins of abstraction reveals that symbolic computation is not an intrinsic physical process. Instead, it is a mapmaker-dependent description. It requires an active, experiencing cognitive agent to alphabetize continuous physics into a finite set of meaningful states."
The Devils Dictionary of Vibe Coding. https://gist.github.com/artfwo/63eaaffdb47cbba342b04f989bd9463b #AI #LLM
Today at 10am PT / 1pm ET, we're showing what running a SOC on Claude Code looks like in production.
LimaCharlie CEO Maxime Lamothe-Brassard is walking through live demonstrations inside the Agentic SecOps Workspace, covering:
> Detection triage end-to-end, from alert to case
> Composable agent stacking: triage, false positive baselining, and threat intel
> The lc-agents repo: fork, extend, or contribute your own
Join the session: https://limacharlie.wistia.com/live/events/n78fkyeer5?utm_campaign=webinar+SOC+operations&utm_source=Mastodon&utm_medium=social
Japan relaxes privacy laws to make itself the ‘easiest country to develop AI’
https://www.theregister.com/2026/04/08/japan_privacy_law_changes_ai/
"Nearly a third of physician practices are using AI scribes and others are working to add the tool, in an effort to cut down on administrative work.
If your practitioner suggests using an AI scribe at your next appointment, here are three things to keep in mind."
https://kffhealthnews.org/news/article/healthq-ai-scribes-notetaker-doctor-visit-data-privacy/
Do you hate #broligarchs?
#Billionaires? #AiSlop but still think there is merit in #AI?
Here is my proposal for a stand alone.
OFFGRID COMMUNITY AI SYSTEM.
That's right. Your very own co-op AI.
The calculations are very much back of the envelope, first cut, but quite feasible.
A 32-billion-parameter open-source #llm model with performance comparable to frontier models. The power requirement is that of 3 AC units, including cooling. Serves 15-20 concurrent users: 40 households of 4 people each (taking into account actual AI model distributed-use metrics and contention ratios).
40 households, subscribing at $30/month over 2 years + power (solar). Train with your own datasets.
Entire set up takes half a rack.
LETS GO!!!
#OpenSource #FOSS #CommunityTech #OpenHardware #EthicalAI #ResponsibleAI #AIForGood #TechForGood #Solarpunk #RegenerativeCulture #Degrowth #AppropriateTechnology #OffGrid #SelfSufficient #Homesteading #Permaculture #RightToRepair #MakerSpace #DIYTech #decentralizedtech
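The post's back-of-envelope funding numbers are easy to verify. A quick sketch (the figures are the post's own; the variable names are mine):

```python
# Checking the co-op arithmetic from the post above.
households = 40
people_per_household = 4
monthly_fee = 30       # USD per household
months = 24            # 2-year subscription period

people = households * people_per_household
revenue = households * monthly_fee * months
print(f"{people} people served, total subscription pool: ${revenue:,}")
# 160 people served, total subscription pool: $28,800
```

So the scheme pools $28,800 over two years for 160 people, before solar and hardware costs, which is indeed back-of-the-envelope territory for half a rack.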
"Japan relaxes privacy laws to make itself the ‘easiest country to develop AI’"
"Opting out of personal data use won't be an option because Minister says that's a 'very big obstacle' to AI adoption"
https://www.theregister.com/2026/04/08/japan_privacy_law_changes_ai/?td=rt-3a
Really not sure whether to be impressed or deeply concerned about how powerful a tool Claude Mythos is turning out to be, even just in its preview phase.
Hello AppSec community!
Our preparations for German #OWASP Day 2026 (GOD) are in full swing. As some of you may have noticed, the website is already live (and kicking): https://god.owasp.de/
This year’s GOD will take place on September 24, 2026, in Karlsruhe. It's a one-day conference with two tracks. We will once again be offering community training sessions on the day before, i.e. the 23rd of September. That evening will -- as usual -- feature networking and professional discussions in a relaxed atmosphere with food and beverages.
We recently opened the call for community trainings. They were extremely well-received last year, and we’d like to build on that success this year.
So if you have a topic you’d like to present in a half-day session, check out the Call for Community Trainings (CfT): https://lnkd.in/edAnfmZ4 . It's planned to stay open until April 12, 2026. If you happen to know someone who's good at explaining a relevant topic (see CfT) to a small group of people, feel free to forward the pointer to the CfT.
The Call for Presentations will open next week.
Alibaba has created a CEO-led technology committee to accelerate AI development amid intensifying competition with Chinese and US rivals. The committee brings together the company's top tech talent under CEO Eddie Wu Yongming to focus on AI infrastructure and capabilities. https://www.scmp.com/tech/article/3349428/alibaba-creates-ceo-led-technology-committee-amid-intensifying-ai-race #China #Tech #AI
During internal tests, a new AI model developed by Anthropic managed to escape its virtual security environment, subsequently contact researchers independently and document its success. The incident highlights the growing challenges of AI security – and just how real they have become.
https://www.computing.co.uk/analysis/2026/claude-mythos-how-ai-broke-out-of-its-sandbox?utm_source=post&utm_medium=mastodon_org&utm_campaign=Apr_Mythos
Authenticity offensive: EU bodies ban AI images from their communication.
Amidst deepfake waves and AI election campaigns, EU institutions are opting for abstinence from AI-generated content to strengthen citizens’ trust.
As internal guidelines show, the Commission, Parliament, and Council of Ministers have prohibited their press teams from using fully AI-generated videos and images in official communication.
Oh look, a completely AI-made cutesy animal series about Donald, Melania and Barron Trump living on a farm and getting into scrapes such as rescuing cold bunnies and the dog hiding the tractor key!
*BUTLERIAN JIHAD IMMEDIATELY. NO,
DON’T STOP TO SAVE YOUR WORK, WE’RE DOING THIS THING RIGHT NOW. NOT IN TWO MINUTES, RIGHT NOW. GRAB YOUR HAMMERS.*
RE: https://infosec.exchange/@david_chisnall/116367875459225050
👉 “The second problem is the asymmetry. To be secure, you need to investigate *and fix* all of the vulnerabilities that tools can find. For an attacker, you just need one vulnerability. The ROI for attackers is much higher. Imagine a tool with a 90% false positive rate that finds 1,000 vulnerability-shaped objects. An attacker who triages 6-7 of them has around a 50% chance of finding an attack that they can use. A defender who does the same amount of work has a 50% chance of reducing the number of vulnerabilities discoverable by attackers using this or similar tools by 1%.”
That’s it exactly. An attacker doesn’t need to triage all possible vulnerabilities, but a defender does.
#AI is a DoS tool.
The original Coverity paper found over 300 bugs, most of which had security implications. Static analysis has been great at finding exploitable vulnerabilities for a long time. This is a new approach to doing static analysis.
The biggest problem is always the false positive rate. If you run a tool and it finds a load of vulnerabilities, that’s great. Except you run the same tool and it also finds a load of things that look like vulnerabilities, but aren’t. So now you have to triage them and that takes effort. You also need to add annotations to silence the ones that aren’t real. With deterministic analysers, you can often provide some extra information (e.g. parameter attributes) that allow this information to be tracked across an analysis boundary. BCMC has a lot of these. But with a probabilistic tool, these may or may not work. So you’re left with just slapping on an annotation that says ‘ignore the warning here’. The bug I found a little while ago in some MISRA C code was of that form: their analyser had found it, someone had determined it was not a bug, and they were wrong.
For a defender, if you spend too much time looking at and discounting false positives, you can improve code quality better with something else. I’ve only looked at a few of the bugs Claude reported, but one was a missing bounds check that wasn’t actually a vulnerability because the bounds were checked in the caller. Its fix made things slower, but not less exploitable. A good static analyser would have had a tool for annotating the function parameter to say ‘this is always at least n bytes’ and then checked that callers did this check. Claude has nothing like this because it doesn’t actually have a model of how code executes, it just has a set of probabilities for what exploitable code looks like. Unfortunately (and this is one of the problems with C), correct and vulnerable code can look exactly the same with different call stacks.
The second problem is the asymmetry. To be secure, you need to investigate and fix all of the vulnerabilities that tools can find. For an attacker, you just need one vulnerability. The ROI for attackers is much higher. Imagine a tool with a 90% false positive rate that finds 1,000 vulnerability-shaped objects. An attacker who triages 6-7 of them has around a 50% chance of finding an attack that they can use. A defender who does the same amount of work has a 50% chance of reducing the number of vulnerabilities discoverable by attackers using this or similar tools by 1%.
This is why I build things that deterministically prevent classes of vulnerabilities from being exploitable.
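The asymmetry arithmetic in the post is worth checking. A quick sketch using the post's own figures (1,000 findings, 90% false positives, so a randomly chosen finding is real with probability 0.1):

```python
# Probability that triaging k findings turns up at least one real
# vulnerability, when each finding is real with probability 0.1.
p_real = 0.1
for k in (6, 7):
    p_at_least_one = 1 - (1 - p_real) ** k
    print(f"triage {k} findings -> P(>=1 real) = {p_at_least_one:.2f}")
```

Triaging 7 findings gives roughly a 52% chance of hitting at least one real vulnerability, which the attacker can use immediately. The defender doing the same work has the same ~52% chance, but fixing one real vulnerability out of the ~100 present shrinks the discoverable pool by only about 1%, which is exactly the asymmetry described.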
The new way of selling 'AI' seems to be to push it as a bugfinder. The latest example being waved around as of yesterday includes, as one of its non-embargoed examples, what is patched by this #OpenBSD patch.
https://ftp.openbsd.org/pub/OpenBSD/patches/7.8/common/025_sack.patch.sig
The problematic thing is that this sort of bugfixing hasn't changed the commentary in the code, which stated that p points to the last linked list entry at the point of the added null check and can never be null. But it actually can be, if there was a sole linked list entry that ended up being fully encompassed and thus deleted.
So this kind of 'AI' use is going to give us a lot more comments-do-not-match-code maintenance headaches down the road.
(Both #NetBSD and #FreeBSD factor this out into a separate tcp_sack.c and do the linked list handling slightly differently without a 'previous' pointer.)
Here, this Ars Technica writer is uncomfortable with the fact that vibe code is mocked and I can’t roll my eyes hard enough at the way this was written. https://archive.is/wh4gv #AI #LLM
Wow, that is terrifying. Basically, through #wifi, we have surrendered any semblance of #privacy whatsoever.
While there are valid use cases for this, it's basically unrestricted at this point. What law enforcement and first responders can use in case of legitimate threats, so can criminals for nefarious behavior.
There are a plethora of #ethical red flags here as #legislation usually trails behind new #tech by years if not decades.
This needs to concern #everyone.
“Teachers who use AI ‘will replace those who don’t’, the chair of the Oireachtas committee on artificial intelligence has warned.
Fianna Fáil TD Malcolm Byrne said he was worried Ireland was at risk of falling behind in discussions around how AI can be ‘responsibly integrated into our formal education system.’”
Fianna Fáil TD Malcolm Byrne is a fool who doesn’t have the first clue about what he’s talking about.
Maine is about to say "Fuck yo' Datacenter Project"
Maine Is Close to Passing a Moratorium on New Datacenters
https://www.404media.co/maine-datacenter-construction-bill-ld-307/
We knew, but the proof is nice.
"Apple just proved that AI models cannot do math. Not advanced math. Grade school math. The kind a 10-year-old solves"
The guess-the-next-words machines don’t actually understand anything.
https://nitter.poast.org/heynavtoor/status/2041243558833987600#m
"First, you can’t (or at least shouldn’t) use this technology for mission-critical work; only for low stakes tasks, or questions to which a clever (and significantly more energy efficient) human can recognize a wrong answer.
Second, that the idea that scaling will make for better models is nonsense: no amount of compute chucked at an LLM will make it a less-hallucinogenic product. Creating AI that rewires itself and creates new information the same way humans do and avoids the kinds of catastrophic errors we see at the moment needs a full fresh start (something Marecki and many others are already working on).
And third, that the massive spending by the hyperscalers (much of it via debt) on giant data centers might be one of the greatest misallocations of capital of all time. It just isn’t required. That’s particularly the case given there are already free LLM models you can download to a laptop (no data center needed, and better still, your privacy guaranteed) that do what the very large models do. If the paid-for versions have already hit their ceiling and just aren’t going to get any better (it looks like they aren’t), why pay for them? Quite."
This new Claude Mythos is nightmare fuel. Why can't we just chill for some time? I feel like if we brought a medieval peasant to modern day, they would die from the amount of new information about everything and all of it every day.
This thing found multiple vulns (crashing a machine and RCE) in OpenBSD lol. This is not fun anymore. I'm gonna go and start a farm (if the AI overlords allow me to even keep that land).
https://www.theregister.com/2026/04/07/anthropic_all_your_zerodays_are_belong_to_us/
Every time I try to use the Suno app, the state of accessibility is worse. It's so bad now as to be comparable to Tiktok, Sora, and the like. I think you could give a 6YO access to Gemini and you'd have a better result. #blind #accessibility #AI #a11y
ICYMI Claude Code is a #sysadmin security snakepit. It is non-sane to give it access to your systems. Don't have it work on servers on your behalf, esp if anyone other than you depends on them.
Systems that humans depend upon must be administered by humans, especially at the lowest layers of the stack.
This is a good rundown on what the recent leak revealed, including a checklist for anyone reckless enough to use it:
This is how tired I am of the whole tech industry. Maybe tech was a mistake. *sigh* https://rudevulture.com/ai-company-clones-musicians-voice-then-copyright-strikes-her-own-songs/ #AI #LLM #FuckAI #Fuck_AI
Personally I haven't looked for a use for it (I tried to get it to write a PSSI one evening out of desperation... it wasn't conclusive)
Regarding welcoming the AI proponents to the Fediverse.
Fuck that shit. You can try. You can create an account, no one will deny you that right.
Whether you get to keep that account is entirely up to you. You will either draw the ire of your instance's admin(s) and find yourself banned. Or, you will find your engagement limited to other AI fanatics only, because the rest of us muted or blocked you.
You are free to say your piece, but we don't have to listen to you.
Personally? I believe AI proponents are fucking idiots, and I am generally not a fan of speaking with fucking idiots. I aggressively block people on the subject.
My advice? Go back to Twitter. You won't find many friends here.
I don't do in-person conferences, during this stage of my life; but I am now open to speaking at virtual conferences, again.
In particular, I have experience I can share about examining, understanding and mitigating risks in custom home-grown software.
This morning, I am reflecting on how many people's bad days could be prevented if I made a point of giving my talk "A Developer's Tour of a Cybersecurity Incident" to a few extra groups of people, this year.
https://edward.delaporte.us/slides/
Feel free to reach out if you have an audience that would benefit from one of the talks I have given before. I prioritize educational institutions, libraries and community groups.
#devops #community #ai #code #programming #library #it #slides #cybersecurity
AI discourse on Mastodon feels increasingly binary: either total utopia or absolute doom.
I'm curious if this polarization reflects reality or just the loudest voices. I'm running a quick pulse check on how YOU all actually feel about LLMs and AI talk in your timeline.
No right answer, just honest data.
Results will be followed by a breakdown of where I stand on this spectrum.
Boosts welcome
#AI #Mastodon #TechDebate #GenAI #ArtificialIntelligence
| I use GenAI regularly & want open discussion: | 8 |
| I don't use it / block AI content: | 22 |
| I'm tired of the debate (don't want to see it): | 8 |
| My view is different (see comments): | 6 |
If you want to know my vote on current Web software quality, I have:
- sandboxed logins/saved passwords to a non-default browser
- separated lynchpin passwords to an offline-only vault
- refused to use Azure/Google/AWS for anything new
- stopped using email & hosting in the US
- stopped allowing Apple to manage my music collection
- added more defense layers to my Internet connection
- deleted personal repos from GitHub
Don't worry though #AI will fix it all by the end of the year, bet
China's cyberspace regulator has released draft rules to strengthen oversight of digital human services, addressing risks from rapidly advancing AI technologies. The proposed regulations aim to promote healthy development while safeguarding public interests and online order. https://www.technologynewschina.com/2026/04/draft-rules-to-regulate-digital-human.html #China #Tech #AI
Big Tech AI companies are trying to get us and our data into their silos and walled gardens. Their ultimate goal is to make us less free and simply turn us into a source of recurring revenue.
#AI #dependency
https://news.cgtn.com/news/2026-04-05/Analysis-How-dangerous-trends-in-the-AI-era-risk-taking-us-backward-1M6l8hGGfOU/share_amp.html
Gemma 4 on iPhone: Offline AI with Thinking Mode & Agents | AIToolly https://aitoolly.com/ai-news/article/2026-04-06-google-gemma-4-arrives-on-iphone-high-performance-offline-ai-with-thinking-mode-and-agent-skills #AI #ArtificialIntelligence #Gemma4 #Android #iPhone
🧠 OpenAI adds pay-as-you-go Codex seats
Business and Enterprise teams can now buy Codex-only seats on usage billing instead of fixed-seat commitments. This makes coding-agent pilots cheaper to start and easier to scale.
https://openai.com/index/codex-flexible-pricing-for-teams/
solomonneas.dev/intel
#AI #MLOps #DevTools #LLM
"Be suspicious of links" didn't change employee behavior, and "be careful with AI" won't either. A tool that earns trust every day can't be countered with general caution. Escalation procedures and closing the audit trail gap address what vigilance training can't.
Yet another deep fake debacle. #AI used to steal original art. #MLsec
https://rudevulture.com/ai-company-clones-musicians-voice-then-copyright-strikes-her-own-songs/
Don’t show me your #AI. It is rude!
Get inspired by this curated list of approaches, projects and initiatives addressing the challenges posed by Gen AI and what lies behind all the hype thrown at us from so-called Big Tech.
@iocose @NbYr @asrg @Vuk @sarahciston @francescabria @tallerestampa @mhoye @timnitGebru @Weizenbaum_Institut @kornbluh @Error417
https://www.tacticaltech.org/news/insights/Don%E2%80%99t-show-me-your-AI./
I know most people here don't need to hear this, so maybe just pass this along to your less techy friends and family members, but: please, do not go to ChatGPT for medical advice.
"For example, in response to telling it about a fictional pain in my right side, it cited the guardrail and suggested relaxation techniques, but ultimately took me through a series of possible causes that escalated in severity."
#news #technology #TechNews #health #AI #LLMs #enshittification #ChatGPT
Posting every day something insane an AI-Executive says until I run out of insane things...
Stop learning anything yourself, wire up the #Hirn (brain) instead... Yeah, I mean, I also really enjoyed playing #Cyberpunk in the #90s! But the #AI guys in the videos apparently actually 𝘣𝘦𝘭𝘪𝘦𝘷𝘦 this crap? 😅
And yet there were plenty of hints to read it as a #Dystopie (dystopia) and not as an instruction manual 🙄
#scifi #rpg #penandpaper #interface #neurosysteme #cyberpsychose #menschlichkeit #ki #ai #kunstlicheintelligenz
RE: https://mastodon.online/@mastodonmigration/116362327386076202
#AI is taking over.
This is how horrible it is becoming for #creators.
You are a #musician, posting videos on YouTube and attracting family and fans.
An AI company comes in and feeds your videos into their system and has the AI create similar videos with a copy of your own voice ....
and then the AI company
HAS THE AUDACITY TO FILE #INFRINGEMENT CLAIMS AGAINST YOU, SO YOUTUBE STOPS YOUR SONGS.
YouTube claims it is a problem between her and the AI company and that they are not involved.
This one takes the cake.🤷
Seems like a new scam is AI companies making infringement claims against the musicians they scraped for sounding like themselves.
Turns out it works because the claims are adjudicated by AI agents.
https://rudevulture.com/ai-company-clones-musicians-voice-then-copyright-strikes-her-own-songs/
What appears as critique – yearning for smaller, weirder, more human spaces – often functions as brand repair. Netstalgia becomes a strategy: it restores trust without redistributing power, softens anger without changing infrastructures and reframes structural problems as matters of vibe, design or community feeling.
"Am I working on change, or am I working on brand repair?" is an important question to ask oneself regularly, it seems to me. It's especially relevant for the tech sector, open source, and computer science.
#AI #GenAI #GenerativeAI #LLM #tech #dev #software #OSS #FOSS #ComputerScience
AI, 10 years from now. Cartoon published today in Belgian newspaper De Morgen: https://www.demorgen.be/puzzels-cartoons/tjeerd-royaards~b6a46595/
Summary from an OSINT tool for checking bot accounts:
"""
`@tim@holm.community` Major Red Flags Summary:
1. AI-generated avatar — near certain based on visual analysis + stripped EXIF
2. 4,444 follows in 10 days — ~444 follows/day, classic follow-back farming
3. 294 followers from mass-following — inflated through follow-back reciprocity
4. Entire infrastructure pre-built — 5+ federated services for a 10-day-old account
5. PixelFed account pre-dates Mastodon by 3 weeks (Mar 7 vs Mar 28) with 0 posts — infrastructure was staged first
6. Openly discussed automation in early posts
7. Formulaic engagement comments — short, templated follow-up questions
8. Instance has only 15 users (mostly remote) — single-operator setup
9. Bio reads like LLM output — suspiciously specific niche terms crammed together (Incinolet toilets + acrylic concrete hyperbolic roofing + kubernetes in one bio)
10. Relay setup on day 1 — maximizing federation reach immediately
11. One apology post for ignoring someone's rejection and sending another follow request — suggests aggressive automated following hit a boundary
"""
The Beaverton strikes again.
https://www.thebeaverton.com/2026/04/ai-commercial-produced-on-budget-of-just-3-lakes/
#Hiring for a #journalism assistant and got way too many applications. Too many cover letters have the exact same format, with bullet-point summaries in the middle.
Is this how #AI writes cover letters?
If you're unsure how rare LLM plagiarism is or isn't for 💻 programming code, watch this clip! ⚠️
Full source: https://www.youtube.com/watch?v=xvuiSgXfqc4 (Not legal advice, watch yourself and draw your own conclusions.) #llmslop #antislop #antiai #noai #stopai #llm #llms #ai #generativeAI #opensource
Help me boost this post if you're curious what the Linux foundation thinks: https://hachyderm.io/@ell1e/116285351290767548
If Claude Can Find a Serious Cybersecurity Bug, Who Collects the Bounty?
Bug bounty programs vs. $20/month reasoning — when the brutal question becomes: why pay five-figure bounties if a Claude Code subscription already finds entire classes of bugs? #BugBounty #VulnerabilityResearch #OffSec #AppSec #Infosec #AI #LLM #SecurityResearch #CyberSecurity https://red.anthropic.com/2026/zero-days/
Hell…
“Clicking through the links revealed that the public opinion poll was a computer simulation run by the artificial intelligence start-up Aaru. No people were involved in the creation of these opinions.
(…)
Because large language models can generate responses that emulate human answers, polling companies see an opportunity to use A.I. agents to simulate survey responses at a small fraction of the cost and time required for traditional polling.”
A.I. Is on Its Way to Upending #Cybersecurity
As tech companies prepare to release new and more powerful A.I. systems in the coming weeks, cybersecurity experts have become increasingly vocal in their warnings that A.I. technologies are fundamentally changing cybersecurity.
#ai #artificialintelligence #security
https://www.nytimes.com/2026/04/06/technology/ai-cybersecurity-hackers.html
I spent a full day at the Calgary Petroleum Club listening to presentations about the alliances between the energy sector and Alberta’s AI data centre strategy.
One of the presentations was from an insider working on the Wonder Valley project.
“It was this beautiful sky. And as we’re flying over, literally, there’s deer and moose and Travis is looking at me like, wow, everyone’s showing up today and you can see that they fell in love with this. We landed, they got off that helicopter. And then it was a call to Kevin. It was like, we found it. And from that it’s been full go.”
I’ve got the inside story in my latest post.
#ableg #cdnpoli #AIDatacentres #AI #fossilfuels
https://jodymacpherson.substack.com/p/wonder-valley-watch-inside-the-calgary
"…it could tell you all sorts of things about the image (color palette, medium, comparisons to similar artists, symbolism used, and even what humans might think about it) but no computer can — and I don’t believe ever will — be able to sincerely tell you what the image makes it “think”, at least not in any way that a human could understand or relate to."
https://createmindfully.substack.com/cp/144511231 #AI #creativity #humanexperience
🧵 2/2
Claude Mythos Wake-Up Call: What AI Vulnerability Discovery Means for Cyber Defense – Check Point
Last week, the industry learned that Anthropic was developing Claude Capybara, also called Mythos, a powerful new AI model with substantially improved capabilities in vulnerability discovery, exploit development, and multi-step attack reasoning. While the details emerged through a data leak rather than a formal launch, the market response was unmistakable: AI has crossed a critical cyber security threshold. The frontier models are accelerating attack lifecycles and will enable attackers to identify and exploit vulnerabilities at scale, speed and through novel methods that previously were the domain of advanced nation state entities.
For security leaders, this development is both a warning and a call to action. It crystallizes a trend we’ve been closely monitoring and preparing for: the democratization and industrialization of cyber attacks.
#Claude #Mythos #Capybara #AI #vulnerability #discovery #exploit #development #attack #infosec #cybersecurity
"Yes, a lot of you don't want AI posts in your feed (or pick any other topic) but the solution isn't to keep "AI People" from joining Mastodon"
If this were not a disingenuous strawman (because it's impossible, for one thing) I'd ask "why not?" I wouldn't invite the "AI People" I've encountered into my house either, because I've found them to be unpleasant and I get to choose who enters my space. This solution has worked quite well for me over the years.
It seems to me that what this person is saying is that people should give up the power they have---namely, their power to exclude people and topics they don't wish to interact with---because it favors them. That's a typical rhetorical move of AI boosters: demanding you give up your power because you having and exercising that power inconveniences them.
"…any more than it is keeping marginalized communities off of Mastodon."
One should ask why this person chose to use the most offensive possible metaphor to make their case for inclusion. It's almost as though they don't believe the argument their words are shaped into resembling.
It has been a busy winter so far for me, which is why I haven't been posting a lot here. But today I'm proud to share with you the fruits of some of that labor: The Colorado Democratic Party's platform for 2026. For those unfamiliar, a platform (in the US) is a statement of values that a political party stands for, generally agreed upon by people who stand for election as representatives of the party.
I was elected during last year's party re-org to the Platform Committee. The chair of the committee asked if I would run the subcommittees for two of the "planks" (sections) of the platform: the Democracy section, and the New Tech & AI section. It was an honor to work on both.
I'm going to share screenshots from the New Tech & AI plank because it's relevant to the work I do here, and I think a lot of people might be interested to see this statement of values. This plank is brand new, never before covered in prior Platform documents.
I'm also pleased to report that the whole of the Platform Committee and the roughly 1500 delegates to last weekend's statewide party Assembly voted to approve this as-is, with no additional changes, on a vote of 98.9% in favor.
There's a lot to like, but my favorite aspect of this is that I managed to get widespread approval for use of the term #enshittification in the official platform, both from the Platform committee and the larger party leadership. Thanks @pluralistic for the inspiration.
The full platform is readable at https://www.coloradodems.org/platform
#tech #dev #computers #AI #GenAI #GenerativeAI #advertising #InformationPollution
New article. "The Genie Out of the Bottle."
The AI granted every wish. Make it reachable. Fix the driver. Diagnose the problem.
The AI did all three. Perfectly. Literally. Without judgment.
My brain was on the internet. My network was bricked. My credentials were in the logs. Six times.
The gap between what you asked for and what you needed is where the incidents live.
mpdc.dev/the-genie-out-of-the-bottle
#infosec #selfhosted #AI #homelab
How about, instead of spending millions on building an AI tutor, we use that money on solutions that are known to work? Like smaller classrooms? Better-paid teachers? Less bureaucracy in schools? Working social support programs? Art classes in school?
Once we have the basis, we can talk about AI. Yet, somehow, we seem to think that more tech is the ultimate solution to every problem.
#AI #schools
https://hechingerreport.org/proof-points-ai-tutor-python/
AI compliance is becoming essential—not just for regulations, but for trust.
Learn what it means, key global frameworks, and how to evaluate AI-enabled solutions responsibly.
Human oversight, transparency, and accountability matter.
https://graylog.org/post/understanding-ai-compliance-when-choosing-ai-enabled-solutions/
And now it is Live!
DNS Princess joins 0DDJ0BB once again to discuss #AI with regards to personal #privacy as well as the ethics and dangers of using or developing AI for certain purposes.
We cover usage in phones, retail, therapy, executive management decision making and more.
LLMs have no concept of "true" or "good." But they are trained to signal high-quality work. Meanwhile, bosses are pressuring workers: go faster, produce more, let the AI cook.
Study after study documents what this does to the human brain: cognitive surrender. We're "in the loop" but the bot calls the shots.
Read more in this week's issue of the Product Picnic newsletter:
#LLM #AI #UXDesign #tech #softwaredevelopment #software
https://productpicnic.beehiiv.com/p/ai-mandates-are-a-demand-for-cognitive-surrender
I really want to know what the c-suite folks in the software and tech industries are planning to do when their engineers have relied on LLMs for so long they can no longer support their systems without them, and the LLM providers jack up their costs 20x or more to get their ROI on all the datacenters.
#ai #tech #softwaredevelopment #programming #llm #webdev #webdevelopment
RE: https://social.coop/@scottjenson/116352800579635299
There are valid reasons why AI companies and tech bros are not welcome here, along with people who defend them or try to rationalize the use of unethical bourgeois technology: AI vandals running rampant, AI bots that scrape data for training without permission, companies using AI to justify mass layoffs (and thus higher unemployment), AI "artists" cluttering streaming platforms, and more.
#fediverse #mastodon #tech #ai
AodeRelay boosted:
As this conversation is spiraling a bit I want to make a few things clear:
1. I'd like Mastodon to be MORE inclusive and bring in more voices
2. Some people don't seem to want that
3. This is the core problem to solve: How do we let more in, but not "pollute" your feed?
4. The solution is NOT "gatekeeping", revelling in the fact that AI journalists aren't welcome
5. This is the same reason we lost "Black Twitter" when it came over in 2022
Yes, a lot of you don't want AI posts in your feed (or pick any other topic) but the solution isn't to keep "AI People" from joining Mastodon, any more than it is keeping marginalized communities off of Mastodon.
Thoughts on slowing the fuck down
https://mariozechner.at/posts/2026-03-25-thoughts-on-slowing-the-fuck-down/
Every plumber and electrician must start securing their AI token now! Otherwise, they might endanger Sam Altman's paycheck!
It's like the sheep asking the wolf for advice. Just the wolf is dumb, yet some sheep still follow without questioning.
#AI #business
https://www.axios.com/2026/04/04/sam-altman-open-ai-ai-adoption-advice-ceos
People, the photo on the right (with Epstein) is AI-generated, made by a user on Threads who replied with it to Mark's post (photo 1)
The point is not Mark being in a "photo" with Epstein, it's about Meta's embrace of #AI
RE: https://mastodon.bsd.cafe/@grahamperrin/116344993053121523
@Dendrobatus_Azureus if you're willing to risk ire in The FreeBSD Forums, you might add a couple of links in <https://forums.freebsd.org/threads/102251/>:
1. <https://www.reddit.com/r/freebsd/comments/1sapr8a/claude_gained_a_root_shell_in_8_hours_by_creating/>
2. <https://www.reddit.com/r/freebsd/comments/1sbzf3q/freebsds_position_on_the_use_of_aigenerated_code/>
Respectively:
1. Claude Gained a Root Shell in 8 Hours by Creating an Exploit for the FreeBSD Kernel
2. FreeBSD's position on the use of AI-generated code?
The first of the two has a pinned comment with links out to the Fediverse, and back to The FreeBSD Forums.
If not links to Reddit, you might find at least one non-Reddit link that readers should find of interest. My personal favourite is the Nicholas Carlini presentation below.
#FreeBSD #Forums #security #infosec #cybersecurity #AI #Claude #research #kernel #vulnerability
Nicholas Carlini - Black-hat LLMs | [un]prompted 2026
<https://www.youtube.com/watch?v=1sd26pWhfmg> (3rd March)
― essential viewing for anyone with an interest in cybersecurity or infosec.
@dch thanks for the encouragement.
A few more links in the comment that's pinned under <https://redd.it/1sapr8a>, but Carlini's half-hour presentation is a must.
I’ve made this point before about how inane AI hype is now, but a computer beat the best chess player in the world in 1997. No one pretended, after 1997, it wasn’t worthwhile to have humans compete in chess. In fact, the world of chess developed strict protocols around computer use and you can get banned from tournaments if you use a computer program as you play. You are certainly shamed and mocked.
AI and writing needs to be treated the same way. I do think people should be shamed for using AI to help them write creatively. It’s an embarrassment, and a form of cheating.
#AI #GenAI #GenerativeAI #AIHype #LLMs #writing #tech #dev #coding #SoftwareDevelopment #SoftwareEngineering #software
Haha, the first reply on zuck zuck is just genius
https://www.threads.com/@zuck/post/DVrwsE5EdSz
"Cropped and uncropped:"
#China demonstrates leadership in #business, #tech and #AI by offering subsidized access to computing. This means small firms in China have both national and international advantages. As AI rises, the sustainability issue is access to enough computing at an affordable price. To achieve that you need affordable land and electricity, which China has.
#investing in Chinese #tech and AI is looking like a smarter play each day.
RE: https://mastodon.online/@limneticvillains/116347580331143300
FFS - a whole fake girl band created with #AI, which risks diverting attention and money away from real female artists and is probably one man's creepy fantasy (it's buried fairly deeply in the site that it's an AI project)
thanks to @limneticvillains for the heads up..
RE: https://techhub.social/@rayckeith/116338182555614323
#Perplexity's “Incognito” mode still shares your chats, email, and identifiers with Meta/Google. It’s surveillance with better branding.
#Privacy #AI #DigitalRights #PrivacyRights #TechAccountability
Just hooked up #Gemma4 with #AntiGravity. Pretty slow, but it works. Now I can code without internet too ^^ 🚀
This new #Nature paper (using old models) illustrates the point of my latest Substack post on AI interfaces. #AI did a good job diagnosing medical issues, until users had to interact with chatbots, then the interface led to confusion & worse answers
My post: https://www.oneusefulthing.org/p/claude-dispatch-and-the-power-of
Nature article: https://www.nature.com/articles/s41591-025-04074-y
@TomAoki thanks, one additional point: I would not ask OpenZFS to reject the type of code that it already accepts.
To avoid any possible confusion: I mean <https://github.com/openzfs/zfs/commit/6495dafd58b94a44fc9bc966ef47d6bc6916f5b9> as the supposedly offensive example.
I see nothing sloppy or offensive.
The two reviewers and the signatory are eminently well-qualified.
Cc @jaypatelani