Prompt Injection Doesn't Come from Your Users

Most AI agent teams filter user inputs for prompt injection. But attackers are injecting through tool-call results: database records, web pages, and emails your agent reads.

Logan Kelly

AWS Security Agent Is Generally Available. Is Your Governance?

AWS Security Agent went GA on March 31, 2026. It runs autonomous penetration tests at $50 per task-hour with no built-in human approval gate before high-risk actions. Here's what that means for your governance posture.

Logan Kelly

Your Multi-Agent System Has a Governance Blind Spot. Here's Where to Look.

Governing each agent individually isn't enough when agents delegate to each other. The coordination layer — context handoffs, policy inheritance, trust boundaries — is where multi-agent incidents actually originate.

Logan Kelly

ForcedLeak: What Salesforce Agentforce's CVSS 9.4 Exploit Reveals About AI Agent Governance

ForcedLeak exposed sensitive CRM data via a $5 domain purchase and a public web form. Here's the governance gap that made it possible — and what would have stopped it.

Logan Kelly

PII Protection for AI Agents: Why Detection Is Not the Same as Prevention

Most teams detect PII only after it has entered the agent's context window. Prevention blocks it before it ever reaches the LLM. Here's why you need both layers — and which one most teams are missing.

Logan Kelly

The Trusted Document Problem: Why Indirect Prompt Injection Is Now Your AI Agent's #1 Security Risk

CIS and OWASP both rank prompt injection as the top AI security risk. Here's why the threat is worse than most teams assume — and why it arrives through trusted documents, not user inputs.

Logan Kelly

Waxell

Waxell provides observability and governance for AI agents in production. Bring your own framework.

© 2026 Waxell. All rights reserved.

Patent Pending.