Superagent is an open-source AI safety platform built to protect applications from prompt injections, data leaks, and harmful outputs. It embeds real-time safety checks directly into AI workflows, helping teams secure models before threats cause damage.

Superagent provides guardrails that block jailbreaks, prompt manipulation, and sensitive data exfiltration; redaction tools that automatically remove PII, PHI, and secrets from text; and repository scanning that detects AI-specific attack vectors such as repo poisoning. The platform is designed for low-latency production environments, works with any major LLM provider, and helps teams prove compliance with modern AI security and regulatory standards.
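As a rough illustration of where such guardrails sit in an application, the TypeScript sketch below screens a user prompt before it reaches the model. The `screenPrompt` helper, the `/v1/guard` endpoint, and the response shape are assumptions made for this example, not Superagent's documented API.

```typescript
// Hypothetical illustration: screening user input with a guardrail
// service before forwarding it to an LLM. The client, endpoint, and
// response shape below are assumptions, not Superagent's actual API.

type GuardrailVerdict = {
  allowed: boolean;   // false when a prompt injection or jailbreak is detected
  reason?: string;    // human-readable explanation of the block
};

async function screenPrompt(prompt: string): Promise<GuardrailVerdict> {
  // Placeholder endpoint; a real deployment would point at the guardrail service.
  const res = await fetch("http://localhost:8080/v1/guard", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ input: prompt }),
  });
  return (await res.json()) as GuardrailVerdict;
}

async function handleUserMessage(prompt: string): Promise<string> {
  const verdict = await screenPrompt(prompt);
  if (!verdict.allowed) {
    // Block the request instead of passing a manipulated prompt to the model.
    return `Request blocked: ${verdict.reason ?? "unsafe input"}`;
  }
  return callLlm(prompt); // callLlm stands in for any LLM provider call
}

// Stand-in for a provider call (OpenAI, Anthropic, etc.); not part of Superagent.
async function callLlm(prompt: string): Promise<string> {
  return `model response for: ${prompt}`;
}
```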
Features
- Run locally with Docker and Docker Compose
- JavaScript SDK (see the example sketch after this list)
- Documentation available
- Simplifies the configuration and deployment of LLMs
- Manage and deploy AI agents to production
- Built-in memory and document retrieval via vector databases, plus tools, webhooks, cron jobs, and more
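For the JavaScript SDK item above, here is a minimal sketch of what redacting PII and secrets from text can look like. The regex rules are simplified stand-ins written for this example; they are not the SDK's actual detection logic.

```typescript
// Hypothetical sketch of text redaction before storage or logging.
// The patterns below are simplified stand-ins; real detection of
// PII, PHI, and secrets is more involved and is not shown here.

const REDACTION_RULES: Array<{ label: string; pattern: RegExp }> = [
  { label: "EMAIL", pattern: /[\w.+-]+@[\w-]+\.[\w.]+/g },
  { label: "SSN", pattern: /\b\d{3}-\d{2}-\d{4}\b/g },
  { label: "API_KEY", pattern: /\bsk-[A-Za-z0-9]{20,}\b/g },
];

function redact(text: string): string {
  // Replace each match with a typed placeholder so downstream systems
  // never see the raw value.
  return REDACTION_RULES.reduce(
    (acc, rule) => acc.replace(rule.pattern, `[REDACTED:${rule.label}]`),
    text,
  );
}

console.log(redact("Contact jane@example.com, SSN 123-45-6789."));
// -> "Contact [REDACTED:EMAIL], SSN [REDACTED:SSN]."
```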