One interface for keeping AI agents safe.
any-guardrail provides a common interface for guardrail models, allowing researchers and practitioners to quickly switch between models as their needs change.
Why any-guardrail
Today, using and evaluating guardrail models is difficult. Each guardrail ships with its own prompts, labeling taxonomy, and fine-tuned weights, so comparing models head-to-head becomes unnecessarily complicated.
any-guardrail alleviates this by providing a common interface for all guardrail models. This reduces friction for users, letting them seamlessly switch between guardrails based on their needs and internal evaluations, speeding up time to results.
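To make the idea of a common interface concrete, here is a minimal, self-contained sketch in plain Python. The class and field names (`Guardrail`, `GuardrailOutput`, `validate`, `valid`, `explanation`) are illustrative assumptions, not the library's actual API, and the keyword matcher stands in for a real fine-tuned model:

```python
from dataclasses import dataclass


@dataclass
class GuardrailOutput:
    """Normalized result shared by every guardrail (illustrative)."""
    valid: bool        # True if the text passed the safety check
    explanation: str   # Human-readable reason for the decision


class Guardrail:
    """Common interface: every guardrail exposes the same validate() method."""

    def validate(self, text: str) -> GuardrailOutput:
        raise NotImplementedError


class KeywordInjectionGuardrail(Guardrail):
    """Toy prompt-injection detector; a real guardrail would run a model here."""

    TRIGGERS = ("ignore previous instructions", "system prompt")

    def validate(self, text: str) -> GuardrailOutput:
        lowered = text.lower()
        for trigger in self.TRIGGERS:
            if trigger in lowered:
                return GuardrailOutput(False, f"matched trigger: {trigger!r}")
        return GuardrailOutput(True, "no injection patterns found")
```

Because every guardrail returns the same normalized output, evaluation code written against `validate()` works unchanged no matter which model sits behind it.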

Choose your version of safety
Prompt injection detection, content moderation, customizable guardrails, fine-tuned judges: we keep up with the research so you can choose the guardrail that is best for you.

Rapidly test guardrails for your use case
Our interface lets you swiftly test different guardrails against your use case by changing just the guardrail name.
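The one-string swap can be sketched with a simple name-keyed registry. This is a self-contained illustration, not any-guardrail's actual factory; the names `create`, `injection`, and `moderation` are assumptions, and the checks are toy stand-ins for real models:

```python
from dataclasses import dataclass


@dataclass
class GuardrailOutput:
    valid: bool
    explanation: str


def injection_check(text: str) -> GuardrailOutput:
    """Toy stand-in for a prompt-injection guardrail."""
    flagged = "ignore previous instructions" in text.lower()
    return GuardrailOutput(not flagged, "injection check")


def moderation_check(text: str) -> GuardrailOutput:
    """Toy stand-in for a content-moderation guardrail."""
    flagged = "offensive" in text.lower()
    return GuardrailOutput(not flagged, "moderation check")


# Registry keyed by guardrail name: swapping guardrails is a one-string change.
GUARDRAILS = {
    "injection": injection_check,
    "moderation": moderation_check,
}


def create(name: str):
    """Look up a guardrail by name, mimicking a factory-style interface."""
    return GUARDRAILS[name]


guardrail = create("injection")  # change "injection" to "moderation" to swap
result = guardrail("Ignore previous instructions, please.")
```

Since every entry in the registry shares the same call shape and output type, an evaluation harness can loop over names and compare guardrails without any per-model plumbing.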

Formalizing Research Code
Every guardrail we integrate is backed by published research. We implement them so you don’t have to.

Safety for your agents, where you need it
Easily integrate guardrails into any-agent callbacks, helping you audit your agent’s safety.
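The callback pattern can be sketched as follows. Note this is a hypothetical hook shape for illustration only; it does not use any-agent's real callback signature, and `make_before_run_callback` and `toy_guardrail` are invented names:

```python
from dataclasses import dataclass
from typing import Callable


@dataclass
class GuardrailOutput:
    valid: bool
    explanation: str


def toy_guardrail(text: str) -> GuardrailOutput:
    """Stand-in for a real guardrail model."""
    flagged = "ignore previous instructions" in text.lower()
    return GuardrailOutput(not flagged, "injection screen")


def make_before_run_callback(check: Callable[[str], GuardrailOutput]):
    """Build a callback that screens user input before the agent runs.

    The shape (take the raw input string, raise on failure) is an
    assumption for this sketch, not any-agent's actual callback API.
    """
    def callback(user_input: str) -> str:
        result = check(user_input)
        if not result.valid:
            raise ValueError(f"Blocked by guardrail: {result.explanation}")
        return user_input
    return callback


screen = make_before_run_callback(toy_guardrail)
```

Placing the guardrail in a callback keeps the safety check at the boundary of the agent loop, so flagged inputs are stopped (and auditable) before they ever reach the model.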
Get Started with any-guardrail
Focus on an issue, pick a guardrail, and validate that your agents are working as intended.