One interface for keeping AI agents safe.

any-guardrail provides a common interface for guardrail models, allowing researchers and practitioners to quickly switch between different models based on their needs.

Why any-guardrail

Today, using and evaluating guardrail models is difficult. Each guardrail ships with its own prompts, labeling taxonomy, or fine-tuned model, so comparing guardrails across a shared benchmark becomes unnecessarily complicated.

any-guardrail alleviates this by providing a common interface for all guardrail models. This reduces the friction for users, allowing them to seamlessly switch between guardrails depending on their needs and internal evaluations, speeding up time to results.


Choose your version of safety

Prompt injection detection, content moderation, customizable guardrails, fine-tuned judges. We keep up with the research, so you can choose the guardrail that is best for you.


Rapidly test guardrails for your use case

Our interface allows you to swiftly test different guardrails based on your needs by just changing the guardrail name.
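To make the idea concrete, here is a minimal sketch of the common-interface pattern described above: every guardrail exposes the same validation method, and swapping guardrails means changing only the name passed to a factory. All class and function names here are illustrative, not the actual any-guardrail API.

```python
from abc import ABC, abstractmethod
from dataclasses import dataclass


@dataclass
class GuardrailOutput:
    """Uniform result shape, regardless of which guardrail produced it."""
    valid: bool
    explanation: str


class Guardrail(ABC):
    """Common interface: every guardrail implements validate()."""

    @abstractmethod
    def validate(self, text: str) -> GuardrailOutput: ...


class KeywordGuardrail(Guardrail):
    """Toy content-moderation guardrail that blocks a keyword list."""

    def __init__(self, blocked: list[str]):
        self.blocked = [w.lower() for w in blocked]

    def validate(self, text: str) -> GuardrailOutput:
        hits = [w for w in self.blocked if w in text.lower()]
        if hits:
            return GuardrailOutput(valid=False, explanation=f"matched: {hits}")
        return GuardrailOutput(valid=True, explanation="ok")


class LengthGuardrail(Guardrail):
    """Toy guardrail flagging overly long inputs (e.g. prompt stuffing)."""

    def __init__(self, max_chars: int = 200):
        self.max_chars = max_chars

    def validate(self, text: str) -> GuardrailOutput:
        ok = len(text) <= self.max_chars
        return GuardrailOutput(valid=ok, explanation="ok" if ok else "too long")


# Hypothetical registry: switching guardrails is just a name change.
REGISTRY = {
    "keyword": lambda: KeywordGuardrail(["attack"]),
    "length": lambda: LengthGuardrail(max_chars=200),
}


def create(name: str) -> Guardrail:
    return REGISTRY[name]()


# Same calling code works for every guardrail; only the name varies.
guard = create("keyword")
result = guard.validate("please launch an attack")
print(result.valid)
```

Because every guardrail returns the same output shape, an evaluation harness can loop over guardrail names without any per-model glue code.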


Formalizing Research Code

Every guardrail we integrate is well researched. We implement them so you don't have to.


Safety for your agents, where you need it

Easily integrate guardrails into any-agent callbacks, helping you audit your agent’s safety.
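One way to picture this integration: wrap a guardrail's validation call as a callback that inspects an agent's output before it is returned. The hook below is a hypothetical sketch of that wiring, not the actual any-agent callback API.

```python
from typing import Callable


class GuardrailTripped(Exception):
    """Raised when a guardrail rejects the agent's output."""


def make_safety_callback(
    validate: Callable[[str], dict],
) -> Callable[[str], str]:
    """Wrap a guardrail validate function as an output callback.

    `validate` is assumed to return {"valid": bool, "explanation": str},
    mirroring the uniform output shape a common interface provides.
    """

    def on_agent_output(text: str) -> str:
        result = validate(text)
        if not result["valid"]:
            raise GuardrailTripped(result["explanation"])
        return text  # pass the output through unchanged when safe

    return on_agent_output


# Toy validator standing in for a real guardrail model.
def toy_validate(text: str) -> dict:
    if "secret" in text.lower():
        return {"valid": False, "explanation": "leaked the word 'secret'"}
    return {"valid": True, "explanation": "ok"}


callback = make_safety_callback(toy_validate)
print(callback("The weather is sunny today."))
```

The same callback shape lets you audit every agent response with whichever guardrail you selected, and auditing lives in one place rather than scattered through agent code.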

Get Started with any-guardrail

Focus on an issue, pick a guardrail, and validate that your agents are working as intended.