Guardrails Implementation (similar to the OpenAI Agents SDK) #5359
Replies: 3 comments
Hi @tarunvallabh, thanks for the suggestion! We love that you're building with the Agno framework, and we agree that built-in guardrails are crucial. Great news: we are actively working on implementing a guardrails system that follows a similar pattern to what you've referenced! We appreciate you raising this important feature.
Really glad to see guardrails being discussed as a first-class feature! Beyond input/output validation guardrails (like the OpenAI Agents SDK pattern), there's another important layer: runtime security guardrails for agent actions. This means validating not just what the LLM says, but what it does: intercepting tool calls, checking them against security policies, and blocking unauthorized actions before they execute. For anyone looking to add this layer today, ClawMoat is an open-source runtime security scanner built specifically for AI agents.
Would be awesome if Agno's guardrails system supported a plugin/middleware pattern where security tools like this could hook in natively. The OpenAI Agents SDK approach of guardrails-as-decorators is a nice pattern to follow.
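To make the tool-call interception idea concrete, here is a minimal sketch of what such a middleware-style runtime guardrail could look like. All names here (`GuardrailPolicy`, `guarded_call`) are hypothetical for illustration; this is not the Agno, ClawMoat, or OpenAI Agents SDK API.

```python
# Hypothetical sketch of a runtime tool-call guardrail: intercept every
# tool invocation and check it against a policy before executing.
from dataclasses import dataclass, field
from typing import Any, Callable


@dataclass
class GuardrailPolicy:
    """Simple allow-list policy for agent tool calls (illustrative only)."""
    allowed_tools: set[str] = field(default_factory=set)

    def check(self, tool_name: str, args: dict[str, Any]) -> bool:
        # A real policy could also inspect the arguments (paths, URLs, ...).
        return tool_name in self.allowed_tools


def guarded_call(policy: GuardrailPolicy,
                 tools: dict[str, Callable[..., Any]],
                 tool_name: str, **args: Any) -> Any:
    """Run a tool only if the policy approves; block it otherwise."""
    if not policy.check(tool_name, args):
        raise PermissionError(f"Blocked unauthorized tool call: {tool_name}")
    return tools[tool_name](**args)
```

The framework-level plugin hook would sit where `guarded_call` sits here: between the LLM's decision to call a tool and the actual execution, so a security scanner can veto the action.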
Update: ClawMoat is now on npm —
Hi team!
Love building with the framework. Curious if there are any plans to implement the guardrails system from the OpenAI Agents SDK (https://openai.github.io/openai-agents-python/guardrails/). Having built-in guardrails that can run checks and validations in parallel with the agent, instead of relying on a hook, would be really useful for some of my latency-sensitive use cases.
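The latency win of running the check concurrently rather than as a sequential pre-hook can be sketched with plain asyncio. Everything here is illustrative: `run_with_guardrail` and `GuardrailTripwire` are stand-in names, not Agno or OpenAI SDK APIs; the assumed contract is that the guardrail coroutine returns `True`/`False`.

```python
# Minimal sketch: run a guardrail check in parallel with the agent, so a
# fast check adds no latency to the happy path but can still cancel a run.
import asyncio
import contextlib


class GuardrailTripwire(Exception):
    """Raised when the parallel guardrail rejects the input."""


async def run_with_guardrail(agent_coro, guardrail_coro):
    # Start both tasks at once; the check overlaps with the agent's work.
    agent = asyncio.ensure_future(agent_coro)
    guard = asyncio.ensure_future(guardrail_coro)
    if not await guard:          # guardrail verdict comes back False
        agent.cancel()           # stop the in-flight agent run
        with contextlib.suppress(asyncio.CancelledError):
            await agent
        raise GuardrailTripwire("input rejected by guardrail")
    return await agent           # guardrail passed: return the agent result
```

Compared with a sequential hook, the total latency here is roughly `max(guardrail, agent)` instead of `guardrail + agent`, which is the property the request above is after.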