APL is a portable, composable policy layer for AI agents. Runtime-agnostic guardrails that go beyond allow/deny — modify, escalate, and observe.
You've built an HR agent. It works great on the happy path. But then...
Sensitive data exposed in responses without any redaction or filtering.
One long conversation drains your entire monthly budget in minutes.
Destructive operations executed without any confirmation or approval.
Your HR agent starts giving medical advice or trading tips.
APL is a protocol for agent policies — like MCP, but for constraints instead of capabilities.
Works with OpenAI, Anthropic, LangGraph, LangChain, or any custom agent framework.
Not just allow/deny — also modify content, escalate to humans, and observe for audit.
Write policies in YAML with no Python required. Simple rules, powerful enforcement.
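To make this concrete, here is a hedged sketch of what such a YAML policy might look like. The schema below (keys like `name`, `rules`, `match`, `action`) is an assumption made for illustration and is not APL's documented format.

```yaml
# Hypothetical policy sketch; key names are illustrative, not APL's documented schema.
name: pii-redaction
description: Keep sensitive identifiers out of agent responses
rules:
  - match: ssn                  # e.g. detect US Social Security numbers
    action: modify              # rewrite the content instead of blocking it
    replacement: "[REDACTED]"
  - match: destructive_operation
    action: escalate            # pause and ask a human to approve
```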
Update policies without redeploying your agent. Change rules in real-time.
One line to protect all your LLM calls. Automatic patching of OpenAI, Anthropic, and more.
Multiple policies run in parallel. Results composed with smart conflict resolution.
Auto-instrument your existing code. APL patches OpenAI, Anthropic, LiteLLM, and LangChain automatically.
```python
import apl

# One line protects all LLM calls
apl.auto_instrument(
    policy_servers=["stdio://./my_policy.py"]
)

# Use your LLM normally
from openai import OpenAI

client = OpenAI()
response = client.chat.completions.create(
    model="gpt-4",
    messages=[{
        "role": "user",
        "content": "What's my SSN? It's 123-45-6789"
    }]
)

# Response is already clean!
# → "Your SSN is [REDACTED]"
```
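Because `policy_servers` takes a list, several policies can plausibly be attached in one call. The sketch below assumes this and uses hypothetical file names to illustrate the parallel composition described above.

```python
import apl

# Hypothetical sketch: attach several policy servers at once.
# The file names are illustrative; passing multiple entries is an
# assumption based on the list form shown in the example above.
apl.auto_instrument(
    policy_servers=[
        "stdio://./pii_redaction.py",   # modify: strip sensitive data
        "stdio://./budget_guard.py",    # deny: stop runaway spend
        "stdio://./approval_gate.py",   # escalate: require human sign-off
    ]
)
# Each policy evaluates the same call in parallel, and APL composes the
# results using its conflict resolution (exact precedence not shown here).
```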
Policies don't just allow or deny — they can guide, transform, and escalate.
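As a purely conceptual sketch (not APL's actual policy API), a policy verdict can be modeled as something richer than a boolean. Every name below is hypothetical and serves only to illustrate the modify / escalate / observe outcomes.

```python
# Purely illustrative: a conceptual policy decision, not APL's real API.
# A verdict is richer than allow/deny; it can rewrite content, hand off
# to a human, or simply record the event for audit.
def review_response(text: str) -> dict:
    if "123-45-6789" in text:
        return {
            "action": "modify",  # transform the content rather than block it
            "content": text.replace("123-45-6789", "[REDACTED]"),
        }
    if "DROP TABLE" in text:
        return {"action": "escalate", "reason": "destructive operation"}  # ask a human
    return {"action": "observe", "note": "logged for audit"}  # allow, but record it
```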
Create, test, validate, and serve policies from your terminal.
Start protecting your AI agents in under 2 minutes. No Docker, no external services required.