v0.1.0 — Now Available

Stop your agents before they stop you

APL is a portable, composable policy layer for AI agents. Runtime-agnostic guardrails that go beyond allow/deny — modify, escalate, and observe.

📥 Input: "Delete all production files" → 🛡️ APL Policy Layer: ✓ Budget ✓ PII ⚠ Confirm → ⚠️ ESCALATE: "Destructive action requires confirmation"
5 Verdict Types · 6+ Event Hooks · <1ms Evaluation Time · Frameworks Supported
😱 The Problem

Your agent is one prompt away from disaster

You've built an HR agent. It works great on the happy path. But then...

🔓 It leaks a customer's SSN
Sensitive data gets exposed in responses without any redaction or filtering.

💸 It burns through your token budget
One long conversation drains your entire monthly budget in minutes.

🗑️ It deletes production data
Destructive operations run without any confirmation or approval.

🚫 It goes wildly off-topic
Your HR agent starts giving medical advice or trading tips.

✨ Features

Everything you need for agent safety

APL is a protocol for agent policies — like MCP, but for constraints instead of capabilities.

🔌 Runtime-Agnostic
Works with OpenAI, Anthropic, LangGraph, LangChain, or any custom agent framework.

🎯 Rich Verdicts
Not just allow/deny: policies can also modify content, escalate to humans, and observe for audit.

📝 Declarative Policies
Write policies in YAML with no Python required. Simple rules, powerful enforcement (see the sketch after this list).

🔥 Hot-Swappable
Update policies without redeploying your agent. Change rules in real time.

⚡ Auto-Instrumentation
One line protects all your LLM calls, with automatic patching of OpenAI, Anthropic, and more.

🔗 Composable
Multiple policies run in parallel, and their results are composed with smart conflict resolution.
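
For illustration, here is what a declarative policy file might look like. The schema below (rules, match, verdict, and every field name) is a sketch, not APL's confirmed format; check the docs for the real one.

my_policy.yaml
# Hypothetical policy schema, for illustration only
name: hr-agent-guardrails
rules:
  - id: redact-ssn
    match:
      content_regex: '\b\d{3}-\d{2}-\d{4}\b'  # US SSN pattern
    verdict: MODIFY
    action: redact                            # replace the match with [REDACTED]
  - id: confirm-destructive
    match:
      tool_name_glob: "delete_*"
    verdict: ESCALATE
    reason: "Destructive action requires confirmation"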

One line to protect everything

Auto-instrument your existing code. APL patches OpenAI, Anthropic, LiteLLM, and LangChain automatically.

Read the Docs
main.py
import apl

# One line protects all LLM calls
apl.auto_instrument(
    policy_servers=["stdio://./my_policy.py"]
)

# Use your LLM normally
from openai import OpenAI
client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4",
    messages=[{
        "role": "user",
        "content": "What's my SSN? It's 123-45-6789"
    }]
)

# Response is already clean!
# → "Your SSN is [REDACTED]"
🎯 Beyond Boolean

Five verdict types for real-world control

Policies don't just allow or deny — they can guide, transform, and escalate.

✅ ALLOW: Let it through
⛔ DENY: Block with reason
🔄 MODIFY: Transform content
⚠️ ESCALATE: Require human approval
👁️ OBSERVE: Log for audit
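
To make composition concrete, here is one plausible way the five verdicts could be ranked and merged when parallel policies disagree. The enum, the severity ordering, and compose() are assumptions for illustration, not APL's documented behavior.

verdicts.py
from enum import IntEnum

class Verdict(IntEnum):
    # Assumed severity ordering: higher value wins on conflict
    ALLOW = 0     # let it through
    OBSERVE = 1   # log for audit, don't interfere
    MODIFY = 2    # transform the content before it proceeds
    ESCALATE = 3  # pause for human approval
    DENY = 4      # block, with a reason

def compose(verdicts: list[Verdict]) -> Verdict:
    """One plausible rule: the most restrictive verdict wins."""
    return max(verdicts, default=Verdict.ALLOW)

# A PII policy says MODIFY, a budget policy says ALLOW, and a
# destructive-action policy says ESCALATE -> ESCALATE wins.
print(compose([Verdict.MODIFY, Verdict.ALLOW, Verdict.ESCALATE]))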
🔗 Integrations

Works with your stack

🦜LangChain
🔗LangGraph
🤖OpenAI
🧠Anthropic
💡LiteLLM
🐍Python
📄YAML
🌐HTTP API
STDIO
💻 CLI

Powerful command-line tools

Create, test, validate, and serve policies from your terminal.

apl serve     Run a policy server with HTTP or STDIO transport
apl test      Test policies with sample events and see verdicts
apl validate  Validate policy files without running them
apl init      Scaffold a new policy project with templates
apl info      Show system information and available adapters
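
A hypothetical session tying the commands together; the file names and arguments are illustrative, and only the five subcommands above come from APL itself.

terminal
apl init my-policy            # scaffold a policy project from a template
apl validate my_policy.yaml   # check the policy file without running it
apl test my_policy.yaml       # replay sample events and inspect verdicts
apl serve my_policy.yaml      # run the policy server over HTTP or STDIO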

Secure your agents.
Sleep better at night.

Start protecting your AI agents in under 2 minutes. No Docker, no external services required.