Open-Source Protocol

Authorization Layer for AI Agents.

An open protocol that keeps people in control of AI agents. A person sets what an agent is allowed to do. The agent can't do anything else. Every action leaves a receipt.

How It Works in Practice

What HAP makes possible — in practice.

One agent, one set of rules

Each agent has its own set of rules about what it can do. A person decides what's allowed — how much it can spend, who it can email, what data it can change — and the agent can't do anything outside of that.
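A per-agent rule set like this can be pictured as a small allow-list that is checked before every action. The schema below is a minimal sketch for illustration only; HAP's actual rule format is not shown here, and all field names are assumptions.

```python
from dataclasses import dataclass

# Illustrative sketch of a per-agent rule set; not the published HAP schema.
@dataclass
class AgentRules:
    max_spend_usd: float
    allowed_email_recipients: set
    writable_datasets: set

def is_allowed(rules: AgentRules, action: dict) -> bool:
    """Return True only if the action stays inside the person's limits."""
    kind = action.get("kind")
    if kind == "spend":
        return action["amount_usd"] <= rules.max_spend_usd
    if kind == "email":
        return action["to"] in rules.allowed_email_recipients
    if kind == "write":
        return action["dataset"] in rules.writable_datasets
    return False  # anything not explicitly allowed is blocked

rules = AgentRules(
    max_spend_usd=500.0,
    allowed_email_recipients={"leads@example.com"},
    writable_datasets={"crm_notes"},
)
assert is_allowed(rules, {"kind": "spend", "amount_usd": 120.0})
assert not is_allowed(rules, {"kind": "spend", "amount_usd": 900.0})
assert not is_allowed(rules, {"kind": "delete_backup"})  # unknown kind: blocked
```

Note the default-deny at the end: an action type the person never thought about is blocked, not waved through.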

Each person controls their own area

Each person on a team has their own area — sales, marketing, finance. They decide which AI agents work in that area and what those agents can do. No central IT team in between.

Actions that need approval from more than one person

If an agent's action touches more than one area — say, a marketing agent that needs to spend part of the finance budget — each person responsible has to approve before the agent can proceed.
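The multi-area rule reduces to a simple check: collect a sign-off from the owner of every area the action touches, and block until all are present. A minimal sketch, with hypothetical names:

```python
# Illustrative sketch, not the HAP wire format: an action that touches
# several areas needs approval from the owner of each area.
def can_proceed(action_areas: set, approvals: dict) -> bool:
    """approvals maps area -> name of the person who approved (or None)."""
    return all(approvals.get(area) is not None for area in action_areas)

# A marketing agent wants to spend part of the finance budget:
areas = {"marketing", "finance"}
approvals = {"marketing": "dana"}      # finance has not approved yet
assert not can_proceed(areas, approvals)

approvals["finance"] = "omar"          # finance owner signs off
assert can_proceed(areas, approvals)
```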

Authority follows decisions, not job titles

Whoever is responsible for a decision approves it, no matter where they sit on the org chart. Managers don't have to sign off on everything — the person who actually owns the decision does.

Adding agents doesn't mean adding IT

Agents don't get accounts, passwords, or API keys of their own. Each one works under a person's approval. Adding more agents just means more approvals — not more systems to manage.
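One way to picture "no credentials of their own": the agent carries a grant signed on the person's behalf, and anything it does is checked against that grant, not against an agent identity. The HMAC scheme and field names below are assumptions for illustration, not part of the HAP specification.

```python
import hashlib
import hmac
import json

# Hypothetical key held on the approving person's behalf (e.g., by their
# Service Provider). The signing scheme here is an illustrative assumption.
PERSON_KEY = b"demo key for dana"

def sign_grant(grant: dict) -> str:
    payload = json.dumps(grant, sort_keys=True).encode()
    return hmac.new(PERSON_KEY, payload, hashlib.sha256).hexdigest()

def verify_grant(grant: dict, signature: str) -> bool:
    return hmac.compare_digest(sign_grant(grant), signature)

grant = {"approved_by": "dana", "agent": "marketing-bot", "max_spend_usd": 500}
sig = sign_grant(grant)

assert verify_grant(grant, sig)           # the action traces back to Dana
tampered = dict(grant, max_spend_usd=50_000)
assert not verify_grant(tampered, sig)    # changing the limits breaks the trace
```

The point of the sketch: there is no agent password to rotate or revoke. Withdrawing the person's grant is enough to stop the agent.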

How HAP Works

A person approves what an agent is allowed to do. The system enforces it — blocking anything outside the approval. Every action produces a receipt anyone can check.

[Diagram: a Human authorizes an AI Agent through three components: Service Provider, Gatekeeper, and Executor.]

Service Provider

Where a person records their approval, along with the exact limits.

Gatekeeper

Checks the approval before any action runs. Blocks anything outside the approved limits.

Executor

Runs the action — but only if the Gatekeeper allows it.

HAP uses two pieces of infrastructure: Service Providers record approvals. Gatekeepers check them before anything runs.
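The record, check, execute, receipt flow described above can be sketched end to end. Class and method names are illustrative assumptions, not the HAP API:

```python
# Illustrative sketch of the HAP flow: record an approval, check it
# before any action runs, and leave a receipt either way.
class ServiceProvider:
    """Where a person's approval and its exact limits are recorded."""
    def __init__(self):
        self._approvals = {}

    def record(self, approval_id: str, limits: dict) -> None:
        self._approvals[approval_id] = limits

    def lookup(self, approval_id: str):
        return self._approvals.get(approval_id)

class Gatekeeper:
    """Checks the approval before any action runs; blocks everything else."""
    def __init__(self, provider: ServiceProvider):
        self.provider = provider
        self.receipts = []

    def run(self, approval_id: str, action: dict, executor):
        limits = self.provider.lookup(approval_id)
        allowed = limits is not None and action["amount_usd"] <= limits["max_spend_usd"]
        result = executor(action) if allowed else "blocked"
        # Every attempt leaves a receipt, whether it ran or was blocked.
        self.receipts.append({"approval": approval_id, "action": action, "result": result})
        return result

sp = ServiceProvider()
sp.record("appr-1", {"max_spend_usd": 500})
gk = Gatekeeper(sp)

executed = gk.run("appr-1", {"amount_usd": 120}, lambda a: "done")
blocked = gk.run("appr-1", {"amount_usd": 900}, lambda a: "done")
assert executed == "done" and blocked == "blocked"
assert len(gk.receipts) == 2
```

The Executor here is just the callable passed in: it only ever runs after the Gatekeeper's check, which matches the role split described above.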

Agents Aren't Employees. They're Extensions.

Other approaches give AI agents their own accounts and passwords, like employees with their own identity. But if an agent does something wrong, who's responsible? The agent can't be held to account — it's software. There's no way for it to feel a consequence.

HAP works differently. An agent never acts on its own authority. Every action traces back to the person who approved it, with the exact limits they set. The agent is an extension of the person — not a separate employee.

Anything that can't be undone only runs if a person approved it — and stays inside the limits that person set.

Explore Use Cases

Select a use case to generate a ready-to-paste prompt for your AI chat.


Compliance Alignment

HAP turns compliance requirements into something the system actually enforces — not just something written in a policy document.

EU AI Act

Article 14 of the EU AI Act requires real human oversight of high-risk AI. HAP provides this by design — oversight isn't a checkbox in a policy, it's built into how the system runs.

ISO 42001

Every AI action needs a person who owns the decision and has set the limits — and has said why. No approval, no action.

NIST AI RMF

Every action leaves a tamper-proof record — who approved it, what limits were set, what happened. Anyone can verify it.
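A hash chain is one common way to make such a record tamper-proof; anyone holding the log can recompute the hashes and detect edits. This is a sketch of that general technique under stated assumptions, not a claim about how HAP stores receipts.

```python
import hashlib
import json

# Illustrative tamper-evident receipt log: each record's hash covers the
# previous record's hash, so editing any entry breaks every later link.
def append_receipt(chain: list, entry: dict) -> None:
    prev = chain[-1]["hash"] if chain else "0" * 64
    body = json.dumps({"prev": prev, "entry": entry}, sort_keys=True)
    chain.append({"entry": entry, "prev": prev,
                  "hash": hashlib.sha256(body.encode()).hexdigest()})

def verify_chain(chain: list) -> bool:
    prev = "0" * 64
    for rec in chain:
        body = json.dumps({"prev": prev, "entry": rec["entry"]}, sort_keys=True)
        if rec["prev"] != prev or rec["hash"] != hashlib.sha256(body.encode()).hexdigest():
            return False
        prev = rec["hash"]
    return True

chain = []
append_receipt(chain, {"who": "dana", "limit": 500, "action": "spend 120"})
append_receipt(chain, {"who": "omar", "limit": 200, "action": "email lead"})
assert verify_chain(chain)

chain[0]["entry"]["limit"] = 50_000    # tamper with the first record
assert not verify_chain(chain)
```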

Build With HAP

HAP is an open protocol for keeping people in charge of AI agents. Verifiable. Works across platforms. Not tied to any AI vendor.