Open-Source Protocol
What HAP makes possible — in practice.
Each agent has its own set of rules about what it can do. A person decides what's allowed — how much it can spend, who it can email, what data it can change — and the agent can't do anything outside of that.
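To make the idea concrete, here is a minimal sketch of an approval with limits and a check against it. All names and fields (`max_spend_usd`, `email_allowlist`, and so on) are hypothetical illustrations, not taken from the HAP specification.

```python
# Illustrative only: structure and field names are invented for this
# sketch, not drawn from the HAP spec.
approval = {
    "approver": "dana@example.com",   # the person responsible
    "agent": "marketing-agent-01",
    "limits": {
        "max_spend_usd": 500,
        "email_allowlist": ["team@example.com"],
    },
}

def within_limits(action: dict, approval: dict) -> bool:
    """Return True only if the action stays inside the approved limits."""
    limits = approval["limits"]
    if action.get("spend_usd", 0) > limits["max_spend_usd"]:
        return False
    recipient = action.get("email_to")
    if recipient and recipient not in limits["email_allowlist"]:
        return False
    return True

print(within_limits({"spend_usd": 200}, approval))   # inside the budget
print(within_limits({"spend_usd": 900}, approval))   # over budget: blocked
```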
Each person on a team has their own area — sales, marketing, finance. They decide which AI agents work in that area and what those agents can do. No central IT team in between.
If an agent's action touches more than one area — say, a marketing agent that needs to spend part of the finance budget — each person responsible has to approve before the agent can proceed.
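A cross-area action can be expressed as a simple rule: it proceeds only when every area it touches has an approval on file. This is a hypothetical sketch of that rule, with invented names, not the protocol's actual mechanism.

```python
# Illustrative sketch: an action touching several areas proceeds only
# once every responsible person has approved. Names are hypothetical.

def can_proceed(action_areas: set, approvals: dict) -> bool:
    """True only if every area the action touches has an approval on file."""
    return all(approvals.get(area) is not None for area in action_areas)

# Marketing has approved; finance has not yet.
approvals = {"marketing": "mia@example.com", "finance": None}
print(can_proceed({"marketing", "finance"}, approvals))  # blocked

approvals["finance"] = "finn@example.com"
print(can_proceed({"marketing", "finance"}, approvals))  # both approved
```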
Whoever is responsible for a decision approves it, no matter where they sit on the org chart. Managers don't have to sign off on everything — the person who actually owns the decision does.
Agents don't get accounts, passwords, or API keys of their own. Each one works under a person's approval. Adding more agents just means more approvals — not more systems to manage.
A person approves what an agent is allowed to do. The system enforces it — blocking anything outside the approval. Every action produces a receipt anyone can check.
Service Provider: where a person records their approval, along with the exact limits.
Gatekeeper: checks the approval before any action runs, and blocks anything outside the approved limits.
Agent: runs the action, but only if the Gatekeeper allows it.
HAP uses two pieces of infrastructure: Service Providers, which record approvals, and Gatekeepers, which check them before anything runs.
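The flow between those two pieces can be sketched in a few lines. Class and method names here are invented for illustration; the real protocol defines its own interfaces.

```python
# Hypothetical sketch of the HAP flow. All names are illustrative,
# not drawn from the specification.

class ServiceProvider:
    """Records approvals made by people."""
    def __init__(self):
        self._approvals = {}

    def record(self, agent_id: str, approval: dict) -> None:
        self._approvals[agent_id] = approval

    def lookup(self, agent_id: str):
        return self._approvals.get(agent_id)

class Gatekeeper:
    """Checks the approval before any action runs."""
    def __init__(self, provider: ServiceProvider):
        self.provider = provider

    def allow(self, agent_id: str, action: dict) -> bool:
        approval = self.provider.lookup(agent_id)
        if approval is None:
            return False  # no approval, no action
        return action.get("spend_usd", 0) <= approval["max_spend_usd"]

provider = ServiceProvider()
provider.record("agent-01", {"approver": "dana@example.com", "max_spend_usd": 500})
gate = Gatekeeper(provider)

print(gate.allow("agent-01", {"spend_usd": 100}))  # inside limits
print(gate.allow("agent-01", {"spend_usd": 900}))  # over budget: blocked
print(gate.allow("agent-02", {"spend_usd": 10}))   # no approval recorded: blocked
```

The point of the split is that the Gatekeeper never trusts the agent: it consults the recorded approval every time, so "no approval, no action" is enforced by code rather than by convention.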
Other approaches give AI agents their own accounts and passwords, like employees with their own identity. But if an agent does something wrong, who's responsible? The agent can't be held to account — it's software. There's no way for it to feel a consequence.
HAP works differently. An agent never acts on its own authority. Every action traces back to the person who approved it, with the exact limits they set. The agent is an extension of the person — not a separate employee.
Anything that can't be undone only runs if a person approved it — and stays inside the limits that person set.
HAP turns compliance requirements into something the system actually enforces — not just something written in a policy document.
Article 14 of the EU AI Act requires real human oversight of high-risk AI. HAP provides this by design — oversight isn't a checkbox in a policy, it's built into how the system runs.
Every AI action needs a person who owns the decision and has set the limits — and has said why. No approval, no action.
Every action leaves a tamper-proof record — who approved it, what limits were set, what happened. Anyone can verify it.
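One way to see how such a record resists tampering: if the receipt is signed, changing any field breaks verification. This sketch uses a stdlib HMAC as a stand-in for whatever signature scheme HAP actually specifies; the key handling and field names are assumptions for illustration.

```python
# Sketch of a tamper-evident receipt. HMAC-SHA256 stands in for the
# real signature scheme; all names and fields are illustrative.
import hashlib
import hmac
import json

SECRET = b"gatekeeper-signing-key"  # hypothetical key held by the signer

def sign_receipt(receipt: dict) -> str:
    payload = json.dumps(receipt, sort_keys=True).encode()
    return hmac.new(SECRET, payload, hashlib.sha256).hexdigest()

def verify_receipt(receipt: dict, tag: str) -> bool:
    return hmac.compare_digest(sign_receipt(receipt), tag)

receipt = {
    "approver": "dana@example.com",
    "limits": {"max_spend_usd": 500},
    "action": "pay_invoice",
    "spend_usd": 120,
}
tag = sign_receipt(receipt)

print(verify_receipt(receipt, tag))  # untouched record verifies
receipt["spend_usd"] = 9000          # tamper with the record
print(verify_receipt(receipt, tag))  # verification now fails
```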
The specification — how approvals are structured and signed.
Where people record their approvals.
The check that makes sure nothing runs without an approval.
An open-source program that runs the check alongside your AI tools.
How the protocol is governed and who runs it.
HAP is an open protocol for keeping people in charge of AI agents. Verifiable. Works across platforms. Not tied to any AI vendor.