Open-Source Protocol

Authorization Layer for AI Agents.

Every agent action is authorized by a human, bounded, and cryptographically proven. No policy engines. No role hierarchies. A protocol for accountability that scales without scaling IT.

How HAP Works

HAP separates authorization from execution. Humans authorize actions through cryptographic attestations. Gatekeepers verify those attestations before any system is allowed to execute.
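As a concrete sketch, an attestation's payload might carry fields like these. The names are illustrative assumptions for this page, not the HAP wire format:

```typescript
// Illustrative attestation payload. Field names are assumptions for this
// sketch, not the HAP specification's wire format.
interface AttestationPayload {
  authorizer: string;             // the human who authorized the action
  action: string;                 // what was authorized, e.g. "payments.transfer"
  bounds: Record<string, number>; // machine-checkable limits, e.g. { maxAmountUsd: 500 }
  issuedAt: string;               // ISO-8601 issue time
  expiresAt: string;              // attestations are short-lived by design
}
```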

Human → AI Agent → Service Provider → Gatekeeper → Executor

Service Provider

Issues cryptographic attestations proving a human authorized an action within defined bounds.
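A minimal issuance sketch using Node's built-in Ed25519 signing, assuming the payload shape above. A real Service Provider would manage its keys in an HSM or KMS and follow the protocol's actual serialization rules:

```typescript
import { generateKeyPairSync, sign } from "node:crypto";

// Hypothetical Service Provider signing key; illustration only.
const { publicKey, privateKey } = generateKeyPairSync("ed25519");

// Serialize the human's authorization and sign it. The signature is the
// cryptographic proof that a Gatekeeper will later verify.
function issueAttestation(authorizer: string, action: string, bounds: Record<string, number>) {
  const payload = JSON.stringify({
    authorizer,
    action,
    bounds,
    issuedAt: new Date().toISOString(),
    expiresAt: new Date(Date.now() + 5 * 60_000).toISOString(), // short-lived: 5 minutes
  });
  // For Ed25519 keys, Node's sign() takes null as the algorithm.
  const signature = sign(null, Buffer.from(payload), privateKey).toString("base64");
  return { payload, signature };
}
```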

Gatekeeper

Verifies attestations before execution and blocks any action that exceeds authorized limits.
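A matching Gatekeeper sketch, continuing the issuance example above. The bound name maxAmountUsd is invented for illustration; the point is that every check is mechanical and happens before execution:

```typescript
import { verify, type KeyObject } from "node:crypto";

// Refuse to let the action through unless the signature is valid, the
// attestation is unexpired, and the request fits the authorized bounds.
function gatekeep(
  att: { payload: string; signature: string },
  providerKey: KeyObject,
  requestedAmountUsd: number,
): boolean {
  const validSignature = verify(
    null, Buffer.from(att.payload), providerKey, Buffer.from(att.signature, "base64"),
  );
  if (!validSignature) return false;                          // no proof a human authorized this
  const claims = JSON.parse(att.payload);
  if (new Date(claims.expiresAt) <= new Date()) return false; // attestation has expired
  return requestedAmountUsd <= claims.bounds.maxAmountUsd;    // over the limit: blocked
}
```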

Executor

Performs the action, but only after the authorization has been validated.

HAP enforces authorization through two infrastructure components. Service Providers issue attestations; Gatekeepers verify them before execution.

AI Executes. Humans Own It.

AI agents can deploy code, move money, grant access, and operate infrastructure. But they cannot own those actions, because ownership means bearing the consequences, and AI cannot bear them.

HAP ensures that irreversible actions execute only within bounds set by a human who owns the outcome.
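Tying the sketches from How HAP Works together, a hypothetical run shows the boundary in action. The identity and the amounts are invented:

```typescript
// A human authorizes transfers up to $500; the agent later asks for more.
const att = issueAttestation("alice@example.com", "payments.transfer", { maxAmountUsd: 500 });

gatekeep(att, publicKey, 250);    // true  -- within the bound Alice set
gatekeep(att, publicKey, 10_000); // false -- blocked before anything irreversible happens
```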

Explore Use Cases

Select a use case to generate a ready-to-paste prompt for your AI chat.

Compliance Alignment

HAP turns policy requirements into enforceable infrastructure.

EU AI Act

Article 14 mandates effective human oversight of high-risk AI. HAP satisfies this structurally: oversight is not a checkbox; it is the architecture.

ISO 42001

Every AI action requires a human Decision Owner who has set the bounds and articulated the intent. No attestation, no execution.

NIST AI RMF

Every decision produces a cryptographic trail of authorship, bounds, and commitments that is tamper-evident and verifiable.
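One common construction behind such guarantees, sketched here for illustration rather than as the HAP log format, is a hash chain: each record commits to its predecessor's hash, so any later edit breaks every hash that follows it.

```typescript
import { createHash } from "node:crypto";

// Illustrative tamper-evident trail. Record fields mirror the attestation
// sketch above; this is not the protocol's actual log format.
interface TrailRecord {
  authorizer: string; // who authored the decision
  bounds: object;     // the limits they set
  prevHash: string;   // hash of the preceding record
  hash: string;       // hash of this record's own contents
}

function appendRecord(trail: TrailRecord[], authorizer: string, bounds: object): TrailRecord[] {
  const prevHash = trail.length > 0 ? trail[trail.length - 1].hash : "genesis";
  const body = JSON.stringify({ authorizer, bounds, prevHash });
  const hash = createHash("sha256").update(body).digest("hex");
  return [...trail, { authorizer, bounds, prevHash, hash }];
}
```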

Build With HAP

HAP is the open protocol for human authority over AI agents. Verifiable, interoperable, and free of heavyweight infrastructure.