Open-Source Protocol

Build AI-native teams.

Each member authorizes their own agents. Coordinated authorization is how the team pursues a shared goal — every action traceable to a human.

From a team of one to a team of a hundred — HAP scales with you.

Authorization creates agents. Coordinated authorization creates a team.

How an AI-native team runs

Five things a team can do with HAP — each backed by a signed attestation, not a policy rule.

Every agent individually authorized

No shared service accounts. Each agent has its own scoped authority, set by the human who owns the scope. Profiles, bounds, and daily limits — configured in minutes.
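A scoped authority of this kind can be sketched as a small data structure. This is an illustrative sketch only; the field names (`allowed_actions`, `max_amount`, `daily_limit`) are assumptions, not HAP's actual schema:

```python
from dataclasses import dataclass

# Hypothetical sketch of a per-agent authorization scope.
# Field names are illustrative; HAP defines its own format.
@dataclass
class AgentAuthorization:
    agent_id: str          # the agent this scope applies to
    owner: str             # the human who owns the scope
    allowed_actions: list  # profile: what the agent may do
    max_amount: float      # bound: per-action ceiling
    daily_limit: float     # limit: aggregate ceiling per day
    spent_today: float = 0.0

    def permits(self, action: str, amount: float) -> bool:
        """True only if the action is in profile and within all bounds."""
        return (
            action in self.allowed_actions
            and amount <= self.max_amount
            and self.spent_today + amount <= self.daily_limit
        )

auth = AgentAuthorization(
    agent_id="publish-agent",
    owner="alice@example.com",
    allowed_actions=["publish_post"],
    max_amount=1.0,
    daily_limit=5.0,
)
print(auth.permits("publish_post", 1.0))  # within profile and bounds
print(auth.permits("delete_site", 1.0))   # not in profile: denied
```

The point of the sketch is that the whole scope fits in a handful of fields a human can review and sign, which is why configuring it takes minutes rather than an IT ticket.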

Every member brings their own agents

The marketing lead brings their publish agent. Sales brings their CRM agent. No central IT pool, no shared credentials — every member authorizes agents inside their own domain.

Cooperation happens on the fly

When one agent needs another domain's sign-off, the right human attests — within their bounds, on demand. No ticket, no meeting, no Slack thread.

Decision structure replaces hierarchy

Managers aren't bottlenecks. Decision owners are reachable. The org chart and the authority chart diverge — on purpose.

Scale agents without scaling IT

Ten agents is ten authorizations — not ten service accounts, ten secret rotations, ten policy rules. No new identity provider, no policy engine, no role hierarchy.

The mechanism

A human signs, the Gatekeeper enforces, the receipt proves. HAP separates authorization from execution, so neither the agent nor the model vendor can self-certify.

[Diagram: Human → AI Agent → Service Provider → Gatekeeper → Executor]

Service Provider

Issues cryptographic attestations proving a human authorized an action within defined bounds.

Gatekeeper

Verifies attestations before execution and blocks any action that exceeds authorized limits.

Executor

Performs the action — but only after authorization has been validated.

HAP enforces authorization through two infrastructure components: Service Providers, which issue attestations, and Gatekeepers, which verify them before execution.
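The three roles can be sketched end to end. This is a toy illustration only: it assumes HMAC as a stand-in for HAP's real signature scheme, and every field name (`human`, `action`, `max_amount`, `sig`) is invented for the example:

```python
import hashlib
import hmac
import json

SERVICE_KEY = b"demo-secret"  # stand-in for real key material

def issue_attestation(human: str, action: str, max_amount: float) -> dict:
    """Service Provider: a human authorizes an action within bounds."""
    claim = {"human": human, "action": action, "max_amount": max_amount}
    payload = json.dumps(claim, sort_keys=True).encode()
    claim["sig"] = hmac.new(SERVICE_KEY, payload, hashlib.sha256).hexdigest()
    return claim

def gatekeeper_check(att: dict, action: str, amount: float) -> bool:
    """Gatekeeper: verify the signature, then enforce the bounds."""
    claim = {k: v for k, v in att.items() if k != "sig"}
    payload = json.dumps(claim, sort_keys=True).encode()
    expected = hmac.new(SERVICE_KEY, payload, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, att["sig"]):
        return False  # forged or tampered attestation
    return att["action"] == action and amount <= att["max_amount"]

def execute(att: dict, action: str, amount: float) -> str:
    """Executor: perform the action only after validation."""
    if not gatekeeper_check(att, action, amount):
        raise PermissionError("action not authorized within bounds")
    return f"executed {action} for {amount}"

att = issue_attestation("alice@example.com", "send_invoice", 100.0)
print(execute(att, "send_invoice", 50.0))  # within bounds: runs
# execute(att, "send_invoice", 500.0)      # would raise PermissionError
```

Note the separation: the Executor never inspects bounds itself, and the Gatekeeper never performs actions, so neither party can self-certify.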

Agents aren't employees. They're extensions.

Other approaches give AI agents their own identity — service accounts, scoped tokens, workload credentials. That creates an accountability void: an identity implies agency, agency implies accountability, and accountability requires bearing consequences that agents cannot bear.

HAP takes the opposite position. Agents never hold their own authority — every action traces back to a named human's signature within explicit bounds. Prosthetic, not delegated. Extension, not employee.

HAP ensures that irreversible actions only execute within bounds set by a human who owns the outcome.

Explore Use Cases

Select a use case to generate a ready-to-paste prompt for your AI chat.


Compliance Alignment

HAP turns policy requirements into enforceable infrastructure.

EU AI Act

Article 14 mandates effective human oversight for high-risk AI. HAP satisfies this structurally — oversight is not a checkbox, it's the architecture.

ISO 42001

Every AI action requires a human Decision Owner who has set the bounds and articulated the intent. No attestation, no execution.

NIST AI RMF

Every decision produces a cryptographic trail of authorship, bounds, and commitments — tamper-proof and verifiable.
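A common way to make such a trail tamper-evident is hash chaining: each receipt commits to the hash of the previous one, so editing any entry breaks every hash after it. A minimal sketch under that assumption (not HAP's actual receipt format):

```python
import hashlib
import json

def append_receipt(chain: list, entry: dict) -> None:
    """Append a receipt that commits to the previous receipt's hash."""
    prev = chain[-1]["hash"] if chain else "0" * 64
    body = json.dumps({"prev": prev, **entry}, sort_keys=True)
    chain.append({"prev": prev, **entry,
                  "hash": hashlib.sha256(body.encode()).hexdigest()})

def verify_chain(chain: list) -> bool:
    """Recompute every hash; an edit anywhere breaks the chain."""
    prev = "0" * 64
    for receipt in chain:
        body = json.dumps({k: v for k, v in receipt.items() if k != "hash"},
                          sort_keys=True)
        if receipt["prev"] != prev:
            return False
        if hashlib.sha256(body.encode()).hexdigest() != receipt["hash"]:
            return False
        prev = receipt["hash"]
    return True

trail = []
append_receipt(trail, {"human": "alice", "action": "publish", "bound": 1})
append_receipt(trail, {"human": "bob", "action": "approve", "bound": 1})
print(verify_chain(trail))     # intact trail verifies
trail[0]["action"] = "delete"  # tamper with the first receipt
print(verify_chain(trail))     # verification now fails
```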

Build With HAP

HAP is the open protocol for human authority over AI agents. Verifiable, interoperable, and infrastructure-free.