AI executes.
Humans decide.

HAP enforces human-authored direction.
Execution is impossible without a Decision Owner.

AI executes the how.
Humans decide the why.

Direction is the last human domain. HAP enforces it.

The Scarcity of Direction

Modern AI doesn't wait. It predicts, escalates, and acts at machine speed. But without human leadership, AI moves into a decision vacuum—substituting statistical probability for the unique human innovation that creates real value.

To remain in control, we must enforce a fundamental boundary:

“No consequential action may be taken in a human system without an identifiable human who has explicitly authorized it, understood its tradeoffs, and accepted responsibility for its outcomes.”

HAP turns this axiom into infrastructure. It ensures every action traces back to a human Decision Owner who provides the direction machine intelligence cannot duplicate. By forcing AI to pause at the point of irreversibility, HAP keeps authorship human and innovation possible.

What AI Can Do vs. What Humans Must Do

AI can:

  • generate options
  • simulate outcomes
  • execute tasks
  • correct mistakes
  • plan optimally
  • scale instantly

AI cannot:

  • set the frame
  • justify why to act (problem)
  • choose what to optimize (objective)
  • accept the tradeoff
  • make a binding commitment
  • be a Decision Owner

Direction is human.
Execution is machine.
HAP keeps the boundary intact.

The Six Human Gates

AI can simulate a thousand paths, but it cannot open the gate to any of them. HAP enforces six mandatory preconditions before any execution begins.

Frame — The Boundary

Humans define what we are deciding. AI has no context until a human sets the decision boundary.

Problem — The Justification

Every action needs a reason. AI calculates solutions; only humans determine if the problem is worth solving.

Objective — The Optimization

AI optimizes for any metric. Only humans can choose which outcome actually matters.

Tradeoff — The Cost

Every choice abandons alternatives. Only humans can accept the loss of what is sacrificed.

Commitment — The Point of No Return

Commitment makes a choice binding. Only a human can make an AI action irreversible.

Decision Owner — The Responsibility

Authorship and Ownership are unified. No action is taken without an identifiable human who bears the consequences.
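
To make the gates concrete, here is a minimal sketch of what a gate attestation record could look like, written in Python. The GateAttestation name and its fields are illustrative assumptions, not the HAP specification.

    # Illustrative sketch only; names and fields are assumptions, not the HAP spec.
    from dataclasses import dataclass

    @dataclass(frozen=True)
    class GateAttestation:
        """The six human-authored gates that must exist before any execution."""
        frame: str           # what is being decided (the boundary)
        problem: str         # why acting is justified
        objective: str       # which outcome to optimize
        tradeoff: str        # what is knowingly sacrificed
        commitment: str      # the explicit, binding approval of irreversibility
        decision_owner: str  # identifiable human who bears the consequences

        def is_complete(self) -> bool:
            # Execution may be considered only when every gate is explicitly filled.
            return all(vars(self).values())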

How HAP Works

Stop → Ask → Confirm → Proceed

1. Stop

AI detects missing or ambiguous decision states: frame, problem, objective, tradeoff, commitment, or decision owner.

2. Ask

HAP triggers a structured question that forces human direction.

3. Confirm

The human confirms the decision the AI must follow.

4. Proceed

Only then does the system continue.

No skipping.
No inference.
No silent automation.
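
As a rough sketch, the loop below shows how Stop → Ask → Confirm → Proceed could be enforced in code, assuming the GateAttestation record sketched above. The names hap_checkpoint, ask_human, and execute are placeholders, not a published HAP API.

    # Minimal enforcement sketch; function names are placeholders, not a published API.
    def hap_checkpoint(attestation: GateAttestation, proposed_action: str, ask_human, execute):
        # 1. Stop: detect missing or ambiguous decision state before anything runs.
        missing = [gate for gate, value in vars(attestation).items() if not value]
        if missing:
            # 2. Ask: surface a structured question; the system never infers an answer.
            raise PermissionError(
                f"Blocked. A human Decision Owner must supply: {', '.join(missing)}")
        # 3. Confirm: the human explicitly binds the decision the AI must follow.
        reply = ask_human(f"Confirm and accept ownership of '{proposed_action}'? (yes/no)")
        if reply.strip().lower() != "yes":
            raise PermissionError("Not confirmed. Execution refused.")
        # 4. Proceed: only then does the system continue.
        return execute(proposed_action)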

Governance Enforced, Not Documented

Most AI governance frameworks share the same core requirement: humans must remain in control of consequential AI decisions.

The EU AI Act mandates it. ISO 42001 requires it. NIST AI RMF recommends it. But none of them say how.

HAP is the how. It enforces human oversight at the protocol level — not through policies that can be ignored, but through cryptographic gates that cannot be bypassed.

Enforceable by Design

Every AI action requires a human Decision Owner who has articulated the problem, objective, and tradeoffs. No attestation, no execution.

EU AI Act Ready

Article 14 mandates effective human oversight for high-risk AI. HAP satisfies this structurally — oversight is not a checkbox, it's the architecture.

Audit-Ready Infrastructure

Every decision produces a cryptographic trail of authorship, tradeoffs, and commitments — tamper-proof and verifiable.
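
As one way to picture such a trail, the sketch below chains each decision record to the digest of the previous one, so any after-the-fact edit breaks verification. The record fields and the choice of SHA-256 are assumptions for illustration, not the HAP specification.

    # Illustrative hash-chained trail; record fields and algorithm are assumptions.
    import hashlib, json

    GENESIS = "0" * 64

    def append_entry(trail, owner, action, tradeoff):
        record = {
            "owner": owner,          # the Decision Owner of record
            "action": action,        # what was authorized
            "tradeoff": tradeoff,    # what was knowingly given up
            "prev": trail[-1]["digest"] if trail else GENESIS,
        }
        # Each digest covers the record plus the previous digest, linking the chain.
        record["digest"] = hashlib.sha256(
            json.dumps(record, sort_keys=True).encode()).hexdigest()
        trail.append(record)
        return record

    def verify(trail):
        # Recompute every link; a single altered entry invalidates everything after it.
        prev = GENESIS
        for entry in trail:
            body = {k: v for k, v in entry.items() if k != "digest"}
            expected = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
            if body["prev"] != prev or entry["digest"] != expected:
                return False
            prev = entry["digest"]
        return True

Appending entries and then calling verify(trail) returns True; altering any earlier entry breaks the chain and verification fails.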

Why This Matters Now

As AI accelerates, execution becomes free.
Abundance becomes the default.

The real scarcity becomes human direction — decisions that commit someone and shape a trajectory.

If humans stop making the hard calls, machines will make them by default — quietly, through convenience.

HAP is the infrastructure that keeps direction human.

Build With HAP

HAP turns human direction into the governing layer of intelligent systems.