Human-in-the-Loop (HITL)
A system design pattern where a human reviews every AI output and must explicitly approve it before any action is taken. HITL provides the maximum level of human oversight but constrains the system's speed and scalability to the pace of human review.
Why It Matters
HITL is often the default assumption in AI governance policies, but it's not always practical or even desirable at scale. Understanding when HITL is appropriate — and when alternative oversight models are better — is a key governance decision.
Example
A radiology AI flags potential tumors in medical images, but a human radiologist reviews every flagged image before any diagnosis is communicated to the patient. The AI increases the radiologist's efficiency without making autonomous medical decisions.
Think of it like...
HITL is like a spell-checker that highlights suggestions but waits for the writer to accept or reject each one — nothing changes without explicit human approval.
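The pattern above can be sketched in a few lines of code. This is a minimal, hypothetical illustration (the names `AIOutput`, `HITLGate`, and the radiology example strings are invented for this sketch, not part of any real library): AI outputs land in a review queue, and the downstream action fires only on explicit human approval.

```python
from dataclasses import dataclass
from typing import Callable, List, Optional

@dataclass
class AIOutput:
    content: str
    approved: Optional[bool] = None  # None means "pending human review"

class HITLGate:
    """Every AI output waits in a queue; nothing acts without approval."""

    def __init__(self, act: Callable[[str], None]):
        self.act = act                      # downstream action to perform
        self.queue: List[AIOutput] = []

    def submit(self, content: str) -> AIOutput:
        # AI produces an output; it is queued, not acted on.
        item = AIOutput(content)
        self.queue.append(item)
        return item

    def review(self, item: AIOutput, approve: bool) -> None:
        # Human decision point: only approval triggers the action.
        item.approved = approve
        if approve:
            self.act(item.content)

# Usage: a flagged finding is communicated only after a human signs off.
communicated = []
gate = HITLGate(act=communicated.append)
finding = gate.submit("possible tumor flagged in image 42")
assert communicated == []                  # nothing happens before review
gate.review(finding, approve=True)
assert communicated == ["possible tumor flagged in image 42"]
```

The key structural property is that the AI has no code path to `act` except through `review`, which models the "nothing changes without explicit human approval" behavior of the spell-checker analogy.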
Related Terms
Human-on-the-Loop (HOTL)
A system design pattern where AI operates autonomously but a human monitors outputs and retains the ability to intervene, override, or shut down the system when needed. HOTL balances automation efficiency with human oversight for systems where reviewing every output isn't feasible.
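The contrast with HITL can be made concrete with a similar sketch. Here the AI acts immediately and the human hooks (`override`, `shutdown`) operate after the fact; all names are hypothetical and chosen only to mirror the HOTL description above.

```python
from typing import List

class HOTLSystem:
    """AI acts autonomously; a human monitors and can intervene."""

    def __init__(self):
        self.running = True
        self.actions: List[str] = []    # record the monitor watches

    def ai_step(self, decision: str) -> bool:
        # Unlike HITL, the action happens without waiting for approval.
        if not self.running:
            return False
        self.actions.append(decision)
        return True

    # Human oversight hooks:
    def override(self, index: int, correction: str) -> None:
        # Human replaces a past decision spotted during monitoring.
        self.actions[index] = correction

    def shutdown(self) -> None:
        # Human halts all further autonomous action.
        self.running = False

# Usage: decisions flow freely until the monitor intervenes.
system = HOTLSystem()
system.ai_step("approve claim 1")
system.ai_step("approve claim 2")        # no human gate in the path
system.override(1, "deny claim 2")       # correction after monitoring
system.shutdown()
assert system.ai_step("approve claim 3") is False
```

The design trade-off is visible in the control flow: HITL puts the human inside the action path, while HOTL keeps the human beside it, which is why HOTL scales past the pace of human review but depends on the monitor actually catching errors.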
Human-in-Command (HIC)
A governance principle where humans retain ultimate authority and control over AI systems, including the ability to decide the scope of AI autonomy, override any AI decision, modify the system's behavior, and shut it down entirely. HIC is the overarching principle that encompasses both HITL and HOTL as implementation patterns.
Automation Bias
The tendency for humans to over-rely on automated systems, accepting AI outputs without sufficient scrutiny even when those outputs are wrong. Automation bias increases with system accuracy — the more often the AI is right, the less likely humans are to catch the times it's wrong.