ryantjessee.com

The Human-in-Command (HiC) Framework

The Human-in-Command (HiC) framework is an enterprise AI governance architecture developed by Ryan T. Jessee to address the accountability gap that emerges when AI systems take consequential, often irreversible actions in regulated environments. Where conventional AI governance approaches treat oversight as a procedural checkpoint, HiC establishes enforceable command authority as the structural foundation for legally defensible AI deployment at scale. It is the answer to a question that defense, telecommunications, critical infrastructure, and regulated enterprise organizations are now confronting directly: when an AI system executes a high-stakes action, who is actually in command?

The Problem HiC Solves

The dominant paradigm for AI oversight in enterprise settings has been human-in-the-loop: a person reviews an AI recommendation and clicks approve. In low-stakes, advisory contexts, this works adequately. In high-consequence environments — where AI systems adjust network controls, execute supply chain transactions, modify access permissions, or interface with industrial hardware — it creates a dangerous accountability vacuum.

The problem is not that humans are approving AI actions. The problem is that approval without authority, context, and constraint is meaningless oversight. When an employee clicks "approve" on an AI-recommended action they do not fully understand, without access to the constraints governing that action and without genuine override capability, the organization has created what the framework's foundational research calls "Scapegoat-as-a-Service": a governance structure that assigns human liability to decisions that are, in practice, beyond human command.

This failure mode is particularly acute in regulated industries. Defense contractors must demonstrate that AI systems operating in restricted enclaves remain under human authority at all times. Telecommunications operators must show that autonomous network management systems have enforceable boundaries. Healthcare organizations face mounting legal and regulatory pressure to prove that AI-assisted clinical decisions have a coherent accountability chain. In each case, the standard audit response — "a human approved it" — is no longer sufficient. Enterprise AI accountability requires something harder to fake: genuine command authority over the systems making consequential decisions.

What Human-in-Command Means

Human-in-Command is not a technology requirement. It is an organizational one. The HiC framework defines what it means for humans to maintain genuine command over AI systems that execute autonomous action — not in the narrow sense of a kill switch, but in the full organizational sense: decision rights, escalation paths, constraint visibility, and the audit architecture to prove it.

The distinction from human-in-the-loop is structural. In a human-in-the-loop model, a human is present in the workflow. In a Human-in-Command model, a human holds command authority — meaning they possess the context to understand what the system is doing, the constraints that govern it, and the real capability to intervene. This shifts the governance question from "was a human notified?" to "was the human in a position to command?"

Organizationally, HiC prescribes four elements of command authority that must be present for an AI system to be authorized for autonomous execution in high-stakes environments:
  • Intent clarity: the human authority must have specified and understood what the system was directed to accomplish.
  • Input transparency: the operator must be able to see what data and signals the system acted on.
  • Constraint visibility: the rules, limits, and boundaries governing the system's action must be known and enforceable by the human authority.
  • Action preview: a deterministic representation of what the system will change must be available before it changes it.

Where compliance-checkbox approaches to AI governance ask whether a policy exists, the HiC framework asks whether the governance structure actually produces command. The framework is designed to survive regulatory scrutiny, legal discovery, and internal audit — because it grounds enterprise AI accountability in the substance of control, not its appearance.

The MV-HiC Extension

The Minimum Viable Human-in-Command (MV-HiC) evidence standard translates the HiC framework into a concrete procurement and deployment gating requirement. Its purpose is practical: organizations implementing AI governance for the first time need a minimum threshold — a floor below which no autonomous system should be authorized to operate in regulated or high-consequence environments.

MV-HiC defines four required artifacts that any agentic system must be capable of producing on demand: the intent record (what the system was directed to accomplish), the inputs record (the data and signals the system acted on), the constraints record (the rules and limits in effect at the time of action), and the action preview (a deterministic representation of what will change before it changes). If a system cannot produce all four artifacts on demand, it must remain in an advisory role — its outputs available to human operators, but no autonomous write authority granted.
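As a minimal illustration, the MV-HiC gating rule can be sketched as a deployment-time check. The record types and function names below are hypothetical, not part of the framework itself; they show only the all-or-nothing gate the standard describes, in which any missing artifact demotes the system to an advisory role:

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical record types for the four MV-HiC artifacts. Field names are
# illustrative: the standard defines the evidence required, not a schema.
@dataclass
class EvidenceBundle:
    intent_record: Optional[str]       # what the system was directed to accomplish
    inputs_record: Optional[str]       # data and signals the system acted on
    constraints_record: Optional[str]  # rules and limits in effect at action time
    action_preview: Optional[str]      # deterministic representation of the change

def authorize_autonomous_execution(bundle: EvidenceBundle) -> str:
    """Grant autonomous write authority only if all four artifacts are producible."""
    artifacts = [
        bundle.intent_record,
        bundle.inputs_record,
        bundle.constraints_record,
        bundle.action_preview,
    ]
    if all(artifacts):
        return "autonomous"  # system may execute with write authority
    return "advisory"        # outputs surfaced to operators; no write authority
```

Under this sketch, a system that can produce all four records is granted execution rights, while one missing even a single artifact (for example, no action preview) stays advisory.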

This minimum standard is designed to be implementable before a full HiC governance architecture is in place. It gives organizations a defensible starting point and a set of evidence requirements that any AI vendor or internal AI team must meet before their system earns autonomous execution rights in the enterprise.

Application in Regulated Industries

Defense and national security. AI systems operating in classified environments and restricted enclaves face the most rigorous command authority requirements of any sector. The HiC framework directly addresses the challenge of maintaining human command over systems that execute decisions faster than human review cycles allow — providing the governance architecture and audit evidence required for authorization under high-assurance frameworks. In defense contexts, the question is never whether to govern AI; it is how to govern it in ways that preserve operational velocity without surrendering command.

Telecommunications. Network operations centers are deploying AI for autonomous fault management, traffic optimization, and security response at increasing scale. The HiC framework provides the governance layer that makes autonomous network management defensible: clear constraint boundaries, an audit-ready evidence standard, and escalation architecture that preserves human command authority when network decisions carry cascading consequences across critical communications infrastructure.

Critical infrastructure. Energy grids, water systems, and industrial control environments operate at the intersection of AI automation and physical consequence. HiC addresses the governance gap that emerges when AI systems integrate with operational technology — where an autonomous action that adjusts a control parameter carries liability implications far beyond the IT stack. The framework's emphasis on enforceable constraints and action preview is particularly applicable to cyber-physical environments where reversibility cannot be assumed.

Regulated enterprise (finance and healthcare). Financial institutions and healthcare organizations face mounting regulatory pressure to demonstrate that AI-assisted decisions have coherent accountability chains. Whether the context is AI-assisted underwriting, algorithmic risk scoring, or clinical decision support, the HiC framework provides the organizational structure for demonstrating to regulators that human authority over consequential AI action is real, not nominal — and that the evidence exists to prove it.

Research Foundation

The HiC framework is grounded in published academic research. The foundational paper — Scapegoat-as-a-Service: Moving from Human-in-the-Loop to Human-in-Command in Regulated Systems (Jessee, R.T., 2026, SSRN Working Paper, revised February 2026) — names the accountability failure mode, proposes the Human-in-Command governance architecture, and defines the MV-HiC evidence standard for systems executing high-stakes actions in industrial and federated workflows. The paper is available on SSRN: Read on SSRN →

Doctoral research on AI governance architectures and accountability mechanisms in regulated industries is ongoing. Ryan T. Jessee begins an Executive Ph.D. program at Virginia Tech in Fall 2026, where his work will examine institutional design for human command authority, evidence standards for AI-assisted decision-making in healthcare and national security contexts, and policy frameworks for enterprise AI accountability. Working papers and empirical extensions to the HiC framework will be published as they develop.

Work With Ryan

Organizations implementing AI governance frameworks, preparing for regulatory scrutiny, or building internal AI accountability structures engage Ryan T. Jessee for architecture design, advisory work, and executive education. Whether the engagement is a governance framework build, a speaking program for senior leadership, or a hands-on workshop with the team responsible for AI deployment authorization, the starting point is the same: getting the accountability structure right before the AI footprint expands. View speaking and workshop options → or reach out directly at hello@ryantjessee.com.