Research

My research focuses on the governance architectures and evidence standards required for safe, auditable AI adoption in regulated and high-consequence environments. The central problem: as AI systems move from advisory tools to autonomous agents that write to systems of record, modify access controls, and integrate with industrial hardware, the enterprise challenge shifts from capability to governability.

The work below represents my current published output. Additional research is in progress, including work stemming from doctoral study beginning Fall 2026.

Featured Publication

Scapegoat-as-a-Service: Moving from Human-in-the-Loop to Human-in-Command in Regulated Systems

Jessee, R.T. (2026). SSRN Working Paper / Preprint. Revised February 2026.

When AI systems take consequential actions in large-scale manufacturing, network operations, or highly restricted enclaves—such as executing supply chain orders, adjusting cyber-physical controls, or approving payments—the audit trail must extend beyond the name of the person who clicked "approve."

This paper names the failure mode (Scapegoat-as-a-Service), proposes a governance architecture (Human-in-Command), and defines a minimum evidence standard (MV-HIC) for systems executing high-stakes actions in industrial and federated workflows.

Read on SSRN →

Frameworks

Human-in-Command (HiC)

Human-in-Command is a governance architecture designed for environments where AI systems execute consequential, often irreversible actions. It moves beyond the passive "Human-in-the-Loop" model—where a person approves actions without necessarily understanding them—toward a posture of enforceable authority.

In the HiC model, human operators retain genuine command: they possess the context, the constraints, and the override capability required to supervise autonomous action meaningfully. The framework defines the decision rights, autonomy boundaries, and escalation paths that make delegated AI action audit-ready across federated, high-assurance environments.
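
To make that concrete, here is a minimal sketch, in Python, of how decision rights, autonomy boundaries, and escalation paths might be encoded. The class, role names, and spend threshold are hypothetical illustrations, not part of the framework itself:

    # Hypothetical encoding of decision rights, autonomy boundaries, and
    # escalation paths. Names, roles, and thresholds are illustrative only.
    from dataclasses import dataclass

    @dataclass
    class HiCPolicy:
        autonomous_actions: set[str]   # Decision rights delegated to the agent
        escalate_to: str               # Human role holding command authority
        max_order_value: float         # Autonomy boundary: spend ceiling
        override_enabled: bool = True  # Operators must be able to halt/reverse

    def route(policy: HiCPolicy, action: str, order_value: float) -> str:
        """Execute autonomously only inside the delegated boundary."""
        if not policy.override_enabled:
            raise ValueError("HiC requires a working operator override path")
        if action in policy.autonomous_actions and order_value <= policy.max_order_value:
            return "execute"
        return f"escalate:{policy.escalate_to}"

    policy = HiCPolicy(
        autonomous_actions={"reorder_stock"},
        escalate_to="supply_chain_commander",
        max_order_value=50_000.0,
    )
    assert route(policy, "reorder_stock", 12_000.0) == "execute"
    assert route(policy, "adjust_plc_setpoint", 0.0).startswith("escalate")

The point of the sketch is the shape, not the schema: delegated action is bounded, anything outside the boundary routes to a named human authority, and the override path is non-negotiable.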

Read the full HiC Framework →

MV-HIC Evidence Standard

The Minimum Viable Human-in-Command (MV-HIC) evidence record establishes the baseline artifacts required before any agentic system may be authorized for autonomous execution in a high-stakes environment.

The four required artifacts are:

  1. Intent — What the system was directed to accomplish
  2. Inputs — The data, signals, and context the system acted on
  3. Constraints — The rules, limits, and boundaries in effect at the time of action
  4. Action Preview — A deterministic representation of what the system will change before it changes it

If a system cannot produce these four artifacts on demand, it must remain advisory. This standard is designed to survive rigorous audit and serve as a gating requirement for enterprise AI procurement and deployment authorization.
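
As an illustration only, here is a minimal sketch of an MV-HIC evidence record and the gating check it implies. The schema and function names are hypothetical, since the standard defines the four artifacts rather than any particular implementation:

    # Illustrative sketch only. MV-HIC defines the four artifacts; it does
    # not prescribe this (or any) schema. Names here are hypothetical.
    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class MVHICRecord:
        intent: str              # What the system was directed to accomplish
        inputs: dict             # Data, signals, and context it acted on
        constraints: list[str]   # Rules and limits in effect at action time
        action_preview: str      # Deterministic view of the pending change

    def gate_execution(record: Optional[MVHICRecord]) -> str:
        """Authorize autonomous execution only if all four artifacts exist."""
        if record is None:
            return "advisory"    # No evidence record: system stays advisory
        artifacts = (record.intent, record.inputs,
                     record.constraints, record.action_preview)
        return "authorized" if all(artifacts) else "advisory"

    # A hypothetical supply chain order, with all four artifacts present:
    order = MVHICRecord(
        intent="Replenish part A-113 below its reorder point",
        inputs={"on_hand": 42, "reorder_point": 100},
        constraints=["max order value $50,000", "approved suppliers only"],
        action_preview="Create PO-2091: 500 units of A-113 from Supplier X",
    )
    assert gate_execution(order) == "authorized"
    assert gate_execution(None) == "advisory"

A system that cannot populate all four fields on demand never reaches "authorized," mirroring the gating requirement above.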

Doctoral Research

Beginning Fall 2026, I will pursue an Executive Ph.D. at Virginia Tech. My doctoral research will examine the governance architectures and accountability mechanisms required as agentic AI systems take on consequential roles in regulated industries. The focus areas are institutional design for human command authority, evidence standards for AI-assisted decision-making in healthcare and national security contexts, and policy frameworks for enterprise AI accountability and auditability. Working papers and research updates will be posted here as they develop.