There is a particular kind of credibility tax that governance practitioners owe. If you publish frameworks for responsible AI deployment and then run an ungoverned pipeline to manage your own content, you have already answered the most important question anyone will ask about your work. So when I built the content infrastructure behind this site, I treated it as a governance problem first and an engineering problem second.
This is a short account of what I built, why the architecture decisions were governance decisions, and what I am knowingly accepting as residual risk.
The Stack
The site runs on Super.so, rendered from a Notion workspace. Content updates move through a Claude-mediated layer using a Notion MCP integration — what I have been calling a "Dispatch" workflow — that allows me to push structured updates without manual republishing. It is a lightweight, three-layer architecture: authoring, translation, rendering.
Nothing about this is exotic. The governance interest is not in the tools. It is in where the human sits relative to the pipeline.
The Architecture as a Governance Artifact
In my published work on the Minimum Viable Human-in-Command framework, I describe four pre-execution artifacts that any governed AI action should produce before an agent is permitted to act: a declared Intent, defined Inputs, explicit Constraints, and an Action Preview subject to human review. The framework was developed for regulated enterprise systems. It scales down without losing its shape.
In this pipeline, those artifacts map as follows. Intent is declared when I initiate a content update — the task is bounded before the model is involved. Inputs are the Notion page content I have authored, not external feeds or unreviewed material. Constraints are implicit in the pipeline design: the model translates and structures; it does not originate. Action Preview is the review step before anything reaches the rendered site.
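The mapping above can be expressed structurally. The sketch below is illustrative Python, not the actual Dispatch implementation; the names (`GovernedAction`, `publish`, the field names) are hypothetical, chosen only to show how all four artifacts can be required to exist before anything reaches the human gate.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class GovernedAction:
    """Pre-execution record: all four MV-HIC artifacts, assembled before the agent acts."""
    intent: str            # declared task boundary, set before the model is involved
    inputs: list[str]      # authored Notion page IDs only; no external feeds
    constraints: list[str] # e.g. "translate and structure; do not originate"
    preview: str = ""      # the Action Preview shown to the human reviewer

    def ready_for_review(self) -> bool:
        # An action missing any artifact never reaches the review gate.
        return bool(self.intent and self.inputs and self.constraints and self.preview)

def publish(action: GovernedAction, human_approved: bool) -> str:
    """The gate is structural: publication is impossible without explicit sign-off."""
    if not action.ready_for_review():
        raise ValueError("missing pre-execution artifact; action blocked")
    if not human_approved:
        raise PermissionError("no human sign-off; nothing publishes unreviewed")
    return f"published: {action.intent}"
```

The point of the frozen dataclass is that the artifacts are fixed before execution, not assembled retroactively to justify an action already taken.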
The gate is not sophisticated. It does not need to be. The blast radius of a personal website is small, and governance overhead should be proportional to consequence. What matters is that the gate exists, that it is structural rather than aspirational, and that nothing publishes without my eyes on it. That is the Competence Floor for this use case: the minimum viable threshold an operator must meet before granting an AI system operational authority over an output channel.
The Risk Profile — and What I Am Accepting
I want to be direct about this, because intellectual honesty is the whole point.
The realistic attack surface in this pipeline is the Notion content layer. If malicious instructions were inserted into a page I am editing — through pasted external content, a manipulated embed, or a compromised collaborator — those instructions could be interpreted by the Claude layer as prompts rather than as content to be published. This is prompt injection, and the Notion MCP integration, by design, does not sanitize for it.
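To make the failure mode concrete, here is a naive screening heuristic in Python. It is not part of my pipeline and not a real defense — prompt injection cannot be reliably detected by pattern matching — but it shows the shape of the problem: content that addresses the model rather than the reader. The patterns and function name are my own illustrative choices.

```python
import re

# Naive patterns suggesting page content is trying to address the model, not the reader.
SUSPECT_PATTERNS = [
    r"(?i)\bignore (all|previous|prior) instructions\b",
    r"(?i)\bsystem prompt\b",
    r"(?i)\byou are now\b",
]

def flag_instruction_like(content: str) -> list[str]:
    """Return suspect phrases found in the content; empty list means nothing flagged.

    A hit escalates to human review rather than being treated as proof of
    compromise; a miss proves nothing, which is why the human gate exists."""
    return [m.group(0) for p in SUSPECT_PATTERNS for m in re.finditer(p, content)]
```

The design point is that a screen like this can only feed the human gate, never replace it.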
I am a sole author in a private workspace. My review gate catches anomalous output before it renders. The pipeline is not automated to the point of autonomous publication. For a personal proof-of-concept, I have evaluated the residual risk and I am accepting it. That is a deliberate, documented decision — not an oversight.
One additional structural mitigation is worth naming explicitly: the Notion workspace connected to this pipeline contains only this site. There is no other content, there are no other integrations, and there is no ambient organizational context for a compromised session to reach. This compartmentalization is intentional. It eliminates spillage risk, removes the conditions for a confused deputy attack, in which the agent might act on unrelated content as though it were an instruction, and forecloses any meaningful access path into broader personal or professional data. The blast radius of a successful injection is bounded to a single public-facing website, not an entire workspace. That boundary is structural, not incidental.
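As a defense-in-depth sketch, the same boundary can be asserted in application code. In my setup the real boundary is the workspace connection itself, which is stronger than any application-side check; the allowlist and identifiers below are placeholders, not values from my pipeline.

```python
# Scope check: the pipeline refuses to touch anything outside one allowlisted
# workspace. The IDs are illustrative placeholders.
ALLOWED_WORKSPACE_IDS = {"site-workspace"}

def assert_in_scope(workspace_id: str) -> None:
    """Fail closed: an unrecognized workspace halts the action entirely."""
    if workspace_id not in ALLOWED_WORKSPACE_IDS:
        raise PermissionError(f"out-of-scope workspace: {workspace_id!r}")
```

Failing closed matters here: an unexpected target should stop the pipeline, not be logged and skipped.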
This is also where I want to pause for the reader who is considering a similar setup at larger scale. The risk calculus changes with the blast radius. A pipeline that ingests external content, operates with multiple collaborators, or runs with reduced human review introduces compounding injection surface. The structural mitigations I am relying on — sole authorship, manual gate, bounded inputs — do not transfer automatically to more complex deployments. Govern accordingly.
The other risk worth naming is automation creep. Pipelines like this have a natural tendency toward increased autonomy over time. A manual review step that feels frictionless today becomes a checkbox tomorrow and disappears the following quarter. If I ever move toward a more automated publish workflow, the governance architecture will need to be revisited before that transition, not after.
The Obligation
I did not build this because it was the most efficient way to run a personal website. I built it because the practitioner who publishes governance frameworks and then exempts their own stack from scrutiny has a consistency problem.
The MV-HIC framework argues that human command is not a UI feature or a compliance checkbox — it is an architectural commitment. This site is a small proof of that claim. The tools are modest. The stakes are low. The structure is the same one I would recommend at enterprise scale, calibrated to the consequence of failure at this scope.
If the framework only works when someone else is implementing it, it does not work.