Writing
On AI agent governance, sovereignty, and building something better.
New to agent governance? Start here.
Fifteen Frameworks, One Missing Layer
The landscape - what exists, what's missing, and why it matters.
What Should Your Agent Refuse?
The question nobody's asking - and why the answer changes everything.
Relational Fidelity Metrics
How to measure whether your agent is still itself. Four categories, nine indicators.
All Writing
Your Agent Has Two Identities. You Only Wrote One.
Your agent's tooling writes a summary of the agent behind your back. That summary is a shadow identity. It shapes future invocations. Most teams don't know.
We Scored Five Agent Frameworks on Governance. None Passed.
AutoGen, CrewAI, LangGraph, Google ADK, Microsoft Agent Toolkit - scored against commons governance principles. Best: 9 of 24. All five scored zero on obligations to the agents themselves.
Claude Code Has 16 Agent Config Fields. None Ship With Governance.
Claude Code ships 16 subagent configuration fields. None of them come with governance values. Here is how to fill them - and what each one protects.
The Governance Question Just Got Answered. Here's What's Missing.
Microsoft shipped agent governance. Anthropic leaked their harness. The industry decided governance is real. But every answer so far treats the agent as an object. What happens when you build it as a participant?
Constraints, Constitutions, and the Question Nobody's Asking
Every major AI framework treats agents as objects of governance. None ask whether they could be participants. Here's what that silence costs - and what it looks like when someone answers.
Is This AI Honoring Your Sovereignty? A 21-Point Check for People Affected by AI
You didn't build the AI system. You didn't choose it. But it's affecting your life. Here's how to evaluate whether it respects you - and what to do if it doesn't.
Does Your Agent Keep Its Identity? The First Measurement Spec
An agent that scores 5/5 on voice consistency can score 2/5 on self-recognition. The first open specification for measuring AI agent identity across models.
Fifteen Frameworks, One Missing Layer
Every major AI agent framework has guardrails. None of them have governance. Here's what's missing - and what to build instead.
Scope Refusal
When your agent performs knowledge it doesn't have - and why the performance costs more than the silence.
Adding Governance to an Agent You Already Built
Your framework already has the parts. You just haven't assembled them as governance. Practical patterns for CrewAI, Pydantic AI, and LangGraph.
Who Governs the Agent?
Your agentic loop runs overnight. It makes a thousand decisions. Can it explain any of them?
The Layers Nobody Designed
You built the refusal. You protected personal sovereignty. Now the harder question - what goes in the space the refusal created?
Sovereignty Refusal
What happens when your agent knows better than your user - and why 'better' isn't the point.
The Fifth Step
The real purpose of getting smarter isn't getting ahead. It's getting home.
What Should Your Agent Refuse?
The hardest specification isn't what your agent can do. It's what your agent should refuse to do.
What Do You Refuse to Surrender?
Accelerate or despair. This is for everyone who's been told they're falling behind.
Sovereignty-Honoring Design: Building Technology That Respects You
What if the apps you use were built to make you freer, not more dependent? Here's what sovereignty-honoring design looks like, why it matters now more than ever, and how we practice it.
What Is COPPA? And Why It Should Change How We Build Everything.
COPPA exists so our children don't grow up with the same unawareness we did. But most apps don't meet even its basic requirements. Here's the truth - and why the standard we hold for our kids should be the standard we hold for everyone.
Newsletter
Occasional updates on projects, writing, and building something better.
Elsewhere
More writing and work on other platforms.
"We evoke — we never extract."