Does your system honor the conditions cooperation requires?
Eight principles. 32 questions. The standard the industry has not yet applied.
In 1990, Elinor Ostrom identified eight structural conditions that predict which cooperative systems survive and which collapse. She studied commons across six continents. The work earned her the 2009 Nobel Memorial Prize in Economic Sciences.
They have never been applied to AI agent systems. This instrument applies them.
Most organizations score between 0 and 4 out of 24. The value is in seeing clearly, not in scoring well.
What You Get
8 Principles, 32 Questions
Each principle translated from commons governance to AI agent systems. Four diagnostic questions per principle. Score what your architecture enforces, not what your documentation describes.
Industry Baseline (2026)
Evidence-backed baseline for each principle showing where the industry stands. You will know where you stand relative to the field, and relative to what the principles demand.
Honest Self-Assessment
We score ourselves on our own instrument and publish the gaps. 19/24 with five named weaknesses. No instrument should exempt its creators.
Level 6 Questions
Each principle includes a question that goes deeper than compliance. Not "does this meet requirements?" but "does this honor what the nature of the thing demands?"
The Eight Principles
Do your agents have defined, enforceable identities?
Do governance rules match context and risk level?
Can affected entities participate in making the rules?
Is agent behavior observable and transparent?
Are responses to violations proportionate?
Can disputes be resolved locally?
Does your governance survive a platform change?
Is governance present at every scale where harm can occur?
Get the Ostrom Score
$49
PDF delivered immediately. 20 pages. Score your system today.
Purchase
Part of the Level 6 Governance Toolkit ($99, saves $48)