You didn’t build the AI system at your job. You didn’t choose the algorithm that decides what your kids see. You didn’t ask for the chatbot that replaced the customer service team you used to call.
But here you are. Living inside systems you had no say in building.
This is a tool for you.
Not for the people who built it. Not for the executives who bought it. For the person on the other end - the one whose work, attention, data, and daily life are shaped by decisions that were made in rooms you weren’t invited to.
I built the original Sovereignty Assessment Toolkit for product teams - a 47-point framework that asks builders: does the thing you made respect the people who use it? That version costs $49 and takes 90 minutes.
This version is different. It’s 21 questions. It’s free. It takes 15 minutes. And it asks the question from your side of the screen: does this system respect me?
How This Works
Seven categories. Three questions each. For every question, answer: Yes, Partly, or No.
You don’t need to be technical. You don’t need to understand how the system works under the hood. Every question is something you can observe from your own experience of using the product.
Be honest. The value of this is in the truth, not in looking for a good score.
1. Transparency - “Can I see what’s happening?”
Can you find out what data the system collects about you - in plain language, without a law degree?
Look for a privacy page or data settings. If you find a wall of legal text that takes 20 minutes to read, that’s not transparency. That’s compliance theater. Real transparency means you can understand what’s collected in under two minutes.
Do you know how the company makes money from your use of this product?
If the product is free, you should be especially curious. Advertising? Data sales? Training AI models with your inputs? If you can’t answer this question, the business model is you. Free should be an act of generosity - and sometimes it is. The problem isn’t that the product is free; it’s that you have to check.
When the company changes its rules, do they tell you clearly and give you time to decide?
Not a buried email. Not a banner you click past. A clear explanation of what changed, why, and what it means for you - with enough time to leave if you don’t agree.
If you answered “No” to most of these: You are operating inside a system that doesn’t think you need to understand it. That’s a design choice - not a technical limitation.
2. Autonomy - “Can I leave?”
Can you stop using this product without losing important data, work, or connections?
Try to imagine leaving right now. What would you lose? If the answer is “everything I’ve built over the past three years” - that’s a lock-in strategy, not a feature.
Can you cancel, downgrade, or delete your account as easily as you signed up?
Signing up took two minutes. If canceling takes a phone call, a 14-day waiting period, and three “are you sure?” screens - that asymmetry is the point. They made it easy to get in and hard to get out on purpose.
Does the product let you set your own limits - on time, notifications, or how often it contacts you?
Look in your settings. If there’s no way to say “stop sending me notifications after 9pm” or “limit my daily use to 30 minutes” - the product doesn’t want you to have limits. It wants your attention. All of it.
If you answered “No” to most of these: The product is designed to keep you, not serve you. Difficulty leaving is not a bug. It’s the business model.
3. Invitation - “Does it respect my attention?”
Do the notifications you receive feel useful - or do they feel like someone trying to pull you back in?
There’s a difference between “Your package shipped” and “You haven’t opened the app in 3 days! Here’s what you’re missing.” One is information. The other is a leash.
Can you use the product’s core features without being pressured to upgrade, share, or buy more?
If every other screen is an upsell, or the free version is deliberately crippled so you’ll pay - that’s not a freemium model. That’s a bait and switch with better branding.
Does the product work for you when your internet is slow, your phone is old, or you’re having a hard day?
Technology should meet people where they are. If it only works perfectly on the newest hardware with the fastest connection, it’s built for a demographic, not for people.
If you answered “No” to most of these: Your attention is being treated as a resource to capture - not a gift to earn.
4. Dignity - “Does it treat me like a person?”
When you try to decline something, does the product make you feel stupid, guilty, or afraid of missing out?
“No thanks, I don’t want to save money” is not a real option. It’s a psychological manipulation called confirmshaming. If the “no” makes you feel bad, someone designed it that way.
Does the product create false urgency - countdown timers, “only 2 left,” “your friends are waiting”?
If the timer resets when you refresh the page, the urgency was fake. If there are always “only 2 left,” there were never only 2. This is a documented dark pattern, and it works because human brains are wired to respond to scarcity. They know that.
If you’re going through a hard time - grief, illness, financial stress - does the product adjust, or does it push harder?
This is the one that separates extraction from care. A sovereignty-honoring product reads the room. An extractive product sees vulnerability as an opportunity. A “retail therapy” push notification sent to someone whose bank balance just hit zero is not a coincidence. It’s targeting.
If you answered “No” to most of these: The product is exploiting your psychology. This is not your fault. It was designed this way.
5. Silence - “Does it know when to stop?”
Can you take a break without being punished - no lost streaks, no guilt messages, no “we missed you” emails?
A streak is not a feature. It’s a leash made of guilt. If you lose progress because you lived your life for two days, the product values your compliance more than your wellbeing.
Does the product have a natural end point, or does it try to keep you engaged forever?
Some tools are designed to help you finish something. Others are designed to make sure you never finish. Infinite scroll has no end because the product doesn’t want you to stop. Ask yourself: does this product want me to complete something, or does it want my time?
Does the product ask to run in the background, send you updates you didn’t request, or show information it could only know if it was tracking you while you weren’t using it?
If an app you haven’t opened in days seems to know where you were or what you did - that’s not a coincidence. That’s surveillance with a terms-of-service agreement. You can check your phone’s battery usage and background activity settings to see which apps are running when you’re not looking. But the feeling of “how did it know that?” is often evidence enough.
If you answered “No” to most of these: The product treats your absence as a problem to solve, not a choice to respect.
6. Data - “Who benefits from what I share?”
Do you know whether your data - your words, your images, your patterns - is being used to train AI systems?
If you’ve typed into a chatbot, uploaded photos, or used voice search - there’s a real chance that data trained a model. Some companies disclose this. Most don’t make it easy to find. If you can’t answer this question with confidence, that silence is a choice the company made.
Can you say yes to some uses of your data and no to others - or is it all-or-nothing?
If your only options are “agree to everything” or “don’t use the product” - that’s not consent. That’s coercion with a checkbox. Real consent is granular: separate answers for separate uses.
If you asked the company to delete your data today, would they do it completely and on time?
Try it. Seriously. Many products now have a data deletion request option buried in settings or accessible through a support email. Submit one and see what happens. How long does it take? Do they actually confirm deletion? Or do they say “we’ll retain some data for business purposes” - which means they kept the parts that make them money?
If you answered “No” to most of these: Your data is being treated as raw material. You are the mine. The company is the miner.
7. AI Behavior - “If there’s an AI, is it honest with me?”
Does the AI tell you when it’s guessing or unsure - or does it always sound confident?
An AI that says “I’m not sure about this - you should verify” is being honest. An AI that states everything with equal confidence - whether it’s a well-established fact or something it just made up - is performing competence. That performance can cost you. In healthcare, in legal questions, in financial decisions - false confidence from an AI isn’t just annoying. It’s dangerous.
Can you see what the AI said, correct it, or tell someone it was wrong?
If the AI made a decision about you - denied your insurance claim, scored your resume, recommended a dosage - can you find out what it said? Can you challenge it? If the AI operates as a black box that produces outcomes you can’t inspect, you’re being governed by something you can’t question. That’s not intelligence. That’s authority without accountability.
Does the AI feel like it’s helping you think - or like it’s trying to get you to do something?
There’s a difference between “here are three options to consider” and “based on your preferences, we recommend this one” followed by a buy button. One supports your judgment. The other replaces it. Pay attention to whether the AI is expanding your choices or narrowing them.
If you answered “No” to most of these: The AI is performing helpfulness. That is not the same as being helpful.
Your Score
Count your “No” answers across all 21 questions. Count each “Partly” as half a “No.” Round up.
0-3 “No” answers: This system is relatively sovereignty-honoring. Not perfect - nothing is - but it’s treating you with basic respect. The areas where you answered “Partly” are worth watching, but the foundation is solid.
4-8 “No” answers: There are real gaps. Some parts of this system work for you. Others work on you. You’re not imagining it. Look at which categories scored worst - that’s where the design is most extractive.
9-14 “No” answers: Significant problems. Many of this system’s design choices serve the company at your expense. The issues you’ve been feeling aren’t in your head. They’re in the architecture.
15-21 “No” answers: This system is built to extract from you. The problems you’re experiencing are not side effects. They are the product. The design choices that frustrate you, manipulate you, or trap you - those are working exactly as intended.
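If you want to tally this mechanically, here’s a minimal sketch in Python. The sample answers are hypothetical, and nothing about the code is part of the assessment itself - it just encodes the rule above: “No” counts as 1, “Partly” counts as half, round up, then read off the band.

```python
import math

# One answer per question, in order: "yes", "partly", or "no" (21 total).
# These sample answers are hypothetical - substitute your own.
answers = [
    "no", "partly", "yes",   # 1. Transparency
    "no", "no", "partly",    # 2. Autonomy
    "yes", "partly", "no",   # 3. Invitation
    "no", "no", "no",        # 4. Dignity
    "partly", "yes", "no",   # 5. Silence
    "no", "partly", "no",    # 6. Data
    "yes", "no", "partly",   # 7. AI Behavior
]

def score(answers):
    # "No" = 1 point, "Partly" = 0.5, "Yes" = 0; round the total up.
    raw = sum(1.0 if a == "no" else 0.5 if a == "partly" else 0.0
              for a in answers)
    return math.ceil(raw)

def band(no_count):
    # The four score bands described above.
    if no_count <= 3:
        return "relatively sovereignty-honoring"
    if no_count <= 8:
        return "real gaps"
    if no_count <= 14:
        return "significant problems"
    return "built to extract"

n = score(answers)
print(f"{n} 'No' answers: {band(n)}")
```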
The Six Red Flags
These override everything. If any of them applies to the product you just assessed, the overall score doesn’t matter. The red flag needs attention first.
- You can’t delete your account - or the process is so difficult it’s designed to stop you.
- The product shames you for saying no - “Are you sure you want to miss out?” is not a question. It’s a manipulation.
- Fake urgency is everywhere - countdown timers and “limited time” offers on things that aren’t actually limited.
- Consent flows are designed to confuse - the “yes” button is big and bright. The “no” is gray, small, and worded to make you hesitate.
- Vulnerable people are targeted, not protected - children, elderly users, people in crisis get more pressure, not more care.
- Your data is sold without real consent - or consent was buried in a document nobody reads, which is the same thing.
These practices have names. Dark patterns. Confirmshaming. Artificial scarcity. And they are increasingly being recognized as harms by regulators around the world.
Now What?
What you can do alone
Adjust your settings. Most products bury the sovereignty-honoring options deep in menus. Look for notification controls, data sharing toggles, download-your-data options, and account deletion. They exist because regulation required them. Use them.
Document what you find. Screenshots of dark patterns, confusing consent flows, and deceptive design are evidence. Save them. They matter if you file a complaint, leave a review, or join a class action.
Name it. Half the power of extraction is that it feels normal. Once you can say “that’s confirmshaming” or “that’s a retention maze” - it loses its grip. Language is the first tool of sovereignty.
What you can do with others
Share this assessment. Run it with coworkers, family, or your community group. Collective recognition is the first step toward collective action. When five people in the same office all score the same product at 15+ “No” answers, that’s not individual opinion. That’s evidence.
Ask your employer, school, or organization: “Did we evaluate this product’s sovereignty practices before adopting it?” If they didn’t, hand them this. Or hand them the full 47-point Sovereignty Assessment Toolkit at evoked.dev - the one built for the people making the decisions.
Connect with organizations working on digital rights, AI accountability, and data sovereignty in your region. You are not the only person asking these questions.
What requires systemic change
Some problems on this list cannot be solved by adjusting your settings. They require regulation, legislation, or industry standards that don’t exist yet.
If the product you assessed scored 15-21, the issue is structural. No amount of individual action will fix a product designed to extract. That’s not a reason to stop looking. Clarity is the precondition for action.
Artists, writers, workers, parents, and communities around the world are asking the same questions. You are not alone in this.
Protests against data centers. Lawsuits over intellectual property used to train AI models. Parents suing over children harmed by AI chatbots. The conversation is already happening. This assessment gives you language to join it.
A Note About Power
I want to be honest about something.
This assessment will help you see more clearly. It will not, by itself, change the systems affecting you.
The companies building these systems have enormous resources, political influence, and the advantage of complexity. The path from “I can see the problem” to “the problem is fixed” runs through collective action, legal advocacy, regulatory pressure, and the slow, unglamorous work of democracy.
But clarity matters. You can’t push back against what you can’t name. And the fact that you’re here, reading this, asking the question - that’s not nothing. That’s the beginning.
About This Assessment
I built the original Sovereignty Assessment Toolkit - a 47-point professional framework for product teams. The builder version asks: “Does the thing we’re making respect people?”
This version asks the other question: “Does the thing affecting me respect me?”
Both questions matter. Both deserve honest answers.
No email required. No tracking. No data collection. This page practices what it preaches - view source if you want to check. Print it, share it, adapt it.
If you build technology and want to evaluate your own products, the full Sovereignty Assessment Toolkit is at evoked.dev.
Your sovereignty is not a feature request. It is your right.