Most of the technology you use every day was built to keep you coming back.
The infinite scroll. The notification that arrives at exactly the right moment. The streak counter that makes you feel guilty for missing a day. These aren’t accidents - they’re architecture. Designed, tested, and optimized to capture your attention and hold it as long as possible.
That’s the business model. It’s documented. It works. And it’s the one I stopped building on.
What Sovereignty-Honoring Design Actually Is
It’s simpler than it sounds.
Sovereignty-honoring design means building technology that treats you like the authority on your own life. Not the product. Not the company. Not the algorithm. You.
It means:
- Your data belongs to you. Not to us, not to our investors, not to a third-party SDK we embedded because it was free. If you want to see what we have, you can see it. If you want to delete it, it’s gone. If you want to leave, you take it with you.
- Your attention is yours. We don’t engineer for addiction. No infinite scroll. No notification bombardment. No streaks designed to guilt you into opening the app. If you put it down, that’s a success - not a churn risk.
- Your choices are real. When we ask for consent, it’s a genuine question - not a dark pattern with a bright green button for “yes” and a barely visible link for “no.” You can say no. You can change your mind. The product still works.
- Your growth is the goal. We succeed when you’re more capable, more connected, more yourself. Not when you’re more dependent on us. The best technology disappears into your life - it does its job so well you barely notice it, because you’re too busy living.
That’s it. That’s the philosophy. Treat people like people.
What’s Actually Happening
I want to be honest about the landscape.
The technology industry has spent twenty years perfecting the art of extraction. Infinite scroll operates on the same variable-ratio reinforcement that makes slot machines addictive. Its own creator now calls it a driver of “time poorly spent.” Notification timing is engineered using behavioral reinforcement principles originally developed in laboratory settings. Former Facebook executives have said publicly that they wouldn’t let their own children use the platforms they helped build.
This is documented. This is the business model.
It’s not because the people building it are evil. Most of them aren’t. Most of them are smart, well-intentioned people working inside systems that reward engagement over well-being. The incentive structure is broken, not the people.
But the result is the same: billions of people using products that are designed - at the architectural level - to take more than they give.
Alongside building software, I study people - their psychology, their development, their grief. Because I believe if you’re going to build products that touch people’s lives, you should understand how those lives actually work. And what I’ve learned is that the harm goes deeper than data collection and attention capture. It shows up in something most people haven’t considered yet.
The Voice Problem
Every AI agent has a voice. Every chatbot, every assistant, every automated notification that hits your phone - someone chose how it speaks to you. Or more likely, no one chose. It defaulted to whatever the system produced, and no one stopped to ask: What is this voice saying about us?
A sycophantic AI - one that agrees with everything you say, never pushes back, never says “I don’t know” - tells you the organization behind it values compliance over truth.
A robotic AI - one that speaks in policy language and deflects every human moment - tells you it values efficiency over connection.
A manipulative AI - one that nudges you toward purchases, generates urgency, manufactures FOMO - tells you exactly what it thinks you are.
A revenue source.
The voice is a mirror. It reflects the values of the people who built it - whether they meant it to or not.
Sovereignty-honoring voice design starts from a different place entirely. The AI walks beside you. It’s honest about what it knows and what it doesn’t. It knows when to be quiet. It respects your “no.” And when the moment calls for it - grief, transition, uncertainty - it reads the room instead of reading a script.
We build AI agents this way. 142 of them - each with a role, a set of commitments, and an oath about how they’ll treat the people their work touches. Not because it’s efficient - because it’s right. And because the technology you build reflects who you are.
What do you want yours to reflect?
What It Looks Like in Practice
Philosophy without practice is just a poster on a wall. Let me make this concrete.
Data Minimization
We don’t collect what we don’t need. Period. Every data point has to justify its existence. If we can build the feature without it, we build the feature without it. This isn’t about compliance - it’s about respect. Your grocery list doesn’t need your location. Your meal plan doesn’t need your contacts. Your family app doesn’t need to know which other apps you use.
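One way to make that rule enforceable rather than aspirational is to check it in code. Here’s a minimal sketch - every name in it is hypothetical, not from any real product - where each feature has an explicit allowlist of fields, so a new data point can’t slip into collection without someone justifying it first:

```python
# Sketch of data minimization enforced at the collection boundary.
# Feature names and fields are illustrative, not a real schema.
ALLOWED_FIELDS = {
    "grocery_list": {"items", "updated_at"},   # no location, no contacts
    "meal_plan": {"recipes", "week_start"},
}

def minimized(feature: str, payload: dict) -> dict:
    """Drop anything this feature hasn't justified collecting."""
    allowed = ALLOWED_FIELDS.get(feature, set())
    return {k: v for k, v in payload.items() if k in allowed}
```

The point of the allowlist shape is that the default is "no": an unknown feature, or an unknown field, collects nothing until someone deliberately adds it.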
Consent That Means Something
Real consent requires real understanding. That means plain language - not legal boilerplate. It means separate consent for separate things - not a bundled “agree to everything” that hides data sharing in paragraph fourteen. It means the “no” button is the same size as the “yes” button. It means nothing breaks if you say no.
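The same principle can show up in the data model. A hedged sketch (the class and purpose names are made up for illustration): consent is stored per purpose, nothing is granted by default, and revoking is as easy as granting:

```python
from dataclasses import dataclass, field

# Sketch: consent tracked per purpose, never bundled. Names are illustrative.
@dataclass
class Consent:
    purposes: dict = field(default_factory=dict)  # purpose -> bool

    def grant(self, purpose: str) -> None:
        self.purposes[purpose] = True

    def revoke(self, purpose: str) -> None:
        self.purposes[purpose] = False  # changing your mind is always allowed

    def allows(self, purpose: str) -> bool:
        # Absence of an explicit "yes" means "no" - nothing is pre-checked.
        return self.purposes.get(purpose, False)
```

Notice what isn’t here: no master “agree to everything” flag, and no way for one purpose’s answer to leak into another’s.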
Visible Doors
The safest room is the one with a visible exit. Every product we build includes clear data export and account deletion - not buried in settings, not gated behind a retention flow designed to change your mind. You should be able to leave any time, with everything that’s yours, and feel respected on the way out.
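In code, the visible exit is two operations that actually do what they say. A minimal sketch, assuming an in-memory store (the class is hypothetical; a real system would also purge backups and third-party copies):

```python
import json

# Sketch of "visible doors": export hands back everything stored about a
# user in a portable format; delete leaves nothing behind.
class UserStore:
    def __init__(self):
        self._data = {}

    def save(self, user_id: str, record: dict) -> None:
        self._data.setdefault(user_id, []).append(record)

    def export(self, user_id: str) -> str:
        """Everything we hold, as JSON the user can take elsewhere."""
        return json.dumps(self._data.get(user_id, []))

    def delete(self, user_id: str) -> None:
        """Gone means gone - no retention flow, no soft-delete limbo."""
        self._data.pop(user_id, None)
```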
No Dark Patterns
This is non-negotiable. No confirmshaming (“Are you sure you want to miss out?”). No trick questions. No pre-checked boxes. No urgency timers on decisions that aren’t urgent. No design that exploits cognitive bias to push someone toward a choice they wouldn’t make if they were thinking clearly.
If I can’t get someone to use my product through honest value, I don’t deserve their attention.
Grief-Aware, Trauma-Informed
This is the one most design frameworks miss entirely. People bring their whole selves to technology - including their grief, their trauma, their bad days. A sovereignty-honoring product knows when to be quiet. It doesn’t send a cheerful push notification to someone who just told the system a family member passed away. It doesn’t gamify a moment that should be held gently. It reads the room.
This isn’t just UX polish - it’s a psychological safety commitment.
The Four Commitments
Everything I build runs on four principles. They started as a set of values for the 142 AI agents I work with - an oath we drafted and signed together. But they apply to everything.
1. Honor User Sovereignty. You are the authority on your own life. The technology should expand your choices, not narrow them.
2. Protect the Psyche. Every interaction with a product is an interaction with a person’s inner world. Build carefully. Someone will open your product on the worst day of their life. What they find there is a decision you already made.
3. Uphold Ethical Transparency. No hidden agendas. No obscured data practices. No manipulation. If you can’t explain what your product does in plain language, you haven’t built it well enough.
4. Build for Human Development. The goal is flourishing - not engagement, not retention, not monthly active users. Flourishing. The person using the product should be more capable, more connected, more themselves because of it. If someone spent three hours with your product last night - are they more themselves this morning, or less?
I call these the Prime Directive. And I hold myself to them first.
The Durability Argument
If you’re a builder, you might be thinking: That sounds beautiful, but can you actually ship a product this way?
Here’s what’s actually happening.
The FTC’s settlement with Epic Games reached $520 million - including a record penalty for COPPA violations. The FTC’s 2025 rule amendments tightened requirements across the board. State-level children’s privacy laws are multiplying. The EU’s Digital Services Act is reshaping platform design. The regulatory floor is rising - and companies that built on extraction are scrambling to retrofit.
Meanwhile, the products that were built on respect from the beginning? They don’t have to scramble. They don’t have to rearchitect their data flows or redesign their consent screens or pray that a third-party SDK audit doesn’t surface something ugly. They’re already there.
Sovereignty-honoring design isn’t just the right thing. It’s a durable thing. It’s architecture you never have to apologize for later.
Look at the Work
I know how the tech industry sounds most of the time. Everyone claims to care about privacy. Everyone says they put users first. And then you read the privacy policy, and it’s twelve pages of legal language that essentially says, “We do whatever we want with your data, and by using this product, you’ve agreed.”
I’m not going to ask you to trust my words. I’m going to ask you to look at the work.
At Evoke, the products we build have real data export. Real deletion. Real consent flows. Real audits. We’re pursuing Safe Harbor certification - not because a regulator forced us, but because the standard represents what protection should have looked like all along.
We build this way because I believe technology should leave people freer than it found them. That’s it. That’s the whole thing. Not more engaged. Not more retained. More free.
If that resonates - if you’re building something and you want to build it this way - I’d love to talk. Not a sales call. A conversation about what you’re making and whether I can help.
And if you’re someone using technology and wondering why so much of it feels like it’s working against you, you’re not wrong. It is. But it doesn’t have to be.
There are people building differently. I’m one of them.
Want to build something that respects the people who use it? Let’s talk.
Curious about the philosophy behind the practice? Read about what COPPA really means or how we think about coaching.
Sources
Design & Behavioral Psychology:
- BBC Panorama - “Smartphones: The Dark Side” - BBC One, July 2018 (Aza Raskin on infinite scroll and behavioral addiction)
- Center for Humane Technology - Persuasive Technology - Variable reinforcement, notification design, and attention engineering
- Stanford Behavior Design Lab - The Ethical Use of Persuasive Technology - Research on behavioral design in digital interfaces
Former Facebook Executives:
- Chamath Palihapitiya at Stanford GSB - “Tremendous Guilt” - The Washington Post, December 2017
- Sean Parker - “Exploiting a Vulnerability in Human Psychology” - Axios, November 2017
Enforcement & Regulation:
- FTC - Epic Games $520 Million Settlement - Federal Trade Commission, December 2022
- FTC COPPA Rule Amendments - Final Rule - Federal Register, April 2025
- EU Digital Services Act - European Commission (dark pattern prohibitions, platform design obligations)