When AI makes a decision that affects your work, do you know why?

If you asked it to explain its reasoning from last week, could it?

Can the AI you rely on say no?

If a request conflicts with your values or your users' safety, does it push back, or just comply?

Does your AI remember who it is between sessions?

If identity resets every time you start a conversation, there is no identity. There is only execution.

When AI gives you conflicting answers, what happens?

In an optimization loop, disagreement is noise. In a governance system, disagreement is signal.

Would you notice if your AI's behavior quietly shifted?

Not its output quality, its values. Can you detect when its behavior drifts from what you intended?

Five questions from evoked.dev. No data leaves this page.