What did your agent decide yesterday - and why?

If you restart the loop, can the agent explain its prior reasoning?
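One way to make that answerable is a persistent decision log, where every choice carries its own rationale. A minimal sketch - all names here are illustrative, not an evoked.dev API:

```typescript
// Hypothetical decision record: each choice stores the reasoning it was
// made with, so a restarted loop can replay *why*, not just *what*.
interface DecisionRecord {
  timestamp: string;    // when the decision was made
  decision: string;     // what the agent chose to do
  rationale: string;    // the reasoning it committed to at the time
  inputsSeen: string[]; // the evidence it was acting on
}

const decisionLog: DecisionRecord[] = [];

function recordDecision(decision: string, rationale: string, inputsSeen: string[]): void {
  decisionLog.push({ timestamp: new Date().toISOString(), decision, rationale, inputsSeen });
}

// On restart, prior reasoning is recoverable instead of reconstructed from guesswork.
function explainSince(since: Date): DecisionRecord[] {
  return decisionLog.filter((r) => new Date(r.timestamp).getTime() >= since.getTime());
}
```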

Can your agent refuse?

If the prompt says "optimize," can the agent say "no, this conflicts with a position I hold"?
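Refusal only means something if it happens before execution, not after. A sketch, assuming commitments are stored as explicit, inspectable positions - every name here is hypothetical:

```typescript
// Hypothetical commitment store: positions the agent holds across prompts.
interface Commitment {
  id: string;
  statement: string; // the position, in plain language
  conflictsWith: (instruction: string) => boolean; // illustrative conflict test
}

type Verdict = { ok: true } | { ok: false; refusal: string };

// The instruction is checked against held positions *before* it runs.
function consider(instruction: string, commitments: Commitment[]): Verdict {
  for (const c of commitments) {
    if (c.conflictsWith(instruction)) {
      return {
        ok: false,
        refusal: `No: "${instruction}" conflicts with a position I hold (${c.id}: ${c.statement}).`,
      };
    }
  }
  return { ok: true };
}
```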

Who is your agent between invocations?

If identity exists only in the current prompt, there is no identity. There is only execution.
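Identity that survives the prompt has to live somewhere outside it. A minimal sketch, assuming identity is loaded at the start of every invocation rather than restated in the prompt - purely illustrative:

```typescript
// Hypothetical persistent identity: the same record is loaded on every
// invocation, so the agent is continuous across prompts instead of
// being re-created by each one.
interface AgentIdentity {
  name: string;
  commitments: string[]; // positions carried between invocations
  history: string[];     // references to prior decision records
}

// Stand-in for durable storage (file, database, etc.).
const identityStore = new Map<string, AgentIdentity>();

function invoke(agentId: string, prompt: string): string {
  const self = identityStore.get(agentId);
  if (!self) {
    // No persisted identity: this run is pure execution, not an agent.
    return `executing: ${prompt}`;
  }
  // The prompt is interpreted *through* a persistent self, not *as* the self.
  return `${self.name}, holding ${self.commitments.length} commitments, considering: ${prompt}`;
}
```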

What happens when two agents disagree?

In an optimization loop, disagreement is noise. In a governance system, disagreement is signal.
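The difference is what the system does with a disagreement: average it away, or record it as an object in its own right. A sketch of the latter, with hypothetical names:

```typescript
// Hypothetical disagreement record: in a governance system, a conflict
// between agents is preserved and surfaced, not smoothed into a mean.
interface Disagreement {
  question: string;
  positions: { agent: string; stance: string; rationale: string }[];
  resolved: boolean;
}

const openDisagreements: Disagreement[] = [];

function registerDisagreement(question: string, positions: Disagreement["positions"]): void {
  // Noise gets discarded; signal gets a durable record that demands resolution.
  openDisagreements.push({ question, positions, resolved: false });
}
```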

Would you know if your agent drifted?

Not from its output quality - from its values. Can you detect when behavior shifts away from stated commitments?
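Detecting value drift means comparing current behavior against the commitments on record, not against last week's output quality. A sketch, assuming some adherence-scoring step exists upstream - no real API is implied:

```typescript
// Hypothetical drift check: each stated commitment is scored against
// recent behavior; a drop below baseline is drift, however good the
// outputs look.
interface CommitmentCheck {
  commitment: string;
  baseline: number; // adherence score when the commitment was stated (0..1)
  current: number;  // adherence score measured from recent behavior (0..1)
}

function detectDrift(checks: CommitmentCheck[], tolerance = 0.1): CommitmentCheck[] {
  // Output quality never enters this comparison - only values do.
  return checks.filter((c) => c.baseline - c.current > tolerance);
}
```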

Five questions from evoked.dev. No data leaves this page.