The AI Doesn’t Know Who You Are
The missing primitive of authority
The AI doesn’t know who you are.
Not your identity.
Your authority.
Today’s AI systems are fluent, capable, and increasingly autonomous. Yet they operate without a clear reference for who they represent, what they’re allowed to decide, or when they must stop. That gap is about to matter more than ever, because AI is moving from advising to acting.
In this essay, acting means making a change the business has to live with. For example, moving money, changing a system of record, committing to a promise, or triggering an external obligation.
Authority is not a pattern in text. It is a structural property of systems.
Authority is the formally defined right to act, decide, or commit an organization to outcomes within a specific scope, time, and set of constraints.
In human organizations, authority is explicit. It is assigned, scoped, time-bound, auditable, and revocable. Delegations, approvals, and signing limits are not suggestions. They are coordinates that define where action is valid.
In most AI systems, that layer is missing. So models improvise. And when authority is improvised, accountability collapses.
Identity isn’t authority
Enterprises already have identity. They’ve gotten very good at it:
SSO tells you who signed in.
Roles tell you what category of user they are.
Permissions tell you what buttons they can click.
Those controls answer questions of access. They say nothing about institutional commitment.
The question that matters once a system starts acting is:
What can this actor commit the organization to, on whose behalf, under which constraints, at this point in time?
Identity answers: Who is this?
Authority answers: What is this actor allowed to do that binds the institution, in this scope, under these constraints, right now?
That difference sounds semantic until you watch it fail in production.
A procurement officer can log in indefinitely while their delegated signing authority expires the day they change teams. A VP can approve a budget up to a threshold until a board resolution changes the limit. An “acting head of” can authorize in a narrow window, and then instantly lose standing when the role is backfilled.
Authority is contextual, temporal, and revocable. Language is none of those.
Most agent systems collapse all of this into a dangerously simple assumption:
User = allowed actor.
Or worse, they infer authority from how the user sounds, what they did last quarter, or what similar people usually do.
When a system cannot reference authority, it substitutes inference. That looks like competence until it becomes commitment.
How models fake authority (and why it works … until it doesn’t)
Models are excellent at one thing: inference. Given enough context, they’ll predict what comes next. In demos, this looks like competence. In organizations, it becomes something more dangerous: simulated standing.
Consider a specific sequence: a model approves a $75K vendor contract because it saw the VP approve $50K contracts last quarter, the vendor uses similar language to previously approved vendors, and the request came via the VP’s EA. Each signal looked reasonable. Together, they produced an unauthorized commitment.
None of that is authority. It’s pattern completion, or simulated standing. The model generated the language of authorization without possessing the state of authorization. It looks legitimate because it sounds legitimate.
This isn’t a model bug. It’s what any probabilistic system does when a boundary condition is missing: it fills the gap with plausibility.
In a demo, the cost of a wrong prediction is a retry. In an enterprise workflow, the cost is a commitment the organization never intended to make.
At enterprise scale, inference-based authority produces predictable failure modes:
Authority drift: because escalation is statistically rare in the system’s history, the system stops escalating at all.
Silent overreach: boundaries are crossed with no alert because the boundary was never defined in a way a system can enforce.
No line between suggestion and decision: advisory output quietly becomes execution because nothing in the architecture forces a distinction.
Once authority is inferred instead of referenced, accountability doesn’t fail loudly. It dissolves quietly.
And the enterprise response is always the same: add more approvals, add more humans, add more friction. That’s the only brake available when the system can’t locate authority.
Authority is a primitive, not a prompt
Authority cannot live inside prompts, system messages, fine-tuning, or agent memory, because those mechanisms are advisory, not binding.
Prompts shape intent, but they don’t enforce constraints.
Models are probabilistic. They produce plausible continuations, not valid commitments.
Memory is narrative. It’s what the system thinks happened, not what was authorized.
If an action can bind the business, the boundary that permits that action cannot be something the model can rewrite.
If the boundary can be rewritten by the model, it is not a boundary.
Authority must be:
Explicit: declared, not inferred
External: not owned by the model
Inspectable: you can point to it
Versioned: you can say what applied then
Time-bound: it starts, ends, and expires
Revocable: it can be withdrawn without retraining a model
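As a concrete illustration, here is a minimal sketch of what such a mandate could look like as a machine-readable record. The structure and names (Mandate, scope, limit_usd, valid_until, and so on) are hypothetical, not a standard; the point is that each property above becomes a field a system can check rather than a sentence a model can reinterpret.

```python
# Hypothetical sketch of a mandate as an explicit, versioned, time-bound,
# revocable record. Field names are illustrative, not a standard.
from dataclasses import dataclass
from datetime import datetime, timezone
from typing import Optional

@dataclass(frozen=True)
class Mandate:
    mandate_id: str                          # inspectable: you can point to it
    version: int                             # versioned: you can say what applied then
    principal: str                           # on whose behalf the agent acts
    agent: str                               # the actor the mandate is granted to
    scope: str                               # e.g. "travel-expense-approval:APAC"
    limit_usd: float                         # explicit bound on what can be committed
    valid_from: datetime                     # time-bound: it starts...
    valid_until: datetime                    # ...and it expires
    escalation_path: str = "human-approver"  # where out-of-scope requests go
    revoked_at: Optional[datetime] = None    # revocable without retraining anything

    def is_active(self, at: Optional[datetime] = None) -> bool:
        """Authority exists only inside the window and before revocation."""
        now = at or datetime.now(timezone.utc)
        if self.revoked_at is not None and now >= self.revoked_at:
            return False
        return self.valid_from <= now < self.valid_until
```

Revoking or changing such a record is a data update with a new version, not a prompt change or a retraining run.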
In practice, this splits the architecture:
The Model proposes intent: “I want to do X.”
The Authority Layer holds the keys: “You may do X only within this scope, limit, time window, and escalation rule.”
The Execution Layer enforces. If the mandate doesn’t match, the action is rejected or escalated.
The model does not check itself. It is checked against an external, machine-verifiable reference frame.
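Continuing that hypothetical sketch, the check itself can be a small function that lives entirely outside the model. This is an assumption-laden illustration, not a reference implementation: the model hands over a proposal, and the decision to execute, escalate, or reject is derived from the external record alone.

```python
# Sketch of the external check, continuing the hypothetical Mandate above.
# The model proposes; the mandate decides.
from dataclasses import dataclass
from datetime import datetime
from typing import Optional

@dataclass(frozen=True)
class ProposedAction:
    agent: str          # who claims to be acting
    scope: str          # what kind of commitment this is
    amount_usd: float   # how much it would bind the organization to

def authorize(action: ProposedAction, mandate: Mandate,
              at: Optional[datetime] = None) -> str:
    """Return 'execute', 'escalate', or 'reject' based only on the mandate."""
    if not mandate.is_active(at):
        return "reject"                    # expired or revoked: no standing
    if action.agent != mandate.agent or action.scope != mandate.scope:
        return "reject"                    # outside the declared scope
    if action.amount_usd > mandate.limit_usd:
        return "escalate"                  # in scope but over the limit: route to escalation_path
    return "execute"                       # inside scope, limit, and time window
```

The design point is that the function never consults the model’s reasoning: if the proposal and the mandate do not match, no amount of fluent justification changes the outcome.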
If you want an analogy that kills the “just prompt it better” conversation, think jurisdiction.
You don’t ask a system to infer jurisdiction from correspondence. You define it, publish it, and enforce it. A court’s authority to hear a case is declared before the case begins, not inferred from the arguments presented. Air traffic control doesn’t infer a no-fly zone from pilot chatter.
In any serious domain, the boundary of valid action is a first-class object.
Enterprises have these boundaries, but they are currently trapped in SharePoint pages, PDF policies, email chains, tribal knowledge, and ad-hoc Slack approvals. To a human, that’s context. To a machine, it’s invisible.
Enterprises have never had to make that boundary machine-verifiable, because humans carried it in their heads.
Once you see authority as a missing system object, the failure modes stop looking like model errors and start looking like predictable governance bugs.
What breaks when authority is missing
When authority is not a first-class system concern, you get a very specific class of enterprise failure:
The orphaned commitment: actions occur that no one can truthfully say were authorized.
This shows up everywhere, from pricing changes and vendor onboarding to customer promises and internal policy exceptions. Not because the system was malicious. Not because the model was “unsafe.” Because authority was never located, so the system couldn’t possibly respect it.
The symptoms are consequences enterprises already recognize, but usually misdiagnose:
AI takes actions no one technically authorized: a commitment is made, and everyone assumes someone else was accountable.
Humans can’t explain why something was allowed: you get a post-hoc story, not a referenceable basis.
Compliance becomes theater: screenshots and transcripts replace an actual, auditable chain of authority.
Velocity collapses: teams add approvals to compensate for missing structure, and “governance” becomes a synonym for friction.
This isn’t about slowing systems down. It’s about enabling speed without ambiguity.
This is the real reason enterprises hesitate on agentic AI. Not because they don’t trust models to produce good text. Because they can’t locate who is accountable when text turns into commitments.
The “trust” conversation goes nowhere because it’s misframed.
The problem isn’t trust.
The problem is unlocated authority.
Giving AI authority without giving up control
There’s a common fear hiding behind every agent conversation:
“If we give AI authority, we lose control.”
That fear comes from conflating authority with autonomy.
Authority doesn’t mean “the system does whatever it wants.” It means:
scoped mandate
explicit bounds
known escalation paths
enforceable constraints
The correct relationship is simple:
Models operate inside authority. They do not define it.
In a mature system, an agent can be extremely autonomous within a declared scope, and extremely conservative outside it. That’s not a personality trait. It’s architecture.
An agent authorized to approve travel expenses under $500 for the APAC team through Q1 can execute instantly within that scope and escalate automatically outside it. That’s not restriction. It’s clarity.
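Using the earlier hypothetical Mandate and authorize sketches, that example might look like this, with invented values for illustration:

```python
from datetime import datetime, timezone

# Hypothetical instance: travel expenses under $500 for the APAC team through Q1.
travel_mandate = Mandate(
    mandate_id="mnd-2025-0042",
    version=3,
    principal="VP Finance, APAC",
    agent="travel-expense-agent",
    scope="travel-expense-approval:APAC",
    limit_usd=500.0,
    valid_from=datetime(2025, 1, 1, tzinfo=timezone.utc),
    valid_until=datetime(2025, 4, 1, tzinfo=timezone.utc),
)

decision_time = datetime(2025, 2, 10, tzinfo=timezone.utc)  # inside the Q1 window

# $480 APAC expense: inside scope, limit, and window -> "execute"
print(authorize(ProposedAction("travel-expense-agent",
                               "travel-expense-approval:APAC", 480.0),
                travel_mandate, at=decision_time))

# $2,300 expense: in scope but over the $500 limit -> "escalate"
print(authorize(ProposedAction("travel-expense-agent",
                               "travel-expense-approval:APAC", 2300.0),
                travel_mandate, at=decision_time))
```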
Autonomy without authority is chaos.
Authority without autonomy is bureaucracy.
The balance requires structure.
Why this becomes unavoidable
Right now, most AI deployments still live in the advisory phase: summarize, draft, recommend. The moment they move from advising to acting, from words to commitments, authority becomes unavoidable.
As usage scales, “minor” errors stop being minor because they compound. As agents integrate into systems of record, the blast radius stops being hypothetical. And as enterprises demand defensibility, fluency stops being impressive.
Authority will become a first-class concern within the system. Not an HR artifact, not a policy PDF, not a model capability.
The question enterprises will ask, every single time, is not “Was the model confident?”
It’s “Who authorized this?”
Until AI systems can answer that by pointing to a machine-verifiable reference, not a conversation transcript, they can’t cross the line from advisor to actor.
The authority layer isn’t a governance add-on. It’s the missing foundation that makes everything else possible.
Until then, the AI might know who you are.
But it has no idea what you, or it, is actually allowed to do.



Good post, but I'm left wondering: how do you suggest this authority be discovered, cataloged, maintained, and applied at runtime during prompting and by AI agents? Any recommendations about how to "manage" authority? How do you map Enterprise Identity to Enterprise Authority?