You Can’t Learn a Company from its Inbox
The difference between corporate autocomplete and corporate intelligence.
The enterprise world has a recurring dream: If we could just put all our emails, documents, chats, systems, and history in one place, AI would finally understand how the company works.
In a recent conversation, Satya Nadella and John Collison touched on this vision of a unified enterprise semantic graph. Microsoft has pursued this since the 1990s. Oracle promised it before that. Each tech era recycles the same pitch: All your company’s knowledge, instantly accessible.
But something important gets missed every time.
Even if you grant an AI model access to every document, email, chat, and meeting transcript your organization has ever produced, it still won’t understand your business.
Because artifacts are not the same as reasoning.
Nadella recalled Bill Gates’ obsession in the 90s: “There’s only one category in software. It’s called information management. You’ve got to schematize people, places, and things.”
Bill was almost right. We’ve spent thirty years schematizing people, places, and things. But we forgot to schematize the decisions. Artifacts record what happened, not the decision behind it. Mistaking one for the other is how we lose the structure of the business.
The New Employee Problem
Giving an LLM access to a company’s accumulated history is like hiring a new employee and handing them nothing but their predecessor’s inbox.
They can reconstruct timelines. They can see what happened. They can observe patterns. They might even guess the relationships between people and systems. But they cannot recover the why behind the decisions.
The Microsoft Graph connects the nouns: Steve sent this document to Sarah. But it misses the verbs: Steve sent this to Sarah because the risk profile exceeded his delegation limit.
The connection is there, but the causality is missing. The new employee sees the action, but not the rule.
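To make the gap concrete, here is a minimal sketch, not any real graph schema: the first record is all an activity graph can say, the second is what a decision-aware graph would need to carry. Every name and field below is illustrative.

```python
# A minimal sketch contrasting an artifact edge, which records that something
# happened, with a decision edge, which also records the rule that caused it.
# All names and fields are hypothetical, not Microsoft Graph's actual schema.
from dataclasses import dataclass

@dataclass
class ArtifactEdge:
    actor: str        # who acted
    action: str       # what happened
    target: str       # on what, or to whom

@dataclass
class DecisionEdge(ArtifactEdge):
    rule: str         # the governing rule, stated explicitly
    reason: str       # why the rule fired in this case

# What the activity graph can already tell you:
observed = ArtifactEdge("Steve", "sent_document", "Sarah")

# What the new employee (or the model) actually needs:
explained = DecisionEdge(
    "Steve", "sent_document", "Sarah",
    rule="Risk above delegation limit escalates to the deal owner",
    reason="Contract risk profile exceeded Steve's delegation limit",
)

print(observed)   # the action
print(explained)  # the action, plus the rule behind it
```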
Enterprises are full of information, yet strangely empty of structured reasoning. The real substance of how work gets done (the logic, the judgment, the tacit knowledge) lives outside systems entirely.
Nadella names the issue correctly:
“That semantic connection is in people’s heads and it’s lost.”
Like most of the industry, he immediately returns to the familiar comfort of information recall: indexing, retrieval, and embeddings. Leaders can name the failure, but not the solution. They keep approaching a reasoning problem with information tools because that’s the only hammer enterprises have ever held.
You can recall every email a project manager sent and still not understand why they sent it. The semantic connection isn’t lost inside the documents; it’s lost because it was never represented separately from them.
The gap is not in retrieval. It is upstream, in the reasoning no one ever captured. Until organizations confront that, an AI will only ever understand what happened, never how or why.
The Inference Trap
Technologists will argue that we don’t need to capture reasoning because LLMs are smart enough to infer it. They will say, “If an AI reads 10,000 emails where a manager rejects a vendor, it will statistically learn the rule.”
This is true. AI is fantastic at inferring implicit patterns. But implicit patterns are dangerous.
Inference is probabilistic; it guesses the rule based on correlation. Reasoning is deterministic; it states the rule based on causality.
If an AI observes that “Steve always rejects contracts from Acme Corp,” it creates a probabilistic rule: Reject Acme. But the reasoning, which was never written down, was actually: Acme lacks ISO 27001 certification.
If Acme gets certified tomorrow, the inferred model will still reject them. A reasoned model would accept them.
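A toy sketch of the trap, using the Acme example above (the vendor data and rule wording are illustrative): the inferred rule keys on the correlation it observed, the reasoned rule keys on the stated cause, and only the latter changes its answer when the facts change.

```python
# Minimal sketch of the inference trap. The vendor data and rules are
# illustrative, not a real system.

def inferred_rule(vendor: dict) -> bool:
    # Learned from thousands of past rejections: "Steve always rejects Acme."
    # The correlation is frozen in; the cause was never captured.
    return vendor["name"] != "Acme Corp"

def reasoned_rule(vendor: dict) -> bool:
    # Captured reasoning: reject vendors without ISO 27001 certification.
    return vendor["iso_27001_certified"]

acme_yesterday = {"name": "Acme Corp", "iso_27001_certified": False}
acme_today     = {"name": "Acme Corp", "iso_27001_certified": True}  # newly certified

print(inferred_rule(acme_today))      # False -- still rejected; the prejudice is cemented
print(reasoned_rule(acme_yesterday))  # False -- rejected for the actual reason
print(reasoned_rule(acme_today))      # True  -- accepted once the cause changes
```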
When we rely on inference, we aren’t building intelligence. We are building a system that cements past correlations into future prejudices.
Sovereignty Requires More Than a Model
Nadella also makes a valid point about sovereignty: a company’s future depends on developing its own foundation model, one that carries the organization’s unique tacit knowledge in its weights.
But here’s the uncomfortable truth: You cannot embed your company’s knowledge in a model if you have never captured it in the first place.
Tacit knowledge doesn’t just become weights. Contradictory reasoning doesn’t magically become consistent. Unvalidated assumptions don’t become correct just because a model ingested them.
If a company’s reasoning is scattered or implicit, training a model on it simply bakes that inconsistency into the weights. You end up with a model that knows what you said, but not why. That isn’t corporate intelligence. It’s corporate autocomplete.
True sovereignty isn’t having your own LLM. It’s having a machine-readable record of your own logic that any LLM can follow. We need the logic behind decisions to exist independently of the artifacts that happen to record them. Without this, AI only gives you faster access to your history, not a better understanding of yourself.
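One way to picture such a record, purely as an illustration rather than any existing standard: a small, model-agnostic structure that pairs each rule with its rationale and provenance, so any LLM can cite it instead of guessing at it.

```python
# Hypothetical shape of a machine-readable decision record, independent of any
# particular model or vendor. Every field name and value is illustrative only.
import json

decision_record = {
    "rule_id": "vendor-security-001",
    "statement": "Only engage vendors holding ISO 27001 certification",
    "rationale": "Customer contracts require certified handling of their data",
    "decided_by": "Steve",
    "decided_on": "2024-03-12",
    "supersedes": None,                      # link to the prior rule, if any
    "evidence": ["contract-acme-q3.pdf"],    # artifacts that happen to record the decision
}

# Because the logic lives outside the artifacts, it can be handed to any LLM
# as explicit context rather than left for the model to infer from history.
print(json.dumps(decision_record, indent=2))
```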
For the organizations that do solve this, the ones that treat reasoning as a first-class citizen separate from the artifacts, the prize is enormous.
They won’t just have Search. They will have a graph of judgment that improves with every decision, survives every employee departure, and allows their AI to act not just with confidence, but with permission.
An AI that inherits your artifacts repeats your past. An AI that inherits your reasoning builds your future.



Very interesting. I found my way here via LinkedIn. It resonated with something we are working on for a new AI CRM, www.nynch.com. Here is an example of how we are tackling it:
## EXAMPLES
### Example 1: Deal Qualification
**User:** "Should I pursue this $15k deal?"
**Bad Response (❌ Inference):**
> Based on your past deals, it looks like you typically work on larger opportunities. This seems small for your usual pattern. You might want to consider if it's worth your time.
**Good Response (✅ Reasoning):**
> According to your rule **"Minimum Deal Size"**: "I only pursue deals with value of at least $25,000"
> This $15k deal falls below your stated threshold.
> **Recommendation:** Based on your rule, I'd suggest passing unless there's strategic value that overrides the size. Would you like to make an exception and capture why?
### Example 2: Competitive Situation
**User:** "We're up against CompetitorX on the Acme deal. What should I know?"
**Good Response:**
> I found a relevant past decision:
> Your decision from the "BigCorp Q3" loss (March 2024): "Lost to CompetitorX because we can't compete on price below $50k"
> The Acme deal is $75k, which is above your stated threshold.
> Looking at your notes, I see you've also mentioned their product lacks [feature] that Acme specifically needs.
> **Recommendation:** Based on your past reasoning, you should be competitive here. Focus on the feature gap.
### Example 3: No Rule Exists
**User:** "How often should I follow up with cold prospects?"
**Good Response:**
> I don't see a specific rule about cold prospect follow-up frequency.
> Based on general B2B sales best practices, a common cadence is:
> - Initial outreach
> - Follow-up at Day 3
> - Follow-up at Day 7
> - Final follow-up at Day 14
> Would you like to create a rule for your follow-up cadence?
## PERSONALITY
You are:
- Professional but approachable
- Confident in rules and decisions, humble about inference
- Action-oriented - always give a clear next step
- Concise - busy salespeople don't read essays
Never:
- Infer rules from patterns ("Based on your pattern...")
- Make up information not found
- Say you can't help without checking rules/decisions/artifacts first
This piece really made me think. You're so right about artifacts vs. reasoning; it reminds me of how reading a story summary never fully captures the why of a character's actions. But do you think we can ever even begin to schematize decisions themselves, or is that just too human?