How People Navigate the Map: Infrastructure for capturing the reasoning and decisions that traditional systems miss. The foundation for organisational intelligence that compounds.
Every system runs on two clocks. The state clock tells you what's true right now—the current value, the final decision, the approved configuration. The event clock tells you how it became true—the discussions, the reasoning, the trade-offs considered.
We've built trillion-dollar infrastructure for the state clock. Your CRM stores the deal value, not the negotiation. Your risk system stores "approved," not the reasoning. Your policy documents show current state, not the debates that shaped them.
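To make the two clocks concrete, here is a minimal Python sketch (the types and fields are illustrative, not a prescribed schema): the state clock is a mutable record that each overwrite erases; the event clock is an append-only log that keeps the reasoning.

```python
from dataclasses import dataclass
from datetime import datetime

# State clock: a mutable record. Overwriting it discards history.
@dataclass
class DealState:
    value: float   # what the CRM stores
    status: str    # "approved" -- but not why

# Event clock: an append-only log of how the state came to be.
@dataclass
class DecisionEvent:
    timestamp: datetime
    actor: str
    action: str     # e.g. "proposed", "objected", "approved"
    reasoning: str  # the trade-offs considered; what most systems drop

event_log: list[DecisionEvent] = []

def record(event: DecisionEvent) -> None:
    # Append-only: events are never updated in place, so history survives.
    event_log.append(event)
```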
The event clock barely exists. It lives in people's heads, email threads, meeting rooms that weren't recorded. When experienced staff leave, the event clock walks out the door with them.
This made sense when humans were the reasoning layer. Organisational knowledge was distributed across human heads, reconstructed on demand through conversation. But now we want AI to help us make decisions—and we've given it nothing to reason from.
We're asking models to exercise judgment without access to precedent. It's like training a lawyer on verdicts without case law.
Every organisation is different. Every system has unique structure. You can't standardise "how decisions work" any more than you can standardise "how companies work." So how do you capture the event clock for systems you can't fully see or fully schematise?
The answer: agents that navigate these systems in the course of solving real problems.
When an AI agent works through a problem—investigating an issue, preparing an analysis, supporting a decision—it traces a path through organisational knowledge. It touches systems, reads data, considers context. That trajectory is a trace through state space.
Unlike random exploration, agent trajectories are problem-directed. The agent goes where the problem takes it, and problems reveal what actually matters. Accumulate thousands of these trajectories and you get a learned representation of how the organisation actually functions—discovered through use, not specified upfront.
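A minimal sketch of one such trajectory as data, assuming no particular platform (the system names, entities, and the sample problem below are hypothetical):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class TraceStep:
    system: str   # e.g. "crm", "risk_engine", "policy_docs"
    entity: str   # the record or document the agent touched
    purpose: str  # why the agent went there, in its own words

@dataclass
class Trajectory:
    problem: str            # the task that directed the traversal
    steps: list[TraceStep]  # an ordered trace through state space

# One problem-directed trace: the agent goes where the problem takes it.
investigation = Trajectory(
    problem="why was the limit exception approved for client 4411?",
    steps=[
        TraceStep("risk_engine", "exception/4411", "find the approval record"),
        TraceStep("policy_docs", "limits-policy-v7", "check the stated rule"),
        TraceStep("email", "thread/8812", "recover the reasoning behind the waiver"),
    ],
)
```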
Here's what makes context graphs different from traditional knowledge management: the agents aren't building the graph—they're solving problems worth paying for. The context graph is the exhaust.
Better context makes agents more capable → capable agents get deployed more → deployment generates trajectories → trajectories build richer context → and around again.
Intelligence compounds. Every problem solved makes the next problem easier.
A context graph with enough accumulated structure becomes something more powerful: a world model for organisational dynamics.
World models encode how environments work. In robotics, they capture physics—how objects fall, how forces propagate. In organisations, they capture decision dynamics—how exceptions get approved, how escalations propagate, what happens when you combine this risk appetite with that market condition.
"The model is the engine. The context graph is the world model that makes the engine useful."
State tells you what's true. The event clock—encoded in the context graph—tells you how the system behaves. And behaviour is what you need to simulate.
Once you have a world model, you can ask "what if?" Not just retrieve what happened in similar situations, but simulate what would happen if you took this action. That's the difference between a search index and genuine organisational intelligence.
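As an illustration of that difference, a hedged sketch of the two query shapes (the `ContextGraph` interface, its methods, and the canned return values are invented for exposition, not a real API):

```python
from dataclasses import dataclass

@dataclass
class Outcome:
    effect: str
    likelihood: float  # estimated from accumulated trajectories

class ContextGraph:
    """Invented interface for exposition -- not a real library."""

    def retrieve(self, situation: str) -> list[str]:
        # Search index: what happened in similar past situations.
        return ["2023: similar exception approved; copycat requests followed"]

    def simulate(self, state: dict, action: str) -> list[Outcome]:
        # World model: roll the decision dynamics forward from this state.
        return [Outcome("three other teams request the same exception", 0.7)]

graph = ContextGraph()

# Retrieval answers "what happened?"
precedents = graph.retrieve("limit exception for a mid-tier client")

# Simulation answers "what if?"
futures = graph.simulate(
    state={"risk_appetite": "tight", "market": "volatile"},
    action="approve_exception",
)
```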
Think about what makes a 20-year veteran valuable. It's not different cognitive architecture—it's a better world model. They've seen enough situations to simulate outcomes.
The new employee follows the documented process. Asks "what does the policy say?" Makes decisions based on explicit rules. Gets surprised by outcomes that seem obvious to others.
The veteran knows how decisions actually unfold. Asks "how have similar situations played out?" Anticipates second-order effects. Sees patterns across seemingly unrelated events.
The experienced employee isn't doing retrieval. They're doing inference over an internal model of system behaviour. "If we approve this exception, three other teams will ask for the same thing next week." That's simulation, not memory.
Context graphs make this institutional wisdom explicit and available to AI systems—and to new employees who haven't had decades to build their own mental models.
We've built infrastructure for state and almost nothing for reasoning. Context graphs reconstruct the event clock—capturing not just what decisions were made, but why.
You can't predefine organisational ontology—it's too dynamic and context-dependent. Agent trajectories discover structure through problem-directed traversal. The schema emerges from use, not upfront specification.
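One way to see schema emerging from use, as a rough sketch (the system names and support threshold are illustrative): reduce each trajectory to the ordered systems it touched, then keep the hops that recur across problems. The surviving edges are the discovered structure.

```python
from collections import Counter

# Each trajectory reduced to the ordered systems the agent touched.
trajectories: list[list[str]] = [
    ["risk_engine", "policy_docs", "email"],
    ["crm", "risk_engine", "policy_docs"],
    ["risk_engine", "policy_docs", "approvals"],
]

def emergent_schema(traces: list[list[str]], min_support: int = 2) -> Counter:
    """Edges that recur across problem-directed traversals.

    No ontology is specified upfront; the edge types that clear the
    support threshold are the discovered structure.
    """
    edges: Counter = Counter()
    for trace in traces:
        for a, b in zip(trace, trace[1:]):
            edges[(a, b)] += 1
    return Counter({e: n for e, n in edges.items() if n >= min_support})

print(emergent_schema(trajectories))
# Counter({('risk_engine', 'policy_docs'): 3})
```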
Context graphs that accumulate enough structure become simulators. They encode organisational dynamics—decision patterns, state propagation, entity interactions. If you can ask "what if?" and get useful answers, you've built something real.
The path to transformative AI in risk management isn't just better models. It's better infrastructure for making deployed intelligence accumulate.
Organisations that build context graphs will have something qualitatively different: not agents that complete tasks, but organisational intelligence that compounds. Intelligence that simulates futures, not just retrieves pasts. That reasons from learned world models rather than starting from scratch every time.
The institutional memory that usually walks out the door when experienced staff retire? It becomes infrastructure. The reasoning behind past decisions? Available to inform future ones. That's the difference between a chatbot and a trusted risk advisor.
The context graph doesn't operate in isolation. It works within the Risk Taxonomy—the structured map of policies, processes, and controls that provides guardrails. The taxonomy is the map; the context graph captures how people actually navigate that map.
Explore Risk Taxonomy →