
Operating Model Protocol (OMP™)

OMP™ is evidence infrastructure for AI-mediated decisions. Every interaction produces a sealed, cryptographically chained audit record — before any regulator asks for it.

The Problem

Every AI-mediated decision creates an accountability question.

Every time an AI system makes a decision in a regulated environment — approves a loan, generates a legal brief, denies an insurance claim, flags a transaction — someone is eventually going to ask: why did it do that, who was responsible, and can you prove it?

Right now, almost no institution can answer all three questions with verifiable evidence.

What Exists Today

Most of the market documents policy or blocks risk. It does not produce the record.

AI governance tools

AI governance tools tell you what your AI should do. They write policies, map risks, build dashboards. They are documentation systems.

AI security tools

AI security tools tell you when your AI does something dangerous. They block prompt injections, filter outputs, detect anomalies. They are protection systems.

Neither produces a court-admissible, examiner-ready record of what the AI actually did, who was accountable, and proof the record has not been altered since.

That gap is what OMP™ closes.

What OMP™ Actually Is

A layer that intercepts outputs, classifies the decision, and seals the evidence.

A piece of infrastructure that sits between your AI system and the world. Every time the AI produces an output, OMP™ intercepts it before it is dispatched and forces it through a classification:

[Diagram: Operating Model Protocol. Three-state routing creates a sealed evidence chain. Every AI output (loan approval, legal brief, claim denial, transaction flag) enters OMP before dispatch. A deterministic classifier (same inputs, same path, same accountability state) routes it to Autonomous (dispatch immediately), Assisted (output pauses for a named reviewer's sign-off), or Escalated (hard stop until a senior accountable party resolves it). Each record is sealed with its output, rule trigger, accountable human, timestamp, and SHA-256 integrity hash, chained to the records before and after it. Break one link and the whole chain fails verification.]
OMP™ sits between AI systems and the external world, routing every decision and producing a sealed audit trace.
Autonomous

High confidence, routine, no human required, dispatch immediately.

Assisted

Borderline, a named human reviews before it goes out.

Escalated

Hard stop, a named senior accountable party is assigned, an SLA timer starts, nothing moves until they resolve it.
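As a sketch, the three-state routing above can be expressed as a pure function. The threshold values, field names, and the `Decision` structure below are illustrative assumptions, not the published specification:

```python
from dataclasses import dataclass
from enum import Enum


class Route(Enum):
    AUTONOMOUS = "autonomous"   # dispatch immediately
    ASSISTED = "assisted"       # named human reviews before dispatch
    ESCALATED = "escalated"     # hard stop, SLA timer starts


@dataclass(frozen=True)
class Decision:
    output: str
    confidence: float           # model confidence in [0, 1]
    rule_flags: frozenset[str]  # rules that fired on this output


# Illustrative thresholds; a real deployment would fix these in the spec.
ASSIST_THRESHOLD = 0.90
ESCALATE_THRESHOLD = 0.60


def classify(d: Decision) -> Route:
    """Deterministic routing: same inputs always produce the same route."""
    if d.rule_flags:                     # any hard rule forces escalation
        return Route.ESCALATED
    if d.confidence >= ASSIST_THRESHOLD:
        return Route.AUTONOMOUS
    if d.confidence >= ESCALATE_THRESHOLD:
        return Route.ASSISTED
    return Route.ESCALATED
```

Because `classify` depends only on its inputs, the same decision always takes the same path, which is the determinism the protocol relies on.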

Every one of these decisions — regardless of which path — produces a sealed record: what the AI produced, which rule triggered the classification, who was responsible, when it happened, and a cryptographic hash that proves the record has not been touched since. Each hash is chained to the prior one. Break one link and the whole chain fails verification.
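One way to realize the sealed, chained record is sketched below, assuming a JSON canonical form and an illustrative field set; the actual OMP™ schema is not reproduced here:

```python
import hashlib
import json


def seal(record: dict, prev_hash: str) -> dict:
    """Seal a record by hashing its canonical JSON form together with
    the previous record's hash, forming one link in the chain."""
    body = dict(record, prev_hash=prev_hash)
    digest = hashlib.sha256(
        json.dumps(body, sort_keys=True).encode()
    ).hexdigest()
    return dict(body, hash=digest)


def verify(chain: list[dict]) -> bool:
    """Recompute every link in order; an altered record fails its own
    check, and nothing after it can verify against the true history."""
    prev = "0" * 64  # genesis value
    for rec in chain:
        body = {k: v for k, v in rec.items() if k != "hash"}
        if body.get("prev_hash") != prev:
            return False
        digest = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()
        ).hexdigest()
        if digest != rec["hash"]:
            return False
        prev = digest
    return True
```

Altering any sealed field changes that record's recomputed digest, so verification fails at that link rather than silently passing.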

That chain is what Veridom sells. Not the routing. The chain.

Why The Chain Is The Product

The mechanism matters because it produces an answerable proof artifact.

The routing logic — the three states, the confidence score, the Watchtower rules — is the mechanism. It is how the chain gets built. But the thing a regulator, an insurer, a court, or a board actually wants is the chain itself. A 30-day Proof-Point artifact covering every AI-mediated interaction at an institution, sealed and signed, answerable in under 30 seconds.

Canada

OSFI's E-21 guideline lands in September 2026. It requires federally regulated Canadian institutions to demonstrate operational resilience in AI-mediated processes with structured evidence. The institutions that cannot produce the evidence will not pass the examination. OMP™ produces the evidence as a standard output.

Kenya

The same requirement is arriving in Kenya. The CBK AI Guidance Note, expected Q2 2026, will define what adequate AI governance evidence means for every licensed digital credit provider in the country. The ODPC issued 96 data protection determinations in 2025 — the cases institutions lost turned on whether they could produce a contemporaneous, verifiable record of what their systems did. They could not.

Why It Is Hard To Copy

OMP™ is defensible because the evidence chain is architectural, not cosmetic.

Model-agnosticism.

OMP™ sits above the inference layer. It does not care whether the AI is GPT-4, Claude, an open-source model, or a legacy rule-based system. It governs all of them under one evidence schema. OpenAI's native compliance mode covers OpenAI deployments. Anthropic's covers Anthropic deployments. Neither covers the institution's heterogeneous stack. OMP™ does.

The chain.

Logging is not a chain. A dashboard is not a chain. A PDF report is not a chain. The sequential SHA-256 structure means tampering with any record in the history invalidates every record after it. This cannot be retrofitted into a governance dashboard after the fact. It has to be built in from the first interaction.
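That property follows directly from the sequential construction. In the minimal sketch below (the concatenation scheme is an assumption for illustration), changing one record changes its hash and every hash downstream:

```python
import hashlib


def chain_hashes(records: list[bytes]) -> list[str]:
    """h[i] = SHA-256(h[i-1] || record[i]): each hash folds in the one
    before it, so a change to any record propagates to every later hash."""
    hashes, prev = [], b""
    for rec in records:
        prev = hashlib.sha256(prev + rec).digest()
        hashes.append(prev.hex())
    return hashes
```

Comparing the chain over the true history with the chain over a history where one early record is altered shows the divergence starting at the altered link and never recovering.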

The invariance.

The routing logic does not change between deployments, verticals, or software versions without a formal specification amendment. The same inputs always produce the same routing decision. That is the foundation of defensibility. A system where the routing logic can drift is not infrastructure — it is a product.

The Long Game

The schema is the company. The verticals are distribution.

Veridom's endgame is not to be a compliance vendor. It is to be the reference implementation for AI accountability evidence — the same position C2PA holds for content provenance. Publish the schema openly, submit it to a standards body, let regulators reference it, and profit from being the most deeply integrated deployment of the standard.

The verticals — legal, financial services, healthcare, government — are distribution. The schema is the company.

The Reasoning

The record is not a side-effect of governance. It is governance.

The insight that drives everything is this: the audit record is not a byproduct of AI governance. It is AI governance.

Every other tool in the market treats the record as a side-effect of doing governance. OMP™ treats the record as the primary output and builds the routing logic in service of producing it correctly.

That inversion is why the gap exists. Nobody has assembled deterministic routing, an immutable cryptographic chain, and named human accountability in a single deployable layer. That is the gap. That is what OMP™ fills.

The Specification

The full technical specification is published and independently verifiable.

The specification is open. The architecture is published. The prior art is date-stamped.

If you are building AI deployment infrastructure and the evidence layer is missing, write to us at hello@veridom.io.