The Decision Lineage Protocol
AI-native accountability infrastructure: a governed semantic layer that enables machines and humans to share context at scale while preserving clear authority boundaries and full audit trails.
"AI should interpret declared reality—it should not invent it. AI should assist orientation—it should not silently operate."
The Missing Layer
Every domain has requirements, frameworks, and compliance mandates. But none have the infrastructure to make decisions traceable, accountable, and machine-readable by default.
Decisions Disappear
Critical choices live in emails, meetings, and memory. When audit time comes, reconstruction is expensive and incomplete.
AI Conflation
AI recommendations get treated as decisions. Derived conclusions become canonical. Authority boundaries blur dangerously.
Context Collapse
Future decisions lack access to past rationale. Institutional memory degrades with each departure. Learning loops fail.
Three-Layer Separation
A novel architecture that separates meaning governance, execution governance, and interpretation governance into distinct layers.
Substrate
Governed semantic infrastructure. Defines meaning, not behavior. Stewarded, versioned, auditable.
- Upper Ontology
- Semantic Registry
- Evidence & Provenance
Command OS
Authority: ACTS. Owns organizational intent. Assigns accountability. Executes state changes.
- System of Record
- State Transitions
- Operational Reality
Advisory OS
Authority: ADVISES. Interprets public rules. Surfaces risk. Non-binding, non-certifying.
- Bounded Artifacts
- Scoped Claims
- Evidenced Outputs
Key Innovation: The explicit separation prevents the dangerous conflation of AI recommendations with authoritative decisions.
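A minimal sketch of how these authority boundaries might look in code (the type and field names here are illustrative assumptions, not the protocol specification): the Substrate defines stewarded terms, the Command OS issues binding decisions owned by an accountable actor, and the Advisory OS issues scoped, evidenced claims that can never become decisions on their own.

```typescript
// Illustrative only: names and shapes are assumptions, not the protocol spec.

// Substrate: governs meaning. A term is stewarded, versioned, and auditable.
interface SemanticTerm {
  id: string;          // e.g. "substrate:decision"
  definition: string;
  version: string;
  steward: string;     // who governs this meaning
}

// Command OS: authority ACTS. A decision is a binding state change
// with a named accountable owner.
interface Decision {
  kind: "decision";
  id: string;
  statement: string;
  accountableOwner: string;   // human or role that owns the outcome
  decidedAt: string;          // ISO timestamp
  evidence: string[];         // references into the provenance record
}

// Advisory OS: authority ADVISES. A claim is scoped, evidenced,
// and explicitly non-binding.
interface AdvisoryClaim {
  kind: "advisory";
  id: string;
  scope: string;              // what the claim is about, and nothing more
  claim: string;
  evidence: string[];
  binding: false;             // advisory outputs can never bind
}

// The only way an advisory claim reaches operational reality:
// an accountable human adopts it as a Decision. The claim itself
// is preserved in the lineage as evidence, not as the decision.
function adopt(claim: AdvisoryClaim, owner: string, now: Date): Decision {
  return {
    kind: "decision",
    id: `dec-${claim.id}`,
    statement: claim.claim,
    accountableOwner: owner,
    decidedAt: now.toISOString(),
    evidence: [claim.id, ...claim.evidence],
  };
}

// Example: an AI recommendation stays advisory until a person adopts it.
const recommendation: AdvisoryClaim = {
  kind: "advisory",
  id: "adv-042",
  scope: "vendor-selection/2025-Q3",
  claim: "Vendor B best satisfies the declared security requirements.",
  evidence: ["doc:sec-review-17"],
  binding: false,
};

const decision = adopt(recommendation, "jane.doe@acme.example", new Date());
console.log(decision.accountableOwner, "is accountable for", decision.id);
```

The only point of the sketch is the boundary: advisory artifacts are non-binding by construction, and the sole path into the system of record runs through a named accountable owner.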
The Four Primitives
Every accountability system is a projection of these four primitives into a specific domain.
Explore the Research Foundation
Six research domains validating ProtoLex against established standards: W3C PROV, NIST AI RMF, ISO 42001, and more.
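As one illustration of how decision lineage lines up with W3C PROV (a simplified, PROV-JSON-style sketch; the identifiers under the hypothetical "ex:" prefix are invented for this example and do not represent the protocol's normative mapping): the decision record is a prov:Entity, the deliberation that produced it is a prov:Activity, and the accountable owner and the advisory system are prov:Agents with different relations to the result.

```typescript
// Simplified, PROV-JSON-style sketch. Identifiers and attributes under the
// "ex:" prefix are hypothetical, for illustration only.
const provRecord = {
  prefix: { ex: "https://example.org/lineage#" },
  entity: {
    "ex:decision-dec-adv-042": { "ex:statement": "Adopt Vendor B." },
    "ex:claim-adv-042": { "ex:binding": "false" }, // advisory input, kept as evidence
  },
  activity: {
    "ex:deliberation-2025-q3": { "ex:scope": "vendor-selection/2025-Q3" },
  },
  agent: {
    "ex:jane.doe": { "prov:type": "prov:Person" },            // accountable owner
    "ex:advisory-os": { "prov:type": "prov:SoftwareAgent" },  // advisor, not decider
  },
  wasGeneratedBy: {
    _gen1: { "prov:entity": "ex:decision-dec-adv-042", "prov:activity": "ex:deliberation-2025-q3" },
  },
  used: {
    _use1: { "prov:activity": "ex:deliberation-2025-q3", "prov:entity": "ex:claim-adv-042" },
  },
  wasAttributedTo: {
    _attr1: { "prov:entity": "ex:decision-dec-adv-042", "prov:agent": "ex:jane.doe" },
  },
  wasAssociatedWith: {
    _assoc1: { "prov:activity": "ex:deliberation-2025-q3", "prov:agent": "ex:advisory-os" },
  },
};

console.log(JSON.stringify(provRecord, null, 2));
```

Attribution and association are distinct PROV relations, which mirrors the Command/Advisory split above: the decision is attributed to the accountable human, while the advisory system is only associated with the deliberation that used its claim as evidence.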