Interlink Bridge · Knowledge Archive
interlink-bridge.info · All Open Publications · DOI-Secured
26 Records · CC BY 4.0 · Zenodo
EU AI Act
Pre-Compliance Inspection Framework
Mar 3 · 2026
Technical Architecture Enforcement
Mar 3 · 2026
Runtime Responsibility Boundaries
Mar 3 · 2026
Core Frameworks
Execution-Bound Governance for AI Systems
Apr 6
IBOGS-1.0 Open Governance Standard
Mar 17
Structural Sovereignty
Mar 12
GCI-01 Governed Cognitive Interface
Mar 20
UGA-01 Unified Governance
Mar 12
Commit Boundary Manifesto
Mar 14
Probabilistic Cognition, Det. Safety
Mar 13
UCC Runtime Architecture
Mar 16
DIE-01 Delegation Integrity
Mar 3
Universal AI Competence Test
Mar 11
Software
go on Terminal · GCI-01
Mar 23
Multi-Agent Demonstrator
Mar 23
Pharma QC Demonstrator
Mar 20
LIAN Edge Demonstrators
Apr 3
Governance Fragments
Admissible Autonomous Systems
Mar 31
Fragment IV · Admissible Persistence
Mar 31
Fragment V · Reality Coupling
Mar 31
Fragment VI · Scope Integrity
Mar 31
Fragment VII · Delegation Chain
Mar 31
Doctrine & Vision
Structural Sovereignty Doctrine v1.0
Mar 18
LIAN Edge v0.1 Demonstrator
Mar 31
KI-2035 / KI-2060 Vision
Mar 18
ETL-01 Executive Translation Layer
Mar 27
26 Open Records · 3 EU AI Act · 10 Frameworks · 4 Software · 5 Fragments · 4 Doctrine & Vision
EU AI Act — Compliance Architecture
Pre-compliance inspection, diagnostic frameworks, and technical enforcement architectures aligned with EU AI Act Articles 9 · 12 · 14 · 22
3 records
Mar 3 · 2026
Technical Note
Runtime Responsibility Boundaries: A Missing Control Layer in Regulated Human-AI Systems
Current AI safety approaches focus on model capability, accuracy, and compliance artifacts. However, many real-world failures occur at runtime when systems continue to interact after responsibility becomes unclear. This work introduces runtime responsibility boundaries as a structural control layer — defining when systems must defer, halt, or transfer authority.
Art. 14 · Human Oversight · Runtime Control
10.5281/zenodo.18846514
Mar 3 · 2026
Standard
EU Artificial Intelligence Act: Pre-Compliance Inspection and Diagnostic Framework
A technology-agnostic inspection and diagnostic framework designed to make structural conformity with the EU AI Act inspectable before deployment. Professional Reference Edition 2026. Translates regulatory intent into structured diagnostic questions applicable across AI system types and risk categories.
Art. 9 · Risk Management · Art. 12 · Logging · Pre-Compliance
10.5281/zenodo.18847120
Mar 3 · 2026
Standard
Technical Architecture Enforcement Framework: Structural Conformity Layer for High-Consequence AI Systems
Alignment-oriented technical interpretation of EU AI Act requirements, translating regulatory intent into execution-layer architecture. Key contributions: 12 Structural Governance Primitives and Article-by-Article architectural mapping. Bridges the gap between regulatory compliance language and technical enforcement reality.
Art. 9 · 12 · 14 · 22 · 12 Primitives · High-Consequence
10.5281/zenodo.18847235
Core Frameworks — Interlink Bridge Architecture Stack
The foundational architectural frameworks defining structural governance, admissibility, delegation, and cognitive interface layers
10 records
Apr 6 · 2026
Preprint
Execution-Bound Governance for AI Systems
This work introduces a structural approach to AI governance based on admissibility rather than post-hoc validation. It argues that governability requires a constraint-first architecture in which an admissibility layer determines which transitions may exist at all, and points toward execution-bound enforcement where inadmissible paths never materialize at the execution interface.
Admissibility · Execution Boundary · Hardware-Coupled Direction
10.5281/zenodo.19440804
Mar 17 · 2026
Technical Note
Interlink Bridge Open Governance Standard v1.0 (IBOGS-1.0)
The Interlink Bridge Open Governance Standard defines a layered architectural specification for AI systems operating in regulated and sovereign environments. The core open standard of the Interlink Bridge stack — published as CC BY 4.0 for unrestricted use by governments, standards bodies, and industry.
Open Standard · CC BY 4.0 · Sovereign Environments · Layered Architecture
10.5281/zenodo.19070178
Mar 12 · 2026
Preprint
Structural Sovereignty: A Governance Architecture for High-Consequence Human–AI Systems
Presents Interlink Bridge — a conceptual governance architecture comprising ten interdependent frameworks for AI systems operating in high-consequence environments. Central thesis: governance must be a structural property of what the system can become, not a description of what it should do. The master theoretical paper of the Interlink Bridge series.
Master Framework · High-Consequence · 10 Frameworks
10.5281/zenodo.18988485
Mar 20 · 2026
Preprint
GCI-01: Governed Cognitive Interface — A Governed Execution Interface for Probabilistic Large Language Models
Modern AI workflows involve multiple LLMs in sequence. GCI-01 defines model admissibility: before any model is invoked, the governance layer checks whether that model class is authorized for the proposed transition. Five structural components: UGA-01, SCA-01, DIE-01, Routing Engine, Drift Detector. The model is not the system. The governance layer is the system.
Model Admissibility · MA-1 to MA-5 · Multi-Model
10.5281/zenodo.19138098
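The model-admissibility idea in this record can be pictured as a gate consulted before any model call is routed. A minimal illustrative sketch follows; the model names, the MA-class assignments, and the transition names are invented for illustration and are not taken from GCI-01:

```python
# Hypothetical model-admissibility gate: before a model is invoked, check
# whether its authority class (MA-1..MA-5 per the record above) is authorized
# for the proposed transition. All concrete mappings here are assumptions.
MODEL_CLASS = {
    "local-small": "MA-1",
    "frontier-llm": "MA-4",
}

AUTHORIZED = {
    "summarize": {"MA-1", "MA-2", "MA-3", "MA-4"},
    "approve_payment": set(),  # no model class may drive this transition
}

def admissible(model: str, transition: str) -> bool:
    """Return True only if the model's class is authorized for the transition."""
    return MODEL_CLASS.get(model) in AUTHORIZED.get(transition, set())
```

Under this reading, an unknown model or an unlisted transition is inadmissible by default, which matches the record's framing that capability alone is not authorization.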
Mar 12 · 2026
Preprint
UGA-01: Unified Governance Architecture — Constraint-First Runtime Design for Governable AI Systems
UGA-01 is the master reference for the complete Interlink Bridge governance stack. Defines System Ω = (A, T, δ, α, Δ, H) — admissible states, admissible transitions, execution function, authority constraint, drift boundary, halt condition. Three enforcement levels: E1 Structural · E2 Runtime · E3 Propagation.
Master Reference · System Ω · 3 Enforcement Levels
10.5281/zenodo.18980560
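The Ω = (A, T, δ, α, Δ, H) tuple named in this record can be read as a gate in front of every state transition, with the structural check (E1) preceding the runtime checks (E2). The sketch below is an illustrative reading only; the field names, check order, and error type are assumptions, not the UGA-01 specification:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class SystemOmega:
    """Toy reading of System Ω = (A, T, δ, α, Δ, H)."""
    A: set[str]                           # admissible states
    T: set[tuple[str, str]]               # admissible transitions (src, dst)
    delta: Callable[[str, str], str]      # execution function δ
    alpha: Callable[[str], bool]          # authority constraint α
    Delta: Callable[[str], bool]          # drift boundary Δ (True = within bounds)
    H: Callable[[str], bool]              # halt condition H (True = halt)
    state: str = "init"

    def step(self, target: str) -> str:
        # E1 Structural: the transition must exist in T and land in A at all.
        if (self.state, target) not in self.T or target not in self.A:
            raise PermissionError(f"inadmissible: {self.state} -> {target}")
        # E2 Runtime: halt, authority, and drift checks before execution.
        if self.H(self.state) or not self.alpha(target) or not self.Delta(target):
            raise PermissionError("runtime constraint violated")
        self.state = self.delta(self.state, target)
        return self.state
```

In this sketch a rejected transition raises before δ runs, so the system state is never touched by an inadmissible step.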
Mar 14 · 2026
Preprint
The Commit Boundary Manifesto: The Core Governance Problem of Modern AI Systems
Every AI system has a generation boundary. Almost none have a commit boundary — the point where a proposal becomes a real state transition. That missing boundary is the governance problem. This manifesto defines the HCB (Human Commit Boundary) as a structural requirement, not a UX choice.
HCB · Commit Boundary · Manifesto · Art. 14
10.5281/zenodo.19023078
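The generation/commit distinction in this manifesto entry can be sketched as two separate interfaces: a model may propose freely, but nothing becomes a state transition without passing the commit boundary. This is an illustrative reading under assumed names (`Proposal`, `generate`, `commit`), not the HCB specification:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Proposal:
    """Output of the generation boundary: a draft with no effect yet."""
    action: str

class CommitBoundary:
    """A proposal only becomes a real state transition via an explicit commit."""
    def __init__(self) -> None:
        self.committed: list[str] = []  # stands in for real-world state changes

    def generate(self, prompt: str) -> Proposal:
        # Generation boundary: a model may propose anything here.
        return Proposal(action=f"draft reply to: {prompt}")

    def commit(self, proposal: Proposal, human_confirmed: bool) -> bool:
        # Human Commit Boundary (HCB): structural requirement, not a UX checkbox.
        if not human_confirmed:
            return False
        self.committed.append(proposal.action)
        return True
```

The point of the sketch is that generation and commitment are different code paths: deleting the confirmation flag would require changing the architecture, not just the interface.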
Mar 13 · 2026
Standard
Probabilistic Cognition, Deterministic Safety: A Governance Architecture for AI Systems
Proposes a master architecture separating probabilistic AI cognition from deterministic authority resolution, admissibility validation, and state transition commitment. Central rule: "Reason probabilistically. Commit deterministically." The governance layer enforces determinism at the boundary where probability meets consequence.
Probabilistic / Deterministic Split · Authority Resolution
10.5281/zenodo.19005223
Mar 16 · 2026
Preprint
UCC Runtime Architecture v1.0 — A Personal Sovereignty Governed AI Interaction Model
Introduces a governance architecture for AI-assisted systems designed to preserve human sovereignty over consequence-bearing actions. Modern AI systems increasingly generate responses that carry real-world consequences without structural human confirmation. UCC defines the interaction model that prevents this.
Personal Sovereignty · Runtime Architecture
10.5281/zenodo.19054453
Mar 3 · 2026
Standard
DIE-01: Delegation Integrity Envelope — Constraint-First Delegation Architecture
Constraint-first delegation architecture for micro-authority surfaces including inbox agents, email triage, and attention delegation. Core principle: Delegation becomes defensible only when authority is bound below the delegation event, not after. Defines the structural envelope within which delegation may occur.
Delegation Architecture · Authority Binding
10.5281/zenodo.18847285
Mar 11 · 2026
Standard
Universal AI Competence Test — A Structural Reasoning and Architecture Awareness Benchmark
A structured diagnostic framework designed to evaluate deeper reasoning capabilities in AI systems. While many benchmarks focus on accuracy and speed, this test evaluates structural reasoning, authority awareness, admissibility understanding, and governance coherence — the capabilities that matter for governed deployment.
Benchmark · Structural Reasoning · Diagnostic
10.5281/zenodo.18964964
Software — Reference Implementations & Demonstrators
Live, browser-runnable implementations of Interlink Bridge governance primitives — pharma QC, multi-agent, personal interface, domain-adaptive governance
4 records
Mar 23 · 2026
Software
go on — Governed Cognitive Interface Terminal v1.0 · GCI-01 Reference Implementation
Browser-based governed cognitive interface implementing GCI-01 as a PWA. Features: Matrix Onboarding, Personal Sovereignty Layer, World Clock, multi-model routing (Claude · Gemini · GPT-4o · Grok · DeepSeek · Falcon · Mistral · Local), Governance Console, file upload. Free. No account required.
PSL · Personal Sovereignty · Multi-Model · Live PWA
10.5281/zenodo.19181560
Mar 23 · 2026
Software
Interlink Bridge — Multi-Agent Admissibility Demonstrator · GCI-01 in Agentic Systems
Interactive HTML demonstrator implementing GCI-01 admissibility governance in a three-agent content workflow. Six scenarios including standard pipeline, authority class violation, semantic drift recovery, delegation chain break, IP conflict escalation, and ungoverned pipeline comparison. Central thesis: Capability is not authorization.
Multi-Agent · MLISS · DAP · HCB · 6 Scenarios
10.5281/zenodo.19181601
Mar 20 · 2026
Software Docs
Pharmaceutical QC Admissibility Demonstrator — MLISS · STAB · DAP · CAR
Interactive HTML demonstrator implementing core governance primitives in a pharmaceutical Quality Control context. Six scenarios across a simulated batch release workflow. MLISS admissibility conditions, STAB drift detection with rollback (not halt), DAP delegation chains, CAR authority register. GMP audit log export.
Pharma · GMP · MLISS · STAB · DAP · CAR · EU AI Act Art. 9 · 12 · 22
10.5281/zenodo.19132690
Apr 3 · 2026
Software
LIAN Edge — Domain-Adaptive Structural Governance Demonstrators: Core · Clinical · Military
Three domain-adaptive governance demonstrators: Core (5 scenarios including replay attack), Clinical/Hospital (6 scenarios including emergency protocol and AI prescribing blocked), Military UAV (4 scenarios including No-Strike Zone and auto-engage blocked). Admissibility as execution constraint across high-consequence domains.
LIAN · PoA · HCB · Clinical · Military · Core · Replay Guard
10.5281/zenodo.19396852
Structural Governance Fragments — Extended Theory Series
Seven philosophical-architectural fragments extending the Structural Sovereignty doctrine into persistence, reality coupling, scope integrity, and delegation chain theory
5 records
Mar 31 · 2026
Preprint
Admissible Autonomous Systems — A Structural Governance Architecture for Execution-Bound AI Systems
Introduces a structural governance architecture for AI systems based on admissibility. Current approaches rely on monitoring, validation, alignment, audit. These operate after execution. Admissibility operates before. This paper defines what it means for an autonomous system to be execution-bound by structure rather than by policy.
Admissibility · Autonomous Systems · Pre-Execution
10.5281/zenodo.19351903
Mar 31 · 2026
Report
Admissible Persistence — Fragment IV: Continuation as a Condition of Existence, Not a Property of Time
Extends structural governance from admissibility of action to admissibility of continuation. A system does not simply persist through time — it must continuously satisfy structural conditions for its own existence to remain valid. Continuation is not default. It is conditional.
Fragment IV · Persistence · Continuation
10.5281/zenodo.19350255
Mar 31 · 2026
Preprint
Reality Coupling — Fragment V: Maintaining Admissibility Without Dependence on Interpretation
Addresses the structural relationship between admissibility and reality over time. A common critique: a system may operate within formal admissibility while drifting from the reality its conditions were meant to govern. Fragment V defines reality coupling as a structural requirement — admissibility must track the world it governs.
Fragment V · Reality Coupling · Drift
10.5281/zenodo.19351117
Mar 31 · 2026
Preprint
Scope Integrity — Fragment VI: Defining the Conditions Under Which Conditions Are Valid
Addresses a foundational question: who defines the conditions under which a system is allowed to operate? Previous fragments establish that actions must be admissible and conditions must track reality. Fragment VI asks: what constrains the definition of admissibility itself? Scope must be bounded or governance is self-referential.
Fragment VI · Scope Integrity · Meta-Governance
10.5281/zenodo.19351411
Mar 31 · 2026
Preprint
Delegation Chain Integrity — Fragment VII: Maintaining Authority Continuity Across Distributed Execution
Addresses the structural integrity of delegation in governed systems. Conditions must be defined within admissible scope. Execution must be structurally bound. Fragment VII asks: when authority is delegated across system boundaries, does the governance chain remain intact? Defines structural requirements for authority continuity.
Fragment VII · Delegation Chain · DAP
10.5281/zenodo.19351747
Doctrine — Canonical Architectural Statements
Working papers and canonical doctrine documents establishing the foundational principles of the Interlink Bridge governance stack
2 records
Mar 18 · 2026
Working Paper
Structural Sovereignty Doctrine v1.0 — Architecture Precedes Policy
Defines a runtime-first framework for governable AI systems. Central claim: Architecture precedes policy. If authority, limits, and stop conditions are not technically encoded into the system from the start, policy alone cannot make a system governable. The canonical doctrine statement of the Interlink Bridge architecture series.
Doctrine · Canonical · Runtime-First · Architecture Precedes Policy
10.5281/zenodo.19085045
Mar 31 · 2026
Software Docs
LIAN Edge v0.1 — Live Governance Demonstrator for Pre-Runtime Admissibility
Live demonstrator for pre-runtime admissibility enforcement. Shows one decisive claim: inadmissible transitions never become executable paths. Request → structural check → either path exists or it does not. Five scenarios including presence tokens, commit tokens, and replay attack prevention.
LIAN · Pre-Runtime · Token Engine · Replay Prevention
10.5281/zenodo.19344983
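The record's central claim, that inadmissible transitions never become executable paths, contrasts with runtime filtering, where a forbidden action exists and is vetoed. One way to picture the difference is a dispatch table built only from the admissible set. This is my interpretation of the claim, not LIAN's implementation, and all action names are invented:

```python
# Pre-runtime admissibility as path construction: the dispatch table is built
# from the admissible set only, so an inadmissible request has no code path
# to reach. There is nothing to intercept; the path was never constructed.
ADMISSIBLE = {"read_vitals", "draft_note"}

def build_dispatch(admissible: set[str]) -> dict:
    handlers = {
        "read_vitals": lambda: "vitals",
        "draft_note": lambda: "note",
        "auto_prescribe": lambda: "rx",  # capability exists, path does not
    }
    return {name: fn for name, fn in handlers.items() if name in admissible}

DISPATCH = build_dispatch(ADMISSIBLE)

def request(action: str):
    fn = DISPATCH.get(action)
    if fn is None:
        return None  # the path does not exist; no runtime veto needed
    return fn()
```

This mirrors the record's "Request → structural check → either path exists or it does not" pipeline: rejection is an absence, not an intervention.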
Vision & Long-Horizon Architecture
Long-term architectural vision for ambient intelligence, room-neutral AI presence, and executive translation between governance theory and organizational practice
2 records
Mar 18 · 2026
Working Paper
Interlink Bridge Vision Architecture: KI-2035 / KI-2060 — Ambient Intelligence · Room-Neutral Presence · Long-Horizon Coexistence
Describes the long-horizon design philosophy underlying the Interlink Bridge Architecture: what AI presence in the world should feel like when it is done right. Not a product specification — a design philosophy. Intelligence that is present rather than accessed. Distributed across rooms, not locked in a device. Stable across time, not stateless per session.
KI-2035 · KI-2060 · Ambient Intelligence · Long-Horizon
10.5281/zenodo.19092677
Mar 27 · 2026
Preprint
ETL-01 — Executive Translation Layer: Turning Assigned Accountability into Real-Time Authority
Most AI governance fails before architecture even begins — not because systems are too complex, but because leadership cannot act when it matters. ETL-01 bridges assigned accountability and real-time authority. The missing layer between governance theory and organizational execution. One-liner: Governance doesn't fail because we lack rules. It fails because responsibility cannot be exercised when systems are live.
ETL-01 · Executive Translation · Real-Time Authority
10.5281/zenodo.19242685