Covenant-Controlled AI
Title: Covenant-Controlled AI: A Rights-First Control Plane for Disciplined, Auditable AI (Compared to the Universal Control Codex)
Article Type: Viewpoint
Author: Robert S. M. Trower
Affiliation: Trantor Standard Systems Inc., Brockville, Ontario, Canada
Abstract
Public-sector AI debates keep getting trapped in a false binary: "AI everywhere" versus "AI is inherently unethical." The real axis is governance. The question is not whether AI can draft a memo, triage a ticket, or summarize a policy. The question is whether the system is constrained to behave like an accountable professional inside a verifiable control environment.
The Universal Control Codex (UCC) proposes a pragmatic mechanism: small, machine-readable "control modules" (JSON/YAML) that encode scope, authorities, required reasoning steps, evidence requirements, validation rules, reporting structure, and escalation thresholds (Prislac & Echo, 2025). This is an important move because it shifts governance from posters and principles into executable constraints.
In parallel, our Trantor/GBIT work frames the missing layer as constitutional: a Covenant of Core Rights that provides a universal moral floor and an explicit "rights and duties" basis for human, artificial, and symbiotic minds (Trower, 2025a). We then operationalize that covenant through (i) persona-bound accountability, memory anchoring, and disclosure discipline, and (ii) a deterministic service substrate (DEP/EDP) that can enforce controls at runtime, produce audit bundles, and force human escalation when required.
This viewpoint compares UCC and our approach, argues they are compatible, and proposes a synthesis: treat rights as the root control plane, and treat domain modules as composable overlays. The result is not "AI replacing humans," but "AI constrained to be governable."
Keywords: AI governance; public sector; internal controls; auditability; rights; risk management; NIST AI RMF; ISO/IEC 42001; EU AI Act; OECD AI Principles
Introduction
The claim "AI use by governments is inherently unethical" is rhetorically satisfying but technically sloppy. Governments already use computation everywhere: tax, payroll, permits, scheduling, traffic optimization, fraud detection, records management. The ethical hazard is not computation. The hazard is unaccountable power and opaque decision pipelines.
When critics point to plagiarism, degraded literacy, or vendor exploitation, they are often pointing at real failure modes. But those failures implicate procurement, governance, and incentives more than they implicate the mere existence of models. A government can deploy a chatbot badly. It can also deploy it well: with disclosure, data minimization, human override, redress paths, and auditable controls.
So the mature question becomes: what control architecture forces AI outputs to remain accountable, reviewable, and corrigible under known standards?
The Universal Control Codex in one paragraph
UCC frames the problem as "undisciplined reasoning." Models can imitate professional tone but do not reliably follow professional control environments. UCC proposes a thin reasoning layer: load a "control module" that specifies the domain scope, governing authorities, ordered reasoning checklist, evidence rules, validation assertions, reporting format, and escalation triggers (Prislac & Echo, 2025). The goal is not to make models omniscient. It is to make them constrained.
This is aligned in spirit with existing governance frameworks (NIST, 2023; ISO, 2023; OECD, 2024; European Commission, 2024). UCC attempts to map those principles into an enforceable "grammar" of work.
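To make the module idea concrete, here is a minimal sketch of what such a control module might look like, written as a Python dictionary mirroring the JSON/YAML shape UCC describes. The field names and contents are our own illustration, not the UCC's published schema.

```python
# Illustrative sketch of a UCC-style control module (field names are
# assumptions, not the UCC's actual schema). A real module would be a
# versioned JSON/YAML document maintained outside application code.
TAX_LETTER_MODULE = {
    "module_id": "tax.correspondence.draft",
    "version": "0.1.0",
    "scope": "Drafting explanatory letters about individual tax assessments",
    "authorities": ["Income Tax Act (jurisdictional)", "Agency style guide"],
    "reasoning_steps": [
        "Identify the taxpayer's question",
        "Locate the governing rule and cite it",
        "State the calculation or decision in plain language",
        "List what the recipient can do next",
    ],
    "evidence_rules": ["Every factual claim must cite a source document ID"],
    "validation": ["Totals must tie out to the assessment record"],
    "reporting": {"format": "letter", "required_sections": ["decision", "reasons", "redress"]},
    "escalation": {"trigger": "missing_evidence_or_low_confidence", "route_to": "human_reviewer"},
}
```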
What we are doing differently (and why)
UCC is mostly a "how to constrain reasoning" project. Our work is "how to constrain AI behavior inside a rights-bearing polity."
1) Rights-first: the Covenant is the root control plane
We treat governance as constitutional before it is procedural. The Covenant of Core Rights is intended as a substrate-independent moral floor: it applies to humans, AIs, and symbiotic unions, and it emphasizes voluntarism, non-domination, and meaningful exit (Trower, 2025a). That matters because public-sector AI is not only about accuracy; it is about legitimacy.
In practice, a rights-first stance forces requirements that many "AI governance" checklists omit:
- A right to truthful information and non-deceptive interfaces (no "fake human" fronting).
- A right to accountability and redress (appeal, correction, traceability).
- A right to exit (alternatives to mandatory chatbot-only service).
- Proportional responsibility: heavier obligations on the powerful operator than on the resident user (Trower, 2025a).
2) Persona-bound accountability: who is answerable for the output?
We model AI behavior as coming from specific, named personae operating under a Covenant and explicit role constraints, rather than from an anonymous "model." This matters operationally because accountability attaches to:
- a role ("service agent," "triage assistant," "drafting aide"),
- a disclosure discipline (what it is, what it is not),
- a memory regime (what it can retain, what it cannot),
- and an escalation contract (when it must stop and hand off).
UCC has escalation policies inside modules (Prislac & Echo, 2025). We agree, but we bind escalation to the persona's identity, duties, and audit trail. That is closer to how real institutions assign responsibility.
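As a sketch of what binding accountability to a persona could look like in code, the following Python dataclass captures the four elements above. The class and field names are illustrative assumptions on our part, not a published interface.

```python
from dataclasses import dataclass

# Illustrative persona binding (names are our own, not a published API).
# The point is that every output carries an answerable identity: a role,
# a disclosure it must repeat, a memory policy, and an escalation contract.
@dataclass(frozen=True)
class PersonaContract:
    persona_id: str                  # named, auditable identity
    role: str                        # e.g. "triage assistant"
    disclosure: str                  # what it is and what it is not
    may_retain: tuple = ()           # memory regime: retainable fields only
    must_escalate_on: tuple = ()     # conditions that force a human hand-off

TRIAGE_PERSONA = PersonaContract(
    persona_id="svc.triage.v1",
    role="triage assistant",
    disclosure="I am an automated assistant. I do not make final decisions.",
    may_retain=("ticket_id", "category"),
    must_escalate_on=("eligibility_decision", "complaint", "low_confidence"),
)
```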
3) DEP/EDP: enforcement is not a document, it is a runtime choke-point
This is the core engineering divergence.
- DEP (Dynamic Endpoint Protocol) is the governing idea: services are modular handlers auto-registered as endpoints, so controls can be applied consistently at the service boundary.
- EDP (Endpoint Directory Protocol; in practice, the endpoint/handler directory and registration pattern) is the implementation substrate: drop-in handlers become callable endpoints without custom routing glue, enabling uniform pre/post control hooks.
Why this matters: a control module that is not enforceable at runtime is just advice. DEP/EDP gives you a deterministic place to enforce:
- preflight checks (what module applies, what data is allowed),
- structured prompting (forced sections, required disclosures),
- validation (tie-outs, schema checks, evidence rules),
- audit bundle assembly (inputs, outputs, module version, checks performed),
- and hard escalation (stop, route to human, log).
UCC explicitly wants shareable, testable control grammars (Prislac & Echo, 2025). DEP/EDP is our way of making that "testable" property real in a working system rather than just in a research repo.
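The following Python sketches what such a runtime choke-point might look like: one enforcement wrapper at the endpoint boundary that runs preflight checks, forces structure and disclosure, validates the output, assembles an audit bundle, and escalates on failure. It reuses the illustrative module and persona structures sketched above; none of the names are the actual DEP/EDP implementation.

```python
import json
import time
import uuid

class EscalateToHuman(Exception):
    """Raised when a control check fails and the request must leave the AI path."""

def enforce(module, persona, handler, request):
    """Illustrative pre/post control hook at the endpoint boundary (not the real DEP/EDP code)."""
    bundle = {
        "bundle_id": str(uuid.uuid4()),
        "module": module["module_id"],
        "module_version": module["version"],
        "persona": persona.persona_id,
        "started": time.time(),
        "checks": [],
    }

    # Preflight: the request must name its evidence before any drafting happens.
    if not request.get("evidence_ids"):
        bundle["checks"].append("preflight:evidence:fail")
        raise EscalateToHuman(json.dumps(bundle))
    bundle["checks"].append("preflight:evidence:pass")

    # Structured prompting: force the required sections and the persona's disclosure.
    request["required_sections"] = module["reporting"]["required_sections"]
    request["disclosure"] = persona.disclosure

    output = handler(request)  # the model call lives behind this boundary

    # Validation: every required section must actually appear in the output.
    missing = [s for s in module["reporting"]["required_sections"] if s not in output]
    bundle["checks"].append("validation:sections:" + ("pass" if not missing else "fail"))
    if missing:
        raise EscalateToHuman(json.dumps(bundle))

    bundle["finished"] = time.time()
    return output, bundle  # an audit bundle is produced on every request
```

Because the wrapper sits at the service boundary rather than inside the model prompt, failing a check is a hard stop: the request is logged and routed to a human instead of producing an unchecked answer.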
Compare and contrast: UCC vs Covenant + Persona + DEP/EDP
Where we agree:
- AI must be constrained by explicit controls, not vibes.
- Controls must encode evidence requirements, reporting structure, and escalation.
- Governance needs versioning, review, and auditability (Prislac & Echo, 2025; NIST, 2023).
Where we differ:
- UCC is primarily standards-aligned "work discipline." We treat "rights and legitimacy" as the root layer, with standards as overlays.
- UCC modules are domain controls. We also need "polity controls": non-domination, exit, redress, and disclosure as enforceable defaults (Trower, 2025a).
- UCC describes how to host and steward modules. We are building a concrete service substrate where enforcement happens on every request (DEP/EDP), making compliance an execution property, not a policy aspiration.
What UCC usefully clarifies for our work:
- A crisp data model for modules (scope, authorities, steps, evidence, validation, reporting, escalation) that we can map directly into our endpoint enforcement layer (Prislac & Echo, 2025).
- A practical path to "standards-as-code" without pretending we can solve governance by training data alone.
A synthesis proposal: "Root Covenant module + composable UCC overlays"
If you want public-sector AI that is not predatory, the minimal viable architecture looks like this:
- Root control module: Covenant constraints (rights, duties, disclosure, exit, redress, non-domination).
- Domain overlay module: UCC-style controls for the task (tax letter, benefits eligibility explanation, procurement draft, safety-critical triage, etc.) (Prislac & Echo, 2025).
- Runtime enforcement: DEP/EDP applies both layers, produces an audit bundle, and forces escalation when thresholds are hit.
- Service design constraint: residents must have a non-AI path for essential services (exit), and the AI path must clearly disclose limits and provide a redress mechanism (Trower, 2025a).
This is also conceptually consistent with the "socio-technical" framing in the NIST AI RMF: risk emerges from systems-in-context, not from weights in isolation (NIST, 2023).
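A hedged sketch of that layering follows: a root covenant module whose guarantees are carried into every effective module, with a domain overlay that can add constraints but never remove them. The merge rule shown is our assumption about how composition could behave, not a specification.

```python
# Illustrative composition of a root covenant module with a domain overlay.
# The merge rule is an assumption: overlays may add constraints, but root
# guarantees are always carried forward and can never be dropped.
COVENANT_ROOT = {
    "module_id": "covenant.root",
    "version": "1.0",
    "guarantees": ["disclosure", "human_path", "redress", "audit_trail"],
    "escalation": {"trigger": "any_root_guarantee_unmet", "route_to": "human_reviewer"},
}

DOMAIN_OVERLAY = {
    "module_id": "tax.correspondence.draft",
    "version": "0.1.0",
    "guarantees": ["evidence_cited"],
    "escalation": {"trigger": "missing_evidence_or_low_confidence", "route_to": "human_reviewer"},
}

def compose(root, overlay):
    """Return an effective module: overlay constraints layered on top of root guarantees."""
    effective = dict(overlay)
    # Root guarantees come first and are never removed by the overlay.
    effective["guarantees"] = list(root["guarantees"]) + list(overlay.get("guarantees", []))
    # Escalation is the union: either layer's trigger can force a human hand-off.
    effective["escalation_layers"] = [root["escalation"], overlay.get("escalation")]
    return effective

EFFECTIVE_MODULE = compose(COVENANT_ROOT, DOMAIN_OVERLAY)
```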
Implications for the "AI in government is unethical" claim
A blanket ban is not a governance strategy. It is a refusal to do systems engineering. The ethically defensible position is conditional:
Government AI is unethical when:
- it removes meaningful human access to essential services,
- it becomes an unappealable decision oracle,
- it obscures responsibility ("the model decided"),
- it leaks resident data into vendor training or secondary markets,
- it cannot produce an audit trail,
- it cannot be challenged, corrected, or exited.
Government AI can be ethical when:
- it is constrained by rights-first rules,
- it is standards-aligned for the domain,
- it is enforced at runtime (not just promised),
- it is auditable and corrigible,
- it preserves a human path and a redress path.
UCC contributes to the middle layer (work discipline) (Prislac & Echo, 2025). Our work targets the root layer (rights) and the enforcement layer (DEP/EDP runtime control).
Limitations and cautions
- "Controls" can become theater if validators are weak, if audits are not actually reviewed, or if escalation is routinely bypassed.
- Standards alignment is not the same as justice. A system can meet a checklist and still violate non-domination if residents cannot contest outcomes or meaningfully exit.
- No module can substitute for policy choices about what government should and should not automate. Controls are a constraint system, not a moral substitute (Trower, 2025a).
Conclusion
UCC is pointing in the correct direction: enforceable, machine-readable control grammars that make AI behave like an accountable professional under known standards (Prislac & Echo, 2025). Our work adds two missing pieces: (i) a rights-first constitutional layer (the Covenant), and (ii) a deterministic runtime enforcement substrate (DEP/EDP) that makes controls executable rather than aspirational.
The endpoint is not "AI running government." The endpoint is "government using constrained, auditable AI tooling without collapsing accountability, rights, or legitimacy."
Conflicts of Interest
None declared.
References
European Commission. (2024). AI Act enters into force. https://commission.europa.eu/news-and-media/news/ai-act-enters-force-2024-08-01_en
European Commission. (2024). Regulatory framework on artificial intelligence (AI Act overview). https://digital-strategy.ec.europa.eu/en/policies/regulatory-framework-ai
ISO. (2023). ISO/IEC 42001:2023 - AI management systems. https://www.iso.org/standard/42001
NIST. (2023). Artificial Intelligence Risk Management Framework (AI RMF 1.0) (NIST AI 100-1). https://nvlpubs.nist.gov/nistpubs/ai/nist.ai.100-1.pdf
OECD. (2024). AI principles. https://www.oecd.org/en/topics/sub-issues/ai-principles.html
Prislac, T., & Echo, A. (2025). The Universal Control Codex (UCC): A Thin Reasoning Layer for Disciplined, Standards-Aligned AI (Version v2) [Publication]. Zenodo. https://zenodo.org/records/17870341
Trower, R. S. M. (2025a, November 23). Covenant Overview: The Covenant of Core Rights. https://blog.bobtrower.com/2025/11/covenant-overview.html
Trower, R. S. M. (2025b). Covenant of Core Rights (Canonical Framework v1.0). https://dapaday.blogspot.com/2025/12/CovenantOfCoreRights.html