Governance Infrastructure for Autonomous Decisions

AI agents make thousands of decisions with no record of why. AEGIS evaluates proposals through six quantitative gates — returning structured decisions with hash-chained audit trails and compliance-aligned documentation.


The Problem

AI agents and engineering teams make high-stakes decisions every day — deployments, architecture changes, resource allocations. But there is no standardized way to evaluate those decisions, and no audit trail to prove they were sound.

| Challenge | AEGIS Solution |
| --- | --- |
| No paper trail | Every evaluation produces a decision_id, timestamp, gate results, rationale, and a hash-chained audit entry. |
| No standard for "good" | Six quantitative gates evaluate risk, profit, novelty, complexity, quality, and utility against calibrated thresholds. |
| Compliance can't audit | Hash-chained audit logs, NIST AI RMF artifacts, and an EU AI Act Annex IV technical file are included. |
| Governance slows teams down | One API call. Five integration methods. Zero runtime dependencies in the core SDK. |

How It Works

Every proposal is evaluated against six quantitative gates. The result is a structured decision — PROCEED, PAUSE, HALT, or ESCALATE — with confidence scores, rationale, and next steps. Every evaluation is hash-chained.

```
Proposal → [Risk] [Profit] [Novelty] [Complexity] [Quality] [Utility] → Decision
                                                                           ↓
                    PROCEED / PAUSE / HALT / ESCALATE
         + confidence + rationale + next steps + hash-chained audit entry
```
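The all-gates-must-pass flow above can be sketched in a few lines. This is a minimal illustration, not the actual SDK: the gate names and decision statuses come from this page, but the mapping from gate failures to PAUSE/HALT/ESCALATE is an assumed toy policy, and scores are assumed to be normalized so that higher is better.

```python
from enum import Enum

class DecisionStatus(Enum):
    PROCEED = "PROCEED"
    PAUSE = "PAUSE"
    HALT = "HALT"
    ESCALATE = "ESCALATE"

# The six gates named on this page.
GATES = ("risk", "profit", "novelty", "complexity", "quality", "utility")

def decide(scores: dict, thresholds: dict) -> DecisionStatus:
    """Toy all-gates-must-pass rule: every gate score must meet its threshold.

    Assumed policy (illustrative only): a risk-gate failure is a hard stop,
    a single marginal gate pauses for review, multiple failures escalate.
    """
    failures = [g for g in GATES if scores[g] < thresholds[g]]
    if not failures:
        return DecisionStatus.PROCEED
    if "risk" in failures:
        return DecisionStatus.HALT
    if len(failures) == 1:
        return DecisionStatus.PAUSE
    return DecisionStatus.ESCALATE
```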

How AEGIS Is Different

Most governance tools either filter agent actions in real time or manage compliance through checklists. AEGIS does neither: it evaluates the engineering decision itself through six quantitative gates.

| Approach | What It Does | Blind Spot |
| --- | --- | --- |
| Runtime agent guardrails | Monitors agent behavior; gates capabilities via trust scores and circuit breakers | A trusted agent can still produce a bad proposal |
| Compliance dashboards | Policy management, checklists, binary pass/fail risk scoring | No quantitative rigor: a checklist can't evaluate mathematical risk |
| Quantitative decision governance | Bayesian posteriors, utility theory, KL divergence drift detection across six gates | Evaluates decisions, not runtime behavior; complementary to the other approaches |
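The KL divergence mentioned above measures how far the current distribution of decisions has drifted from a baseline. A minimal sketch, assuming discrete distributions over aligned outcome bins; the `drift_detected` helper and its 0.1 threshold are illustrative, not AEGIS's actual API or calibration:

```python
import math

def kl_divergence(p, q, eps=1e-12):
    """KL(P || Q) for discrete distributions given as aligned probability lists.

    A small epsilon guards against log(0) on empty bins.
    """
    return sum(pi * math.log((pi + eps) / (qi + eps)) for pi, qi in zip(p, q))

def drift_detected(baseline, current, threshold=0.1):
    """Flag drift when the current distribution diverges from baseline past a threshold."""
    return kl_divergence(current, baseline) > threshold
```

For example, a baseline where PROCEED and PAUSE each occur half the time, against a current window that is 90% PROCEED, yields a divergence of about 0.37 nats and would trip the illustrative threshold.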

Quick Example

```bash
# Install and evaluate in 30 seconds
$ pip install aegis-governance
$ export AEGIS_API_KEY="your-key"
```

```python
# Python SDK
from aegis_governance import AegisConfig, PCWContext, pcw_decide

decision = pcw_decide(
    PCWContext(
        agent_id="my-agent",
        proposal_summary="Add Redis caching layer",
        estimated_impact="medium",
        risk_proposed=0.15,
        complexity_score=0.7,
    )
)
# → DecisionStatus.PROCEED (confidence: 0.87)
# → decision_id: aegis-2026-03-24-a7f3...
# → audit hash: sha256:e3b0c44298fc...
```

Integration Options

Key Features

Six Quantitative Gates
Risk, profit, novelty, complexity, quality, utility. Not heuristic scores: Bayesian posteriors, logistic functions, and hard-floor enforcement against calibrated thresholds. All six must pass.
Hash-Chained Audit Trails
Tamper-evident decision logs. Every evaluation gets a unique ID, timestamp, gate results, rationale, and a SHA-256 hash-chain entry your auditor can verify independently.
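The independent verification step can be sketched with nothing but the standard library. This is a generic SHA-256 hash-chain pattern, not AEGIS's actual log format: the genesis value, JSON canonicalization, and entry layout are assumptions.

```python
import hashlib
import json

def chain_entry(prev_hash: str, record: dict) -> str:
    """Hash of the previous entry's hash concatenated with the canonical record."""
    payload = prev_hash + json.dumps(record, sort_keys=True)
    return hashlib.sha256(payload.encode()).hexdigest()

def verify_chain(entries):
    """Recompute each link; tampering with any record breaks every hash after it."""
    prev = "0" * 64  # assumed genesis value
    for record, stored_hash in entries:
        if chain_entry(prev, record) != stored_hash:
            return False
        prev = stored_hash
    return True
```

Because each entry's hash covers the previous entry's hash, an auditor who recomputes the chain from the genesis value can detect any retroactive edit without trusting the system that wrote the log.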
Enterprise Security
Hybrid signatures combine classical Ed25519 with post-quantum ML-DSA-44 (FIPS 204), and encryption uses ML-KEM-768 (FIPS 203): the first governance platform with quantum-resistant audit trails. HSM/KMS key management. Shadow mode for risk-free rollout. Drift detection for continuous monitoring.
Compliance-Aligned
Artifacts aligned with NIST AI RMF 1.0, EU AI Act Annex IV, ISO 42001, SOC 2, and FedRAMP High. Runbook templates accelerate your governance program.
AI-Agent Native
MCP server for Claude Code and Cursor. GitHub Action for PR governance gates. Built for the agents that actually make decisions.
Configurable Thresholds
YAML-driven parameter management with version-controlled schemas. Domain templates for CI/CD, finance, healthcare, and infrastructure.
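Putting the gate math and the configurable thresholds together: below is a minimal sketch of one logistic gate with hard-floor enforcement. The parameter names and values mirror what a version-controlled YAML file might contain, shown here as an equivalent Python dict; none of this is AEGIS's actual schema or calibration.

```python
import math

# Illustrative gate parameters, as they might appear in a YAML threshold file.
RISK_GATE = {"midpoint": 0.3, "steepness": 12.0, "hard_floor": 0.6, "pass_score": 0.5}

def logistic(x: float, midpoint: float, steepness: float) -> float:
    """Map a raw risk metric to (0, 1); lower raw risk yields a higher score."""
    return 1.0 / (1.0 + math.exp(steepness * (x - midpoint)))

def risk_gate_passes(risk_proposed: float, cfg=RISK_GATE) -> bool:
    """Hard floor first: risk above the floor fails outright, whatever the score."""
    if risk_proposed > cfg["hard_floor"]:
        return False
    return logistic(risk_proposed, cfg["midpoint"], cfg["steepness"]) >= cfg["pass_score"]
```

With these illustrative parameters, the `risk_proposed=0.15` from the quick example scores about 0.86 and passes, while a risk of 0.7 fails the hard floor before any scoring happens.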

Built for Regulated Environments

AEGIS includes compliance artifacts and runbook templates aligned with these frameworks. Alignment artifacts, not certifications. See compliance maturity →

NIST AI RMF 1.0
Govern, Map, Measure, Manage artifacts and CI validation
EU AI Act
Annex IV technical file, Article 14 human oversight, Article 72 monitoring
ISO 42001
AIMS policy and PDCA cycle framework
SOC 2
Runbook templates aligned to trust criteria
FedRAMP High
BCP/DRP, incident response, and access review runbooks