AEGISdocs

Governance Infrastructure for Autonomous Decisions

Standardize engineering decisions. Create auditable proof. Let AI agents operate with confidence.

AEGIS is quantitative decision governance for engineering teams: it evaluates every proposal through six mathematical gates, not runtime guardrails or compliance checklists, and returns a structured PROCEED / PAUSE / ESCALATE / HALT decision with a hash-chained audit trail and compliance-aligned documentation.



The Problem

AI agents and engineering teams make high-stakes decisions every day — deployments, architecture changes, resource allocations. But there is no standardized way to evaluate those decisions, and no audit trail to prove they were sound.

  • No paper trail — AI agents make decisions with no record of what was evaluated, why, or by whom
  • No standard for "good" — every team has different criteria, tribal knowledge, and gut-feel thresholds
  • Compliance can't audit — regulators and risk teams can't verify what they can't see
  • Governance slows teams down — adding oversight means adding friction, process, and delays

How AEGIS Solves It

Every decision gets a unique ID, timestamp, rationale, and cryptographically verifiable audit log.

  • No paper trail — every evaluation produces a decision_id, timestamp, gate results, rationale, and a hash-chained audit entry
  • No standard for "good" — six quantitative gates evaluate risk, profit, novelty, complexity, quality, and utility against calibrated thresholds
  • Compliance can't audit — hash-chained audit logs, NIST AI RMF artifacts, and an EU AI Act Annex IV technical file are included
  • Governance slows teams down — one API call, five integration methods, zero runtime dependencies in the core SDK

How It Works

Proposal → [Risk] [Profit] [Novelty] [Complexity] [Quality] [Utility] → Decision

                                                          PROCEED / PAUSE / HALT / ESCALATE
                                                          + confidence + rationale + next steps

Every proposal is evaluated against six gates:

  • Risk — Is the risk delta acceptable? Bayesian posterior confidence evaluation.
  • Profit — Does the expected value justify the change? Bayesian confidence check.
  • Novelty — How novel is this approach? Logistic function scoring.
  • Complexity — Is the system complexity within bounds? Hard floor enforcement.
  • Quality — Does code quality meet standards? Minimum score with no zero subscores.
  • Utility — Is the net utility positive? Lower confidence bound verification.
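To make two of these concrete, here is a rough sketch of the kind of math the Novelty and Complexity gates describe. The parameter values and function names are illustrative assumptions, not AEGIS's calibrated internals:

```python
import math

def novelty_score(distance: float, midpoint: float = 0.5, steepness: float = 10.0) -> float:
    """Logistic scoring: map a raw novelty distance onto (0, 1).
    midpoint and steepness are illustrative, not AEGIS's calibrated values."""
    return 1.0 / (1.0 + math.exp(-steepness * (distance - midpoint)))

def complexity_gate(complexity_score: float, hard_floor: float = 0.85) -> bool:
    """Hard-floor enforcement: a score at or above the floor fails outright,
    with no averaging against other gate results."""
    return complexity_score < hard_floor
```

A logistic curve keeps novelty scores bounded and smooth around the midpoint, while a hard floor makes complexity a veto rather than one factor in a weighted sum.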

Quick Example

from aegis_governance import AegisConfig, PCWContext, PCWPhase, pcw_decide

config = AegisConfig.default()
evaluator = config.create_gate_evaluator()

decision = pcw_decide(
    PCWContext(
        agent_id="my-agent",
        session_id="session-1",
        phase=PCWPhase.PLAN,
        proposal_summary="Add Redis caching layer",
        estimated_impact="medium",
        risk_proposed=0.15,
        complexity_score=0.7,
    ),
    gate_evaluator=evaluator,
)

print(decision.status.value)  # "proceed", "pause", "halt", or "escalate"
Or call the REST API directly:

curl -X POST https://aegis-api-980022636831.us-central1.run.app/evaluate \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -d '{
    "proposal_summary": "Add Redis caching layer",
    "estimated_impact": "medium",
    "risk_proposed": 0.15,
    "complexity_score": 0.7
  }'

Or pipe JSON to the CLI:

echo '{"proposal_summary":"Add caching","estimated_impact":"low"}' \
  | aegis evaluate

Or gate a pull request with the GitHub Action:

- uses: undercurrentai/aegis-governance/.github/actions/aegis-gate@main
  with:
    proposal_summary: "Add Redis caching layer"
    estimated_impact: medium
    risk_proposed: 0.15
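Whichever integration you use, the result is one of the same four statuses. A minimal dispatch sketch for handling them (the follow-up actions here are hypothetical placeholders, not part of the AEGIS API):

```python
def handle(status: str) -> str:
    """Map an AEGIS decision status to a follow-up action (illustrative)."""
    actions = {
        "proceed": "ship the change",
        "pause": "gather more evidence, then re-evaluate",
        "escalate": "route to a human reviewer",
        "halt": "block the change and log the rationale",
    }
    return actions[status]
```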

Integration Options

  • REST API — any language, CI/CD pipelines (docs: Quickstart)
  • Python SDK — Python applications and scripts (docs: Quickstart)
  • CLI — shell scripts, local evaluation (docs: Quickstart)
  • MCP Server — AI agent integration (Claude, Codex) (docs: MCP Tools)
  • GitHub Action — PR governance gates (docs: Action)

How AEGIS Is Different

Most approaches to AI governance fall into two categories: runtime agent guardrails that monitor and gate what agents can do in real time, or compliance dashboards that manage policies through checklists and binary pass/fail scoring. Both have a blind spot — neither evaluates whether the engineering decision itself is mathematically sound.

AEGIS takes a third approach: quantitative decision governance. Instead of watching agent behavior or managing compliance checklists, AEGIS evaluates the proposal through six mathematical gates — Bayesian posteriors, logistic functions, utility theory, and hard-floor enforcement. The question isn't "can this agent act?" or "does this check a compliance box?" — it's "should this specific engineering change ship?"

Key Features

  • Hash-chained audit trails — tamper-evident decision logs for compliance and accountability
  • Enterprise-grade security — hybrid signatures combining classical (Ed25519) with post-quantum (ML-DSA-44, FIPS 204) and encryption (ML-KEM-768, FIPS 203). The first governance platform with quantum-resistant audit trails, prepared for CNSA 2.0 timelines.
  • Risk-free rollout — shadow mode lets you evaluate without affecting production decisions (Professional tier and above)
  • Continuous monitoring — drift detection alerts when decision patterns shift from baselines
  • AI-agent native — MCP server, GitHub Action, and SDK for autonomous agent governance
  • Compliance-aligned — artifacts aligned with NIST AI RMF, EU AI Act, and SOC 2 trust criteria
  • Configurable thresholds — YAML-driven parameter management with version-controlled schemas
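The hash-chaining idea behind the audit trail is simple to sketch: each log entry commits to the hash of the entry before it, so altering any historical record invalidates every later hash. An illustrative Python sketch (not AEGIS's actual log format or signature scheme):

```python
import hashlib
import json

GENESIS = "0" * 64  # conventional all-zero hash for the first entry

def chain_entry(prev_hash: str, record: dict) -> dict:
    """Append a record to the chain, committing to the previous entry's hash."""
    payload = json.dumps(record, sort_keys=True)
    entry_hash = hashlib.sha256((prev_hash + payload).encode()).hexdigest()
    return {"prev_hash": prev_hash, "record": record, "hash": entry_hash}

def verify_chain(entries: list) -> bool:
    """Recompute every hash; any tampered or reordered record breaks the chain."""
    prev = GENESIS
    for entry in entries:
        payload = json.dumps(entry["record"], sort_keys=True)
        expected = hashlib.sha256((prev + payload).encode()).hexdigest()
        if entry["prev_hash"] != prev or entry["hash"] != expected:
            return False
        prev = entry["hash"]
    return True
```

Tamper-evidence follows directly: editing one record changes its payload hash, which no longer matches the `prev_hash` committed by the next entry.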

Built for Regulated Environments

AEGIS includes compliance artifacts and runbook templates aligned with the frameworks below. These are alignment artifacts, not certifications; see compliance maturity.

  • NIST AI RMF 1.0 — Govern, Map, Measure, Manage artifacts and CI validation
  • EU AI Act — Annex IV technical file, Article 14 human oversight plan, Article 72 monitoring framework
  • ISO 42001 — AIMS policy, PDCA cycle framework
  • SOC 2 — runbook templates aligned to trust criteria
  • FedRAMP High — BCP/DRP, incident response, and access review runbooks

View compliance documentation
