
Architecture

A layered, vendor-neutral architecture for governing mixed agent runtimes.

ClawForge is structured as four layers: agents, adapters and interception, governance runtime, and the control plane. Local enforcement where supported, MCP proxying where it isn't, and an append-only audit pipeline either way.

Layer 01: Agents

Mixed runtimes — what your enterprise actually runs.

Claude Code · OpenAI Agents · LangGraph · OpenClaw · MCP servers · Custom enterprise agents

Layer 02: Adapters & interception

Where ClawForge meets each runtime. Local enforcement where supported, proxy where not.

Hooks · MCP proxy · AGT integration · SDK adapters

Layer 03: Governance runtime

Substrate that enforces policy at the agent edge. Runs standalone or on top of Microsoft AGT.

ClawForge engine · Microsoft AGT compatibility · Sandboxing · Identity & scopes

Layer 04: Control plane

Operator surface. Vendor-neutral, self-hosted, append-only.

Policy · Approvals · Audit & evidence · Incidents · Admin console · MCP catalog & approvals

Microsoft AGT integration

AGT is the substrate. ClawForge is the operations layer above it.

AGT enforces policy on every tool call at the runtime layer — including MCP traffic via its own MCPGateway and MCPSecurityScanner. ClawForge does not duplicate that enforcement. It's the operations layer above AGT: the operator console, approval queue, policy distribution, and cross-runtime audit federation. ClawForge is not a Microsoft product and does not replace AGT.

For AGT-supported runtimes (LangChain, AutoGen, CrewAI, Semantic Kernel, OpenAI Agents SDK, Google ADK, and more), AGT does the per-tool-call enforcement, MCP gateway, and append-only audit. ClawForge writes the policy AGT enforces, surfaces AGT’s approval hooks into an operator queue, and federates AGT’s audit log into the cross-runtime event store.

For runtimes outside AGT (Claude Code today, OpenClaw, custom agents), ClawForge handles interception itself — through SDK adapters, runtime hooks, or its own MCP proxy. The operator surface stays the same either way.
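For the proxy path, the interception pattern can be sketched as follows. This is a minimal illustration, not ClawForge's actual implementation: the `ALLOWED` set, the request shape, and `forward_to_server` are all assumptions standing in for a cached policy snapshot and the real MCP server round-trip.

```python
# Hypothetical sketch of MCP-proxy interception: a tool call is checked
# against a locally cached allow-list before being forwarded to the
# upstream MCP server. Denied calls never leave the proxy.

ALLOWED = {"search", "read_file"}  # assumed policy snapshot

def forward_to_server(request: dict) -> dict:
    # Stand-in for the real upstream MCP server call.
    return {"status": "ok", "tool": request["tool"]}

def proxy_tool_call(request: dict) -> dict:
    """Enforce locally, forward only what policy allows."""
    if request["tool"] not in ALLOWED:
        return {"status": "denied", "reason": "blocked by policy"}
    return forward_to_server(request)
```

Because the runtime only ever talks to the proxy, this path needs no cooperation from the agent itself, which is why it covers runtimes without native hooks.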

Control flows

What actually moves through the system

This page summarizes the core loops so a technical evaluator can understand the system without reading the full docs first.
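As a running convention for the sketches below, a policy decision can be modeled as a small record that carries the policy version it was made under, so enforcement and audit stay attributable. The `Policy` and `Decision` names are illustrative assumptions, not ClawForge API.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Policy:
    """A versioned policy snapshot cached at the runtime edge."""
    version: int
    allowed_tools: frozenset

@dataclass(frozen=True)
class Decision:
    """Outcome of one local check, tagged with the version it used."""
    allow: bool
    policy_version: int
    reason: str
```

Tagging every decision with its version is what lets a central operator later answer "which policy was in force when this call was allowed?"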

Policy enforcement flow

Policies are versioned in the control plane and enforced close to the runtime, so the operator model stays centralized while execution controls stay local to the assistant.
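The centralized-authoring, local-enforcement split can be sketched as a single evaluation function against a cached snapshot. Everything here (field names, the allow-list model) is an assumption for illustration; real policies would be richer than a tool allow-list.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Policy:
    # Versioned in the control plane, distributed to runtimes.
    version: int
    allowed_tools: frozenset

def evaluate(policy: Policy, tool_name: str) -> tuple:
    """Local enforcement: decide against the cached snapshot and record
    the version used, so audit events stay attributable."""
    if tool_name in policy.allowed_tools:
        return (True, policy.version, "tool allowed by policy")
    return (False, policy.version, f"{tool_name} not in allow-list")

policy = Policy(version=7, allowed_tools=frozenset({"read_file", "search"}))
```

The runtime never blocks on the control plane for a decision; it only syncs new policy versions out of band, which is what keeps the operator model centralized while execution controls stay local.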

Audit flow

Runtimes emit tool and session events upward into the control plane so operators can query behavior without collecting logs machine by machine.
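The upward event flow can be sketched as runtimes serializing append-only records that the control plane stores but never mutates. The field names are assumptions, not ClawForge's actual event schema.

```python
import json
import time

def audit_event(runtime: str, session_id: str, tool: str, decision: str) -> str:
    """Serialize one tool-call event; sorted keys keep records stable
    for hashing or dedup downstream."""
    return json.dumps(
        {
            "ts": time.time(),
            "runtime": runtime,
            "session_id": session_id,
            "tool": tool,
            "decision": decision,
        },
        sort_keys=True,
    )

# The control plane's store is append-only: events are added, never edited.
event_log: list = []
event_log.append(audit_event("claude-code", "sess-42", "read_file", "allow"))
```

Because every runtime emits into the same store, operators query one place instead of collecting logs machine by machine.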

Heartbeat and control propagation

The heartbeat loop reports liveness, checks policy freshness, and carries kill-switch state back to connected clients.
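One heartbeat round-trip can be sketched as: the client reports liveness, learns whether its cached policy is stale, and picks up kill-switch state. The record shapes are illustrative assumptions.

```python
from dataclasses import dataclass

@dataclass
class HeartbeatResponse:
    # What the control plane sends back on each heartbeat.
    latest_policy_version: int
    kill_switch: bool

@dataclass
class ClientState:
    policy_version: int
    halted: bool = False

def apply_heartbeat(state: ClientState, resp: HeartbeatResponse) -> ClientState:
    """Reconcile local state with the heartbeat reply."""
    if resp.latest_policy_version > state.policy_version:
        # Stale: in a real client this would trigger a full policy re-fetch.
        state.policy_version = resp.latest_policy_version
    state.halted = resp.kill_switch
    return state
```

Piggybacking freshness checks and kill-switch state on the liveness loop means there is exactly one periodic channel to reason about, rather than three.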

Kill-switch behavior

Emergency controls publish through the same policy loop, with a local fail-secure posture available when the control plane stops responding for too long.
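The fail-secure posture reduces to a timeout check on the last successful heartbeat: past a grace window, the runtime denies tool calls locally rather than assuming the missing control plane would have allowed them. The window value is an assumed, operator-configurable placeholder.

```python
GRACE_SECONDS = 300.0  # assumed operator-configurable grace window

def should_fail_secure(last_heartbeat_ok: float, now: float,
                       grace: float = GRACE_SECONDS) -> bool:
    """True once the control plane has been silent longer than the
    grace window; the runtime then denies tool calls locally."""
    return (now - last_heartbeat_ok) > grace
```

The key property is that the kill switch and the fail-secure fallback compose: an explicit kill arrives through the policy loop, and a silent control plane degrades to deny-by-default instead of allow-by-default.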

Trust boundaries

Keep policy centralized, keep execution close to the runtime

The control plane is multi-agent and fleet-aware, while keeping it obvious where enforcement happens and where data lives.

Assistant runtime

Enforces policy, tracks local state, and uploads audit data.

Control plane API

Stores org policy, audit records, identity state, and runtime status.

Operator console

Provides the review and response surface used by admins and platform teams.

Customer environment

Self-hosted deployment keeps control-plane services and storage under customer ownership.