Architecture

System Architecture

Akmatori uses a secure 4-container architecture with network isolation to provide safe, scalable AI-powered incident automation.

Component Overview

Alert Sources: Alertmanager, Zabbix, PagerDuty, Grafana, Datadog
        ↓
💬 Slack Bot  ⟷  ⚡ API Container  ⟷  🗄️ PostgreSQL
                  (incident management,    (incidents, skills,
                   skill orchestration,     encrypted credentials)
                   WebSocket to Agent)
        ↓ WebSocket
🤖 Agent Worker  ⟷ MCP calls ⟷  🔌 MCP Gateway
(runs pi-mono, multi-LLM;         (fetches credentials,
 NO database access,               SSH/Zabbix execution)
 NO direct secrets)
        ↓
🧠 LLM Providers: OpenAI · Anthropic · Google · OpenRouter · Custom/On-Prem

Supported LLM Providers

Akmatori uses pi-mono as its unified LLM runtime, giving you the flexibility to choose any provider or run models on your own infrastructure.

OpenAI

GPT-4o, GPT-4 Turbo, o1, o3

Anthropic

Claude 3.5 Sonnet, Claude 3 Opus

Google

Gemini 2.0, Gemini 1.5 Pro

OpenRouter

Access to 100+ models

Custom/On-Prem

GLM, Kimi, Minimax, Mistral, LLaMA, etc.

Configure your LLM provider in the web UI under Settings → LLM Provider. No environment variables are needed: API keys are stored encrypted in the database.

Security Design

Container       Database Access   Secrets Access        External Network
API             ✅ Full           ✅ All                ✅ Slack
MCP Gateway     ✅ Read-only      ✅ Tool credentials   ✅ SSH, APIs
Agent Worker    ❌ None           ❌ None               ✅ LLM APIs only
PostgreSQL      N/A               N/A                   ❌ Internal only

Key Security Features

Credential isolation

Agent Worker never sees database credentials

Per-incident auth

LLM API keys passed via WebSocket for each task

Network segmentation

Three isolated Docker networks

UID separation

API (UID 1000) and Agent Worker (UID 1001) run as separate users for file permission control

Docker Services

akmatori-api (Dockerfile.api)

Main Go backend: incident management, skill orchestration, WebSocket server for the Agent Worker

Networks: frontend, api-internal

postgres (postgres:16-alpine)

PostgreSQL database storing incidents, skills, tools, and encrypted credentials

Networks: api-internal

mcp-gateway (mcp-gateway)

Model Context Protocol gateway: fetches credentials from the database, executes SSH/Zabbix operations

Networks: api-internal, agent-network

agent-worker (agent-worker/Dockerfile)

Agent Worker: runs pi-mono for multi-provider LLM inference (OpenAI, Anthropic, Google, OpenRouter, custom). Isolated, no DB access.

Networks: agent-network

Network Isolation

Akmatori uses three separate Docker networks to ensure security through isolation:

frontend

External access for the UI and API proxy

api-internal

API ↔ Database, MCP Gateway ↔ Database connections

agent-network

Isolated network for Agent Worker ⟷ MCP Gateway

The Agent Worker container has no direct database access. All tool operations flow through the MCP Gateway, which handles credential resolution at runtime.
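The service and network names below follow the tables above, but the project's actual compose file is not shown in this document, so this is only an illustrative sketch of how the isolation could be declared:

```yaml
# Illustrative docker-compose excerpt (hypothetical, not the project's file):
# each service joins only the networks listed above.
services:
  akmatori-api:
    build:
      context: .
      dockerfile: Dockerfile.api
    user: "1000"                   # API runs as UID 1000
    networks: [frontend, api-internal]
  postgres:
    image: postgres:16-alpine
    networks: [api-internal]       # never exposed externally
  mcp-gateway:
    networks: [api-internal, agent-network]
  agent-worker:
    build:
      context: .
      dockerfile: agent-worker/Dockerfile
    user: "1001"                   # agent runs as UID 1001
    networks: [agent-network]      # no route to postgres
networks:
  frontend: {}
  api-internal:
    internal: true                 # no outbound internet access
  agent-network: {}
```

Because agent-worker is only attached to agent-network, Docker's network-level isolation (not just application logic) prevents it from ever opening a connection to PostgreSQL.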

How It Works

Akmatori runs its AI agent (pi-mono) in an isolated container to execute AI-powered automation tasks. When an alert is received or a skill is triggered:

1. Alert Normalization: the API container extracts key fields using source-specific adapters

2. Incident Creation: records context and creates a workspace with skill files and symlinks

3. Task Dispatch: the API sends the task plus LLM provider credentials to the Agent Worker via WebSocket

4. AI Execution: the Agent Worker runs pi-mono in the incident workspace

5. Tool Calls: when the agent needs SSH/Zabbix access, Python wrappers call the MCP Gateway

6. Credential Fetch: the MCP Gateway retrieves credentials from the database and executes the operation

7. Result Streaming: output streams back through the WebSocket to the API for real-time updates

8. Completion: results are posted to Slack (if configured) and the incident status is updated

This architecture ensures the AI agent never has direct access to sensitive credentials.