
System Architecture

Akmatori uses a secure 4-container architecture with network isolation to provide safe, scalable AI-powered incident automation.

Component Overview

Alert Sources: Alertmanager, Zabbix, PagerDuty, Grafana, Datadog
        ↓
💬 Slack Bot  ⟷  ⚡ API Container (incident management, skill orchestration, WebSocket to Agent)  ⟷  🗄️ PostgreSQL (incidents, skills, encrypted credentials)
        ↓ WebSocket
🤖 Agent Worker (runs pi-mono, multi-LLM; NO database access, NO direct secrets)  ⟷ MCP calls ⟷  🔌 MCP Gateway (fetches credentials; SSH/Zabbix execution)
        ↓
🧠 LLM Providers: OpenAI · Anthropic · Google · OpenRouter · Custom/On-Prem

Supported LLM Providers

Akmatori uses pi-mono as its unified LLM runtime, giving you the flexibility to choose any provider or run models on your own infrastructure.
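Conceptually, a unified runtime reduces every backend to one completion interface with provider-specific implementations behind it. The sketch below is illustrative only; the `Provider` interface, `Complete` method, and `EchoProvider` stand-in are assumptions for this page, not pi-mono's actual API:

```go
package main

import "fmt"

// Provider abstracts an LLM backend. Hypothetical interface,
// not pi-mono's real API.
type Provider interface {
	Complete(prompt string) (string, error)
}

// EchoProvider is a stand-in implementation used for illustration.
type EchoProvider struct{ Name string }

func (p EchoProvider) Complete(prompt string) (string, error) {
	return fmt.Sprintf("[%s] %s", p.Name, prompt), nil
}

// NewProvider selects a backend by configured name, mirroring how a
// runtime might dispatch to OpenAI, Anthropic, Google, and so on.
func NewProvider(name string) Provider {
	return EchoProvider{Name: name}
}

func main() {
	p := NewProvider("openai")
	out, _ := p.Complete("summarize the incident")
	fmt.Println(out) // [openai] summarize the incident
}
```

Swapping providers then becomes a configuration change rather than a code change, which is what makes the custom/on-prem option below possible.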

OpenAI: GPT-5.4, GPT-5.3 Codex, GPT-5.2 Codex

Anthropic: Claude Opus 4.6, Claude Sonnet 4.6, Claude Haiku 4.5

Google: Gemini 2.5 Pro, Gemini 2.5 Flash

OpenRouter: access to 200+ models from all providers

Custom/On-Prem: any OpenAI-compatible endpoint

Configure your LLM provider in the web UI under Settings → LLM Provider. No environment variables are needed; API keys are stored encrypted in the database.
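For the custom/on-prem option, "OpenAI-compatible" means the endpoint accepts the standard chat-completions request shape. A minimal sketch of building such a request in Go; the base URL, model name, and key below are placeholders (in Akmatori the key would come from the encrypted store, not from the environment):

```go
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"net/http"
)

// chatRequest mirrors the OpenAI-compatible /v1/chat/completions body.
type chatRequest struct {
	Model    string        `json:"model"`
	Messages []chatMessage `json:"messages"`
}

type chatMessage struct {
	Role    string `json:"role"`
	Content string `json:"content"`
}

// buildRequest assembles a chat-completions HTTP request against any
// OpenAI-compatible base URL.
func buildRequest(baseURL, apiKey, model, prompt string) (*http.Request, error) {
	body, err := json.Marshal(chatRequest{
		Model:    model,
		Messages: []chatMessage{{Role: "user", Content: prompt}},
	})
	if err != nil {
		return nil, err
	}
	req, err := http.NewRequest(http.MethodPost, baseURL+"/v1/chat/completions", bytes.NewReader(body))
	if err != nil {
		return nil, err
	}
	req.Header.Set("Authorization", "Bearer "+apiKey)
	req.Header.Set("Content-Type", "application/json")
	return req, nil
}

func main() {
	req, _ := buildRequest("http://llm.internal:8000", "sk-example", "my-model", "hello")
	fmt.Println(req.Method, req.URL.Path)
}
```

Because every provider in the table above speaks this same shape (natively or via a gateway), pointing Akmatori at a self-hosted model is just a different base URL.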

Security Design

Container       Database Access    Secrets Access         External Network
API             ✅ Full            ✅ All                 ✅ Slack
MCP Gateway     ✅ Read-only       ✅ Tool credentials    ✅ SSH, APIs
Agent Worker    ❌ None            ❌ None                ✅ LLM APIs only
PostgreSQL      N/A                N/A                    ❌ Internal only

Key Security Features

Credential isolation: the Agent Worker never sees database credentials.

Per-incident auth: LLM API keys are passed via WebSocket for each task.

Network segmentation: three isolated Docker networks.

UID separation: API (UID 1000) and Agent (UID 1001) run as different users for file permission control.

Docker Services

akmatori-api (Dockerfile.api)
Main Go backend: incident management, skill orchestration, WebSocket server for the Agent Worker.
Networks: frontend, api-internal

postgres (postgres:16-alpine)
PostgreSQL database storing incidents, skills, tools, and encrypted credentials.
Networks: api-internal

mcp-gateway (mcp-gateway)
Model Context Protocol gateway: fetches credentials from the DB, executes SSH/Zabbix operations.
Networks: frontend, api-internal, codex-network

akmatori-agent (agent-worker/Dockerfile)
Agent Worker: runs pi-mono for multi-provider LLM inference (OpenAI, Anthropic, Google, OpenRouter, custom). Isolated, no DB access.
Networks: frontend, api-internal, codex-network

Network Isolation

Akmatori uses three separate Docker networks to ensure security through isolation:

frontend: external access for the UI and API proxy.

api-internal: API ↔ Database and MCP Gateway ↔ Database connections.

codex-network: isolated network for Agent Worker ↔ MCP Gateway.

The Agent container has no direct database access. All tool operations flow through the MCP Gateway, which handles credential resolution at runtime.
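In docker-compose terms, this isolation looks roughly like the fragment below. The service and network names follow this page; the exact file layout and `internal` flags are assumptions, not Akmatori's shipped configuration:

```yaml
networks:
  frontend:                 # external-facing: UI, API proxy
  api-internal:
    internal: true          # database traffic never leaves the host
  codex-network:
    internal: true          # Agent Worker <-> MCP Gateway only

services:
  postgres:
    image: postgres:16-alpine
    networks: [api-internal]   # reachable only by API and MCP Gateway

  akmatori-agent:
    build:
      context: agent-worker
      dockerfile: Dockerfile
    networks: [frontend, api-internal, codex-network]
```

Because `postgres` only joins `api-internal`, no compose-level misconfiguration of the Agent container alone can open a route to the database.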

How It Works

Akmatori runs pi-mono as a multi-provider LLM runtime in an isolated container to execute AI-powered automation tasks. When an alert is received or a skill is triggered:

1. Alert Normalization: the API container extracts key fields using source-specific adapters.

2. Incident Creation: records context and creates a workspace with skill files and symlinks.

3. Task Dispatch: the API sends the task plus LLM credentials to the Agent Worker via WebSocket.

4. AI Execution: the Agent Worker runs pi-mono in the incident workspace.

5. Tool Calls: when the agent needs SSH/Zabbix access, MCP tools call the MCP Gateway.

6. Credential Fetch: the MCP Gateway retrieves credentials from the database and executes the operation.

7. Result Streaming: output streams back through the WebSocket to the API for real-time updates.

8. Completion: results are posted to Slack (if configured) and the incident status is updated.

This architecture ensures the AI agent never has direct access to sensitive credentials.