# How to Enable Debug Logs in OpenClaw: Complete Troubleshooting Guide

Something broke. Your OpenClaw agent is not responding, messages are not arriving, or a cron job is silently failing. The first step is always the same: check the logs.

OpenClaw has a surprisingly deep logging system with multiple surfaces, configurable levels, per-channel filtering, diagnostic flags, and even OpenTelemetry export. This guide covers all of them, from the quick one-liner to full observability.
## Quick Start: See What Is Happening Right Now

```shell
# Tail logs in real time
openclaw logs --follow

# With local timestamps (instead of UTC)
openclaw logs --follow --local-time

# Last 500 lines
openclaw logs --limit 500
```
This is the fastest path from "something is wrong" to "I can see what is happening." The `openclaw logs` command reads the Gateway's log file via RPC, so it works even if you are connected remotely.
## Where Logs Live

OpenClaw writes rolling JSON-lines log files:

```
/tmp/openclaw/openclaw-YYYY-MM-DD.log
```

The date uses the gateway host's local timezone. You can override the path:
```jsonc
// ~/.openclaw/openclaw.json
{
  "logging": {
    "file": "/var/log/openclaw/openclaw.log"
  }
}
```
You can also view logs in the Control UI (browser dashboard) under the Logs tab.
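If the CLI itself is unavailable (for example, the Gateway is down and RPC fails), you can read the current day's file directly. A minimal sketch, assuming the default `/tmp/openclaw` location:

```shell
# Build today's log path (assumes the default /tmp/openclaw location)
LOG="/tmp/openclaw/openclaw-$(date +%F).log"

# Read the raw JSON-lines stream
tail -n 100 "$LOG"
```

Note that `date +%F` emits `YYYY-MM-DD` in your local timezone; if the gateway host's timezone differs from yours, check the adjacent dated files.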
## Enable Debug Logging

### Method 1: Environment Variable (One-Off)

Best for a single debugging session without changing config:

```shell
# Start gateway with debug logs
OPENCLAW_LOG_LEVEL=debug openclaw gateway run

# Or trace level for maximum verbosity
OPENCLAW_LOG_LEVEL=trace openclaw gateway run
```
The environment variable takes precedence over the config file.
### Method 2: CLI Flag (One-Off)

```shell
# Global flag works with any subcommand
openclaw --log-level debug gateway run
```
This overrides even the environment variable for that command.
### Method 3: Config File (Persistent)

Edit `~/.openclaw/openclaw.json`:
```json
{
  "logging": {
    "level": "debug",
    "consoleLevel": "debug"
  }
}
```
Then restart the gateway:

```shell
openclaw gateway restart
```
**Important:** `logging.level` controls file log verbosity; `logging.consoleLevel` controls terminal output verbosity. Set both if you want debug everywhere.
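If you prefer not to hand-edit the file, both keys can be set in one step. A sketch using `jq` (assumes `jq` is installed and the config file already exists; the temp-file dance is needed because `jq` cannot edit a file in place):

```shell
# Sketch: set both log levels in the config with jq (assumes the file exists)
cfg="$HOME/.openclaw/openclaw.json"
jq '.logging.level = "debug" | .logging.consoleLevel = "debug"' "$cfg" > "$cfg.tmp" \
  && mv "$cfg.tmp" "$cfg"
```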
## Log Levels
From least to most verbose:
| Level | What You See |
|---|---|
| `silent` | Nothing |
| `fatal` | Only fatal errors |
| `error` | Errors only |
| `warn` | Warnings and errors |
| `info` | Normal operation (default) |
| `debug` | Detailed internal state |
| `trace` | Everything, including raw payloads |
For most troubleshooting, `debug` is enough. Use `trace` only when debugging protocol-level issues (WebSocket frames, raw HTTP, etc.).
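Because file logs are JSON lines, you can also filter already-captured logs by level after the fact. A sketch with `jq` (the `level` field also appears in the scripting examples in this guide):

```shell
# Keep only warn-and-above entries from the rolling JSON-lines logs
jq -c 'select(.level == "warn" or .level == "error" or .level == "fatal")' \
  /tmp/openclaw/openclaw-*.log
```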
## Filter Logs by Channel

When debugging a specific messaging platform, filter by channel:

```shell
# WhatsApp channel logs only
openclaw channels logs --channel whatsapp

# Telegram channel logs only
openclaw channels logs --channel telegram

# Discord channel logs only
openclaw channels logs --channel discord
```
This filters out noise from other channels so you can focus on the problem.
## Gateway WebSocket Debugging

The Gateway communicates with agents via WebSocket. To debug RPC issues:

```shell
# Normal mode: only errors, parse errors, slow calls
openclaw gateway

# Verbose: all request/response traffic
openclaw gateway --verbose

# Compact WS log format
openclaw gateway --verbose --ws-log compact

# Full WS log format (raw frames)
openclaw gateway --verbose --ws-log full
```
`--verbose` only affects console output and WebSocket log verbosity. It does not change file log levels.
## Diagnostic Flags (Targeted Debug Logs)

When you need debug-level detail for one specific subsystem without raising the global log level, use diagnostic flags:

```jsonc
// ~/.openclaw/openclaw.json
{
  "diagnostics": {
    "flags": ["telegram.http"]
  }
}
```
Or via environment variable for a one-off run:
```shell
OPENCLAW_DIAGNOSTICS=telegram.http,telegram.payload openclaw gateway run
```
Flags support wildcards:

```shell
# All Telegram diagnostics
OPENCLAW_DIAGNOSTICS=telegram.* openclaw gateway run

# Everything
OPENCLAW_DIAGNOSTICS=* openclaw gateway run
```
Flag logs go to the standard log file and are still redacted according to `logging.redactSensitive`.
## Console Output Styles

Control how console logs look:

```json
{
  "logging": {
    "consoleStyle": "pretty"
  }
}
```
| Style | Description |
|---|---|
| `pretty` | Human-friendly, colored, with timestamps (default) |
| `compact` | Tighter output, best for long sessions |
| `json` | JSON per line, for log processors |
## Sensitive Data Redaction

By default, OpenClaw redacts sensitive tokens from console output:

```json
{
  "logging": {
    "redactSensitive": "tools",
    "redactPatterns": ["sk-.*", "ghp_.*"]
  }
}
```
- `redactSensitive`: `off` or `tools` (default: `tools`)
- `redactPatterns`: custom regex patterns to redact
Redaction affects console output only. File logs are not redacted. Keep this in mind if you share log files.
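If you do need to share a file log, scrub it first. A rough sketch using `sed` with the same example patterns as above (adapt the patterns to your own secrets; this is a best-effort scrub, not a guarantee of complete redaction):

```shell
# Sketch: scrub token-shaped strings before sharing a log file
# (patterns mirror the redactPatterns example above)
sed -E 's/sk-[A-Za-z0-9]+|ghp_[A-Za-z0-9]+/[REDACTED]/g' openclaw.log > shareable.log
```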
## JSON Output for Scripting

Pipe logs into `jq` or other tools:

```shell
# JSON output
openclaw logs --json

# Filter for errors only
openclaw logs --json | jq 'select(.level == "error")'

# Follow with JSON
openclaw logs --follow --json | jq '.message'
```
JSON mode emits typed objects:
- `meta`: stream metadata (file, cursor, size)
- `log`: parsed log entry
- `notice`: truncation/rotation hints
- `raw`: unparsed log line
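For example, to keep only parsed log entries and drop stream metadata, select on the record type. A sketch against an inline sample line (the `type` field name is an assumption based on the object names above):

```shell
# Select only parsed log entries from the --json stream ("type" field assumed)
sample='{"type":"log","level":"info","message":"hello"}'
echo "$sample" | jq -r 'select(.type == "log") | .message'
# prints: hello
```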
## OpenTelemetry Export
For production observability, export logs, metrics, and traces to an OpenTelemetry collector:
```json
{
  "plugins": {
    "allow": ["diagnostics-otel"],
    "entries": {
      "diagnostics-otel": {
        "enabled": true
      }
    }
  },
  "diagnostics": {
    "enabled": true,
    "otel": {
      "enabled": true,
      "endpoint": "http://otel-collector:4318",
      "protocol": "http/protobuf",
      "serviceName": "openclaw-gateway",
      "traces": true,
      "metrics": true,
      "logs": true,
      "sampleRate": 0.2,
      "flushIntervalMs": 60000
    }
  }
}
```
Or enable the plugin via CLI:

```shell
openclaw plugins enable diagnostics-otel
```
### Exported Metrics

OpenClaw exports structured metrics:

- `openclaw.tokens`: Token usage counters (by provider, model, channel)
- `openclaw.cost.usd`: Cost tracking
- `openclaw.run.duration_ms`: Agent run duration
- `openclaw.webhook.received/processed/error`: Message flow
- `openclaw.message.queued/processed`: Queue activity
- `openclaw.session.state/stuck`: Session health
### Exported Traces

- `openclaw.model.usage`: Spans for each model call with token details
- `openclaw.webhook.processed`: Webhook handling spans
- `openclaw.message.processed`: Message processing with outcome
This integrates with Grafana, Datadog, Jaeger, or any OTLP-compatible backend.
## Common Troubleshooting Patterns

### "My agent is not responding to messages"

```shell
# 1. Check if gateway is running
openclaw gateway status

# 2. Check logs for errors
openclaw logs --follow --local-time

# 3. Check channel-specific logs
openclaw channels logs --channel telegram

# 4. Run diagnostics
openclaw doctor
```
### "Cron jobs are failing silently"

```shell
# Check cron job status via CLI
openclaw cron list

# Look for auth errors in logs
openclaw logs --json | jq 'select(.message | contains("auth"))'

# Enable debug for a more detailed view
OPENCLAW_LOG_LEVEL=debug openclaw gateway run
```
### "Messages are delayed"

```shell
# Check queue depth and session state
openclaw logs --json | jq 'select(.message | contains("queue") or contains("stuck"))'

# Enable verbose WebSocket logging
openclaw gateway --verbose --ws-log compact
```
### "Channel connection dropped"

```shell
# Filter for specific channel
OPENCLAW_DIAGNOSTICS=whatsapp.* openclaw gateway run

# Check webhook processing
openclaw logs --json | jq 'select(.message | contains("webhook"))'
```
## Quick Reference

```shell
# Basic log viewing
openclaw logs                          # Last 200 lines
openclaw logs --follow                 # Live tail
openclaw logs --follow --local-time    # With local timestamps
openclaw logs --limit 500              # More history
openclaw logs --json                   # Machine-readable

# Debug levels
OPENCLAW_LOG_LEVEL=debug openclaw gateway run   # One-off debug
openclaw --log-level trace gateway run          # Maximum verbosity

# Channel filtering
openclaw channels logs --channel telegram
openclaw channels logs --channel whatsapp

# WebSocket debugging
openclaw gateway --verbose --ws-log compact

# Targeted diagnostics
OPENCLAW_DIAGNOSTICS=telegram.* openclaw gateway run

# Health check
openclaw doctor
openclaw gateway status
```
At Akmatori, we run OpenClaw with OpenTelemetry export feeding into Grafana for monitoring our 8 daily cron jobs and multi-channel messaging. When an automation fails at 3 AM, the structured logs and metrics tell us exactly what happened without SSH-ing into the server.
