02.01.2025

Interrupt Coalescing in Linux 2026: ethtool -C Settings for Latency and Throughput


Updated April 2026 with practical ethtool -c and ethtool -C examples for latency and throughput tuning.

Quick Answer

Start here:

# Show current coalesce settings
sudo ethtool -c eth0

# Lower interrupt delay for lower latency
sudo ethtool -C eth0 rx-usecs 25 tx-usecs 25

# Or enable adaptive coalescing
sudo ethtool -C eth0 adaptive-rx on adaptive-tx on

Lower values usually reduce latency. Higher values usually reduce CPU overhead. If the link itself is misconfigured, fix NIC speed and duplex first. If bursts still overflow queues, also tune ring buffers and net.core.netdev_budget.
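The ring-buffer and NAPI-budget checks mentioned above look roughly like this (eth0 and the values are placeholders; check the maximums your NIC reports before raising anything):

```shell
# Show current and hardware-maximum RX/TX ring sizes
sudo ethtool -g eth0

# Raise the rings toward the hardware maximum (example values only)
sudo ethtool -G eth0 rx 4096 tx 4096

# Let NAPI process more packets per softirq poll cycle (kernel default: 300)
sysctl net.core.netdev_budget
sudo sysctl -w net.core.netdev_budget=600
```

Bigger rings absorb bursts at the cost of a little memory and, potentially, queueing latency, so apply the same measure-then-change discipline here as with coalescing.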

Coalescing controls how long the NIC waits before interrupting the CPU. Instead of firing an interrupt for every packet, the NIC can batch work by time or frame count. That improves efficiency, but too much batching increases latency.
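You can watch coalescing work by sampling the NIC's interrupt rate while traffic flows. A rough sketch, assuming your NIC's IRQ lines in /proc/interrupts contain the name eth0 (adjust the pattern for your device):

```shell
# Sum all per-CPU counters for IRQ lines matching a pattern,
# then sample twice to estimate interrupts per second.
irq_total() {  # $1 = interrupts file, $2 = pattern to match
  grep "$2" "$1" | awk '{
    for (i = 2; i <= NF && $i ~ /^[0-9]+$/; i++) t += $i
  } END { print t + 0 }'
}

before=$(irq_total /proc/interrupts eth0)
sleep 1
after=$(irq_total /proc/interrupts eth0)
echo "eth0 interrupts/sec: $((after - before))"
```

Raising rx-usecs should visibly lower this number for the same traffic load; lowering it should raise it.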

Key Settings

  • rx-usecs: microseconds to wait before raising an RX interrupt
  • rx-frames: packets to batch before an RX interrupt
  • tx-usecs: microseconds to wait before a TX interrupt
  • tx-frames: packets to batch before a TX interrupt
  • adaptive-rx / adaptive-tx: let the driver adjust values based on load
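Time and frame thresholds can be combined; on most drivers the interrupt fires as soon as either threshold is reached, though exact semantics vary by NIC. An illustrative setting (eth0 and the numbers are placeholders):

```shell
# Fire the RX interrupt after 64 frames or 50 microseconds, whichever comes first
sudo ethtool -C eth0 rx-usecs 50 rx-frames 64

# Read back what the driver actually accepted -- not every NIC supports every knob
sudo ethtool -c eth0
```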

When to Lower Coalescing

Lower coalescing for:

  • latency-sensitive APIs
  • real-time streams
  • workloads where p99 latency matters more than raw throughput

Example:

sudo ethtool -C eth0 rx-usecs 10 tx-usecs 10
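After lowering the values, verify the change helps under realistic load rather than trusting the knob. One quick sanity check is ping against a peer on the same segment (10.0.0.2 is a placeholder; a real benchmark of your service's p99 is far better evidence):

```shell
# 200 probes at 10 ms spacing; the summary shows min/avg/max/mdev RTT
sudo ping -c 200 -i 0.01 10.0.0.2 | tail -2
```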

When to Raise Coalescing

Raise coalescing for:

  • bulk file transfer
  • packet-heavy observability pipelines
  • busy servers where CPU interrupt overhead is the bottleneck

Example:

sudo ethtool -C eth0 rx-usecs 100 rx-frames 128 tx-usecs 100 tx-frames 128
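To confirm higher coalescing is actually buying CPU back, compare interrupt activity against packet throughput before and after. mpstat's %irq and %soft columns give the CPU-time view; at a lower level, /proc/softnet_stat counts packets processed in softirq context per CPU (first hex column), and fewer interrupts for the same packet count means better batching. A small helper sketch (hex arithmetic assumes a bash/dash-style shell):

```shell
# Sum the first column (packets processed) across all CPUs.
softnet_processed() {  # $1 = path to a softnet_stat-format file
  total=0
  for hex in $(awk '{ print $1 }' "$1"); do
    total=$((total + 0x$hex))
  done
  echo "$total"
}

softnet_processed /proc/softnet_stat
```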

Practical Tuning Flow

  1. Measure baseline latency and CPU usage
  2. Check current settings with ethtool -c
  3. Change one variable at a time
  4. Re-run traffic and latency tests
  5. Persist only the settings that improve your workload
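Step 5 is the easy part to forget: ethtool changes do not survive a reboot. One way to persist them, sketched as a systemd oneshot unit (the service name, device, and values are placeholders, and the ethtool path may differ on your distro):

```shell
# Write an illustrative oneshot unit that reapplies the settings at boot
sudo tee /etc/systemd/system/nic-coalesce.service > /dev/null <<'EOF'
[Unit]
Description=Apply NIC interrupt coalescing settings
After=network-pre.target

[Service]
Type=oneshot
ExecStart=/usr/sbin/ethtool -C eth0 rx-usecs 25 tx-usecs 25

[Install]
WantedBy=multi-user.target
EOF

sudo systemctl daemon-reload
sudo systemctl enable nic-coalesce.service
```

On distros using NetworkManager or netplan, a dispatcher script or renderer option can serve the same purpose.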

Conclusion

NIC coalescing is one of the fastest levers for changing the latency versus CPU tradeoff on Linux. Keep the change small, test under real load, and avoid tuning it in isolation from link speed, buffer sizes, and NAPI budget.

Akmatori helps SRE teams cut through noisy network incidents by automating checks across link health, kernel tuning, and service symptoms.

Automate incident response and prevent on-call burnout with AI-driven agents!