Nginx Error 99: Cannot Assign Requested Address - Fix Guide

Quick Reference:
# Check current ephemeral port range
cat /proc/sys/net/ipv4/ip_local_port_range
# Check TIME_WAIT connections
ss -tan state time-wait | wc -l
# Quick fix: expand port range
sudo sysctl -w net.ipv4.ip_local_port_range="1024 65535"
# Enable port reuse
sudo sysctl -w net.ipv4.tcp_tw_reuse=1
You're checking your nginx error logs and see this:
[crit] 92901#92901: *109407768 connect() to 85.234.95.10:10080 failed
(99: Cannot assign requested address) while connecting to upstream,
client: 103.26.176.33, server: _, request: "GET /login HTTP/2.0",
upstream: "http://85.234.95.10:10080/login"
Error 99 (EADDRNOTAVAIL) means the kernel cannot allocate a local address for the outbound connection. This typically happens when you've exhausted available ephemeral ports or have too many connections in TIME_WAIT state.
Understanding the Problem
When nginx connects to an upstream server, it needs a local port for the connection. The kernel assigns ports from the ephemeral range (typically 32768-60999). Each connection uses a unique combination of:
(local_ip, local_port, remote_ip, remote_port)
If all available ports are in use or in TIME_WAIT, new connections fail with error 99.
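To see why this bites, here is a quick back-of-the-envelope sketch, assuming the default ephemeral range and Linux's fixed 60-second TIME_WAIT:

```shell
# With a single upstream, remote_ip and remote_port are fixed, so only
# local_port varies: the ephemeral range caps concurrent connections, and
# TIME_WAIT (60s on Linux) caps the sustainable new-connection rate.
PORTS=$((60999 - 32768 + 1))                   # default range, inclusive
echo "$PORTS concurrent connections max"       # 28232
echo "$((PORTS / 60)) new connections/sec max" # ~470 before exhaustion
```

Past roughly 470 new connections per second to one upstream, ports enter TIME_WAIT faster than they are released, and error 99 follows.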
Diagnosis
Check Ephemeral Port Range
cat /proc/sys/net/ipv4/ip_local_port_range
# 32768 60999
# Calculate available ports
echo $((60999 - 32768 + 1))
# 28232 ports available
Check TIME_WAIT Connections
# Count TIME_WAIT sockets
ss -tan state time-wait | wc -l
# Count connections to specific upstream
ss -tan state time-wait | grep "85.234.95.10:10080" | wc -l
# Full connection state breakdown
ss -tan | awk 'NR>1 {print $1}' | sort | uniq -c | sort -rn
Check Used Ports
# Count all established connections
ss -tan state established | wc -l
# Check if approaching limit
ss -s
If TIME_WAIT count is close to or exceeds your ephemeral port range, you've found the problem.
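The two checks can be combined into one quick comparison; this sketch reads the live range from /proc and skips the ss header line:

```shell
# Compare the current TIME_WAIT count against the ephemeral range size
read LOW HIGH < /proc/sys/net/ipv4/ip_local_port_range
TW=$(ss -tan state time-wait | tail -n +2 | wc -l)
echo "TIME_WAIT: $TW of $((HIGH - LOW + 1)) ephemeral ports"
```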
Solutions
Solution 1: Expand Ephemeral Port Range
The default range of ~28,000 ports may not be enough for high-traffic servers.
# Temporary (until reboot)
sudo sysctl -w net.ipv4.ip_local_port_range="1024 65535"
# Permanent - add to /etc/sysctl.conf
echo "net.ipv4.ip_local_port_range = 1024 65535" | sudo tee -a /etc/sysctl.conf
sudo sysctl -p
This increases available ports from ~28,000 to ~64,000.
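One caveat when extending the range down to 1024: the kernel may now hand out ports that your own daemons listen on. The ip_local_reserved_ports sysctl excludes specific ports from ephemeral allocation; the ports below are placeholders, list your own:

```shell
# Reserve listening ports so they are never used as ephemeral source ports
sudo sysctl -w net.ipv4.ip_local_reserved_ports="8080,9000-9100"
```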
Solution 2: Enable TCP TIME_WAIT Reuse
Allow reusing sockets in TIME_WAIT for new outbound connections:
# Temporary
sudo sysctl -w net.ipv4.tcp_tw_reuse=1
# Permanent
echo "net.ipv4.tcp_tw_reuse = 1" | sudo tee -a /etc/sysctl.conf
sudo sysctl -p
Note: tcp_tw_reuse only affects outbound connections and is generally safe to enable; it requires TCP timestamps (net.ipv4.tcp_timestamps=1, the default). Do NOT use tcp_tw_recycle: it broke connections from clients behind NAT and was removed entirely in kernel 4.12.
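You can verify the timestamp prerequisite before relying on the setting:

```shell
# tcp_tw_reuse is a no-op without TCP timestamps; both should report 1
sysctl net.ipv4.tcp_timestamps net.ipv4.tcp_tw_reuse
```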
Solution 3: Enable Keepalive to Upstream
Reduce connection churn by reusing connections to upstream servers:
upstream backend {
    server 85.234.95.10:10080;

    # Keep connections alive
    keepalive 100;
    keepalive_timeout 60s;
    keepalive_requests 1000;
}

server {
    location / {
        proxy_pass http://backend;

        # Required for keepalive
        proxy_http_version 1.1;
        proxy_set_header Connection "";
    }
}
This dramatically reduces the number of connections created and destroyed.
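To confirm reuse is working, watch the per-state breakdown for that one upstream (address taken from the error log above); with keepalive in effect, ESTABLISHED should hold steady while TIME_WAIT stops climbing:

```shell
# Per-state connection counts to a single upstream
ss -tan 'dst 85.234.95.10:10080' | awk 'NR>1 {print $1}' | sort | uniq -c
```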
Solution 4: Tune tcp_fin_timeout
A common recommendation is to lower tcp_fin_timeout, often described as "reducing TIME_WAIT duration". Strictly speaking, it controls how long an orphaned connection may sit in FIN_WAIT_2; the TIME_WAIT duration itself is hardcoded to 60 seconds in the Linux kernel and cannot be changed via sysctl. Lowering it still helps by freeing resources held by half-closed connections:
# Check current value (default 60 seconds)
cat /proc/sys/net/ipv4/tcp_fin_timeout
# Reduce to 30 seconds
sudo sysctl -w net.ipv4.tcp_fin_timeout=30
# Permanent
echo "net.ipv4.tcp_fin_timeout = 30" | sudo tee -a /etc/sysctl.conf
Caution: Don't set this too low; peers need time to finish closing. For TIME_WAIT pressure itself, rely on tcp_tw_reuse and upstream keepalive instead.
Solution 5: Use Multiple Upstream IPs
If your upstream has multiple IPs, use them all:
upstream backend {
    server 85.234.95.10:10080;
    server 85.234.95.11:10080;
    server 85.234.95.12:10080;
    keepalive 100;
}
Each (local_ip, local_port, remote_ip, remote_port) tuple is unique, so more remote IPs = more available connections.
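The arithmetic, assuming the default ephemeral range:

```shell
# Each extra upstream address multiplies the usable 4-tuple space
PORTS=$((60999 - 32768 + 1))   # 28232 ports in the default range
UPSTREAMS=3
echo $((PORTS * UPSTREAMS))    # 84696 possible concurrent connections
```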
Solution 6: Bind to Multiple Local IPs
If your nginx server has multiple IPs, use them:
upstream backend {
    server 85.234.95.10:10080;
}

server {
    # Bind outbound connections to the address the client connected to; if
    # nginx listens on several local IPs, source addresses spread across them
    proxy_bind $server_addr;
}
Or explicitly split across upstreams:
split_clients "$remote_addr" $backend_addr {
    50% 192.168.1.10;
    50% 192.168.1.11;
}

server {
    location / {
        proxy_bind $backend_addr;
        proxy_pass http://backend;
    }
}
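To check that outbound connections really spread across the local addresses (192.168.1.10/.11 from the example above):

```shell
# Count outbound connections per local source IP toward the upstream
ss -tan 'dst 85.234.95.10' | awk 'NR>1 {split($4,a,":"); print a[1]}' | sort | uniq -c
```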
Recommended Sysctl Settings
For high-traffic nginx reverse proxies:
# /etc/sysctl.conf
# Expand ephemeral port range
net.ipv4.ip_local_port_range = 1024 65535
# Enable TIME_WAIT reuse for outbound connections
net.ipv4.tcp_tw_reuse = 1
# Reduce TIME_WAIT duration
net.ipv4.tcp_fin_timeout = 30
# Increase connection tracking (if using conntrack)
net.netfilter.nf_conntrack_max = 1048576
# Cap the number of TIME_WAIT sockets; beyond this the kernel destroys them
net.ipv4.tcp_max_tw_buckets = 1440000
# Increase socket buffer sizes
net.core.rmem_max = 16777216
net.core.wmem_max = 16777216
net.ipv4.tcp_rmem = 4096 87380 16777216
net.ipv4.tcp_wmem = 4096 65536 16777216
Apply with:
sudo sysctl -p
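Then confirm the values actually took effect:

```shell
# Read back the tuned keys (each should print its new value)
sysctl net.ipv4.ip_local_port_range net.ipv4.tcp_tw_reuse net.ipv4.tcp_fin_timeout
```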
Monitoring
Set up monitoring to catch port exhaustion before it causes errors:
#!/bin/bash
# /usr/local/bin/check-ports.sh
TIMEWAIT=$(ss -tan state time-wait | tail -n +2 | wc -l)
PORT_RANGE=$(awk '{print $2 - $1 + 1}' /proc/sys/net/ipv4/ip_local_port_range)
USAGE_PCT=$((TIMEWAIT * 100 / PORT_RANGE))
if [ "$USAGE_PCT" -gt 80 ]; then
    echo "WARNING: Ephemeral port usage at ${USAGE_PCT}%"
    echo "TIME_WAIT: $TIMEWAIT / $PORT_RANGE"
fi
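The script can run every minute from cron; the crontab line and log path below are examples:

```
# crontab -e
* * * * * /usr/local/bin/check-ports.sh >> /var/log/check-ports.log 2>&1
```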
Or with Prometheus node_exporter:
# Alert when approaching port exhaustion.
# node_sockstat_TCP_tw is a standard node_exporter metric; node_exporter
# does not export the ip_local_port_range bounds, so divide by your
# configured range size (28232 for the default 32768-60999):
- alert: EphemeralPortExhaustion
  expr: node_sockstat_TCP_tw / 28232 > 0.8
  for: 5m
  labels:
    severity: warning
  annotations:
    summary: "High TIME_WAIT socket count"
Root Cause Analysis
This error often indicates:
- Traffic spike: Sudden increase in requests
- Slow upstream: Upstream taking too long, connections pile up
- Missing keepalive: Every request creates a new connection
- Small port range: Default range insufficient for load
- Connection leak: Application not closing connections properly
Check your traffic patterns and upstream response times:
# Upstream response time from nginx logs
# (assumes $upstream_response_time is the last field of your log_format)
awk '{print $NF}' /var/log/nginx/access.log | sort -n | tail -20
# Requests per minute (field 4 is the timestamp; cut keeps date:HH:MM)
awk '{print $4}' /var/log/nginx/access.log | cut -d: -f1-3 | uniq -c | tail -20
Prevention Checklist
| Action | Impact |
|---|---|
| Enable keepalive to upstream | High - reduces connection churn by 90%+ |
| Expand port range | Medium - doubles available ports |
| Enable tcp_tw_reuse | Medium - allows faster port recycling |
| Monitor TIME_WAIT count | Early warning before failures |
| Load test with realistic traffic | Catch issues before production |
Conclusion
Error 99 "Cannot assign requested address" is almost always caused by ephemeral port exhaustion from high connection rates combined with TIME_WAIT accumulation. The best fix is enabling keepalive to your upstream servers, which reduces connection churn dramatically. Combine with expanded port ranges and tcp_tw_reuse for maximum headroom.
Dealing with nginx issues at scale? Akmatori AI agents can automatically diagnose connection issues, tune kernel parameters, and resolve incidents before they impact users.
