WebSocket Connection Failed
You built a feature that needs real-time communication — chat, live notifications, collaborative editing, live dashboards. It works perfectly on localhost. You deploy it, and the browser console shows:
WebSocket connection to 'wss://yourapp.com/ws' failed:
Error during WebSocket handshake: Unexpected response code: 400
Or it connects but immediately disconnects. Or it works for exactly 60 seconds and then drops. Each of these is a different problem with a different fix.
How WebSockets work (briefly)
Normal HTTP is request-response: your browser asks for something, the server answers, connection closes. WebSockets upgrade an HTTP connection to a persistent, two-way channel. Both sides can send messages at any time without the overhead of new HTTP requests.
The upgrade from HTTP to WebSocket happens via a special handshake. If anything in the middle — a proxy, load balancer, CDN, or firewall — doesn't support or isn't configured for WebSocket upgrades, the connection fails.
Problem 1: Nginx isn't configured for WebSocket
This is the most common cause. If your app is behind nginx (which it probably is in production), nginx doesn't pass the WebSocket upgrade headers through to your backend by default, so the handshake never completes.
Fix: Add WebSocket upgrade headers to your nginx config:
location /ws {
    proxy_pass http://localhost:3000;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto $scheme;
}
# If your WebSocket path isn't /ws, adjust the location block
# If ALL your app traffic might be WebSocket, add these headers
# to your main location block too
The critical lines are proxy_http_version 1.1, Upgrade $http_upgrade, and Connection "upgrade". Without them, nginx drops the Upgrade and Connection headers (and speaks HTTP/1.0 to the backend by default), so your app never sees a WebSocket handshake and the client gets an error like the 400 above.
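One refinement from nginx's own WebSocket proxying guide: hardcoding Connection "upgrade" sends that header on every request through the location, even plain HTTP ones. A map lets nginx send it only when the client actually asked to upgrade:

```nginx
# In the http {} block, outside any server {}:
map $http_upgrade $connection_upgrade {
    default upgrade;
    ''      close;
}

# Then in the location block, instead of the hardcoded value:
#   proxy_set_header Connection $connection_upgrade;
```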
Problem 2: Connection drops after 60 seconds
If the WebSocket connects but dies after exactly 60 seconds of inactivity, it's a proxy timeout. Nginx, load balancers, and cloud platforms all have default timeouts for idle connections.
Fix option 1: Increase the timeout:
# In nginx
location /ws {
    proxy_pass http://localhost:3000;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";

    # Increase timeouts
    proxy_read_timeout 86400s;  # 24 hours
    proxy_send_timeout 86400s;
}
Fix option 2: Implement ping/pong (better). Send a heartbeat message every 30 seconds to keep the connection alive. Most WebSocket libraries support this.
// Server-side (ws library)
const WebSocket = require('ws');
const wss = new WebSocket.Server({ server }); // `server` is your http.Server

wss.on('connection', (ws) => {
  // Protocol-level ping every 30s; browsers reply with a pong automatically
  const interval = setInterval(() => {
    if (ws.readyState === WebSocket.OPEN) {
      ws.ping();
    }
  }, 30000);
  ws.on('close', () => clearInterval(interval));
});
// Client-side: browsers answer protocol-level pings automatically.
// If your server sends application-level 'ping' messages instead:
ws.onmessage = (event) => {
  if (event.data === 'ping') {
    ws.send('pong');
    return;
  }
  // handle normal messages
};
Problem 3: HTTPS/WSS mismatch
If your page loads over HTTPS, your WebSocket must use wss:// (WebSocket Secure), not ws://. Browsers block insecure WebSocket connections from secure pages — same as mixed content.
// Wrong (on an HTTPS page):
const ws = new WebSocket('ws://yourapp.com/ws');
// Right:
const ws = new WebSocket('wss://yourapp.com/ws');
// Best — auto-detect:
const protocol = window.location.protocol === 'https:' ? 'wss:' : 'ws:';
const ws = new WebSocket(`${protocol}//${window.location.host}/ws`);
Problem 4: Cloud platform limitations
Vercel doesn't support WebSockets at all in serverless functions. If your app is on Vercel, you need a separate WebSocket server (on Railway, Fly.io, or a VPS) or use a service like Pusher, Ably, or Supabase Realtime.
Heroku supports WebSockets but has a 55-second idle timeout. You need ping/pong heartbeats.
AWS ALB supports WebSockets natively. AWS API Gateway has a WebSocket API but it works differently from regular WebSockets.
Cloudflare proxies WebSocket connections; there's a WebSockets toggle under Network in the dashboard, so check that it's enabled if connections fail only when traffic goes through Cloudflare's proxy.
Problem 5: Load balancer sticky sessions
If you have multiple app servers behind a load balancer, a WebSocket connection needs to stay on the same server for its entire lifetime. If the load balancer routes the upgrade request to one server and subsequent messages to another, the connection breaks.
Fix: Enable sticky sessions (session affinity) on your load balancer, or use a shared pub/sub backend (Redis) so it doesn't matter which server handles the message.
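A sketch of the shared pub/sub approach. The helper below wires any pub/sub client pair — for example two ioredis connections, which are an assumption here, not something from this article — so every server delivers each message to its own local clients, regardless of which server published it:

```javascript
// Every server subscribes to one shared channel and publishes to it instead
// of sending to its own WebSocket clients directly.
function makeBroadcaster(pub, sub, deliverLocally) {
  sub.subscribe('messages');
  sub.on('message', (channel, payload) => {
    if (channel === 'messages') deliverLocally(payload);
  });
  return (payload) => pub.publish('messages', payload);
}

// Usage sketch with ioredis (pub and sub must be separate connections):
//   const Redis = require('ioredis');
//   const broadcast = makeBroadcaster(new Redis(), new Redis(), (msg) => {
//     for (const client of wss.clients) {
//       if (client.readyState === WebSocket.OPEN) client.send(msg);
//     }
//   });
```

With this in place, a client can be connected to any server and still receive every message, so sticky sessions stop being a correctness requirement.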
Reconnection logic
WebSocket connections will drop — network changes, server deploys, mobile users switching between WiFi and cellular. Your client needs to handle reconnection.
function connectWebSocket() {
  const ws = new WebSocket('wss://yourapp.com/ws');
  ws.onopen = () => console.log('Connected');
  ws.onclose = () => {
    console.log('Disconnected, reconnecting in 3s...');
    setTimeout(connectWebSocket, 3000);
  };
  ws.onerror = (err) => {
    console.error('WebSocket error:', err);
    ws.close();
  };
}
connectWebSocket();
Add exponential backoff (3s, 6s, 12s, ...) to avoid hammering the server if it's down.
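A sketch of that backoff, building on the reconnect function above (the base delay and cap are illustrative):

```javascript
// Double the wait after each failed attempt, reset it on success, and cap
// it so a long outage doesn't grow the delay forever.
function nextDelay(current, max = 60000) {
  return Math.min(current * 2, max); // 3s, 6s, 12s, ... capped at 60s
}

let delay = 3000;
function connectWithBackoff() {
  const ws = new WebSocket('wss://yourapp.com/ws');
  ws.onopen = () => { delay = 3000; }; // back to the base delay once connected
  ws.onclose = () => {
    setTimeout(connectWithBackoff, delay);
    delay = nextDelay(delay);
  };
  ws.onerror = () => ws.close();
}

// Kick off once in the browser (where WebSocket is a global)
if (typeof window !== 'undefined') connectWithBackoff();
```

Some teams also add a little random jitter to the delay so a fleet of clients doesn't reconnect in lockstep after a server restart.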
Why AI gets this wrong
AI generates the WebSocket code — the client connection, the server handler, the message passing. That all works on localhost because there's nothing between the client and server. In production, there's nginx, SSL termination, load balancers, CDNs, and platform-specific quirks. AI doesn't configure any of that because it's infrastructure, not code. The code is fine. The plumbing around it is the problem.
Real-time features not working in production?
MeatButton connects you with developers who can diagnose your WebSocket infrastructure, configure nginx and load balancers, and get your real-time features working reliably. First one's free.
Get MeatButton