How to Set Up Nginx as a Reverse Proxy in Front of Your App
Your app is running. You can see it on http://localhost:3000 when you SSH into your server. But when you go to your actual domain in a browser, nothing. Or you're hitting the app directly on port 3000 and you know that's not how real sites work.
You need a reverse proxy. Nginx is the standard choice. But if you've never set one up before, the config files look like hieroglyphics and every tutorial assumes you already know what you're doing.
Here's the whole thing explained in plain English.
What a reverse proxy actually is
Your app listens on some port. Maybe 3000 (Node/Next.js), maybe 8000 (Python/Django), maybe 5000 (Flask), maybe something else. That port is where your app expects to receive requests.
But when someone types yoursite.com into a browser, the browser connects on port 80 (HTTP) or port 443 (HTTPS). Not port 3000. Not port 8000. Ports 80 and 443.
A reverse proxy sits between the internet and your app. It listens on port 80/443, receives the request from the visitor, and forwards it to your app on port 3000 (or whatever). Your app sends the response back to Nginx, and Nginx sends it to the visitor.
Think of it as a receptionist. Visitors walk in the front door and talk to the receptionist. The receptionist walks back to the right office and gets the answer. The visitor never sees the office directly. They just talk to the receptionist.
That's all a reverse proxy is. Nginx is the receptionist.
Why you need one
You might be thinking "why not just run my app on port 80 directly?" A few reasons:
- Security. Your app probably wasn't built to face the raw internet. Nginx is. It's been battle-tested for decades against every kind of attack. It filters out garbage before your app ever sees it.
- SSL/HTTPS. Nginx handles the encryption. Your app just talks plain HTTP to Nginx on localhost. This is way simpler than making your app deal with certificates directly.
- Static files. Nginx serves images, CSS, and JavaScript files much faster than your app does. Your app is good at running code. Nginx is good at sending files. Let each do what it's good at.
- Multiple apps. You can run three different apps on one server, each on a different port, and Nginx routes traffic to the right one based on the domain name. Without Nginx, only one app can use port 80.
- Stability. If your app crashes, Nginx stays up and can return a clean 502 error page (or a custom page you configure) instead of a browser-level connection error. When your app comes back, Nginx starts forwarding to it again automatically.
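That last point is configurable. A minimal sketch of a custom error page, assuming you've saved a static page at /var/www/html/50x.html (the path and filename are illustrative, not required):

```nginx
# Inside your server block: serve a static page when the
# app behind the proxy returns or causes a 502/503/504.
error_page 502 503 504 /50x.html;

location = /50x.html {
    root /var/www/html;   # directory holding 50x.html
    internal;             # only reachable via error_page, not by URL
}
```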
The minimal working config
Here's the simplest Nginx reverse proxy config that actually works. I'll explain every line.
server {
    listen 80;
    server_name yoursite.com www.yoursite.com;

    location / {
        proxy_pass http://127.0.0.1:3000;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}
Line by line:
server { ... } — This is a "server block." It tells Nginx: "here's how to handle requests for a specific site." You can have multiple server blocks for multiple sites.
listen 80; — Listen on port 80, which is standard HTTP. After you add SSL, you'll also have listen 443 ssl; but start with 80 to make sure the basics work first.
server_name yoursite.com www.yoursite.com; — Which domain names this block applies to. If someone visits yoursite.com, this block handles it. Replace with your actual domain.
location / { ... } — "For any request path starting with /" which means every request. All of them. This is where the proxying happens.
proxy_pass http://127.0.0.1:3000; — The money line. "Forward this request to my app running on port 3000." 127.0.0.1 is localhost, meaning the same server. Change 3000 to whatever port your app actually uses.
proxy_set_header Host $host; — Pass along the original domain name. Without this, your app thinks every request came from "127.0.0.1" instead of "yoursite.com." Many apps need this to work correctly.
proxy_set_header X-Real-IP $remote_addr; — Tell your app the visitor's real IP address. Without this, your app thinks every request comes from Nginx (127.0.0.1). Logging, rate limiting, and geo features all break without this.
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; — Same idea as X-Real-IP but in a format that handles multiple proxies (like if you're also behind Cloudflare). Most frameworks look for this header.
proxy_set_header X-Forwarded-Proto $scheme; — Tells your app whether the original request was HTTP or HTTPS. Your app needs this to generate correct URLs and set secure cookies. Without it, your app might generate http:// links even though the visitor connected over HTTPS.
Where to put the config file
This depends on your Linux distribution, but the two common patterns are:
Pattern 1: sites-available + sites-enabled (Debian/Ubuntu)
Create the file in /etc/nginx/sites-available/:
sudo nano /etc/nginx/sites-available/yoursite.com
Paste your config. Save it. Then create a symlink in sites-enabled:
sudo ln -s /etc/nginx/sites-available/yoursite.com /etc/nginx/sites-enabled/
The sites-available folder holds all your configs. The sites-enabled folder holds symlinks to the ones that are actually active. This lets you disable a site without deleting its config.
Also: remove the default config if it exists, or it might interfere:
sudo rm /etc/nginx/sites-enabled/default
Pattern 2: conf.d (CentOS/RHEL/Amazon Linux)
Create the file directly in /etc/nginx/conf.d/:
sudo nano /etc/nginx/conf.d/yoursite.com.conf
Files here must end in .conf to be loaded. No symlinks needed.
Don't mix the two patterns. Use whichever one your system already has set up. Check by looking at what already exists in /etc/nginx/.
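The include lines in the main config file tell you which pattern your system uses. Here's a small helper sketch that reads them for you (the nginx_pattern function name is mine, not a standard tool; pass it the path to your nginx.conf, usually /etc/nginx/nginx.conf):

```shell
# Report which config-loading pattern an nginx.conf uses.
# Usage: nginx_pattern /etc/nginx/nginx.conf
nginx_pattern() {
  if grep -q 'sites-enabled' "$1"; then
    echo "sites-available/sites-enabled (Debian/Ubuntu style)"
  elif grep -q 'conf\.d' "$1"; then
    echo "conf.d (RHEL style)"
  else
    echo "neither -- read the include lines in $1 manually"
  fi
}
```

Note that Debian-style systems often include conf.d as well, which is why the function checks for sites-enabled first.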
Testing and reloading
After you save your config, always test it before applying:
sudo nginx -t
This checks for syntax errors. If it says syntax is ok and test is successful, you're good. If it reports an error, it will tell you the file and line number. Go fix it.
Then reload Nginx to apply the new config:
sudo systemctl reload nginx
Reload, not restart. Reload applies the new config without dropping any current connections. Restart kills everything and starts fresh. Use reload unless you have a reason not to.
Common mistakes that will bite you
The trailing slash in proxy_pass
This is the most common Nginx proxy mistake and it's brutal because the config looks fine.
# These are DIFFERENT:
proxy_pass http://127.0.0.1:3000;
proxy_pass http://127.0.0.1:3000/;
Without the trailing slash, Nginx passes the full original path to your app. With the trailing slash, Nginx strips the matched location prefix before forwarding. This matters a lot when you're using a location block with a path like /api/. A request to /api/users would be forwarded as /api/users without the slash, but as /users with it.
For a simple location / config, it doesn't matter. But the moment you start doing path-based routing, you need to understand this or you'll get 404s that make no sense.
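To make it concrete, here are the two versions of an /api/ location side by side (port and paths are illustrative; a real config would contain only one of them, since you can't declare the same location twice):

```nginx
# Without a trailing slash: the full original path is forwarded.
# GET /api/users  ->  the app sees GET /api/users
location /api/ {
    proxy_pass http://127.0.0.1:4000;
}

# With a trailing slash: the matched prefix (/api/) is stripped.
# GET /api/users  ->  the app sees GET /users
location /api/ {
    proxy_pass http://127.0.0.1:4000/;
}
```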
Wrong port number
Your proxy_pass port must match the port your app is actually listening on. This sounds obvious but it goes wrong constantly. Your app defaults to 8080. The tutorial you followed says 3000. You copy-paste the tutorial config. Nginx forwards to 3000, nobody's there, you get a 502 Bad Gateway.
Check what port your app is using:
sudo ss -tlnp | grep LISTEN
This shows every program listening on every port. Find your app and note the port.
Not passing headers
If you skip the proxy_set_header lines, your site might appear to work but you'll have subtle problems. Your app's logs will show every request coming from 127.0.0.1. Your rate limiter won't work (it thinks there's only one user). Your app might generate HTTP links instead of HTTPS links. Secure cookies might not be set correctly.
Always include the four header lines from the config above. They're not optional for any real deployment.
WebSocket support
If your app uses WebSockets (real-time features, chat, live updates, hot reloading during development), the basic proxy config will silently break them. WebSockets need an HTTP upgrade, and Nginx won't do that by default.
Add these lines inside your location block:
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "upgrade";
Without these, HTTP requests work fine but WebSocket connections fail. You'll see errors in your browser console about WebSocket connections being closed immediately. Your app might partially work but with missing real-time features.
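Putting it together, a WebSocket-capable location block looks like this (port 3000 as in the earlier example; swap in your app's actual port):

```nginx
location / {
    proxy_pass http://127.0.0.1:3000;

    # The four standard headers from earlier
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto $scheme;

    # WebSocket support: let the HTTP/1.1 Upgrade handshake through
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";
}
```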
Multiple apps on one server
This is one of the best reasons to use Nginx. Say you have a main site, an API, and an admin panel, each running as separate apps on different ports:
# Main site
server {
    listen 80;
    server_name yoursite.com www.yoursite.com;

    location / {
        proxy_pass http://127.0.0.1:3000;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}

# API
server {
    listen 80;
    server_name api.yoursite.com;

    location / {
        proxy_pass http://127.0.0.1:4000;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}

# Admin panel
server {
    listen 80;
    server_name admin.yoursite.com;

    location / {
        proxy_pass http://127.0.0.1:5000;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}
Each server block handles a different subdomain and forwards to a different port. One Nginx instance, one server, three apps. This is exactly how most small-to-medium deployments work.
You can put all three in one config file or split them into separate files. Either works.
Adding SSL with Let's Encrypt
Once your HTTP proxy is working, adding HTTPS is straightforward. Don't try to set up SSL and the proxy at the same time. Get the proxy working on port 80 first, then add SSL.
Install Certbot:
# Ubuntu/Debian
sudo apt install certbot python3-certbot-nginx
# CentOS/RHEL (certbot comes from the EPEL repository; enable EPEL first if you haven't)
sudo yum install certbot python3-certbot-nginx
Then run it:
sudo certbot --nginx -d yoursite.com -d www.yoursite.com
Certbot will automatically modify your Nginx config to add the SSL certificate, set up port 443, and redirect HTTP to HTTPS. It does all of it. You don't need to manually edit the config for SSL.
Certbot also sets up auto-renewal. Certificates expire every 90 days, but Certbot handles that. You can verify with:
sudo certbot renew --dry-run
That's it for SSL. Don't overthink it.
Why AI-generated configs often break
If you asked ChatGPT or Claude to write your Nginx config, there's a decent chance it gave you something that doesn't work. Here's why.
Extra directives you don't need. AI loves to add upstream blocks, load balancing parameters, caching directives, and buffer tuning. For a single app on a single server, you need none of that. Each extra directive is another thing that can be wrong, and the AI usually doesn't explain what any of it does. You end up with a 40-line config when 12 lines would work.
Wrong port number. The AI doesn't know what port your app runs on. It guesses. It usually guesses 3000 or 8080 because those are common defaults. If your app uses something else, the whole config is broken and the error message (502 Bad Gateway) doesn't tell you "wrong port." You have to figure that out yourself.
Missing WebSocket support. Most AI-generated configs don't include the WebSocket upgrade headers. If your app uses WebSockets, you won't discover this until you test real-time features and they silently fail. The AI never asks if you need WebSocket support.
Mixing config patterns. Some AI responses put configs in sites-available, some put them in conf.d, and some put them in the main nginx.conf. If the AI picks a different pattern than what your system uses, Nginx might ignore the file entirely. No error, no warning, just nothing happens.
Upstream blocks for single servers. The AI generates an upstream block with one server in it and then references it in proxy_pass. This is technically correct but adds unnecessary complexity. Upstream blocks are for load balancing across multiple servers. If you have one app on one server, proxy_pass http://127.0.0.1:3000; is all you need.
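For reference, here's what that over-built version looks like next to the simple one, so you can recognize it and strip it down (the two versions are alternatives, shown together only for comparison; "myapp" is an arbitrary upstream name the generator invents):

```nginx
# What AI tools often generate: an upstream block wrapping one server.
upstream myapp {
    server 127.0.0.1:3000;
}
server {
    listen 80;
    server_name yoursite.com;
    location / {
        proxy_pass http://myapp;
    }
}

# What a single app on a single server actually needs:
server {
    listen 80;
    server_name yoursite.com;
    location / {
        proxy_pass http://127.0.0.1:3000;
    }
}
```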
The deeper problem is that the AI can't see your server. It can't check whether Nginx is installed, what version it is, what config pattern your system uses, what port your app runs on, or whether the config it generates actually works. It generates text that looks like a valid config and hopes for the best.
Summary: the checklist
- Make sure your app is running and note what port it's on
- Create the Nginx config file in the right location for your system
- Set proxy_pass to point to your app's actual port
- Include all four proxy_set_header lines
- Add WebSocket headers if your app needs them
- Test with nginx -t
- Reload with systemctl reload nginx
- Verify it works on HTTP first
- Add SSL with certbot --nginx
If you get through that list and it works, you're done. You have a proper production-style deployment. Most of the internet runs on this exact setup.
Config not cooperating?
Nginx configs are one of those things where the AI gets you 80% of the way there and the last 20% costs you three hours. Install MeatButton and a real expert will look at your actual server, figure out what's wrong, and fix it. First one's free.
Get MeatButton