/r/nginx
Nginx (pronounced "engine x") is a free, open-source, high-performance HTTP server and reverse proxy, as well as an IMAP/POP3 proxy server. First released by Igor Sysoev in 2004, Nginx now hosts over 14% of websites overall, and 35% of the most visited sites on the internet. Nginx is known for its stability, rich feature set, simple configuration, and low resource consumption.
Note: If your post doesn't appear in the "new" queue after a couple of minutes, it's probably stuck in the spam filter. Send the mods a message and they'll get that fixed for you.
Hello all,
I am pulling my hair out here; I've spent way too long trying to get this to work. I am a novice at nginx and web development, so bear with me.
I had a WebSocket set up between my React frontend and my Flask backend. It worked great locally.
I want to deploy this and so have set up nginx for a reverse proxy.
Here is my nginx.conf file:
worker_processes 1;

events {
    worker_connections 1024;
}

http {
    server {
        listen 80;

        # Route requests to React frontend
        location / {
            proxy_pass http://frontend:6969;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Proto $scheme;
        }

        # Route API requests to Flask backend
        location /api/ {
            proxy_pass http://flask_api:5000/api/;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Proto $scheme;
        }

        # Route WebSocket traffic to Flask backend
        location /socket.io/ {
            proxy_pass http://flask_api:5000;
            proxy_http_version 1.1;
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection "Upgrade";
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Proto $scheme;
        }
    }
}
On my React frontend, I have pointed my WebSocket connection at http://<server_ip>/socket.io/; thus, from my understanding, all client requests to /socket.io/ are sent to http://flask_api:5000, which is what worked when I ran it locally without nginx.
When I load the websocket on the client, I get the following logs:
WebSocket connection to 'ws://192.168.0.69/socket.io/?EIO=4&transport=websocket' failed: WebSocket is closed before the connection is established.
On my nginx and flask_api, I get the following logs:
nginx | 192.168.0.13 - - [17/Nov/2024:01:55:25 +0000] "GET /_next/static/YD3dZ0yFNKi16Ra3iW-FH/_buildManifest.js HTTP/1.1" 200 867 "http://192.168.0.69/audit/FMP0001/CHEP/DM001" "Mozilla/5.0 (Linux; Android 6.0; Nexus 5 Build/MRA58N) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/131.0.0.0 Mobile Safari/537.36"
flask_api | (1) accepted ('172.24.0.7', 36260)
flask_api | XrLFapFjUd7XW-g1AAAA: Sending packet OPEN data {'sid': 'XrLFapFjUd7XW-g1AAAA', 'upgrades': [], 'pingTimeout': 20000, 'pingInterval': 25000}
flask_api | XrLFapFjUd7XW-g1AAAA: Received request to upgrade to websocket
flask_api | XrLFapFjUd7XW-g1AAAA: Upgrade to websocket successful
nginx | 192.168.0.13 - - [17/Nov/2024:01:55:26 +0000] "GET /socket.io/?EIO=4&transport=websocket HTTP/1.1" 101 81 "-" "Mozilla/5.0 (Linux; Android 6.0; Nexus 5 Build/MRA58N) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/131.0.0.0 Mobile Safari/537.36"
flask_api | 192.168.0.13,172.24.0.7 - - [17/Nov/2024 01:55:26] "GET /socket.io/?EIO=4&transport=websocket HTTP/1.1" 200 0 0.690318
flask_api | (1) accepted ('172.24.0.7', 36262)
flask_api | CTDxDrM8POStykh8AAAB: Sending packet OPEN data {'sid': 'CTDxDrM8POStykh8AAAB', 'upgrades': [], 'pingTimeout': 20000, 'pingInterval': 25000}
flask_api | CTDxDrM8POStykh8AAAB: Received request to upgrade to websocket
flask_api | CTDxDrM8POStykh8AAAB: Upgrade to websocket successful
flask_api | CTDxDrM8POStykh8AAAB: Received packet MESSAGE data 0/socket.io/,
flask_api | CTDxDrM8POStykh8AAAB: Sending packet MESSAGE data 4/socket.io/,"Unable to connect"
nginx | 192.168.0.13 - - [17/Nov/2024:01:55:27 +0000] "GET /socket.io/?EIO=4&transport=websocket HTTP/1.1" 101 123 "-" "Mozilla/5.0 (Linux; Android 6.0; Nexus 5 Build/MRA58N) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/131.0.0.0 Mobile Safari/537.36"
From this, it looks like the client is communicating with my WebSocket; however, the connection is rejected.
ANY help is GREATLY appreciated!
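One observation from the logs above: the packet `0/socket.io/,` is a Socket.IO CONNECT to a namespace literally called `/socket.io/`, and the server's reply `"Unable to connect"` looks like a namespace rejection. If the client calls `io('http://<server_ip>/socket.io/')`, Socket.IO treats the path portion as a namespace, so `io('http://<server_ip>')` with the default path may be what's intended. On the nginx side, a commonly suggested shape for the Socket.IO location (timeout values here are examples, not from the thread) is:

```
location /socket.io/ {
    proxy_pass http://flask_api:5000;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "Upgrade";
    proxy_set_header Host $host;

    # WebSockets are long-lived; nginx's default 60s read timeout
    # will otherwise close an idle tunnel.
    proxy_read_timeout 1h;
    proxy_send_timeout 1h;

    # Forward frames immediately instead of buffering them.
    proxy_buffering off;
}
```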
So I have a Django project where I have to manage routes with nginx; they are in two different repos. Now I want to add CloudWatch logs in AWS, and the project should be deployed on AWS Fargate. What are the steps for dev and staging/prod? I am using Docker. How do I deploy the project to AWS Fargate and see the logs in CloudWatch?
I accidentally discovered that if my nginx config file contains a location such as location /git_shenanigans/ {} or location /backend_test1 {}, and I try to reach the URL mydomainname.org/git/ or mydomainname.org/backend/, the browser shows the main page of my site.
Why does it happen? Is it documented?
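It is documented behavior: for prefix locations, nginx picks the longest matching prefix, and a URI that matches none of the specific prefixes falls through to `location /`. So mydomainname.org/git/ never matches `location /git_shenanigans/`; it lands in the catch-all, which serves the main page. A minimal illustration:

```
server {
    listen 80;

    # Only URIs that actually start with /git_shenanigans/ match here.
    location /git_shenanigans/ {
        return 200 "shenanigans\n";
    }

    # /git/ matches no longer prefix, so it lands in the catch-all,
    # which is why the main page is served.
    location / {
        root /var/www/site;
    }
}
```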
New to Nginx here. We have Azure B2C as our identity solution, and I am currently trying to authenticate traffic to upstream servers using the auth_request module.
I would prefer to isolate the b2c authentication to one server, as opposed to each upstream running its own authentication.
Digging has yielded few resources, and in my experience that means I am either doing something nobody has done before or approaching the problem from the wrong angle. I think it is the latter.
Anybody have any experience with a setup like this who can offer some guidance?
I'm kinda new to nginx and therefore not fully familiar with what I need to search for to find this. I'm currently migrating websites from a Windows IIS host to a Debian Nginx system. However, we have some users that repeatedly spam a single URL (500+ requests per hour). On Windows, I just added their IP to the firewall for 48h via a small C# console application. But I assume Nginx might have something built in to prevent this? In our case, Nginx works as a proxy for the ASP.NET website, which is running in a container.
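nginx does have something built in for this: the limit_req module. It rate-limits rather than firewall-bans, but it stops URL spam without an external tool. A sketch (zone name, rate, and the backend port are example values):

```
# In the http block: track clients by IP, allow an average of 10 req/s.
limit_req_zone $binary_remote_addr zone=perip:10m rate=10r/s;

server {
    location / {
        # Allow short bursts, then reject with 429 instead of queueing.
        limit_req zone=perip burst=20 nodelay;
        limit_req_status 429;
        proxy_pass http://127.0.0.1:5000;  # the ASP.NET container (example port)
    }
}
```

For actual timed IP bans like the old 48h firewall rule, fail2ban watching the nginx access log is the usual companion.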
https://github.com/patternhelloworld/docker-blue-green-runner
- No Unpredictable Errors in Reverse Proxy and Deployment
- Zero-downtime Deployment from Your .env & Dockerfile
- Easily supports proxy configurations by only configuring .env at the root:
- HTTP (nginx) → HTTP (your container)
- HTTPS (nginx) → HTTPS (your container)
- HTTPS (nginx) → HTTP (your container)
- Track Git SHA for Your Running Container
I have a backend app that runs on multiple ports on multiple machines, e.g. the app answers on 50 ports on each machine, and there are 100 machines running it.
Currently, if I list all 100 machines and 50 ports in the upstream (5,000 server lines), all the nginx workers on the separate load balancers hit 99% CPU and stay there. If I take chunks of 500 and use those on my load balancers, they perform fine, with CPU below 50% most of the time.
Is there a way to configure nginx for such a large set of upstream backends, or is this a case where I need to add another reverse proxy in the middle, so each of the 100 backends would run nginx and only proxy to the ports on that machine?
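The two-tier layout described in the question is a common answer at this scale. A sketch of the per-machine nginx (names and ports are examples), which leaves the central load balancers with 100 upstream entries instead of 5,000:

```
# On each of the 100 backend machines: fan out to the 50 local ports.
upstream local_app {
    least_conn;
    server 127.0.0.1:5001;
    server 127.0.0.1:5002;
    # ... one line per local port, up to the 50th
}

server {
    # The central load balancers now talk to this single port.
    listen 8080;
    location / {
        proxy_pass http://local_app;
    }
}
```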
nginx/1.22.1
I am using nginx as a reverse proxy for an OPNsense firewall's web UI. OPNsense has various dashboard widgets, some of which display live graphs, for example this CPU usage graph.
When viewed through my reverse proxy, the graph doesn't update, like this:
I have examined the HTTP GET request as captured on the firewall's network interface when loading this graph, both through nginx and not, and there are differences, but I don't know what to do with them.
direct:
GET /api/diagnostics/cpu_usage/stream HTTP/1.1
Host: opnsense.example.org
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:128.0) Gecko/20100101 Firefox/128.0
Accept: text/event-stream
Accept-Language: en-US,en;q=0.5
Accept-Encoding: gzip, deflate
Referer: http://opnsense.example.org/ui/core/dashboard
DNT: 1
Connection: keep-alive
Cookie: PHPSESSID=xxxxxxxxxxxxxxxxxxxx
Sec-GPC: 1
Priority: u=4
Pragma: no-cache
Cache-Control: no-cache
nginx:
GET /api/diagnostics/cpu_usage/stream HTTP/1.0
Host: 172.31.0.1
Connection: close
user-agent: Mozilla/5.0 (X11; Linux x86_64; rv:128.0) Gecko/20100101 Firefox/128.0
accept: text/event-stream
accept-language: en-US,en;q=0.5
accept-encoding: gzip, deflate, br, zstd
referer: https://opnsense.example.org/ui/core/dashboard
dnt: 1
sec-fetch-dest: empty
sec-fetch-mode: cors
sec-fetch-site: same-origin
sec-gpc: 1
priority: u=4
pragma: no-cache
cache-control: no-cache
cookie: PHPSESSID=xxxxxxxxx
/etc/nginx/conf.d/opnsense.conf:
server {
    listen 443 ssl http2;
    server_name opnsense.example.org;

    location / {
        proxy_pass http://172.31.0.1;
    }
}
Any recommendations as to how I can modify opnsense.conf to get this graph working through nginx?
edit: I had the two GET requests labelled backwards.
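The captured headers point at the cause: through the proxy, the request arrives as HTTP/1.0 with Connection: close, and nginx also buffers responses by default, both of which break a text/event-stream (server-sent events) endpoint. A sketch of the usual SSE adjustments (the timeout value is an example):

```
server {
    listen 443 ssl http2;
    server_name opnsense.example.org;

    location / {
        proxy_pass http://172.31.0.1;
        # Speak HTTP/1.1 to the backend and drop "Connection: close".
        proxy_http_version 1.1;
        proxy_set_header Connection "";
        # Stream events as they arrive instead of buffering them.
        proxy_buffering off;
        proxy_cache off;
        # The event stream stays open far longer than the 60s default.
        proxy_read_timeout 1h;
    }
}
```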
Hi,
I set up a proxy to an arbitrary website (in this case example.com). Here's my code:
worker_processes 1;

events {
    worker_connections 1024;
}

http {
    server {
        listen 90;
        server_name localhost;

        location / {
            proxy_pass example.com;
        }
    }
}
I want to be able to navigate to this site via the proxy, log in, close my current browser session, open a new one, and still be logged in when I navigate to the proxy. Is this possible?
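Whether the login survives a browser restart depends mainly on the upstream issuing a persistent (non-session) cookie; nginx's job is just to let the browser keep presenting that cookie. Two things worth noting about the config above: proxy_pass requires a scheme (`proxy_pass example.com;` alone fails `nginx -t`), and the upstream's Set-Cookie domain may need rewriting so the browser stores it for the proxy's hostname. A sketch:

```
location / {
    # proxy_pass needs a scheme.
    proxy_pass http://example.com;
    proxy_set_header Host example.com;
    # Rewrite the cookie's domain so the browser stores it for the
    # hostname it is actually visiting (here: localhost).
    proxy_cookie_domain example.com localhost;
}
```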
I developed an Android app that makes calls to my API. In my backend, I use NGINX, which forwards requests to an HTTP IP (a microservice in Docker).
The issue I'm facing is that some of these requests from the Android app return errors such as SSL Handshake, Timed out, or Connection closed by peer.
To troubleshoot the problem, I pointed my app at a simple Node.js API hosted on Vercel. That setup never generates an error and always returns quickly and successfully. This leads me to believe the issue may be related to some configuration in NGINX.
Note: When using Postman, the APIs that pass through NGINX do not produce any errors.
Can anyone help?
I have a couple of servers configured with SSL in nginx with a wildcard SSL cert defined in nginx.conf. All of these sites load fine in a browser and the certificate shows valid.
I also have a default config file with the intention that any client not specifically using one of the defined server names should get a 404 error, but when I open https://random_name.example.org in a browser, I get redirected to one of my named servers.
My default config looks like this:
server {
    listen 80 default_server;
    server_name _;
    return 404;
}

server {
    listen 443 ssl;
    server_name _;
    return 404;
}
What am I doing wrong?
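For `listen 443`, a `server_name _;` block only catches unmatched names if it is marked `default_server` (or happens to be the first SSL server block nginx loads); otherwise nginx falls back to the first one defined, which matches the redirect being observed. A sketch of the fix:

```
server {
    listen 80 default_server;
    server_name _;
    return 404;
}

server {
    # "default_server" makes this block answer every HTTPS request
    # whose SNI name matches no other server_name.
    listen 443 ssl default_server;
    server_name _;
    # The wildcard cert declared in nginx.conf is inherited here, so
    # the TLS handshake succeeds and the client then receives the 404.
    return 404;
}
```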
I have a PHP app running in a dockerized environment. For my /uploads route, which accepts POST requests, I want a client_max_body_size of 20M, and for the rest of the routes I want 1M. I have defined client_max_body_size 1M in the http block, but I am having difficulty setting 20M for the /uploads route only.
So far it only works if I define client_max_body_size in both the /uploads and ~ ^/index\.php location blocks, but that is not a solution: with client_max_body_size 20M inside the ~ ^/index\.php block, every route in my app accepts 20M, since everything gets passed to the index.php location. (I think that if I define the body size only in /uploads, the request is then passed to the index.php location block, where the limit resets to the 1M defined in the http block.)
Essentially, I want 20M of client_max_body_size ONLY for /uploads. (The example below also doesn't work; it's just an example of what I would like to achieve.)
location /uploads {
    try_files $uri $uri/ /index.php$is_args$args;
    client_max_body_size 20M;
}

location / {
    try_files $uri $uri/ /index.php$is_args$args;
}

location ~ ^/index\.php {
    include fastcgi_params;
    fastcgi_pass php:9000;
    fastcgi_index index.php;
    fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
    fastcgi_buffer_size 16k;
    fastcgi_buffers 8 16k;
    fastcgi_busy_buffers_size 32k;
    fastcgi_max_temp_file_size 0;
}
I'm trying to make a stream reverse proxy for port 7777, and I'm getting the 'nginx: [emerg] "stream" directive is not allowed here' error. I believe I need to add something to my .conf file, but I'm not really sure what. This is my sites-enabled file:
stream {
    server {
        # Port number the reverse proxy is listening on
        listen 7777;
        # The original server address
        proxy_pass ip:7777;
    }
}

stream {
    server {
        # Port number the reverse proxy is listening on
        listen 7878;
        # The original server address
        proxy_pass ip:7878;
    }
}
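The files under sites-enabled are included from inside the `http { }` block of /etc/nginx/nginx.conf, and `stream { }` is only allowed at the top level, hence the [emerg]. The usual fix is to declare the stream context in nginx.conf and include per-site files that contain only `server { }` blocks (the directory name below is an example):

```
# /etc/nginx/nginx.conf -- top level, outside the http { } block
stream {
    include /etc/nginx/streams-enabled/*.conf;
}

# /etc/nginx/streams-enabled/game.conf -- server blocks only,
# with no surrounding stream { } wrapper:
server {
    listen 7777;
    proxy_pass ip:7777;
}
```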
Hey, guys.
I'm trying to run QGIS with QWC2 and QWC2_admin_gui as Docker containers.
Everything works except QWC2_admin.
Docker:
qwc-admin-gui:
  image: sourcepole/qwc-admin-gui:latest-2024-lts
  environment:
    <<: *qwc-service-variables
    # Don't enable JWT CSRF protection for admin gui, it conflicts with CSRF protection offered by Flask-WTF
    JWT_COOKIE_CSRF_PROTECT: 'False'
    # When setting user info fields, make sure to create corresponding columns (i.e. "surname", "first_name", "street", etc) in qwc_config.user_infos
    # USER_INFO_FIELDS: '[{"title": "Surname", "name": "surname", "type": "text", "required": true}, {"title": "First name", "name": "first_name", "type": "text", "required": true}, {"title": "Street", "name": "street", "type": "text"}, {"title": "Z
    #TOTP_ENABLED: 'False'
    GROUP_REGISTRATION_ENABLED: 'True'
    #IDLE_TIMEOUT: 600
    DEFAULT_LOCALE: 'en'
    MAIL_SUPPRESS_SEND: 'True'
    MAIL_DEFAULT_SENDER: 'from@example.com'
  ports:
    - "0.0.0.0:5031:9090"
  volumes:
    - ./pg_service.conf:/srv/pg_service.conf:ro
    - ./volumes/config:/srv/qwc_service/config:ro
    # required by themes plugin:
    # - ./volumes/config-in:/srv/qwc_service/config-in:rw
    # - ./volumes/qwc2:/qwc2
    # - ./volumes/qgs-resources:/qgs-resources
    # - ./volumes/info-templates:/info_templates

# qwc-registration-gui:
#   image: sourcepole/qwc-registration-gui:latest-2024-lts
#   environment:
#     <<: *qwc-service-variables
#     SERVICE_MOUNTPOINT: '/registration'
#     DEFAULT_LOCALE: 'en'
#     ADMIN_RECIPIENTS: 'admin@example.com'
#     MAIL_SUPPRESS_SEND: 'True'
#     MAIL_DEFAULT_SENDER: 'from@example.com'
#   # ports:
#   #   - "127.0.0.1:5032:9090"
#   volumes:
#     - ./pg_service.conf:/srv/pg_service.conf:ro
nginx.conf:
server {
    listen 80;
    server_name localhost;

    proxy_read_timeout 90;
    proxy_redirect off;
    proxy_set_header Host $http_host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto $scheme;

    # Disables emitting nginx version on error pages and in the “Server” response header field.
    # http://nginx.org/en/docs/http/ngx_http_core_module.html#server_tokens
    server_tokens off;

    location /auth/ {
        proxy_pass http://qwc-auth-service:9090;
    }
    location /ows {
        proxy_pass http://qwc-ogc-service:9090;
    }
    location /api/v1/featureinfo {
        proxy_pass http://qwc-feature-info-service:9090;
    }
    location /api/v1/legend {
        proxy_pass http://qwc-legend-service:9090;
    }
    location /api/v1/permalink {
        proxy_pass http://qwc-permalink-service:9090;
    }
    location /elevation {
        proxy_pass http://qwc-elevation-service:9090;
    }
    location /api/v1/mapinfo/ {
        proxy_pass http://qwc-mapinfo-service:9090;
    }
    location /api/v2/search {
        proxy_pass http://qwc-fulltext-search-service:9090;
    }
    location /api/v1/data {
        proxy_pass http://qwc-data-service:9090;
    }
    # location /api/v1/print {
    #     proxy_pass http://qwc-print-service:9090;
    # }
    # location /api/v1/ext {
    #     proxy_pass http://qwc-ext-service:9090;
    # }
    location /qwc_admin {
        proxy_pass http://qwc-admin-gui:9090;
    }
    # location /registration {
    #     proxy_pass http://qwc-registration-gui:9090;
    # }
    location / {
        proxy_pass http://qwc-map-viewer:9090;
    }

    #error_page 404 /404.html;

    # redirect server error pages to the static page /50x.html
    error_page 500 502 503 504 /50x.html;
    location = /50x.html {
        root /usr/share/nginx/html;
    }
}
When I try to access http://server:5031, http://server:5031/, http://server:5031/qwc_admin, or http://server:5031/qwc_admin/, I always get ERR_TOO_MANY_REDIRECTS.
The URL looks like this after the redirect:
Does anybody have an idea what the cause could be?
I have been trying to reverse proxy so the contents of docs.example.com/a.php appear at example.com/a.php.
I am facing this error right now: Refused to apply style from 'https://example.com/css/property.css?v=0.02' because its MIME type ('text/html') is not a supported stylesheet MIME type, and strict MIME checking is enabled.
'https://docs.example.com/css/property.css?v=0.02' exists and loads the CSS file.
When I further expand the error, it displays an HTML file.
This is my configuration
server {
    listen 443 ssl;
    server_name example.com;
    root /var/www/html/example/public;
    index index.php index.html;

    ssl_certificate /etc/ssl/certs/example.com.crt;
    ssl_certificate_key /etc/ssl/private/example.com.key;

    # Proxy Pass Settings
    location /a {
        proxy_ssl_server_name on;
        proxy_set_header Host docs.example.com;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_pass https://docs.example.com/a.php;
    }

    location /css/ {
        proxy_ssl_server_name on;
        proxy_set_header Host docs.example.com;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_pass https://docs.example.com/css$request_uri;
    }

    location / {
        try_files $uri $uri/ /index.php?$query_string;
        autoindex off;
    }

    location ~ \.php$ {
        include snippets/fastcgi-php.conf;
        fastcgi_pass unix:/run/php/php8.2-fpm.sock;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        include fastcgi_params;
    }

    location ~ /\.ht {
        deny all;
    }

    add_header Content-Security-Policy "default-src 'self'; style-src 'self' 'unsafe-inline' https://docs.example.com https://fonts.googleapis.com https://tagmanager.google.com https://use.fontawesome.com ; script-src 'self' https://ajax.googleapis.com 'unsafe-inline' https://www.google-analytics.com; img-src 'self' data: https://example.com https://www.example.com; font-src 'self' https://use.fontawesome.com https://fonts.googleapis.com https://fonts.gstatic.com;";
}
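One thing worth checking in the /css/ block: $request_uri already contains the leading /css/, so `proxy_pass https://docs.example.com/css$request_uri` requests /css/css/property.css upstream. That 404s, and the HTML error page comes back as text/html, which is exactly the MIME error shown. Passing the URI through unchanged may be all that's needed:

```
location /css/ {
    proxy_ssl_server_name on;
    proxy_set_header Host docs.example.com;
    proxy_set_header X-Forwarded-Proto $scheme;
    # No URI part on proxy_pass: the original /css/... path is
    # forwarded as-is, avoiding the doubled /css/css/ prefix.
    proxy_pass https://docs.example.com;
}
```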
Hi guys. I'm new to nginx. I'm trying to set up nginx because my brother keeps procrastinating. I don't want him to access YouTube and Facebook... and some corn sites... I know there is a way to block them, but I want the redirecting way. So this is my nginx.conf and it's not working at all. I already tried restarting nginx, but it's still not working. Please help me.
#user nobody;
worker_processes 1;

#error_log logs/error.log;
#error_log logs/error.log notice;
#error_log logs/error.log info;

#pid logs/nginx.pid;

events {
    worker_connections 1024;
}

http {
    include mime.types;
    default_type application/octet-stream;

    #log_format main '$remote_addr - $remote_user [$time_local] "$request" '
    #                '$status $body_bytes_sent "$http_referer" '
    #                '"$http_user_agent" "$http_x_forwarded_for"';

    #access_log logs/access.log main;

    sendfile on;
    #tcp_nopush on;

    #keepalive_timeout 0;
    keepalive_timeout 65;

    #gzip on;

    server {
        listen 80;
        server_name facebook.com youtube.com
        return 301 https://google.com$request_uri;

        #charset koi8-r;
        #access_log logs/host.access.log main;

        location / {
            root html;
            index index.html index.htm;
        }

        #error_page 404 /404.html;

        # redirect server error pages to the static page /50x.html
        error_page 500 502 503 504 /50x.html;
        location = /50x.html {
            root html;
        }

        # proxy the PHP scripts to Apache listening on 127.0.0.1:80
        #location ~ \.php$ {
        #    proxy_pass http://127.0.0.1;
        #}

        # pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000
        #location ~ \.php$ {
        #    root html;
        #    fastcgi_pass 127.0.0.1:9000;
        #    fastcgi_index index.php;
        #    fastcgi_param SCRIPT_FILENAME /scripts$fastcgi_script_name;
        #    include fastcgi_params;
        #}

        # deny access to .htaccess files, if Apache's document root
        # concurs with nginx's one
        #location ~ /\.ht {
        #    deny all;
        #}
    }

    # another virtual host using mix of IP-, name-, and port-based configuration
    #server {
    #    listen 8000;
    #    listen somename:8080;
    #    server_name somename alias another.alias;
    #    location / {
    #        root html;
    #        index index.html index.htm;
    #    }
    #}

    # HTTPS server
    #server {
    #    listen 443 ssl;
    #    server_name localhost;
    #    ssl_certificate cert.pem;
    #    ssl_certificate_key cert.key;
    #    ssl_session_cache shared:SSL:1m;
    #    ssl_session_timeout 5m;
    #    ssl_ciphers HIGH:!aNULL:!MD5;
    #    ssl_prefer_server_ciphers on;
    #    location / {
    #        root html;
    #        index index.html index.htm;
    #    }
    #}
}
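Two things stand out with a config like the one above. First, a classic gotcha: the server_name line needs a trailing semicolon, or `nginx -t` rejects the file. Second, the redirect only ever triggers if the brother's machine resolves facebook.com and youtube.com to the nginx box (e.g. via the hosts file or local DNS), and browsers will still fail on the HTTPS versions because nginx has no valid certificate for those names. The redirect block itself can be as small as:

```
server {
    listen 80;
    # Note the trailing semicolon after the names.
    server_name facebook.com www.facebook.com youtube.com www.youtube.com;
    return 301 https://google.com$request_uri;
}
```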
This is the 8G Firewall version for Nginx, official link from Jeff Starr
How do I decode the JWT and attach one of its claims to the headers? I am not trying to verify the token, so I don't want to provide my JWT secret in the nginx conf.
One solution I've looked at is this repo, but it seems to verify the token, and I don't see a way to skip the verification and just extract the claims.
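One way to do this without verifying is the njs module: base64url-decode the payload segment yourself and expose a claim via `js_set`. A sketch (the claim `sub`, the header name, and the file path are examples, not a vetted setup); the nginx side would be `js_import jwt from conf.d/jwt.js;`, `js_set $jwt_sub jwt.subClaim;`, and `proxy_set_header X-User-Id $jwt_sub;`, with the file ending in `export default { subClaim };` as njs requires.

```javascript
// conf.d/jwt.js -- decodes the JWT payload; does NOT verify the signature.
function subClaim(r) {
    try {
        // "Authorization: Bearer <header>.<payload>.<signature>"
        var token = r.headersIn.Authorization.split(' ')[1];
        var payload = JSON.parse(
            Buffer.from(token.split('.')[1], 'base64url').toString());
        return payload.sub || '';
    } catch (e) {
        // Missing or malformed token: send an empty header value.
        return '';
    }
}
```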
https://github.com/patternhelloworld/docker-blue-green-runner
No Unpredictable Errors in Reverse Proxy and Deployment
- From Scratch: the run.sh script is designed to simplify deployment: "With your .env, project, and a single Dockerfile, simply run 'bash run.sh'." The script covers the entire process, from Dockerfile build to server deployment, from scratch.
- Focus on zero-downtime deployment on a single machine: for deployments involving more machines, traditional Layer 4 (L4) load-balancer servers could be utilized.
Since freenginx forked in February 2024 there has been a lot of discussion, but I am interested in recent experience reports from people who have used freenginx in production for a longer period of time. How does it compare so far? Anything?
Edit: I can see that the codebase has already diverged a bit (see https://freenginx.org/en/CHANGES vs https://nginx.org/en/CHANGES). It looks to me like the bugfixes from nginx are properly being applied to freenginx as well, as visible in 1.27.1, but I would love to hear other people's thoughts and analyses.
I've been having this issue for over a year. Any time I make a change to the HTML file, even if I restart nginx, restart my PC, or redownload nginx, it never updates and keeps serving the old one, even if I permanently delete the file. Nothing fixed it. However, I found out that if I change the port it'll update, but I can never go back to an old port or it reverts to the old website. It used to just randomly update, but now it's stuck. Nothing I can do besides change the port.
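The port detail is the giveaway: the browser cache is keyed by origin, and the port is part of the origin, so switching ports gives a fresh cache while the old port keeps serving the stale copy. A force reload (Ctrl+F5) or DevTools with "Disable cache" should confirm it. Server-side, a development server block can refuse to let browsers cache at all; a sketch:

```
location / {
    root html;
    index index.html index.htm;
    # Development setting: never let the browser reuse a stored copy.
    add_header Cache-Control "no-store, must-revalidate";
    # If the docroot lives on a VM shared folder, sendfile can also
    # serve stale file contents; disabling it is a common workaround.
    sendfile off;
}
```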
I am hosting multiple docker containers inside an EC2 Ubuntu instance.
The overall interactions are something like the below.
I am running my images like the following (with different host ports each time, of course)
sudo docker run -d -p 3010:3000 -p 5010:5000 --name myimage-instance-1 myimage
This image has 2 Node applications running on ports 3000 and 5000.
My Nginx configuration (/etc/nginx/sites-enabled/default) is as follows
location /04d182f47cbf625d6/preview {
    proxy_pass http://localhost:5010;
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto $scheme;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection upgrade;
    proxy_set_header Accept-Encoding gzip;
}

location /04d182f47cbf625d6 {
    proxy_pass http://localhost:3010;
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto $scheme;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection upgrade;
    proxy_set_header Accept-Encoding gzip;
}
In this configuration, when I visit https://mywebsite.com/04d182f47cbf625d6, I can view the first application. But when I visit https://mywebsite.com/04d182f47cbf625d6/preview, the second application does not load; I get a blank webpage with the title reflected correctly. This indicates that some part of the app on port 5000 inside the container is accessible from outside the container, but the rest of the application is not loading.
I have checked the Nginx access and error logs but do not see any errors.
On checking the URL for port 5010, I get the following headers, both from inside the Docker container and from the EC2 instance.
HTTP/1.1 200 OK
X-Powered-By: Express
Access-Control-Allow-Origin: *
Access-Control-Allow-Methods: *
Access-Control-Allow-Headers: *
Content-Type: text/html; charset=utf-8
Accept-Ranges: bytes
Content-Length: 1711
ETag: W/"6af-+M4OSPFNZpwKBdFEydrj+1+V5xo"
Vary: Accept-Encoding
Date: Sun, 03 Nov 2024 08:28:37 GMT
Connection: keep-alive
Keep-Alive: timeout=5
First time I am trying Nginx for reverse proxying, what am I doing wrong? Are my expectations incorrect?
How does nginx -s know where the pid file is?
Let's say there are two subsequent commands:
- nginx -c <some_config>, which sets a custom pid file
- nginx -s reload, which needs to know the pid
How does the new nginx -s process know which pid to send the HUP to?
Is it possible to run nginx -c <config_dir> -s reload? That would be the only way I could figure out.
(I'm trying to replicate an nginx architecture on another server.)
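For the record: `nginx -s` is itself a new nginx process that parses the configuration (the default path, or whatever -c points to; note that -c takes a config file, not a directory) to find the pid file, then sends the signal to that pid. So the same -c has to accompany -s. A sketch with example paths:

```
# Start with a custom config that sets its own pid path
nginx -c /opt/app/nginx.conf

# Reload: nginx parses the same config to locate the pid file,
# then sends SIGHUP to that pid
nginx -c /opt/app/nginx.conf -s reload

# Equivalent without "-s": signal the master process directly
kill -HUP "$(cat /opt/app/run/nginx.pid)"
```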
Hello everyone,
I am experiencing slow response times with my NGINX setup, and I would appreciate any insights or suggestions for troubleshooting.
NGINX Proxy Manager: installed in an LXC container on Proxmox.
I have a subdomain set up on duckdns.org for my home environment (like home.mydomain.duckdns.org). Every time I try to access this subdomain, there is a delay of 3-5 seconds before the page appears.
# run nginx in foreground
#daemon off;
pid /run/nginx/nginx.pid;
user npm;

# Set number of worker processes automatically based on number of CPU cores.
worker_processes auto;

# Enables the use of JIT for regular expressions to speed-up their processing.
pcre_jit on;

error_log /data/logs/fallback_error.log warn;

# Includes files with directives to load dynamic modules.
include /etc/nginx/modules/*.conf;

events {
    include /data/nginx/custom/events[.]conf;
}

http {
    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    sendfile on;
    server_tokens off;
    tcp_nopush on;
    tcp_nodelay on;
    client_body_temp_path /tmp/nginx/body 1 2;
    keepalive_timeout 90s;
    proxy_connect_timeout 90s;
    proxy_send_timeout 90s;
    proxy_read_timeout 90s;
    ssl_prefer_server_ciphers on;
    gzip on;
    proxy_ignore_client_abort off;
    client_max_body_size 2000m;
    server_names_hash_bucket_size 1024;
    proxy_http_version 1.1;
    proxy_set_header X-Forwarded-Scheme $scheme;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header Accept-Encoding "";
    proxy_cache off;
    proxy_cache_path /var/lib/nginx/cache/public levels=1:2 keys_zone=public-cache:30m max_size=192m;
    proxy_cache_path /var/lib/nginx/cache/private levels=1:2 keys_zone=private-cache:5m max_size=1024m;

    log_format proxy '[$time_local] $upstream_cache_status $upstream_status $status - $request_method $scheme $host "$request_uri" [Client $remote_addr] [Length $body_bytes_>
    log_format standard '[$time_local] $status - $request_method $scheme $host "$request_uri" [Client $remote_addr] [Length $body_bytes_sent] [Gzip $gzip_ratio] "$http_user_>

    access_log /data/logs/fallback_access.log proxy;

    # Dynamically generated resolvers file
    include /etc/nginx/conf.d/include/resolvers.conf;

    # Default upstream scheme
    map $host $forward_scheme {
        default http;
    }

    # Real IP Determination
    # Local subnets:
    set_real_ip_from 10.0.0.0/8;
    set_real_ip_from 172.16.0.0/12; # Includes Docker subnet
    set_real_ip_from 192.168.0.0/16;
    # NPM generated CDN ip ranges:
    include /etc/nginx/conf.d/include/ip_ranges.conf;
    # always put the following 2 lines after ip subnets:
    real_ip_header X-Real-IP;
    real_ip_recursive on;

    # Custom
    include /data/nginx/custom/http_top[.]conf;

    # Files generated by NPM
    include /etc/nginx/conf.d/*.conf;
    include /data/nginx/default_host/*.conf;
    include /data/nginx/proxy_host/*.conf;
    include /data/nginx/redirection_host/*.conf;
    include /data/nginx/dead_host/*.conf;
    include /data/nginx/temp/*.conf;

    # Custom
    include /data/nginx/custom/http[.]conf;
}

stream {
    # Files generated by NPM
    include /data/nginx/stream/*.conf;
    # Custom
    include /data/nginx/custom/stream[.]conf;
}

# Custom
include /data/nginx/custom/root[.]conf;
What could be causing the slow response times when accessing my NGINX server?
Thank you for any help you can provide!
How do I force nginx to always return a specific status code (and the error page associated with it, if there is one) to all requests?
EDIT: SOLVED! See first comment
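For anyone landing here later, a minimal sketch of one way to do it (the status code and paths are examples): return the code from a catch-all location and map it to a custom page.

```
server {
    listen 80 default_server;
    server_name _;

    # Every request gets a 503 plus the page below.
    location / {
        return 503;
    }

    error_page 503 /maintenance.html;
    location = /maintenance.html {
        root /usr/share/nginx/html;
        internal;  # only reachable via error_page
    }
}
```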
Hi friends,
I'm trying to set up a reverse proxy from subdomain.example.com to an SPA being served on 127.0.0.1:8000. After some struggle I swapped my SPA for a simple process that listens on port 8000 and sends a success response, which I can confirm by running curl "127.0.0.1:8000".
The relevant chunk in my Nginx config looks like this:
server {
    listen 80;
    server_tokens off;
    server_name ;

    location / {
        proxy_set_header Host $host;
        proxy_pass ;
        proxy_set_header Host $host;
        proxy_redirect off;
        add_header Cache-Control no-cache;
        expires 0;
    }
}
For some reason this doesn't work. Does anyone have any ideas as to why?
What do I need to change for this to work?
And what changes will I have to make once this works and I move back to my SPA, so that all requests to this subdomain go to the same endpoint, with routing handled on the client?
Many thanks 💙
I'm trying to run my services on a Raspberry Pi, so I've got two services running on different ports. Is there a way of configuring nginx to do
www.mydomain.blah/service1 -> localhost:9000 and www.mydomain.blah/service2 -> localhost:5000?
Thanks all, Nigel
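This is a standard path-based setup. A sketch (note the trailing slashes, which make nginx replace the matched /serviceN/ prefix with / before proxying):

```
server {
    listen 80;
    server_name www.mydomain.blah;

    # /service1/foo  ->  http://localhost:9000/foo
    location /service1/ {
        proxy_pass http://localhost:9000/;
    }

    # /service2/foo  ->  http://localhost:5000/foo
    location /service2/ {
        proxy_pass http://localhost:5000/;
    }
}
```

This works cleanly only if each app generates relative links or can be told its base path; otherwise its assets will 404 outside the prefix.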
This is just a quick post with some instructions and information about getting the benefits of a server proxy to hide the real external IP of servers while also getting around the common problem of all clients joining the server to have the IP of the proxy server.
After spending a long while looking around the internet, I could not find a simple post, forum thread, or video achieving this goal, only many posts from people with the same question. A quick overview of how the network stack will look: Client <-Cloudflare-> Proxy Server (IP that will be given to clients) <--> Home Network/Server Host's Network (IP is hidden from people connecting to the game server).
In short, you give people an IP or domain address for the proxy server, and their requests are forwarded to the game server on a different system/network, keeping that IP hidden while retaining the client's IP address on connect, so IP bans and server logs are still usable. Useful in games like Minecraft, Rust, DayZ, Unturned, Factorio, Arma, Ark, and others.
Disclaimer: I am not a network security expert, and this post focuses on setting up the proxy and letting outside clients connect to the servers. I recommend looking into Suricata and CrowdSec for some extra security on the proxy and even your home network.
If a game needs supporting ports beyond the main connection port (like Minecraft voice-chat mods), follow the steps again, skipping the DNS and SRV records.
Let me know if you have any questions or recommendations.
Tools/Programs used:
Instructions:
Info:
Two sets of ports:
Game ports: 27000-27999 (for actual game server)
Proxy ports: 28000-28999 (related ports for game servers i.e 28001 -> 27001)
Unfortunately, SNI cannot be used with most if not all game servers speaking raw TCP or UDP, as there is no TLS handshake to read the server name from. This means you will need to port forward each game port from the machine running the game servers to your proxy server and also create SRV records.
If there is another way to have only a single port open and still reverse proxy these game servers, please let me know; I could not find one.
Step 1:
Set new Cloudflare DNS for server address GAMESERVER.exampledomain.com
Point it at the Oracle VM with Cloudflare proxy ON or OFF
E.X: mc1.exampledomain.com 111.1.11.11 proxy=ON
Step 2:
Make an SRV record with priority 0, weight 5, and port RELATED-PROXY-PORT (the port that relates to the final game port, i.e. 28000 (proxy port) -> 27000 (game server port)).
Configure _GAMENAME._PROTOCOL(TCPorUDP).GAMESERVER
E.X: _minecraft._tcp.mc1
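In zone-file form (using the hypothetical domain and target from Step 1), the record from Steps 1-2 would look like:

```
; priority 0, weight 5, port = RELATED-PROXY-PORT, target = the DNS name from Step 1
_minecraft._tcp.mc1.exampledomain.com. 300 IN SRV 0 5 28000 mc1.exampledomain.com.
```

Once it has propagated you can confirm it resolves with `dig SRV _minecraft._tcp.mc1.exampledomain.com`.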
Step 3.1:
Make sure RELATED-PROXY-PORT tcp/udp is open and accepting in Oracle VM cloud network settings
Source CIDR: 0.0.0.0/0
IP Protocol: TCP or UDP
Source Port: ALL
Destination Port: RELATED-PROXY-PORT
Step 3.2:
Make sure RELATED-PROXY-PORT tcp/udp is open on the Oracle VM using UFW:
sudo ufw allow 28000/tcp
sudo ufw allow 28000/udp
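If you expect several servers, UFW also accepts port ranges (a protocol is required for range rules), so you don't have to open ports one at a time. Adjust the range to the proxy ports you actually use:

```
sudo ufw allow 28000:28010/tcp
sudo ufw allow 28000:28010/udp
```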
Step 4.1 (ONE time setup):
Install Nginx:
sudo apt install nginx -y
sudo systemctl start nginx
sudo systemctl enable nginx
Step 4.2:
Open Nginx config in the proxy server
sudo nano /etc/nginx/nginx.conf
Add this section to the bottom:
####
stream {
    # Listening ports for server forwarding
    server {
        # Port to listen on (where the SRV record sends the request) CHANGEME
        listen 28000;
        # Optional config
        proxy_timeout 10m;
        proxy_connect_timeout 3s;
        # Necessary: send the PROXY protocol header so go-mmproxy can
        # restore the real client IP on the game server side
        proxy_protocol on;
        # Upstream to forward traffic to CHANGEME
        proxy_pass GAME-SERVER-HOST-EXTERNAL-IP:28000;
    }
    server {
        # Port to listen on (where the SRV record sends the request) CHANGEME
        listen RELATED-PROXY-PORT;
        # Optional config
        proxy_timeout 10m;
        proxy_connect_timeout 3s;
        # Necessary: send the PROXY protocol header so go-mmproxy can
        # restore the real client IP on the game server side
        proxy_protocol on;
        # Upstream to forward traffic to CHANGEME
        proxy_pass GAME-SERVER-HOST-EXTERNAL-IP:RELATED-PROXY-PORT;
    }
}
####
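A hypothetical refactor of the block above: naming the upstream once means each game host's IP lives in a single place when you add more listen ports for it.

```nginx
stream {
    # Name the game host once (placeholder name; swap in your real IP)
    upstream game_host {
        server GAME-SERVER-HOST-EXTERNAL-IP:28000;
    }
    server {
        listen 28000;
        proxy_timeout 10m;
        proxy_connect_timeout 3s;
        proxy_protocol on;
        proxy_pass game_host;
    }
}
```

Note that stream servers listen on TCP by default; UDP games need the udp flag on the listen directive, i.e. `listen 28000 udp;`.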
Step 4.3:
Adding new servers:
On the Oracle VM, open the config again: sudo nano /etc/nginx/nginx.conf
Add a new server{} block with a new listen port and proxy_pass.
Step 4.4:
Validate the config and restart Nginx:
sudo nginx -t
sudo systemctl restart nginx
Step 5.1:
Create port forwards for the PROXY PORTS in your firewall.
In pfSense add a NAT rule:
Interface: WAN
Address Family: IPv4
Protocol: TCP/UDP
Source: VPN_Proxy_Server (alias or IP)
Source Port: Any
Destination: WAN address
Destination port: RELATED-PROXY-PORT
Redirect Target IP: Internal-Game-server-VM-IP
Redirect port: RELATED-PROXY-PORT
Step 5.2:
Open the PROXY PORT inside the game server system (the system where the game server actually runs):
sudo ufw allow 28000/tcp
sudo ufw allow 28000/udp
Step 6.1 (ONE time setup):
Install go-mmproxy: https://github.com/path-network/go-mmproxy
sudo apt install golang
go install github.com/path-network/go-mmproxy@latest
Set up the routing rules go-mmproxy needs (so traffic with spoofed client source addresses routes back correctly):
sudo ip rule add from 127.0.0.1/8 iif lo table 123
sudo ip route add local 0.0.0.0/0 dev lo table 123
sudo ip -6 rule add from ::1/128 iif lo table 123
sudo ip -6 route add local ::/0 dev lo table 123
Step 6.2:
Create a go-mmproxy launch command:
sudo ~/go/bin/go-mmproxy -l 0.0.0.0:RelatedProxyPort -4 127.0.0.1:GameServerPort -6 [::1]:GameServerPort -p tcp -v 2
Notes: check the GitHub page for more detail on the command. If you need UDP instead of TCP, change -p tcp to -p udp (run a second instance if you need both).
Logging can be changed from -v 0 to -v 2 (-v 2 has a nice side effect of showing if any malicious IPs are scanning your servers, so you can then ban them on the proxy server).
If using CrowdSec, ban with:
sudo cscli decisions add --ip INPUTBADIP --duration 10000h
This bans the IP for 10000 hours (roughly 417 days, so about a year).
The game server port will be the port the actual game server uses, or the one you defined in Pterodactyl.
If you are going to run these in the background there is no need for logs; use -v 0.
Step 7.1 (ONE time setup):
Create an auto-launch script so each go-mmproxy instance runs in the background at startup:
sudo nano /usr/local/bin/start_go_mmproxy.sh
Paste this inside:
#!/bin/bash
# Wait for the network to come up before adding routes
sleep 15
ip rule add from 127.0.0.1/8 iif lo table 123
ip route add local 0.0.0.0/0 dev lo table 123
ip -6 rule add from ::1/128 iif lo table 123
ip -6 route add local ::/0 dev lo table 123
# Start the first instance
nohup /home/node1/go/bin/go-mmproxy -l 0.0.0.0:28000 -4 127.0.0.1:27000 -6 [::1]:27000 -p tcp -v 0 &
# Start the second instance
nohup /home/node1/go/bin/go-mmproxy -l 0.0.0.0:28001 -4 127.0.0.1:27001 -6 [::1]:27001 -p tcp -v 0
Step 7.2 (ONE time setup):
sudo chmod +x /usr/local/bin/start_go_mmproxy.sh
Step 7.3:
Every time you want a new server, or to forward a new port to a server, create a new go-mmproxy command and add it to this file. Don't forget the & at the end of each command so the next one runs, EXCEPT on the last command.
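If the file grows, the per-server lines can be generated from a list of proxy:game port pairs instead of being copy-pasted. A hedged sketch (it echoes the commands rather than launching them; the binary path is the one assumed in Step 6.1):

```shell
#!/bin/bash
# Hypothetical generator for the launch commands: one proxy:game pair per
# entry instead of a hand-written block per server.
MMPROXY=/home/node1/go/bin/go-mmproxy   # assumed install path from Step 6.1
PAIRS="28000:27000 28001:27001"
for pair in $PAIRS; do
    proxy_port=${pair%%:*}   # part before the colon
    game_port=${pair##*:}    # part after the colon
    # Swap echo for nohup ... & to actually launch each instance
    echo "$MMPROXY -l 0.0.0.0:$proxy_port -4 127.0.0.1:$game_port -6 [::1]:$game_port -p tcp -v 0"
done
```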
Step 8.1 (ONE time setup):
sudo nano /etc/systemd/system/go-mmproxy.service
Paste this inside of the new service:
####
[Unit]
Description=Start go-mmproxy after boot
After=network.target
[Service]
ExecStart=/bin/bash /usr/local/bin/start_go_mmproxy.sh
Restart=on-failure
[Install]
WantedBy=multi-user.target
####
Step 8.2 (ONE time setup):
sudo systemctl daemon-reload
Step 8.3 (ONE time setup):
sudo systemctl start go-mmproxy.service
sudo systemctl enable go-mmproxy.service
Hello all. I am new to nginx. I can deny access based on IP or network, but I can't get it to work to block access if someone is coming from a specific domain. I tried several solutions I found on Google but nothing seems to work: it either errors out or I can still access the site. I managed to make this work in httpd but not in nginx. Can someone point me in the right direction?
Below is my config from /etc/nginx/nginx.conf. Very simple setup.
server {
    listen 80 default_server;
    listen [::]:80 default_server;
    server_name _;
    root /usr/share/nginx/html;

    deny 192.168.0.22;
    allow all;

    # Load configuration files for the default server block.
    include /etc/nginx/default.d/*.conf;

    location / {
    }

    error_page 404 /404.html;
    location = /40x.html {
    }

    error_page 500 502 503 504 /50x.html;
    location = /50x.html {
    }
}
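Nginx has no built-in "deny by client domain" (deny/allow only take addresses and CIDR blocks). If "coming from a specific domain" means requests referred from that site, one common approach is matching the Referer header; a hedged sketch using a hypothetical blocked.example domain:

```nginx
location / {
    # Return 403 when the Referer header points at the unwanted domain.
    # Note: this only filters the Referer header a browser sends; it
    # cannot identify the connecting client's own domain name.
    if ($http_referer ~* "blocked\.example") {
        return 403;
    }
}
```

If you instead need to block by the client's own hostname, nginx core cannot do a reverse-DNS lookup per request; the usual workaround is resolving the domain to its IP addresses and using deny, as you already do for IPs.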