/r/nginx

Nginx (pronounced "engine x") is a free, open-source, high-performance HTTP server and reverse proxy, as well as an IMAP/POP3 proxy server. First released by Igor Sysoev in 2004, Nginx now hosts over 14% of websites overall, and 35% of the most visited sites on the internet. Nginx is known for its stability, rich feature set, simple configuration, and low resource consumption.

Note: If your post doesn't appear in the "new" queue after a couple of minutes, it's probably stuck in the spam filter. Send the mods a message and they'll get that fixed for you.

/r/nginx

12,360 Subscribers

1

Help solve 'unknown "http" variable' - I'm completely new to this

Server is a pretty small computer set up pretty much only for Jellyfin, running Ubuntu 24.04.1 LTS, Nginx 1.24.0, and Jellyfin 10.10.5+ubu2404. Jellyfin itself is working well, both on its own computer and over LAN, but in trying to use nginx to access it via a Squarespace subdomain (only using Squarespace since I already had a main site for other things) I seem to have hit a roadblock. I've been following this guide, but after copying the example /etc/nginx/conf.d/jellyfin.conf and running sudo nginx -t, I only get the error 'unknown "http" variable' and 'nginx: configuration file /etc/nginx/nginx.conf test failed'. I can go to jellyfin . mydomain . com (without the spaces obviously) and see the 'Welcome to nginx!' page, but not my Jellyfin. The base conf file is completely unedited, and I just cannot for the life of me figure out the error.

For some reason the code blocks do not want to function correctly, so I've put my /nginx.conf and /conf.d/jellyfin.conf in a GitHub repo for access. Please tell me someone here knows what's going on; I feel like I'm losing my mind.
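
For context, nginx only knows request-header variables of the form $http_name (e.g. $http_host, $http_upgrade), so 'unknown "http" variable' usually means a bare $http, or a stray $ in front of http:// in a URL, is sitting somewhere in one of the included files. As a point of comparison, here is a minimal sketch of the kind of conf.d/jellyfin.conf such guides build, assuming Jellyfin is on the same machine and listening on its default port 8096:

    server {
        listen 80;
        server_name jellyfin.mydomain.com;   # placeholder for the real subdomain

        location / {
            proxy_pass http://127.0.0.1:8096;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Proto $scheme;

            # WebSocket support
            proxy_http_version 1.1;
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection "upgrade";
        }
    }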

2 Comments
2025/02/03
06:05 UTC

1

Open source nginx instance manager

Is there an open-source alternative to NGINX Instance Manager?

2 Comments
2025/02/01
19:48 UTC

0

Found a proxy list on GitHub (updated every 5 minutes), sorted out the valid proxies with a checker, and tried to make a request through one. Whichever site I request (I won't specify which), I get this response. What is it, guys? Can you help?

https://preview.redd.it/ofazh89suhge1.png?width=1629&format=png&auto=webp&s=425419e92bfa46f6d70ef88a16f52aa785e3a6ad

REMOTE_ADDR = 35.159.194.126

REMOTE_PORT = 51251

REQUEST_METHOD = GET

REQUEST_URI = http://www.nbuv.gov.ua/

REQUEST_TIME_FLOAT = 1738401340.89743

REQUEST_TIME = 1738401340

HTTP_HOST = www.nbuv.gov.ua

HTTP_PROXY-AUTHORIZATION = Basic dXNlcm5hbWU6cGFzc3dvcmQ=

HTTP_USER-AGENT = curl/8.9.1

HTTP_ACCEPT = */*

HTTP_PROXY-CONNECTION = Keep-Alive

3 Comments
2025/02/01
09:15 UTC

1

Multiple CORS locations causing strangeness with PHP-FPM

Running NGINX 1.14.1 on AlmaLinux 9, all updated. I want to enable CORS for *.mydomain and http://localhost for development. I do this using if statements in the NGINX config, as shown at the bottom. HOWEVER, if I simply enable the if statements in the location / {} block, then PHP-FPM starts throwing weird errors about "File not found." and, from the nginx error logs, "Primary script unknown".

Uncommenting everything CORS-related and adding these to the location / {} block causes this to happen:

    set $cors_origin '';
    # Dynamically allow localhost origins with any port
    if ($http_origin ~* (http://localhost.*)) {
        set $cors_origin $http_origin;
        }
    if ($http_origin ~* (https://.*\.shareto\.app)) {
        set $cors_origin $http_origin;
        }

I've heard that "if is Evil" in Nginx; what are the best practices for enabling CORS on multiple domains in NGINX? (e.g. *.mydomain, localhost, *.affiliatedomain, etc.)

/etc/nginx/conf.d/mydomain.conf:

server { 
  server_name mydomain;
  root /var/www/docroot;
  index fallback.php;
  location / {
    index fallback.php;
    try_files $uri /fallback.php?$args;
    fastcgi_split_path_info ^(.+\.php)(/.+)$;
    fastcgi_pass unix:/run/php-fpm/www.sock;
    fastcgi_index /fallback.php;

    fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
    fastcgi_param SCRIPT_NAME $fastcgi_script_name;

    include fastcgi_params;

    set $cors_origin '';
    # Dynamically allow localhost origins with any port
    if ($http_origin ~* (http://localhost.*)) {
        set $cors_origin $http_origin;
        }
    if ($http_origin ~* (https://.*\.shareto\.app)) {
        set $cors_origin $http_origin;
        }

    # Add CORS headers
    add_header 'Access-Control-Allow-Origin' "$cors_origin" always;
    add_header 'Access-Control-Allow-Origin' * always;

    add_header 'Access-Control-Allow-Methods' 'GET, POST, OPTIONS' always;
    add_header 'Access-Control-Allow-Headers' 'Content-Type, Authorization' always;

    if ($request_method = OPTIONS) {
        return 204;
        }
    }
  listen 443 ssl; # managed by Certbot
  # SNIP # 
  }
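
On the "if is Evil" question above, the approach usually recommended for multi-origin CORS is a map in the http context, which removes if from the origin selection entirely. A sketch, reusing the post's localhost and shareto.app patterns (the file name is hypothetical):

    # /etc/nginx/conf.d/cors-map.conf (hypothetical) - must live in the http {} context
    map $http_origin $cors_origin {
        default                          "";
        ~^http://localhost(:[0-9]+)?$    $http_origin;   # any localhost port
        ~^https://.+\.shareto\.app$      $http_origin;
    }

    # Then, inside the server/location block, only the add_header lines remain:
    # add_header 'Access-Control-Allow-Origin' "$cors_origin" always;
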
0 Comments
2025/02/01
01:30 UTC

0

I want to host another website on the same DigitalOcean instance using nginx

I'm already running one site on nginx using Docker Compose. I created another one with the "same settings" and created two locations, one for each website, but when I try to access the second location in the browser it doesn't show up. Can anyone help me?

```

server {
    listen 80;
    root /var/www/html;
    index index.html;
    error_page 404 /index.html;

    location / {
        root /var/www/html/front-vistas/dist;
        proxy_pass http://localhost:5173;
        proxy_redirect off;
        add_header Cache-Control no-cache;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Host $server_name;
    }

    location /api {
        proxy_pass http://localhost:8080;
        proxy_redirect off;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Host $server_name;
        client_max_body_size 50M;

        location /api/vistas {
            add_header Content-Disposition 'inline';
            # Additionally, set the MIME type if necessary
            types {
                application/pdf pdf;
            }
        }

        if ($request_method = OPTIONS) {
            add_header Access-Control-Allow-Origin "*";
            add_header Access-Control-Allow-Methods "GET, POST, OPTIONS, DELETE, PUT";
            add_header Access-Control-Allow-Headers "Authorization, Content-Type, Accept";
            add_header Content-Length 0;
            add_header Content-Type text/plain;
            return 204;
        }
    }

    # Configuration for documents
    location /api/vistas {
        proxy_pass http://localhost:8080/api/vistas;
        proxy_redirect off;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Host $server_name;

        # Enable client-side caching
        add_header Cache-Control "public, max-age=86400, immutable";

        # Enable server-side caching
        proxy_cache documents_cache;
        proxy_cache_valid 200 1h; # Cache 200 responses for 1 hour
        proxy_cache_valid 404 1m; # Cache 404 responses for 1 minute
        proxy_cache_use_stale error timeout updating;

        # Disable buffering for streaming
        proxy_buffering off;
    }

    # Configuration for the second app
    location /sght {
        root /var/www/html/front-sght/dist;
        index index.html;
        error_page 404 /index.html;
        proxy_pass http://localhost:5174;
        proxy_redirect off;
        add_header Cache-Control no-cache;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Host $server_name;
    }

    location /api/sght {
        proxy_pass http://localhost:8081;
        proxy_redirect off;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Host $server_name;
    }
}

```
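
One detail that may matter here (an observation, not a confirmed fix): with proxy_pass http://localhost:5174; and no URI part, a request for /sght is forwarded to the second app with the /sght prefix still attached, so that app has to be built to serve itself under /sght. If instead the second front-end expects to live at its own root, the prefix can be stripped by putting a trailing slash on both the location and the proxy_pass:

    location /sght/ {
        # The trailing slash on proxy_pass makes nginx replace the matched
        # /sght/ prefix, so the upstream sees / instead of /sght/.
        proxy_pass http://localhost:5174/;
    }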

0 Comments
2025/01/31
20:48 UTC

1

Website Suddenly Broke – Next.js + Node Backend on GCP VM – Strange System Logs & Nginx Issue

Hey everyone,

I have React and Next.js frontends and a Node.js backend running on a Google Cloud VM instance (Ubuntu). Out of nowhere, my website stopped working, so I decided to rebuild my Next.js app on the VM.

What I Did

Rebuilt the Next.js app → Build was successful

After the build completed, I started seeing these system logs:


Jan 31 19:48:49 ubuntu-node-website systemd[1]: snapd.service: State 'stop-sigterm' timed out. Killing.

Jan 31 19:48:54 ubuntu-node-website systemd[1]: snapd.service: Killing process 21384 (snapd) with signal SIGKILL.

Jan 31 19:48:59 ubuntu-node-website systemd[1]: snapd.service: Main process exited, code=killed, status=9/KILL

Jan 31 19:49:07 ubuntu-node-website systemd[1]: snapd.service: Failed with result 'timeout'.

Jan 31 19:49:17 ubuntu-node-website systemd[1]: Failed to start Snap Daemon.

Jan 31 19:49:27 ubuntu-node-website systemd[1]: snapd.service: Scheduled restart job, restart counter is at 2.

Jan 31 19:49:30 ubuntu-node-website systemd[1]: Stopped Snap Daemon.

Jan 31 19:49:36 ubuntu-node-website systemd[1]: Starting Snap Daemon...

🔹 Is this normal? Does it have anything to do with Next.js or my app crashing?

Also, I am getting an nginx error when loading the URL of my site. Can anyone help me?

0 Comments
2025/01/31
20:17 UTC

2

I use WireGuard to the router, then it's internal LAN; is NGINX as well overkill?

If I access my backend services, which are Docker containers on a VM on Proxmox, should I be adding nginx or not? I do want to upgrade HTTP to SSL and I do want friendly domains, but I don't want a performance hit from passing data like docs, photos, and vids through nginx. Trying to work out the best config. Thanks.

1 Comment
2025/01/31
09:02 UTC

1

Help with serving Wordpress site on a sub-path of a Django project

I'm hosting a Django project on an Nginx server and want to serve a WordPress site on a sub-path.

With my current config, when I go to /freebies it returns this:

Not Found The requested resource was not found on this server.

And when I tried going to /freebies/index.php the same thing happens.

I don't know what I'm doing wrong.

This is my current config:

upstream php-handler {
    server unix:/var/run/php/php8.3-fpm.sock;
}

server {
    server_name example.com www.example.com;
    root /home/user/djangoproject;

    location = /favicon.ico { access_log off; log_not_found off; }

    location /static/ {
        alias /var/www/example.com/static/;
    }

    location /media/ {
        alias /var/www/example.com/media/;
    }


    location /freebies {
        alias /mnt/HC_Volume_102017505/example.com/public;
        index index.php index.html;

        try_files $uri /$uri /freebies/index.php?$args;

        location ~ \.php$ {
            include snippets/fastcgi-php.conf;
            fastcgi_param SCRIPT_FILENAME $request_filename;
            fastcgi_pass php-handler;
        }

        location ~ /\.ht {
            deny all;
        }

        location = /freebies/robots.txt {
            allow all;
            log_not_found off;
            access_log off;
        }

        location ~* \.(js|css|png|jpg|jpeg|gif|ico)$ {
            alias /mnt/HC_Volume_102017505/example.com/public/wp-content/uploads;
            expires max;
            log_not_found off;
        }

    }

    location / {
        include proxy_params;
        proxy_redirect off;
        proxy_pass http://unix:/run/gunicorn.sock;
    }

    listen 443 ssl; # managed by Certbot
    ssl_certificate /etc/letsencrypt/live/example.com/fullchain.pem; # managed by Certbot
    ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem; # managed by Certbot
    include /etc/letsencrypt/options-ssl-nginx.conf; # managed by Certbot
    ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem; # managed by Certbot
}

server {

    if ($host = www.example.com) {
        return 301 https://$host$request_uri;
    } # managed by Certbot

    if ($host = example.com) {
        return 301 https://$host$request_uri;
    } # managed by Certbot

    server_name example.com www.example.com;

    listen 80;

    return 404; # managed by Certbot

}
0 Comments
2025/01/31
08:53 UTC

1

Unable to hide nginx version

I'm using nginx 1.20.1. I set server_tokens off in the http section, yet I can still see the version in my response headers as well as on error pages. Any guidance would mean a lot!
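
For reference, a minimal placement sketch; the directive only takes effect if it sits in a configuration file nginx actually loads, and it only hides the version number (the Server: nginx header and the word "nginx" on error pages remain; removing those entirely needs a third-party module such as headers-more or a custom build):

    http {
        server_tokens off;   # hides the version in the Server header and on built-in error pages
    }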

0 Comments
2025/01/30
18:32 UTC

1

Need Help with 502 Bad Gateway Error on NGINX

Hi everyone,

I've recently been hired as an IT professional and I'm encountering a "502 Bad Gateway" error on our NGINX server. Here's the context:

  • The website code is stored in GitLab.
  • The site is hosted on Google Cloud.
  • In the Google Cloud Console, I noticed that the site is running on an Ubuntu VM instance.

I'm not sure how to resolve this error and would appreciate any guidance. Here are some specific questions I have:

  1. What are the common methods to troubleshoot and fix a 502 Bad Gateway error in NGINX?
  2. Are there specific steps I should follow given that the site is hosted on Google Cloud and the code is in GitLab?
  3. Any tips on checking the configuration or logs that might help identify the issue?

I have no idea how to get rid of this error, so any help would be greatly appreciated!

2 Comments
2025/01/30
12:56 UTC

1

Need help

Ok, so I have Nginx proxy manager working with other services on my server. But for the life of me I cannot get it to reverse proxy immich.

I was using Namecheap DNS since that is where I bought my domain. I just moved my DNS over to Cloudflare. All set up with type A DNS records.

So i have it set up just like everything else.

Router is port forwarded ports 80 to 1880, and 443 to 18443.

I have jellyfin reverse proxy using streemxxx.net

I have jellyseerr reverse proxy using request.xxx.net

I have wizarr reverse proxy to signup.xxx.net

All work fine. I set up Immich for photos.xxx.net, and I get nothing. It shows as not available, with the Cloudflare page coming up saying the server is down. I can access it locally, and I can port forward to it in my router and connect to it with my IP address. The port is correct in Nginx.

Am I missing something or configuring something incorrectly?

0 Comments
2025/01/30
02:56 UTC

1

PHP 8.3 fpm in nginx no POST available

I have a Symfony application and am getting a POST request from a remote service. When receiving it with an Apache webserver with PHP 8.3, I can get the POST data with $data = file_get_contents("php://input").

It's not working on an Nginx webserver; there $data is empty. The difference is that on Apache PHP runs as a module, while on nginx it's FPM.

(cross-posting from r/PHPhelp)
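
Two things that may be worth ruling out, offered as guesses rather than a diagnosis: a redirect in front of the endpoint (for example an http-to-https or trailing-slash 301) will cause many clients to retry without the body, and the body only reaches PHP if the request is actually handed to FPM. For comparison, a minimal Symfony-style FPM location, where the socket path is an assumption based on Debian/Ubuntu's php8.3-fpm defaults rather than a value from the post:

    location ~ ^/index\.php(/|$) {
        include fastcgi_params;                             # standard FastCGI variables, incl. CONTENT_LENGTH/CONTENT_TYPE
        fastcgi_pass unix:/var/run/php/php8.3-fpm.sock;     # assumed php8.3-fpm socket path
        fastcgi_split_path_info ^(.+\.php)(/.*)$;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        internal;
    }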

0 Comments
2025/01/29
18:18 UTC

0

Single config to multiple config files

I have a VPS with two domains pointing at it. It was working quite well with a single nginx.conf file:

events {}
http {
    # WebSocket
    map $http_upgrade $connection_upgrade {
        default upgrade;
        '' close;
    }
    # Http for certbot
    server {
        listen 80;
        server_name domain1.dev domain2.dev;
        # CertBot
        location ~/.well-known/acme-challenge {
            root /var/www/certbot;
            default_type "text-plain";
        }
    }
    # HTTPS for domain1.dev
    server {
        listen 443 ssl;
        server_name domain1.dev;
        ssl_certificate /etc/letsencrypt/live/domain1.dev/fullchain.pem;
        ssl_certificate_key /etc/letsencrypt/live/domain1.dev/privkey.pem;
        root /var/www/html;
        # Grafana
        location /monitoring {
            proxy_pass http://grafana:3000/;
            rewrite  ^/monitoring/(.*)  /$1 break;
            proxy_set_header Host $host;
        }
        # Proxy Grafana Live WebSocket connections.
        location /api/live/ {
            proxy_http_version 1.1;
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection $connection_upgrade;
            proxy_set_header Host $host;
            proxy_pass http://grafana:3000/;
        }
        # Prometheus
        location /prometheus/ {
            proxy_pass http://prometheus:9090/;
        }
        # Node
        location /node {
            proxy_pass http://node_exporter:9100/;
        }
    }

    # HTTPS for domain2.dev
    server {
        listen 443 ssl;
        server_name domain2.dev;
        ssl_certificate /etc/letsencrypt/live/domain2.dev/fullchain.pem;
        ssl_certificate_key /etc/letsencrypt/live/domain2.dev/privkey.pem;
        root /var/www/html;
        # Odoo
        location / {
            proxy_pass http://odoo_TEST:8070/;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Proto $scheme;
            proxy_set_header X-Forwarded-Host $http_host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_redirect off;
        }
    }
}

It started getting a bit cluttered so I decided to use multiple config files:

nginx.conf:

events {}

http {
    # Additional configurations
    include /etc/nginx/conf.d/*.conf;
    # Certificates Renewal
    server {
        listen 80;
        server_name domain1.dev domain2.dev;
        # CertBot
        location ~/.well-known/acme-challenge {
            root /var/www/certbot;
            default_type "text-plain";
        }
    }
    # Websocket
    map $http_upgrade $connection_upgrade {
        default upgrade;
        '' close;
    }
}

domain1.conf:

server {
    # Certificates
    listen 443 ssl;
    server_name domain1.dev;
    ssl_certificate /etc/letsencrypt/live/domain1.dev/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/domain1.dev/privkey.pem;
    root /var/www/html;
    # Grafana
    location /monitoring {
        proxy_pass http://grafana:3000/;
        rewrite  ^/monitoring/(.*)  /$1 break;
        proxy_set_header Host $host;
    }
    # Proxy Grafana Live WebSocket connections.
    location /api/live/ {
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection $connection_upgrade;
        proxy_set_header Host $host;
        proxy_pass http://grafana:3000/;
    }
    # Prometheus
    location /prometheus/ {
        proxy_pass http://prometheus:9090/;
    }
    # Node
    location /node {
        proxy_pass http://node_exporter:9100/;
    }
}

domain2.conf:

server {
    # Certificates
    listen 443 ssl;
    server_name domain2.dev;
    ssl_certificate /etc/letsencrypt/live/domain2.dev/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/domain2.dev/privkey.pem;
    root /var/www/html;
    # Odoo
    location / {
        proxy_pass http://odoo_TEST:8070/;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header X-Forwarded-Host $http_host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_redirect off;
    }
}

Here's my docker-compose.yaml:

networks:
  saas_network:
    external: true

services:
  nginx:
    container_name: nginx
    image: nginx:latest
    ports:
      - 80:80
      - 443:443
    volumes:
      - ./nginx/:/etc/nginx/conf.d/
      - ../certbot/conf:/etc/letsencrypt
    networks:
      - saas_network
    restart: unless-stopped

I keep getting this error:

/docker-entrypoint.sh: Launching /docker-entrypoint.d/30-tune-worker-processes.sh
nginx | /docker-entrypoint.sh: Configuration complete; ready for start up
nginx | 2025/01/28 02:19:38 [emerg] 1#1: "events" directive is not allowed here in /etc/nginx/conf.d/nginx.conf:1
nginx | nginx: [emerg] "events" directive is not allowed here in /etc/nginx/conf.d/nginx.conf:1

How can I solve this? Or should I keep the single nginx.conf file?

I think I solved this issue: as shogobg mentions, I was recursively including nginx.conf, so I moved the additional configs to sites-enabled.

Here's the main nginx.conf:

events {}
http {
    # THIS LINE
    include /etc/nginx/sites-enabled/*.conf;

    # Certificates Renewal (Let’s Encrypt)
    server {
        listen 80;
        server_name domain1.dev domain2.dev;
        location /.well-known/acme-challenge {
            root /var/www/certbot;
            default_type "text-plain";
        }
    }

    # Websocket
    map $http_upgrade $connection_upgrade {
        default upgrade;
        '' close;
    }
}

Then I've also added it in the compose file:

networks:
  saas_network:
    external: true

services:
  nginx:
    container_name: nginx
    image: nginx:latest
    ports:
      - 80:80
      - 443:443
    volumes:
      # THESE 3 LINES
      - ./nginx/nginx.conf:/etc/nginx/nginx.conf
      - ./nginx/domain1.conf:/etc/nginx/sites-enabled/domain1.conf
      - ./nginx/domain2.conf:/etc/nginx/sites-enabled/domain2.conf
      - ../certbot/conf:/etc/letsencrypt
    networks:
      - saas_network
    restart: unless-stopped
3 Comments
2025/01/28
18:56 UTC

1

Custom nginx module question

Not sure if this is the place to ask, but here goes...

My scenario:

  1. Take an incoming request and transform it into something else (some message header, and the non-buffered original body, and a possible footer).
  2. Send that "something else" to an upstream HTTP server, streaming the body (which at this point is the "something else" composed of a header, the original body, and a footer).
  3. Get a response from the upstream, again, streamable body.

  3.1. If certain conditions in the response are met, send it again to the same upstream. Go to 3.
  3.2. Otherwise, make an HTTP request somewhere else.
  4. Return the response from 3 or 3.2 as the response to the request received in 1.

What would be the way to implement this in a custom nginx module? I thought about an HTTP handler with subrequests, or an upstream module, but I'm not sure if I can intercept the upstream flow to transform the request body, or the response (and just keep doing intermediate requests, if required), or if it just forwards the body to the upstream. Ideally it would round-robin the upstreams being sent to, but I don't know if there's a way to achieve 3.* in an upstream/proxy module.

2 Comments
2025/01/28
15:06 UTC

2

Help please: Cannot find Gunicorn socket

Edit

Found the answer: as of Jan 2025, if you install nginx following the instructions on nginx.org for Ubuntu, it installs without nginx-common and will never find the proxy_params file that you include. Simply install the version from the Ubuntu repositories and you will be fine. Find the complete question below, for posterity.
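
For anyone hitting the same error: /etc/nginx/proxy_params is a small helper file shipped with Debian/Ubuntu's nginx packaging, so the nginx.org build has nothing for the include to find. An equivalent fix, if you want to stay on the nginx.org build, is to drop the include and set the headers yourself; to the best of my recollection the stock file contains roughly this:

    proxy_set_header Host $http_host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto $scheme;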


Hi all.

I'm trying to install an Nginx/Gunicorn/Flask app (protocardtools is its name) on a local server following this tutorial.

Everything seems to work fine down to the last moment: when I run sudo nginx -t I get the error "/etc/nginx/proxy_params" failed (2: No such file or directory) in /etc/nginx/conf.d/protocardtools.conf:22

Gunicorn seems to be running fine when I do sudo systemctl status protocardtools

Contents of my /etc/nginx/conf.d/protocardtools.conf:

server {
    listen 80;
    server_name cards.proto.server;

    location / {
        include proxy_params;
        proxy_pass http://unix:/media/media/www/www-protocardtools/protocardtools.sock;
    }
}

Contents of my /etc/systemd/system/protocardtools.service:

[Unit]
Description=Gunicorn instance to serve ProtoCardTools
After=network.target

[Service]
User=proto
Group=www-data
WorkingDirectory=/media/media/www/www-protocardtools
Environment="PATH=/media/media/www/www-protocardtools/venv/bin"
ExecStart=/media/media/www/www-protocardtools/venv/bin/gunicorn --workers 3 --bind unix:protocardtools.sock -m 007 wsgi:app

[Install]
WantedBy=multi-user.target

Can anyone please help me shed a light on this? Thank you so much in advance.

3 Comments
2025/01/28
14:33 UTC

1

How to Configure `proxy_set_header` for Nginx Upstream with Two Different Domains?

I have an Nginx configuration where I’m load-balancing traffic between two different domains in an upstream block. For example:

upstream backend {
    server domain1.com;  # First domain
    server domain2.com;  # Second domain
}

My problem is that the Host header sent to the upstream servers is incorrect. Both upstream servers expect requests to include their own domain in the Host header (e.g., domain1.com or domain2.com), but Nginx forwards the client’s original domain instead.

What I’ve Tried

  1. Using proxy_set_header Host $host; in the location block:

    location / {
        proxy_pass http://backend;
        proxy_set_header Host $host;  # Sends the client's domain, not upstream's
    }

    This doesn’t work because $host passes the client’s original domain (e.g., your-proxy.com), which the upstream servers reject.

  2. Hardcoding the Host header for one domain (e.g., proxy_set_header Host domain1.com;) works for that domain, but breaks the other upstream server.

What I need: a way to dynamically set the Host header to match the domain of the selected upstream server (e.g., domain1.com or domain2.com) during load balancing.

Here’s a simplified version of my setup:

http {
    upstream backend {
        server domain1.com;  # Needs Host: domain1.com
        server domain2.com;  # Needs Host: domain2.com
    }

    server {
        listen 80;
        server_name your-proxy.com;

        location / {
            proxy_pass http://backend;
            # What to put here to dynamically set Host for domain1/domain2?
            proxy_set_header Host ???;
            proxy_set_header X-Real-IP $remote_addr;
        }
    }
}
1 Comment
2025/01/28
10:37 UTC

0

Help Needed: 502 Bad Gateway Error with Nginx

Hi everyone,

I'm encountering a 502 Bad Gateway error with Nginx on Google Cloud; my website is hosted on a Google Cloud instance. I can successfully ping my website, and nslookup also resolves fine. Any suggestions on how to resolve this issue?

Thanks in advance!

2 Comments
2025/01/27
20:06 UTC

0

Redirecting a specific port?

Trying to figure out how to solve this situation I am in. Google-fu has failed me, so here I am.

I have a domain from Namecheap such as my-server.net. I run an app on port 1234 with a web interface.

So if I go to http://www.my-server.net:1234/ I get to the login screen for the app. Now obviously I don't want my login credentials to be transmitted in the open with the HTTP requests, and I don't really like adding the port number to the end.

So I made an A record "app" and a rule in nginx (with an SSL cert from certbot) to redirect app.my-server.net to HTTPS and to port 1234. So now https://app.my-server.net "securely" gets me to the web app at port 1234.

However, you can still go to http://www.my-server.net:1234/ ... What I would like is for this URL to also redirect to https://app.my-server.net/ . Just as a preventive measure. I made credentials for family members to also use the app and I am concerned (perhaps unnecessarily) that they (or a bad actor) might access the app via the exposed http://www.my-server.net:1234/

>what about wireguard or other VPN

Getting them to use this was a non-starter. So https with username and password management and cellphone 2FA is what I am using now.

This SHOULD be doable I think, but I can't seem to get it to work.
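
For what it's worth, nginx can only issue that redirect if it, not the app, owns port 1234 on the public interface. A sketch under the assumption that the app can be rebound to 127.0.0.1 (or moved to another port) so nginx is free to listen on 1234:

    server {
        listen 1234;
        server_name my-server.net www.my-server.net;
        # Bounce anything that still arrives on the old port to the HTTPS name.
        return 301 https://app.my-server.net$request_uri;
    }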

12 Comments
2025/01/27
17:43 UTC

2

Third Party Module for URL manipulation or handling

Hi, I want to build something like Transform Rules (Cloudflare) on top of Nginx (server block, location, ngx_http_rewrite_module).

Do you know of any third-party modules for URL handling, manipulation, rewriting, etc.?

Do you know the nginx code internals related to URL handling, manipulation, rewriting, etc.?

Thanks

2 Comments
2025/01/26
10:42 UTC

1

Nginx Temp Directory Location Change (Windows)

Before I ask my question, I will say that I use Nginx on Debian Linux to develop web apps. I'm in a situation where I'm doing some work on Windows 10 with Nginx. The system has two disks, and we've been able to set the location of /nginx/html/ and /nginx/logs/ to the other disk by changing the appropriate settings in the nginx.conf file.

We're unable to change the location of the /nginx/temp/ directory. My question being, is this possible?

Not a show stopper, it's now more of a curiosity than anything else.
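
If it helps, nginx has no single "temp" setting; the temp/ tree is just the default parent for several per-module directives, so each one has to be pointed elsewhere. A sketch, assuming a hypothetical D:/nginx-temp directory (forward slashes work in nginx paths on Windows):

    http {
        client_body_temp_path  D:/nginx-temp/client_body;
        proxy_temp_path        D:/nginx-temp/proxy;
        fastcgi_temp_path      D:/nginx-temp/fastcgi;
        uwsgi_temp_path        D:/nginx-temp/uwsgi;
        scgi_temp_path         D:/nginx-temp/scgi;
    }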

0 Comments
2025/01/24
01:28 UTC

2

NGINX + Cloudflare Proxy - Unraid

Hi All,

Firstly thanks for reading the post.

I have recently been trying to get overseerr to work via cloudflare and nginx proxies.

I have it working through nginx, but when I change my DNS record to use the Cloudflare proxy the site no longer works.

I have imported my Origin Server certificate from Cloudflare into NGINX and assigned it to the proxy hosts, but the website instantly shows as offline in nginx with that cert; when I change back to Let's Encrypt it works fine.

I followed this YouTube tutorial:

Unraid Tutorial: Cloudflare CDN + Domain Purchase & NGINX Setup

but I think I am missing something simple but haven't figured it out.

Ports are open to Overseerr, and it is accessible when Cloudflare isn't configured to use the proxy.

Thanks again.

7 Comments
2025/01/23
16:14 UTC

1

Setup of F5 Nginx-ingress (not kubernetes ingress-nginx)

 

Hi everyone,

I'm trying to deploy nginx-ingress by F5 (not the Kubernetes ingress-nginx) because we need mergeable ingress resources.

I face a lot of newbie and atypical problems. 

I have a two-node K8s cluster (1 controller and 1 worker) in a cloud environment, but we need to treat it as a bare-metal deployment.

Known limitations of our cloud hosting are below:

  1. No IPv6 at all
  2. DNS names are translated to public IPs, but the cloud hosting uses routable IPs from private subnets.
  3. Only one IPv4 per node is allowed.
  4. No external load balancing available

Please note that change of cloud provider is not possible :-(

 

I'm deploying with extra parameters below:

hostNetwork: true

nodeSelector:

  kubernetes.io/hostname: team

We use nodeSelector to make sure we have only one instance of nginx-ingress controller in the setup. HostNetwork makes sure we can directly bind to host IP as no load-balancing is available due to limitations above.

 

Desired results:

  1. Sg.publicdomain.com is the main DNS name.
  2. I'd like to use basic auth(and later SSO) to protect the website.
  3. I'd like to serve a simple index.html with a few links to particular components:
  4. /index.html - can be served from nginx proxy itself or using another backend server.

Prefix /srvA - namespaceA, port xy

Prefix /srvB - namespaceB, port xyz

Prefix /srvC - namespaceC, port zyx

(Potential) Prefix /NginxGUI - namespace nginx-ingress, controller itself

 

My questions:

  1. How do I create an nginx.org/basic-auth-secret from the CLI? All available online resources show examples only for the incompatible nginx.ingress.kubernetes.io/auth-secret. I played with the example for creating a license secret and changed the params, with no success (https://docs.nginx.com/nginx-ingress-controller/installation/create-license-secret/).
  2. How do I access the web GUI in this setup? I tested deployment using a Deployment with NodePort services as well as a DaemonSet. I usually get HTTP code 400. How do I configure this?
  3. How do I get /index.html served from
    1. the Ingress controller itself, or
    2. a backend nginx (preferable, as we will have a more complicated website later)?

I tried to make this part of the ingress master resource but failed with a bunch of different errors.

Thank you

 

https://preview.redd.it/409fuygz4ree1.png?width=1282&format=png&auto=webp&s=0870728dd57cc1ec2adda222f5b73997ea70ccee

0 Comments
2025/01/23
14:16 UTC

16

How can I handle 160,000 domains and config files with nginx?

Hi everyone,

I have 160,000 nginx configs and I can't merge them, because they are for different subdomains and I have to set a separate header for each subdomain. But when I restart, it takes a long time and the OS kills the process. And when I run nginx -t, it takes a long time and then gives me the error "could not build optimal server_names_hash, you should increase either", even though server_names_hash_max_size is 40960.

Has anyone ever had this happen? What solution did you use?

All ideas are welcome.
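
For reference, the truncated error is nginx asking for one of these two knobs to be raised in the http context; the values below are illustrative assumptions, not tuned recommendations:

    http {
        server_names_hash_max_size    131072;
        server_names_hash_bucket_size 128;
    }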

31 Comments
2025/01/22
14:10 UTC

1

Nginx reverse proxy + Astro.js: static blog content loses base path '/blog' during navigation

I'm experiencing a frustrating routing issue with my dockerized Astro.js blog behind an Nginx reverse proxy. While the base blog path functions perfectly, any subpaths are unexpectedly losing their /blog prefix during navigation.

Current Setup:

  • Blog: Astro.js (dockerized, running on localhost:7000)
  • Nginx as reverse proxy (main domain serves other content on localhost:3000)
  • SSL enabled (managed by Certbot)

The Issue: The base blog path works flawlessly - domain.com/blog serves content correctly. However, when navigating to any subpath, the URL automatically transforms to remove the /blog prefix (for example, a link under domain.com/blog/... ends up at domain.com/... instead).

This behavior exactly matches what was described in this article about moving from Gatsby subdomain to subpath: https://perfects.engineering/blog/moving_blog_to_subpath. The author encountered the identical issue where subpaths would lose their /blog prefix, resulting in 404 errors and asset loading failures. I've attempted to implement their solution with Astro, but haven't been successful.

Nginx configuration (sanitized):

  server { 
     server_name example.com;

     location /blog {
        proxy_pass http://localhost:7000/;
        proxy_redirect off;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header X-Forwarded-Host $host;
        proxy_set_header X-Forwarded-Port $server_port;
    }

    location / {
        proxy_pass http://localhost:3000/;
        proxy_redirect off;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header X-Forwarded-Host $host;
        proxy_set_header X-Forwarded-Port $server_port;
    }

    # SSL configuration managed by Certbot
}

Astro config:

export default defineConfig({
  site: "https://example.com",
  base: "/blog",
  integrations: [mdx(), sitemap(), tailwind()],
  markdown: {
    rehypePlugins: [sectionize as unknown as [string, any]],
    syntaxHighlight: false, 
  },
  image: {
    domains: ["img.youtube.com"],
  },
});

Like the blog post suggested, this doesn't appear to be a server-side redirect - the network requests indicate it's happening client-side. The dockerized applications work perfectly in isolation, and the base path functions correctly, suggesting this is specifically a routing/path-handling issue.

I have spent countless hours trying to resolve this issue, so any help or insights would be immensely appreciated!
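
One thing worth double-checking (a guess, not a confirmed diagnosis): with a URI part on proxy_pass (the trailing slash in http://localhost:7000/), nginx replaces the matched /blog prefix before forwarding, so the Astro server, which is built with base "/blog", never sees the prefix it expects. Dropping the URI part passes the request URI through unchanged:

    location /blog {
        # No trailing slash: /blog/whatever is forwarded to the upstream as /blog/whatever.
        proxy_pass http://localhost:7000;
    }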

0 Comments
2025/01/18
21:17 UTC

1

Load balance

My apologies for a basic question but I figured it's better to ask and be pointed in the right direction than assume.

I have a website running on an Ubuntu instance with nginx on GCP. It exchanges data with another host via an API. I would like to load balance the inbound traffic. Can I create another VM and use nginx to balance traffic?

Or should I have one host with nginx and then load balance the API requests behind it? Just thinking this through for reliability.

We are not big enough for a full GCP load balancer and the costs associated with it, but maybe my math is wrong and it's not so bad.
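
To illustrate the first option: an nginx instance on a separate VM can balance with a plain upstream block. The addresses below are placeholders, not taken from the post:

    # Minimal sketch: round-robin between two backend VMs.
    upstream app_servers {
        server 10.0.0.11;   # placeholder internal IPs of the web VMs
        server 10.0.0.12;
    }

    server {
        listen 80;
        location / {
            proxy_pass http://app_servers;
            proxy_set_header Host $host;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        }
    }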

0 Comments
2025/01/16
17:24 UTC

1

NGINX + PROXY_CACHE to cache media files

I have a CentOS box with aaPanel and Nginx, with a large pool of RAM, that is serving some streaming media.
I noticed I have a bottleneck on disk IO reading media files (<1 GB in size), so I was wondering how to put the 63 GB of free RAM on that server to good use and cache in memory the files that are used more than a couple of times.
I did some research, and the easiest way seems to be to use tmpfs with proxy_cache.
I created a tmpfs of 50 GB and mounted it at the /www/server/nginx/proxy_cache_dir path.

Then I modified the path in /www/server/nginx/conf/proxy.conf:

proxy_cache_path /www/server/nginx/proxy_cache_dir levels=1:2 keys_zone=cache_one:20480m max_size=0 inactive=7d use_temp_path=off ;

and added

location / {
proxy_cache cache_one;
}

in the website configuration, but the mounted path is always empty; no files ever get cached.

What am I missing?
Is there any other way, better than this, to lift some work off my HDDs, like Varnish, Redis, or memcached?

BTW, I'm not even sure whether nginx can cache local files or only upstream ones...
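
As far as I know, proxy_cache only stores responses obtained through proxy_pass (or the fastcgi/uwsgi equivalents), so a location with a cache zone but no upstream caches nothing; for files served straight off disk, the kernel's page cache already uses the free RAM. A sketch of a loopback arrangement that would make proxy_cache apply, with the port and paths being assumptions rather than values from the post:

    # Public-facing location: hits are answered from the tmpfs-backed cache_one zone.
    location /media/ {
        proxy_cache cache_one;
        proxy_cache_valid 200 7d;
        proxy_pass http://127.0.0.1:8081;    # assumed internal listener
    }

    # Internal listener that actually reads the media files from disk.
    server {
        listen 127.0.0.1:8081;
        root /www/wwwroot/media;             # assumed media docroot
    }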

2 Comments
2025/01/16
13:46 UTC

0

Need help setting up nginx on AWS Amazon Linux 2023 Instance

Hello,

I'm having issues setting up nginx on my AWS Linux 2023 Instance. I am trying to redirect web traffic from port 80 to port 3000.

I followed this tutorial https://dev.to/0xfedev/how-to-install-nginx-as-reverse-proxy-and-configure-certbot-on-amazon-linux-2023-2cc9

However, when I visit my website, the default 'Welcome to nginx' screen still shows instead of my app.

This is what my configuration file in /etc/nginx/conf.d looks like:

server {
    listen 80;
    listen [::]:80;
    server_name <WEBSITE DOMAIN>;

    location / {
        proxy_pass http://<AWS PRIVATE IP4 ADDR>:3000;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;
    }
}

I tried switching out the IP address in proxy pass with localhost and it still does not work.

The nginx configuration files are different on this linux distro than on others. For example, there is no 'sites-available' folder.

Thank you for your help.

0 Comments
2025/01/14
20:00 UTC

1

Need help setting up nginx on AWS Amazon Linux 2023 Instance

Hello,

I'm having issues setting up nginx on my AWS Linux 2023 Instance. I am trying to redirect web traffic from port 80 to port 3000.

I followed this tutorial https://dev.to/0xfedev/how-to-install-nginx-as-reverse-proxy-and-configure-certbot-on-amazon-linux-2023-2cc9

However, when I visit my website, the default 'Welcome to nginx' screen still shows instead of my app.

I tried switching out the IP address in proxy pass with localhost and it still does not work.

The nginx configuration files are different on this linux distro than on others. For example, there is no 'sites-available' folder.

Thank you for your help.

1 Comment
2025/01/14
20:09 UTC

6

Openai not respecting robots.txt and being sneaky about user agents

About 3 weeks ago I decided to block OpenAI bots from my websites, as they kept scanning them even after I explicitly stated in my robots.txt that I don't want them to.

I already checked if there's any syntax error, but there isn't.

So after that I decided to block by User-Agent, just to find out they sneakily removed the user agent to be able to scan my website.

Now I'll block them by IP range. Have you experienced something like that with AI companies?

I find it annoying as I spend hours writing high quality blog articles just for them to come and do whatever they want with my content.

https://preview.redd.it/7xf8ig2xeyce1.png?width=2535&format=png&auto=webp&s=5ff10f2c420d73131831b96fa97315bbbd34ffa3
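
For reference, the User-Agent blocking described above usually looks something like the sketch below; the agent strings are the publicly documented OpenAI crawlers, but treat the exact list as an assumption to verify against your logs:

    # In the server block: refuse requests whose User-Agent matches known OpenAI crawlers.
    if ($http_user_agent ~* (GPTBot|OAI-SearchBot|ChatGPT-User)) {
        return 403;
    }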

2 Comments
2025/01/14
12:37 UTC

Back To Top