/r/docker
Docker is an open-source project to easily create lightweight, portable, self-sufficient containers from any application. The same container that a developer builds and tests on a laptop can run at scale, in production, on VMs, bare metal, OpenStack clusters, public clouds and more.
I'm running a local WordPress installation in Docker containers, one for the WordPress installation and one for the database. When I started the project yesterday, I was able to access the landing page on localhost:8000 as well as on 192.168.4.xx:8000 from other devices on my network. I then added a plugin that handles user authentication. The pages associated with the plugin work fine on localhost. If I try to access them from remote devices on the network, it looks as though the request never reaches the web server. I'm not sure if this is a WordPress issue, a Docker issue, or a 'my ISP does some stupid crap in my router' issue.
Any ideas would be appreciated. Thanks!
EDIT: Resolved. The issue is WordPress-related. The links switch from the IP address to 'localhost' when the site is accessed from remote devices.
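For anyone hitting the same thing: the usual fix is to pin the site and home URLs so WordPress stops rewriting links to localhost. A minimal sketch, assuming the official wordpress image and that 192.168.4.xx:8000 is the address remote devices use (adjust to your own):

services:
  wordpress:
    image: wordpress:latest
    environment:
      # WORDPRESS_CONFIG_EXTRA is appended to wp-config.php by the official image
      WORDPRESS_CONFIG_EXTRA: |
        define('WP_HOME', 'http://192.168.4.xx:8000');
        define('WP_SITEURL', 'http://192.168.4.xx:8000');

The same two defines can go straight into wp-config.php if you maintain your own image.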
Hello everyone,
I have developed a Laravel application that uses PHP 8.1, Apache, and MySQL, and I would like to distribute this application to my clients in the simplest way possible. The goal is for clients to be able to install it on their own (or with minimal intervention from me) using Docker.
I’d like to know what the best practices or most common solutions are for:
1. Creating a Docker configuration that includes Laravel, Apache, and MySQL, and is easy to use.
2. Automating the initial installation, including steps like creating the .env file, running migrations, etc.
3. Managing future software updates, making the process as simple as possible for both me and the clients.
If you have experience with similar challenges or suggestions on how to approach this, I’d be really grateful!
Thanks in advance
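A minimal compose sketch for point 1, assuming the app (with Apache and PHP 8.1) is baked into an image you publish somewhere the clients can pull from; the image name, port and credentials below are placeholders, not a finished setup:

services:
  app:
    image: registry.example.com/your-laravel-app:latest   # placeholder image name
    ports:
      - "8080:80"                # Apache inside the image
    env_file: .env               # clients fill this in once from a .env.example
    depends_on:
      - db
  db:
    image: mysql:8.0
    environment:
      MYSQL_DATABASE: laravel
      MYSQL_USER: laravel
      MYSQL_PASSWORD: change-me
      MYSQL_ROOT_PASSWORD: change-me
    volumes:
      - db_data:/var/lib/mysql

volumes:
  db_data:

First-run steps such as php artisan key:generate and php artisan migrate are commonly handled in the image's entrypoint script, which reduces point 3 on the client side to docker compose pull && docker compose up -d.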
https://hub.docker.com/_/redis There are no docs on Docker Compose here, and it is really annoying. Am I just meant to know or figure out the variables somehow?
For context on what I'm trying to do: it's just adding Redis to Nextcloud, but I have seen some other pages with poor documentation.
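For what it's worth, the official Redis image needs almost no configuration, which is probably why the page doesn't list compose variables. A small sketch of pairing it with Nextcloud, assuming the official nextcloud image (which, as far as I know, reads REDIS_HOST / REDIS_HOST_PASSWORD); the password is a placeholder:

services:
  redis:
    image: redis:alpine
    restart: unless-stopped
    command: redis-server --requirepass change-me   # the password is optional
  nextcloud:
    # ... your existing nextcloud service ...
    environment:
      REDIS_HOST: redis                 # the service name doubles as the hostname
      REDIS_HOST_PASSWORD: change-me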
Hi, I am new to Docker. I installed Docker Desktop for the M1 chip on my MacBook Air M1. In System Settings/General/Login Items/Allow in the Background, why can't I disable Docker in "Allow in the background" on the Mac?
There is both Docker and Docker Inc listed. I can disable the second one, but the first is re-enabled when I close and reopen Settings. It's not causing any issues, and it's not opening on startup and doesn't seem to actually be running in the background; I'm just curious why it behaves like this.
Probably a trivial question, but I thought anything you install inside a container is saved only in the container. Why is it that when I install a package in a container, exit, and re-run the container, the package remains?
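The usual explanation is that exiting only stops the container; its writable layer sticks around until the container itself is removed. A quick shell illustration, assuming any image with a package manager (debian here):

# changes survive a stop/start of the SAME container
docker run -it --name demo debian bash     # install something, then exit
docker start -ai demo                      # the package is still there
# a NEW container from the same image starts from a clean writable layer
docker run -it --rm debian bash            # the package is gone
docker rm demo                             # removing the container discards its layer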
So, in a moment of stupidity (always check what directory you're in first) I deleted my docker-compose.yml file. The containers are still running, and I have Portainer running as well - is there a way to regenerate my file from the command line or from Portainer? I shouldn't need to make changes, but I'd feel better with the insurance of having the file there, just in case.
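There's no built-in command that writes the original file back, but the running containers still hold everything that was in it, so one hedged route is to dump their config and rebuild the YAML by hand (community tools that automate this exist too, but plain inspect needs nothing extra):

# per container: image, env, command, restart policy, networks, mounts
docker inspect <container_name>
# a few focused queries, for example published ports and mounts
docker inspect -f '{{ json .HostConfig.PortBindings }}' <container_name>
docker inspect -f '{{ json .Mounts }}' <container_name>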
Hi everyone, I have a weird problem with a Docker overlay network.
I have a very small home-lab setup with two different machines (let's call them A and B) running different containers, all with compose files. That's precisely what I want/need at the moment, I'm not interested in HA or similar.
I sometimes need to talk with a docker container, running on machine A, from another container running on machine B. I can achieve this easily using a bridged network and publishing ports, something like curl http://machine_A:1234.
However, if I need to refer to a docker container running on the same host, I can directly use the container's hostname thanks to Docker's embedded DNS.
I wanted to achieve something similar but across the two different machines, and the first results that pop up are about using an overlay network. Therefore, I set up a two-node swarm, created the overlay network, and ran some test containers. I can correctly ping containers running on the other machine by using their hostname, however... if I try to establish an HTTP connection, it starts correctly but then hangs forever.
The containers have a small web server for testing purposes. Therefore, curl <container_hostname>:1234 (where 1234 is the internal port of the container, not published to the host) works correctly if <container_hostname> is running on the same host (I see the HTML stream in the CLI), but if it's running on the other machine, the HTML stream is truncated at some point and the command hangs.
I already tried some other solutions to similar problems, including:
- ethtool -K <interface> tx off, with <interface> being the docker overlay network interface (correct?)
- checking ip link show inside the container, which already shows that the MTU for the overlay network is lowered by 50 bytes from the value I set / the default value. As far as I understand, this means the VXLAN overhead should already be accounted for...
I hope I gave enough context. Can someone help me understand the problem?
Thanks in advance!
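Not a definitive answer, but two things commonly behind "ping works, HTTP hangs" on overlay networks are blocked VXLAN traffic between the nodes and checksum offload on the physical NIC (rather than on the overlay interface). A hedged checklist to run on both hosts; eth0 stands in for whatever interface carries the inter-node traffic:

# swarm/overlay traffic that must be open between the nodes:
#   2377/tcp (management), 7946/tcp+udp (node discovery), 4789/udp (VXLAN data)
sudo iptables -L -n | grep -E '2377|7946|4789'    # or whichever firewall tool you use
# try disabling checksum offload on the PHYSICAL interface, not the overlay one
sudo ethtool -K eth0 tx-checksum-ip-generic off
# and compare MTUs end to end: physical NIC vs. what a container sees on the overlay
ip link show eth0
docker exec <test_container> ip link show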
Looking for some advice if anyone can help please.
I currently run a number of docker containers on an ageing Synology DS218+ (Sonarr, etc. - mainly linuxserver containers). I've got about 20 containers running, all from a docker compose file with variables set up for the volume path, etc. It's become a bit slow and I've had the odd issue, so I was thinking of a mini PC to replace it and got a good deal on an Intel N100 mini PC.
I'm hoping to use the mini PC to run the docker containers and have it hard-wired to the NAS, which will still act as storage but with docker, network access, etc. disabled. I've installed Docker for Windows but I'm not quite sure where to go from here.
I've read that Docker for Windows can run Linux containers, so is this as simple as porting my docker compose file over to the mini PC, editing the volume path prefix in my compose file to point to the NAS folders, then starting it up?
Any help would be greatly appreciated, thanks.
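For what it's worth, the variable-prefix pattern carries over cleanly; a small sketch assuming the NAS shares end up mounted or mapped on the mini PC first (all paths and the image tag below are placeholders):

# .env on the mini PC
CONFIG_PATH=D:/docker/config      # app config on the local disk
MEDIA_PATH=Z:/media               # NAS share mapped to a drive letter (or mounted in WSL)

# the compose service keeps using the same variables as before
services:
  sonarr:
    image: lscr.io/linuxserver/sonarr:latest
    volumes:
      - ${CONFIG_PATH}/sonarr:/config
      - ${MEDIA_PATH}:/media

The main caveat is that Docker Desktop on Windows bind-mounts whatever Windows (or the WSL distro) can already see, so the NAS share has to be mounted or mapped before the containers start.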
Hey there! I posted in the past regarding optimizing building a cluster of NodeJS apps by leveraging pnpm's caching system (if you're not familiar with it: it basically stores the packages in a folder and symlinks them around instead of downloading them each time).
After doing some research, I decided to use a multi-stage build setup, where I have a Dockerfile that builds a base image from Ubuntu and resolves all the dependencies (storing each app's packages under /pnpm-cache/<app-name>/node_modules).
This ensures all of the subsequent images only need to actually build the apps, without resolving dependencies, which are soft-linked into place. This proved to be fast and efficient (since I delete the unneeded deps before exporting the image).
I organize this using docker-compose and signaling that all apps depend on said base image, so that I can store them in different Dockerfiles.
The problem is the build-context transfer time on the subsequent images; from my understanding, the context is loaded from the host directory and not from the base images that are used. I'm using a .Dockerignore file in the root of the context (in my case, the parent directory of the docker-compose.yaml, with the apps living under /project/src).
Any clue as to why this might be the case? The context that is sent over seems suspiciously close to the size of /project, so I wonder if something is wrong with my ignore file or what...
To exclude folders, I just need to write "folder_name" in the .Dockerignore, right? Will it also exclude <folder_name> in subdirectories, or do I need to write **/<folder_name>? Or maybe I'm missing slashes to signal that they are dirs..?
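On the pattern question: as far as I know, a bare name only matches at the root of the context, so nested folders need the **/ form. A small sketch for this kind of layout (the folder names are the usual suspects, adjust to yours); it's also worth double-checking that the file is named .dockerignore exactly, lowercase, since on a case-sensitive filesystem a .Dockerignore won't be picked up:

# matches only <context>/node_modules
node_modules
# matches node_modules at any depth, e.g. apps/foo/node_modules
**/node_modules
# same idea for build output and VCS metadata
**/dist
.git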
I'd also be happy to have other suggestions on how to speed up the process; I'm kind of new to this :')
Tysm, I love working with docker, it's such a cool tool :)
Hopefully this can also help others who have the same problem with optimizing NodeJS clusters.
Is it possible to load profiles for binaries in Docker just like on the host?
For example, I want to allow network access for ping but restrict it for lzip. On my main system I can write different profiles for ping and for lzip. As far as I know, I can apply only one profile to a Docker container.
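For context, and assuming these are AppArmor profiles: Docker attaches a single profile to the whole container via --security-opt, so per-binary behaviour has to come from transitions inside that one profile rather than from multiple profiles. A sketch of the per-container form (the profile name and path are placeholders):

# load a custom profile on the host, then attach it to a container
sudo apparmor_parser -r -W /etc/apparmor.d/containers/my-profile
docker run --rm --security-opt apparmor=my-profile alpine ping -c 1 1.1.1.1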
I've read most of "Docker Deep Dive". It's OK but not actually very low-level. I'd be interested in a book that did more to explain Docker in the context of namespaces and cgroups, for example, or that otherwise did more to explore the technology and make you see where there might be limitations.
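For anyone wondering what "low-level" means here: most of what Docker does per container can be poked at with stock Linux tools, which is roughly the ground such a book would cover. A rough sketch, assuming a recent util-linux (paths vary with distro and cgroup version):

# a throwaway "container": new PID, mount, UTS and network namespaces
sudo unshare --pid --fork --mount --uts --net --mount-proc /bin/bash
# inside, only this shell's process tree is visible
ps aux
# cgroups are just a filesystem; Docker creates a group per container under here
ls /sys/fs/cgroup/system.slice/ | grep docker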
I've messed with Docker quite a bit on my old server that was running Unraid; now I've spun up a Proxmox instance and am banging my head against the wall attempting to enable IPv6.
Previously I set up Docker in an LXC using one of tteck's helper scripts and was able to enable IPv6 on the bridge by adding the specified lines to the daemon.json, based on this video.
Now I've spun up another LXC and am attempting to use compose, but nothing I do seems to affect the bridge. I've added the same lines to the daemon.json file and modified the compose file to (in my understanding) enable IPv6, yet any network inspect I run still shows "EnableIPv6": false.
Here's what the daemon.json looks like
{
  "ipv6": true,
  "fixed-cidr-v6": "fd00:0:0:0:1::/80"
}
And what the compose looks like
networks:
  default:
    driver: bridge
    enable_ipv6: true
    ipam:
      config:
        - subnet: fd00:0:0:0:1::/80
Now I've tried many combinations of including subnet & gateway etc, but nothing I do seems to change anything. I know that IPv6 issues w/ Docker are persistent to say the least, but I figured I might as well ask if there's something obvious that I might be missing.
I'm doing all of this in the hope of adding a matter server (which requires IPv6) to my home assistant stack. Thanks!
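Two hedged things that often trip people up here (I can't promise either is the cause): daemon.json's fixed-cidr-v6 only applies to the default docker0 bridge, and a compose-defined network generally wants its own, non-overlapping subnet; also, compose won't modify an existing network in place, so the stack usually needs a docker compose down before new network settings take effect. A sketch:

networks:
  default:
    enable_ipv6: true
    ipam:
      config:
        - subnet: fd00:0:0:0:2::/80   # a different ULA range than the one in daemon.json
# then recreate and verify:
#   docker compose down && docker compose up -d
#   docker network inspect <project>_default --format '{{ .EnableIPv6 }}'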
Hey everyone,
I’m trying to get WebGL running under Puppeteer inside a Docker container. I’m working on a Node.js application, using Hono, that uses Puppeteer to take screenshots of a Three.js canvas. Everything works fine locally, but once I run the code inside Docker, I start getting "Error creating WebGL context".
What I’ve Tried:
- ENV LIBGL_ALWAYS_SOFTWARE=1 in my Dockerfile to make Mesa choose a software renderer.
- Installing the Mesa packages (libgl1-mesa-dri, libosmesa6, etc.).
- --use-gl=swiftshader and --use-gl=egl.
- --disable-gpu in case it was blocking software fallback.
- Running headless (with and without DISPLAY), and tried Xvfb with non-headless mode as well.
- Unsetting DISPLAY if I’m going fully headless, since having it set when no real display exists can confuse Chromium.
- Both the chromium package and google-chrome-stable.
Has Anyone Got WebGL Working with Puppeteer in Docker?
Any guidance, tips, or proven configurations would be hugely appreciated. I’ve been going in circles for days and would love to hear from anyone who’s solved a similar problem!
Here's my puppeteer setup
const browser = await puppeteer.launch({
headless: true,
executablePath: process.env.PUPPETEER_EXECUTABLE_PATH || 'chromium',
args: ['--no-sandbox', '--disable-setuid-sandbox'],
})
const page = await browser.newPage()
await page.goto(url, { waitUntil: 'networkidle0' })
const canvasImage = await page.evaluate(() => {
const canvas = document.querySelector('canvas')
if (!canvas) {
throw new Error('No canvas element found')
}
return canvas.toDataURL('image/png')
})
await browser.close()
Here's my Dockerfile
FROM node:20-bullseye AS base
RUN apt-get update && apt-get install -y --no-install-recommends \
fonts-liberation \
libasound2 \
libatk-bridge2.0-0 \
libatk1.0-0 \
libcups2 \
libdrm2 \
libgbm1 \
libgtk-3-0 \
libnspr4 \
libnss3 \
libx11-xcb1 \
libxcomposite1 \
libxdamage1 \
libxrandr2 \
libxinerama1 \
libxi6 \
xdg-utils \
libu2f-udev \
libxshmfence1 \
libglu1-mesa \
mesa-utils \
libgl1-mesa-dri \
libgl1-mesa-glx \
libosmesa6 \
libxrender1 \
chromium \
&& apt-get clean && rm -rf /var/lib/apt/lists/*
WORKDIR /app
FROM base AS builder
COPY package*json tsconfig.json src ./
RUN npm ci && npm run build && npm prune --production
FROM base AS runner
WORKDIR /app
ENV PUPPETEER_SKIP_CHROMIUM_DOWNLOAD=true
ENV PUPPETEER_EXECUTABLE_PATH=/usr/bin/chromium
RUN groupadd --system nodejs && useradd --system --gid nodejs hono
COPY --from=builder /app/node_modules /app/node_modules
COPY --from=builder /app/dist /app/dist
COPY --from=builder /app/package.json /app/package.json
USER hono
EXPOSE 9001
CMD ["node", "/app/dist/index.js"]
Sorry this is more of a rant, but I'm in charge of maintaining a legacy product for the big company I work for (who I don't want to name, but it rhymes with "Snapple." It's not Snapple.)
The entire app was created and deployed using Docker Swarm. The use case for Swarm is supposed to be light "clusters" that don't really justify the bigger lift of larger orchestration systems like Kubernetes.
But in a combination of Not Invented Here syndrome and just plain laziness, this entire system I support -- which includes multiple databases, a separate control plane, Redis, CRDB, and a zillion more moving parts -- is all in Swarm. Despite the fact that this system I inherited is clearly better suited to something like k8s, it's all in Swarm.
As a result, the hoops I have to jump through to deploy this thing (especially in China where there are... a lot of very carefully thought out security restrictions because, well, China...) are ridiculous. Where I could have predictable, incremental deployments with k8s, the deployment for this tool is... just a mess of custom scripts, makefiles, and basically tribal knowledge that the creator of the system -- of course -- has now moved on from, leaving literally nobody who knows how it works.
And before you excoriate not-Snapple too much, I'm a dev contractor with ~30 years of experience so I can say this with some authority: it's the same f*cking thing everywhere. You get all these prima donna devs who
This isn't really a rant about Swarm; it seems... fine for smaller systems. And I'm sure you can build bigger, more complex systems with it -- my project is a case in point. But like with so many things software development related, the people building it (who built it long after k8s was basically "the norm" in container orchestration) felt like they could reinvent the wheel better than basically the entire world. What, because you work at not-Snapple? The breathtaking hubris...
No matter how smart you are, resist this belief. You can't beat the wisdom of the crowd, especially in things like software development. There aren't that many real "ninjas" out there, just a bunch of working schlubs like me and, I'd reckon, readers of this forum.
When I'm architecting a new system, I strive to make it boring. Unless there's a very compelling reason, deciding to "color outside the lines" (say, implement your own TLS ciphersuite, or this case...) never, ever ends well in software development.
Thank you for letting me rant. I love Docker, except for its new, extractive business model.
As you were.
I've done what troubleshooting I can, but my knowledge is pretty limited, so here we are.
I'm running a VPS with Debian. It's a fresh install, and I installed docker and docker-compose from apt (docker reports its version as 20.10.24+dfsg1).
I've attempted to set up an instance of Kitchen Owl, using the docker-compose.yml from its docs. The only change I made was the port, since I'm planning on using a subdomain.
Running docker-compose, everything comes up okay, and there's nothing in the docker logs that suggests any errors as far as I can tell.
I have a reverse proxy set up with nginx, and am using certbot. The server entry for the container is:
server {
    index index.html index.htm index.nginx-debian.html;
    server_name [subdomain.domain.com]; # managed by Certbot

    location / {
        proxy_pass http://localhost:[port];
    }

    listen [::]:443 ssl; # managed by Certbot
    listen 443 ssl; # managed by Certbot
    ssl_certificate /etc/letsencrypt/live/[subdomain]/fullchain.pem; # managed by Certbot
    ssl_certificate_key /etc/letsencrypt/live/[subdomain]/privkey.pem; # managed by Certbot
    include /etc/letsencrypt/options-ssl-nginx.conf; # managed by Certbot
    ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem; # managed by Certbot
}

server {
    if ($host = [subdomain.domain.com]) {
        return 301 https://$host$request_uri;
    } # managed by Certbot

    listen 80;
    listen [::]:80;
    server_name [subdomain.domain.com];
    return 404; # managed by Certbot
}
Pointing a browser to the subdomain, it just hangs trying to load, with no error from the server. Running curl localhost:[port] from the VPS does the same thing. On the other hand, pointing a browser to http://[domain]:[port] (so not the subdomain, and without SSL) works.
I'm using iptables, which has a rule for port 80, and even stopping it entirely hasn't fixed anything.
I don't know what else I can try or where else to look for what the problem might be, so any help is appreciated!
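A couple of hedged checks, since curl localhost:[port] hanging points at the published port rather than at nginx: confirm which address the port is actually bound to, and rule out localhost resolving to ::1 while Docker only published on IPv4:

# what docker actually published, and on which address
docker ps --format '{{.Names}}\t{{.Ports}}'
sudo ss -tlnp | grep [port]
# force IPv4 vs IPv6 to rule out a 127.0.0.1 vs ::1 mismatch
curl -4 -v http://127.0.0.1:[port]/
curl -6 -v -g 'http://[::1]:[port]/'
# if only IPv4 answers, pointing nginx at 127.0.0.1 instead of localhost avoids the issue:
#   proxy_pass http://127.0.0.1:[port];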
I have come across an interesting issue. I have Uptime Kuma running in a docker container. I backed up the volume data and copied it over to a new server, ran the docker compose file (attached below), and it started up fine but without the heartbeat data, though it has the host data. I stopped the container, copied the data over again, and started the container, and the heartbeat data is now there. Why do you think this is? It's also strange that the host name data was there, meaning the backed-up data was copied to the right place.
docker compose file:
uptime-kuma:
  container_name: uptime-kuma
  image: louislam/uptime-kuma:latest
  restart: unless-stopped
  environment:
    - UPTIME_KUMA_DISABLE_FRAME_SAMEORIGIN=true
  ports:
    - "3002:3001"
  volumes:
    - ./uptime-kuma:/app/data
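A hedged guess at the why: if the container starts before the backup is in place, Uptime Kuma initialises a fresh SQLite database in ./uptime-kuma, and copying files over a database that is in use (together with its -wal/-shm journal files) can leave a mix of old and new data until the next clean restart. The order that avoids it:

docker compose down                      # make sure nothing has the db open
rm -rf ./uptime-kuma                     # clear the freshly-initialised data dir
cp -a /path/to/backup/. ./uptime-kuma/   # /path/to/backup is a placeholder
docker compose up -d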
Wrote a very simple docker manager in bash. I find it useful. Others might too. Let me know what you think.
I’m working on a setup where I run multiple VPN clients inside Linux-based containers (e.g., Docker/LXC) on a single VM, each providing a unique external IP address. I’d then direct traffic from a Windows VM’s Python script through these container proxies to achieve multiple unique IP endpoints simultaneously.
Has anyone here tried a similar approach or have suggestions on streamlining the setup, improving performance, or other best practices?
-----------------------
I asked ChatGPT, and it suggested this. I'm unsure if it's the best approach or if there's a better one. I've never used Linux before, which is why I'm asking here. I really want to learn if it solves my issue:
- In each container, run the VPN client plus a small proxy (a SOCKS5 proxy such as dante-server, or an HTTP proxy like tinyproxy).
- Publish each container's proxy on its own port of the Linux VM: LinuxVM_IP:1080 (SOCKS5), LinuxVM_IP:1081, LinuxVM_IP:1082, and so forth.
- In the Windows VM, point Python's requests module at the SOCKS5 proxies via requests[socks]:

import requests

# Thread 1 uses container 1’s proxy
session1 = requests.Session()
session1.proxies = {
    'http': 'socks5://LinuxVM_IP:1080',
    'https': 'socks5://LinuxVM_IP:1080'
}

# Thread 2 uses container 2’s proxy
session2 = requests.Session()
session2.proxies = {
    'http': 'socks5://LinuxVM_IP:1081',
    'https': 'socks5://LinuxVM_IP:1081'
}

# and so forth...

In Summary:
This approach gives you a clean separation between the environment that manages multiple VPN connections (the Linux VM with containers) and the environment where you run your main application logic (the Windows VM), all while ensuring each thread in your Python script gets a distinct IP address.
I have a new MacBook Pro with an M4 Pro chip. I've installed multiple versions of Docker Desktop for Mac (with Apple chip), but when booting it immediately crashes with the following message:
running engine: waiting for the Docker API: engine linux failed to run: running VM: qemu exited unexpectedly: exit status 1
I couldn't find much about it so I hope anybody here has a solution. Any input is much appreciated!
At work we are moving our IoT devices over to Ubuntu Core. The downside is everything must be installed via Snap. I have a Docker image of the software we run. Could someone direct me on how to build this image into a Snap package?
I have two volumes, er, bind mounts on a docker container. One contains static files the other contains a sqlite db. The db updates fine when the app in the container updates it, *or* when I update the db externally (from the host). The static files *stay* static (any changes to the files on the host are ignored, unless I rebuild the container). Is this expected behavior?
Here are the steps I have taken to troubleshoot:
So there is caching going on somewhere, but as far as I can tell, not on the client side (curl wouldn't use caching, right?).
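Two hedged checks that narrow this down: confirm the static directory really is a bind mount to the path you think, and compare the same file on both sides of the mount so the caching can be pinned on the app rather than on Docker:

docker inspect -f '{{ json .Mounts }}' <container_name>
# compare the same file on the host and inside the container (paths are placeholders)
md5sum /host/path/to/somefile
docker exec <container_name> md5sum /container/path/to/somefile

If the checksums match but the served response is still stale, the caching lives in the app or web server inside the container, not in the mount.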
Hi, I am using Docker Desktop (v4.36.0) with WSL2 (Ubuntu).
While I am editing files in WSL (with PhpStorm), the changes don't reflect in the container.
The files are mounted as volumes (driver: bridge).
Any help will be much appreciated.
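One hedged thing to check is where the project actually lives: bind mounts of Windows paths (/mnt/c/...) don't reliably propagate file-change events into containers, while paths inside the WSL distro itself (e.g. /home/<user>/project) do. From the WSL shell:

# a path under /mnt/c is the Windows drive, not native WSL storage
pwd
df -T . | tail -1      # 9p / drvfs here means you're on the Windows side of the mount
# quick test that the mount itself updates: create a file in WSL, look for it in the container
touch testfile && docker exec <container_name> ls -l /path/in/container/testfile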
Hi, please help. I'm using codeserver on a Synology NAS and I could access the /volume1/web directory, but now I only see the system in codeserver and I can't access the volume1 directory... Before that, I stopped all stacks and edited in Portainer
volumes:
- /volume1/docker/
to
volumes:
- /volume1/git/
and then moved the affected directories. Everything works for me, except codeserver. Do you know what could be the cause? I tried removing everything and creating a new stack.
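A hedged thought on the cause: a container only sees the host paths that are explicitly bind-mounted into it, so after the edit codeserver can only browse what lives under /volume1/git. If it should still reach /volume1/web (or anything else under /volume1), each path needs its own volume entry, e.g.:

volumes:
  - /volume1/git:/home/coder/git    # the container-side paths here are illustrative
  - /volume1/web:/home/coder/web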
So I'm hoping someone smarter than me could offer some insight. I've been following a Docker course for the past few weeks, and we have an assignment that I just can't solve.
We get three files: an HTML file, a Dockerfile, and a docker-compose.yml. All three files are in the same directory. The Dockerfile uses the Windows IIS image to build a web server, and the webpage should be available at http://localhost:123
We need to alter the Dockerfile (only the Dockerfile!) so that it uses mcr.microsoft.com/windows/servercore:ltsc2019 instead of mcr.microsoft.com/windows/servercore/iis:windowsservercore-ltsc2019. The result needs to be the same: the HTML should be shown at http://localhost:123.
This is the html:
<!DOCTYPE html>
<head>
<title>Firstname</title>
</head>
<body>
<h1>Lastename</h1>
</body>
</html>
This is the .yml:
services:
  iisserver:
    image: iis-website:latest
    build:
      context: .
      dockerfile: Dockerfile
    ports:
      - "123:80"
    networks:
      - webnet

networks:
  webnet:
    driver: nat
This is the original dockerfile:
# Use the IIS-base-image
FROM mcr.microsoft.com/windows/servercore/iis:windowsservercore-ltsc2019
# Copy the website files to the IIS wwwroot map
COPY ./index.html C:/inetpub/wwwroot
# Expose port 123 for webtraffic
EXPOSE 123
I've tried many different things, but none seemed to work:
# escape=`
# Use Windows Server Core 2019 as base image
FROM mcr.microsoft.com/windows/servercore:ltsc2019
# Install IIS
RUN powershell -Command Install-WindowsFeature -Name Web-Server
# Copy the website files to the IIS wwwroot map
COPY ./index.html C:/inetpub/wwwroot/
# Expose port 123 for webtraffic
EXPOSE 80
# Start IIS and keep the container running
CMD ["powershell", "-Command", "Start-Service w3svc; while ($true) { Start-Sleep -Seconds 60 }"]
I have two NICs installed on the host.
If I run ifconfig -a, I can see that I have enp3s0 and enp5s1 up and running.
How can I set a qbittorrent container to only use enp5s1 or at least show that as an option in the interfaces option within qbit?
Do I need to create a docker network and if so which type?
When creating the container I can specify the interface, but it shows an error of unavailable/non-existent in the logs. Substituting enp5s1 for eth1 doesn't work either.
Do I need to do anything special on the host?
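One hedged option, if the goal is to actually pin the container's traffic to the second NIC rather than just see it listed in qBittorrent, is a macvlan network parented on enp5s1 (the subnet, gateway and image below are placeholders for whatever that NIC's network actually is):

docker network create -d macvlan \
  --subnet=192.168.50.0/24 --gateway=192.168.50.1 \
  -o parent=enp5s1 nic2net
docker run -d --name qbittorrent --network nic2net lscr.io/linuxserver/qbittorrent
# inside the container the macvlan interface appears as eth0, so that's what
# you'd pick in qBittorrent's "network interface" setting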
I've been trying to set up my pipeline; however, I cannot get the container to run. I get the following error no matter what I try:
/direct_server: error while loading shared libraries: libssl.so.3: cannot open shared object file: No such file or directory
I don't have any openssl dependencies.
I've tried musl with scratch and got the same error.
What am I doing wrong here?
Dockerfile
# Use the Rust image for building
#FROM rust:latest as builder
FROM rust:bookworm as builder
# Set offline mode to prevent runtime preparation
ENV SQLX_OFFLINE=true
# Install necessary dependencies and git
RUN apt-get update && apt-get install -y \
libssl-dev \
pkg-config \
build-essential \
git && \
apt-get clean
# Use secrets to pass GitHub token securely
RUN --mount=type=secret,id=RDX_GITHUB_TOKEN \
git clone https://$(cat /run/secrets/RDX_GITHUB_TOKEN)@github.com/acct/infrastructure.git infrastructure && \
git clone https://$(cat /run/secrets/RDX_GITHUB_TOKEN)@github.com/acct/database.git database
# Copy the source code
COPY . .
# Copy the `.sqlx` folder (generated by cargo sqlx prepare)
COPY .sqlx .sqlx
# Update paths in Cargo.toml to use relative paths
RUN sed -i 's|{ path = "../infrastructure" }|{ path = "infrastructure" }|' Cargo.toml
RUN sed -i 's|{ path = "../database" }|{ path = "database" }|' Cargo.toml
# Build the Rust binary
RUN cargo build --release
# Minimal runtime image
FROM debian:bookworm-slim
# Install necessary runtime libraries (including libssl3) and verify installation
RUN apt-get update && apt-get install -y \
libssl3 \
libssl-dev \
pkg-config \
ca-certificates && \
apt-get clean && \
ldconfig && \
ls -l /usr/lib/x86_64-linux-gnu/libssl.so.3
# Copy the built binary from the builder stage
COPY --from=builder /target/release/direct_server /direct_server
# Set the entrypoint
ENTRYPOINT ["/direct_server"]
What's the best option for a firewall on a server where Docker is running: iptables or ufw? I know that Docker overrides the rules for iptables; is it the same for ufw? Thanks in advance.
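For context on why this question keeps coming up: Docker inserts its own iptables rules for published ports, and since ufw is just a frontend over iptables, container ports bypass ufw as well. Two commonly used mitigations, sketched rather than a complete policy (the interface and subnet below are placeholders):

# 1) publish only on loopback and let a reverse proxy / the firewall control outside access
docker run -d -p 127.0.0.1:8080:80 nginx
# 2) filter forwarded container traffic in the DOCKER-USER chain, which Docker leaves for admins
iptables -I DOCKER-USER -i eth0 ! -s 203.0.113.0/24 -p tcp --dport 80 -j DROP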
Check out my newer article on Medium: https://blog.stackademic.com/what-is-docker-the-complete-beginners-guide-to-docker-concepts-adda7313b98a