/r/docker
Docker is an open-source project to easily create lightweight, portable, self-sufficient containers from any application. The same container that a developer builds and tests on a laptop can run at scale, in production, on VMs, bare metal, OpenStack clusters, public clouds and more.
Hi there,
I'm a newbie so please bear with me. I've set up a container on Proxmox and installed Docker and Selenium Standalone Chrome (in the console it comes back as having installed the repository for this).
I can access the Selenium Grid (at http://192.168.1.126:4444/ rather than localhost) and I can see the node on the grid.
However, I can't access http://192.168.1.126:4444/wd/hub from a browser (it says it can't find the handler), and when I try the following command it says 'too many arguments'.
from selenium import webdriver
Any idea where to go next?
What I'm trying to do is use Home Assistant (running on a separate VM in Proxmox) to pull the data via the following Python script, using the UKBinCollection integration found here (https://github.com/robbrad/UKBinCollectionData):
python collect_data.py ForestOfDeanDistrictCouncil https://community.fdean.gov.uk/s/waste-collection-enquiry -s -p "XXXX XXX" -n XX -w http://192.168.1.126:444/
where -p is my postcode, -n the house number, and -w the remote Selenium URL.
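In case it helps, here is a minimal sketch of driving that grid from Python with a remote driver. The IP and port are taken from the post; Selenium 4 accepts either the bare :4444 address or the older /wd/hub path, and the target URL here is just the one from the post used as an example.

from selenium import webdriver

# hedged example: connect to the standalone Chrome node registered on the grid
options = webdriver.ChromeOptions()
driver = webdriver.Remote(
    command_executor="http://192.168.1.126:4444/wd/hub",
    options=options,
)
driver.get("https://community.fdean.gov.uk/s/waste-collection-enquiry")
print(driver.title)
driver.quit()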
You heard me! lol
I'd like to containerise my entire shell environment so that anytime I need it, all I need is Docker and I'm set.
It all works; however, startup times are a bit slow because the container has to be recreated every time, since the image is currently run with the --rm flag. I can remove that flag, and I tried using -d to run the container in the background, but it just exits immediately with exit code 1 before I even get a chance to exec into it.
How can I achieve this? Is this a suitable Docker use case? Let me know as I'm still learning.
The Dockerfile is here: https://github.com/cyrus01337/shell-devcontainer/blob/caa1e3e8db78b7afe76f6cd132102e01302b2665/Dockerfile#L11
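A container run with -d exits as soon as its main process does, and an interactive shell with no TTY attached exits straight away. A rough sketch of keeping it alive in the background so it can be exec'd into later; the image tag "shell-devcontainer" and the shell name are placeholders for whatever the Dockerfile actually builds and starts:

# -d: run in the background, -i: keep STDIN open, -t: allocate a TTY so the shell doesn't exit
docker run -dit --name shell shell-devcontainer
# later, attach a fresh session to the running container (swap in the shell the image uses)
docker exec -it shell zsh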
Hey guys, I'm trying to get back to hobby projects. I'm developing a firmware project and trying to integrate some extra bits for learning. I'm using GitHub as host and the objective is to:
I love developing in Linux, but currently I only have Windows... so I wanted to make the project as platform-agnostic as possible (if there is such a thing): just clone the repo and run everything from Docker.
I created a Dockerfile for an image with everything I need to develop the project, gcc, asciidoctor, robot framework and so on.
What I'm missing is how to run things on both platforms, plus GitHub Actions, as close together as possible. Since Windows doesn't have make installed, a plain Makefile on its own is out of the question.
I already turned to GPT (sorry) and the suggestion was to make a Bash script and a PowerShell script that could call docker-compose, and docker-compose would then call my Makefile inside my Docker image (if I understood correctly).
Is this a good approach? After some searching I read that Compose is mainly used for multi-container projects and such, so I fear I'm using a battleship to cross a puddle.
Also, while trying GitHub Actions with this method I get some permission-denied errors when running the bash script, but that's probably something for other subs.
Sorry for the long post, any help would be greatly appreciated 😅 Btw, I'm just starting to learn about Docker, sorry if I missed some obvious solutions.
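For what it's worth, Compose isn't only for multi-container apps; a one-service compose file is a common way to give Windows, Linux, and GitHub Actions the exact same entry point, with make living inside the image. A hedged sketch under that assumption (service name and targets are made up):

# compose.yaml
services:
  build:
    build: .            # the image with gcc, asciidoctor, robot framework, ...
    volumes:
      - .:/work         # mount the repo so build artifacts land back on the host
    working_dir: /work

# then, identically from PowerShell, a Linux shell, or a GitHub Actions step:
#   docker compose run --rm build make all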
I'm thinking of getting a MacBook for the first time, with an M3 Pro or M4 Pro; I've only used Linux in the past for dev work. I do a lot of Docker and Docker Compose daily: building images, running images, restarting, pushing to registries, etc. What do I need to know in 2024 about Docker on M-series MacBooks? A lot of Reddit posts are very old, talking about M1 and old versions of Docker. How's the performance now? Does it still have some weird bugs? Is it still slower than a mid-range chip like the 7840HS on Linux?
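One thing worth knowing regardless of raw performance: images built on an M-series Mac default to linux/arm64, so if the servers you push to are x86 you will want to cross-build or build multi-arch. A hedged sketch with buildx (registry and tag are placeholders):

# build and push a multi-arch image from the Mac; the amd64 half is built under QEMU emulation
docker buildx build --platform linux/amd64,linux/arm64 -t registry.example.com/myapp:latest --push .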
How do I delete this AppData folder that's taking up 8 GB? I uninstalled Docker a year ago and have now discovered that this folder can't be deleted. I've tried the scripts I found online but they aren't working.
Any help?
\AppData\Local\Docker\wsl
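Usually that folder resists deletion because the old Docker Desktop WSL distros still hold the ext4.vhdx files inside it. A hedged sketch of what has worked for others, run from an elevated PowerShell, and only if you no longer need any of the old Docker data:

wsl --list --all                      # check whether docker-desktop / docker-desktop-data still exist
wsl --shutdown
wsl --unregister docker-desktop       # removes the distro and its vhdx
wsl --unregister docker-desktop-data  # older Docker Desktop versions used this second distro
Remove-Item -Recurse -Force "$env:LOCALAPPDATA\Docker\wsl"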
I run InfluxDB from within a Docker container and it works great when I access it via localhost:8086.
BUT, I can't figure out what's required to expose the container's IP address and port so that other PCs can access this instance.
Note: I'm using https://github.com/alekece/tig-stack and docker-compose up -d
Anyone have any insight?
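If localhost:8086 works on the host, the port is already published; other PCs should normally reach it at the host's LAN IP (http://<docker-host-ip>:8086) as long as the mapping isn't bound to 127.0.0.1 and the host firewall allows it. A hedged snippet for the influxdb service's ports section in that stack (the stack may already look like this):

    ports:
      - "0.0.0.0:8086:8086"   # publish on all host interfaces, not just loopback

# quick test from another PC:
#   curl http://<docker-host-ip>:8086/ping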
I'm trying to install Xilinx Vivado on a USB drive and run it via Docker Desktop, but I have no idea what I'm doing.
I have a MacBook Air M1 and I need to install Vivado for some assignments, but A) it's not available for Mac and B) it takes up too much space on disk, so I decided to install Vivado on a USB drive and run it in Docker (because of Docker's special VM stuff), but I haven't been able to do so. I've been trying for a week now and tbh I don't know what I'm doing, so any help of any kind would be amazing!
I'm looking for a docker image that can convert, shrink, crop, combine, etc. image (png, jpg, bmp, etc.) files. Something I can self-host and just use to quickly modify files.
I'm somewhat new to containers. Typically for containers, everything just runs as root and I have root access on my system so everything works "fine".
However, I'm not sure the best practice in a collaborative environment.
For example, let's say I set up a container with MATLAB. They offer a Dockerfile. It creates a user in the container, "matlab", and everything runs under that account in its home folder (/home/matlab). I have it set up with all the packages everyone needs. The first problem: I can't open X11 windows. I use -v $HOME/.Xauthority:/home/matlab/.Xauthority:ro, but the container can't access the file because the UIDs don't match. I run into this type of permissions issue all the time and don't know the "right" solution.
Overall, I think there are two general ideas for a "goal".
However, it always seems to be a struggle to achieve either.
I must be missing something. Docker Hub has thousands of containers with all sorts of applications. They can't all be webservers; some need to access files. How are permissions handled? Does everyone just run everything as root all the time? I find that hard to believe.
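One common pattern is to keep the image's non-root user but make its UID/GID match the invoking user, either at run time or at build time, so bind-mounted files like .Xauthority stay readable. A hedged sketch of both options; the image name, paths, and build args are illustrative, not from the MATLAB Dockerfile itself:

# run-time: override the user; works when the app doesn't depend on an /etc/passwd entry
docker run --rm -it \
  --user "$(id -u):$(id -g)" \
  -e DISPLAY \
  -v "$HOME/.Xauthority:/home/matlab/.Xauthority:ro" \
  -v /tmp/.X11-unix:/tmp/.X11-unix \
  matlab-image

# build-time: bake your UID/GID into the image's user instead
#   docker build --build-arg UID=$(id -u) --build-arg GID=$(id -g) -t matlab-image .
# with ARG UID / ARG GID in the Dockerfile feeding groupadd/useradd for the "matlab" user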
I'm trying to set up Feedbin using the angristan/feedbin-docker project (https://github.com/angristan/feedbin-docker), but I'm running into a Ruby version conflict during the build process.
The error occurs when trying to install bundler. The container uses Ruby 2.7.6, but bundler requires Ruby 3.0.0 or higher:
ERROR: Error installing bundler:
The last version of bundler (>= 0) to support your Ruby & RubyGems was 2.4.22. Try installing it with `gem install bundler -v 2.4.22`
bundler requires Ruby version >= 3.0.0. The current ruby version is 2.7.6.219.
I really liked how Feedbin looks and its features, which is why I chose it. Would really appreciate any help getting this working!
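The error message itself points at the workaround: on Ruby 2.7 you have to pin an older bundler instead of letting gem grab the latest. A hedged one-line change to that project's Dockerfile (the exact place to put it depends on the repo's current layout):

# before bundle install, pin the last bundler release that still supports Ruby 2.7
RUN gem install bundler -v 2.4.22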
This is my compose.yml and DATABASE_URL environment variable:
services:
  client:
    container_name: client
    build:
      context: packages/client
      dockerfile: Dockerfile
    ports:
      - "3000:3000"
    networks:
      - shared-network
    depends_on:
      - server
  server:
    container_name: server
    build:
      context: packages/server
      dockerfile: Dockerfile
    ports:
      - "8000:8000"
    networks:
      - shared-network
    depends_on:
      - database
  database:
    container_name: database
    image: postgres:15-alpine
    restart: always
    environment:
      POSTGRES_USER: ${POSTGRES_USER}
      POSTGRES_PASSWORD: ${POSTGRES_PASSWORD}
      POSTGRES_DB: ${POSTGRES_DB}
    ports:
      - "5432:5432"
    expose:
      - "5432"
    networks:
      - shared-network
    volumes:
      - pgdata:/var/lib/postgresql/data
volumes:
  pgdata:
networks:
  shared-network:
    driver: bridge
DATABASE_URL="postgresql://postgres:postgres@localhost:5432/postgres?schema=public"
I have followed:
https://docs.docker.com/engine/daemon/ipv6/
but can't route IPv6 to the container.
I have an ISP IPv6 allocation: 2404:xxxx:4314::/48
On my router (Mikrotik RB5009) I am using 2404:xxxx:4314:0::/64 for my internal subnet and it all works perfectly.
I want to use 2404:xxxx:4314:1::/64 for my Docker hosts, but I want them publicly accessible (with the RB5009 providing the IPv6 firewall).
I have nothing in /etc/docker/daemon.json
I created a routed IPv6 Docker network:
docker network create --ipv6 --subnet 2404:xxxx:4314:1::/64 -o com.docker.network.bridge.gateway_mode_ipv6=routed ip6net
and then run up an Alpine shell:
➜ docker run -it --rm --network ip6net alpine /bin/sh
/ # ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
49: eth0@if50: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue state UP
link/ether 02:42:ac:12:00:02 brd ff:ff:ff:ff:ff:ff
inet 172.18.0.2/16 brd 172.18.255.255 scope global eth0
valid_lft forever preferred_lft forever
inet6 2404:xxxx:4314:1::2/64 scope global flags 02
valid_lft forever preferred_lft forever
inet6 fe80::42:acff:fe12:2/64 scope link
valid_lft forever preferred_lft forever
/ #
It is allocated 2404:xxxx:4314:1::2/64
On the router I have an IPv6 route set up for 2404:xxxx:4314:1::/64
pointing to the Docker host 2404:xxxx:4314:0:xxxx:xxxx:xxxx:xxxx
address and can ping 2404:xxxx:4314:1::1/64
from the router but cannot access 2404:xxxx:4314:1::2/64
at all.
What am I doing wrong?
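One quick thing to rule out before digging deeper: with the routed gateway mode the Docker host is acting as a router for that /64, so IPv6 forwarding has to be enabled on the host (Docker usually turns it on, but it's a fast check), and the host itself should be able to reach the container. A hedged check, with the masked addresses kept as in the post:

sysctl net.ipv6.conf.all.forwarding               # should print 1
sudo sysctl -w net.ipv6.conf.all.forwarding=1     # enable it if it isn't
ping -6 2404:xxxx:4314:1::2                       # from the Docker host, confirm the container answers locally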
My goal is to set up VS Code to access Docker remotely for modifying existing containers. There is one thing I don't understand: in VS Code you have to set up a .devcontainer file, and when you first create this file you're presented with a lot of questions, and I don't know which options to choose.
There is a website which posts this:
// For format details, see https://aka.ms/vscode-remote/devcontainer.json or this file's README at:
// https://github.com/microsoft/vscode-dev-containers/tree/v0.117.1/containers/docker-existing-dockerfile
{
"name": "VS Code Remote Demo",
// Sets the run context to one level up instead of the .devcontainer folder.
"context": "..",
// Update the 'dockerFile' property if you aren't using the standard 'Dockerfile' filename.
"dockerFile": "../docker/nvidia.Dockerfile",
// Set *default* container specific settings.json values on container create.
"settings": {
"terminal.integrated.shell.linux": null
},
// Add the IDs of extensions you want installed when the container is created.
"extensions": [],
// Use 'forwardPorts' to make a list of ports inside the container available locally.
// "forwardPorts": [],
// Uncomment the next line to run commands after the container is created - for example installing git.
// "postCreateCommand": "apt-get update && apt-get install -y git",
// Uncomment when using a ptrace-based debugger like C++, Go, and Rust
// "runArgs": [ "--cap-add=SYS_PTRACE", "--security-opt", "seccomp=unconfined" ],
"runArgs": ["--gpus", "device=0"],
// Uncomment to use the Docker CLI from inside the container. See https://aka.ms/vscode-remote/samples/docker-in-docker.
// "mounts": [ "source=/var/run/docker.sock,target=/var/run/docker.sock,type=bind" ],
// Uncomment to connect as a non-root user. See https://aka.ms/vscode-remote/containers/non-root.
// "remoteUser": "vscode"
// Using volume
// "image": "ubuntu-remote-test:0.0.1", // Or "dockerFile"
// "workspaceFolder": "/workspace",
// "workspaceMount": "source=remote-workspace,target=/workspace,type=volume"
// Using bind
// /home/leimao/Workspace/vs-remote-workspace/ is a directory on the remote host computer
// "workspaceFolder" is the folder in the Docker container as workspace
// target=/workspace is the folder in the Docker container that the workspace on the host server are going to bind to
"workspaceFolder": "/workspace",
"workspaceMount": "source=/home/leimao/Workspace/vs-remote-workspace/,target=/workspace,type=bind,consistency=cached"
}
However, compared to just arbitrarily choosing options under Add Dev Container: Configuration Files, I don't understand what the difference is. And here is what I have, using the above-mentioned command:
// For format details, see https://aka.ms/devcontainer.json. For config options, see the
// README at: https://github.com/devcontainers/templates/tree/main/src/docker-outside-of-docker
{
  "name": "Docker outside of Docker",
  // Or use a Dockerfile or Docker Compose file. More info: https://containers.dev/guide/dockerfile
  "image": "mcr.microsoft.com/devcontainers/base:bullseye",
  "features": {
    "ghcr.io/devcontainers/features/docker-outside-of-docker:1": {
      "version": "latest",
      "enableNonRootDocker": "true",
      "moby": "true"
    }
  },
  // Use this environment variable if you need to bind mount your local source code into a new container.
  "remoteEnv": {
    "LOCAL_WORKSPACE_FOLDER": "${localWorkspaceFolder}"
  },
  "context"
  // Use 'forwardPorts' to make a list of ports inside the container available locally.
  // "forwardPorts": [],
  // Use 'postCreateCommand' to run commands after the container is created.
  // "postCreateCommand": "docker --version",
  // Configure tool-specific properties.
  // "customizations": {},
  // Uncomment to connect as root instead. More info: https://aka.ms/dev-containers-non-root.
  // "remoteUser": "root"
}
Quite a difference between what the user posted on their site and what I got based on the options I chose when running Add Dev Container: Configuration Files.
With this information, what are these options, and what do I need to set up for what I want to do, which is to modify existing running containers? Specifically, some of them have configuration files which I want to modify and change. I understand that I have to SSH into the Docker host, which is sort of a separate subject, although I know the two work together; otherwise I can't modify files in VS Code without VS Code having access to Docker, whether locally (which I don't want to do) or remotely.
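For the "access Docker remotely" part specifically, a devcontainer.json may not be needed at all: the Dev Containers extension can attach to an already-running container once the local Docker CLI points at the remote engine, for example via an SSH context. A hedged sketch, where user and remote-host are placeholders:

# create and select a context that tunnels the Docker API over SSH
docker context create remote-host --docker "host=ssh://user@remote-host"
docker context use remote-host
docker ps    # should now list the containers on the remote machine
# then in VS Code: "Dev Containers: Attach to Running Container..." and edit files in place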
I'm facing an issue with web searching in the OpenUI. I've tried different search engines like DuckDuckGo and Jina, but none of them are working.
Hi there, I just started using Docker for the first time and I'm looking at how to share folders from my host machine with a Docker container. I'm setting up apps such as Plex and Radarr/Sonarr, but I seem to run into issues when creating my compose file. I can't share my folders with the file paths E:\Movies and E:\Tv. Below is an example from my Sonarr compose.
sonarr:
  image: lscr.io/linuxserver/sonarr:latest
  container_name: sonarr
  network_mode: "service:gluetun"
  environment:
    - PUID=0
    - PGID=0
    - TZ=Europe/London
  volumes:
    - /E/docker/sonarr/config:/app/docker/sonarr/config
    - /E/Tv:/app/Tv
This is how I have laid it out in my compose file, but it will not access the files. As I said, I am new to this and I'm sure I've made a stupid mistake, so if anyone could help me sort it out properly it would be greatly appreciated.
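On Docker Desktop for Windows the host side of a bind mount is usually written with the drive letter and forward slashes (E:/...), not /E/..., and the drive has to be shared in Docker Desktop's file-sharing settings. The linuxserver Sonarr image also documents /config for its configuration; the media path inside the container is whatever you choose (commonly /tv). A hedged version of the volumes section under those assumptions:

  volumes:
    - E:/docker/sonarr/config:/config
    - E:/Tv:/tv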
So, I have been teaching myself programming and web development for the better part of the last year, and am now working on teaching myself Docker. All in all I have a decent grasp of how to make a Dockerfile and a docker-compose file, debug issues in the Docker Desktop console, and so on, but I have come across one problem that just has me beating my head against the wall: getting HMR (Hot Module Reload) to work while running my Vite/React app in a container and accessing it through my browser. I have tried more ways to make this work than I can remember at this point and have finally decided to ask for help.
Like I said, I can make a container that runs the app, and is accessible through the browser, but HMR has just been a no go no matter what I do.
Does anyone have any tips on fixing this? Maybe point me towards an image they use that works for them or a GitHub repo?
Thanks in advance for any help here.
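The usual culprits are Vite binding only to localhost inside the container and file-change events not crossing the bind mount. A hedged vite.config sketch that has worked for others in containers; the port numbers are Vite's defaults and may differ in your setup:

// vite.config.js
import { defineConfig } from "vite";
import react from "@vitejs/plugin-react";

export default defineConfig({
  plugins: [react()],
  server: {
    host: true,                   // listen on 0.0.0.0 so the published port is reachable from the host
    port: 5173,
    watch: { usePolling: true },  // poll for changes when inotify events don't cross the mount
    hmr: { clientPort: 5173 },    // the port the browser connects back to for HMR websockets
  },
});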
Docker noob here. I'm running Docker Desktop and was wondering what happens if I edit conf files inside the container. Do the changes persist across container restarts? I have an app and I would like to make a very small configuration change.
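Short version, in case it helps: edits inside the container's filesystem survive stop/start of that same container, but they are lost when the container is removed and recreated from the image. For a small change that should outlive recreation, bind-mounting just that file is the usual approach. A hedged sketch with made-up container, image, and path names:

# copy the current file out, edit it on the host, then mount it back in on the next run
docker cp myapp:/etc/myapp/app.conf ./app.conf
docker run -d --name myapp -v "$PWD/app.conf:/etc/myapp/app.conf" myapp-image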
I have a Docker setup for my MERN stack project and everything runs smoothly. However, when I try to connect through Compass so I can view the actual documents, it only shows me one collection with 0 documents, when I know there's at least one in it (I created it with Postman and I can see it in command-line results).
services:
  backend:
    build:
      context: ./backend
      dockerfile: Dockerfile
    ports:
      - "4000:4000"
    environment:
      - MONGO_URI=mongodb://mongo:27017/pr-mngt-db  # Database name set as mongoosedb
      - PORT=4000
    volumes:
      - ./backend:/app
    depends_on:
      - mongo
  frontend:
    build:
      context: ./frontend
      dockerfile: Dockerfile
    ports:
      - "3000:3000"
    environment:
      - REACT_APP_BACKEND_URL=http://127.0.0.1:4000  # Backend URL
  mongo:
    image: mongo
    ports:
      - "27017:27017"  # Expose MongoDB default port
    volumes:
      - mongo_data:/data/db  # Persist MongoDB data
volumes:
  mongo_data:
This is my docker compose file.
SERVER_PORT=4000
MONGO_URI=mongodb://localhost:27017/pr-mngt-db
SESSION_SECRET=3!aST%qRizW^jy$2HjwO*2w^^pWZkK
This is my backend .env
I'm trying to connect through compass using the connection string: mongodb://localhost:27017/pr-mngt-db
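Connecting Compass to mongodb://localhost:27017 should reach the containerised Mongo, since the port is published, so if the documents aren't visible the data may simply live under a different database or collection name than the one Compass opened. A hedged way to see, from inside the container, which databases and collections actually hold data (the database name is taken from the post):

docker compose exec mongo mongosh --eval "db.getMongo().getDBNames()"
docker compose exec mongo mongosh pr-mngt-db --eval "db.getCollectionNames()"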
I'm getting an error pop-up after finishing the setup of Docker on my Windows PC. For reference, I am using the latest version of Docker as of the time of writing (4.35.1). The error is "WSL update failed":
wsl update failed: update failed: updating wsl: exit code: 4294967295: running WSL command wsl.exe C:\WINDOWS\System32\wsl.exe --update --web-download: Class not registered
Error code: Wsl/CallMsi/Install/REGDB_E_CLASSNOTREG
: exit status 0xffffffff
I am a novice at this, but I've tried some troubleshooting myself, as I've read online that this is an issue with WSL. I've tried to uninstall WSL in order to reinstall it, but I get the following error (in cmd):
Class not registered
Error code: Wsl/CallMsi/Install/REGDB_E_CLASSNOTREG
Upon seeing this I've again tried some troubleshooting by running sfc /scannow in cmd which found corruptions but was unable to fix them. I then found the following steps online to resolve this:
1. Type "cmd" in windows search bar
2. Right click on "Command Prompt"
3. Select "Run as Administrator"
4. Type "DISM /Online /Cleanup-Image /CheckHealth" without quote and press ENTER
5. Type "DISM /Online /Cleanup-Image /ScanHealth" without quote and press ENTER
6. Type "DISM /Online /Cleanup-Image /RestoreHealth" without quote and press ENTER
These steps resolved the sfc /scannow corruptions. But i still can't uninstall/reinstall WSL.
Sorry for the length of the post!
Appreciate any help :)
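One more thing that has helped others with the Class-not-registered error: re-enabling the WSL optional components and then updating WSL via the web download rather than the Store plumbing. This is a guess at the setup rather than a guaranteed fix; run the commands from an elevated PowerShell and reboot afterwards:

dism.exe /online /enable-feature /featurename:Microsoft-Windows-Subsystem-Linux /all /norestart
dism.exe /online /enable-feature /featurename:VirtualMachinePlatform /all /norestart
wsl --update --web-download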
I've tried a plethora of solutions from Stack Overflow, YouTube tutorials, and ChatGPT debugging; however, none of them have worked. When I deploy my script with this Dockerfile, this is the error I get in return. Any help would be appreciated!
Dockerfile:
FROM python:3.10-slim
ENV DEBIAN_FRONTEND=noninteractive
RUN apt-get update && apt-get install -y chromium-driver chromium fonts-liberation \
    libnss3 \
    libx11-6 \
    libatk-bridge2.0-0 \
    libatspi2.0-0 \
    libgtk-3-0 \
    libxcomposite1 \
    libxcursor1 \
    libxdamage1 \
    libxrandr2 \
    libgbm-dev \
    && apt-get clean && rm -rf /var/lib/apt/lists/*
ENV CHROME_BIN=/usr/bin/chromium \
    CHROME_DRIVER_BIN=/usr/bin/chromedriver
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY . .
EXPOSE 8080
Error:
File "/workspace/app/scraper.py", line 21, in driver = webdriver.Chrome(options=chrome_options) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/workspace/.heroku/python/lib/python3.12/site-packages/selenium/webdriver/chrome/webdriver.py", line 45, in init super().init( File "/workspace/.heroku/python/lib/python3.12/site-packages/selenium/webdriver/chromium/webdriver.py", line 55, in init self.service.start() File "/workspace/.heroku/python/lib/python3.12/site-packages/selenium/webdriver/common/service.py", line 105, in start self.assert_process_still_running() File "/workspace/.heroku/python/lib/python3.12/site-packages/selenium/webdriver/common/service.py", line 118, in assert_process_still_running raise WebDriverException(f"Service {self._path} unexpectedly exited. Status code was: {return_code}") selenium.common.exceptions.WebDriverException: Message: Service /workspace/.cache/selenium/chromedriver/linux64/130.0.6723.69/chromedriver unexpectedly exited. Status code was: 127
Still relevant today, this video shows how to efficiently package Go modules in a Docker container with multi-stage builds.
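For anyone who prefers text to video, a minimal sketch of the idea: build in a full Go image, copy only the binary into a small runtime image, and keep the module download as its own cached layer. The module path, binary name, and base images here are placeholders, not taken from the video:

# build stage
FROM golang:1.22 AS build
WORKDIR /src
COPY go.mod go.sum ./
RUN go mod download            # cached as a separate layer until go.mod/go.sum change
COPY . .
RUN CGO_ENABLED=0 go build -o /out/app ./cmd/app

# runtime stage: only the static binary
FROM gcr.io/distroless/static-debian12
COPY --from=build /out/app /app
ENTRYPOINT ["/app"]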
This is a hypothetical question: say I have a single monolithic server application that is not designed to scale.
This server application is containerised.
Is it possible to have a single container that runs, but is abstracted so that underneath it could be running across multiple servers? That is, the container sees what looks like one large pool of compute and storage, but underneath is a scalable set of servers and storage that can grow and add new machines as needed without affecting the container?
Anyone here with experience using Docker on QNAP? I'm struggling to execute even a simple command through SSH (Terminal or Kitty on Mac, same result) because I can't find the root directory for Docker. I think it might be
/share/ZFS531_DATA/.qpkg
But I cannot find it in File Station, and SSH doesn't recognise it as a file location to pull a Docker image to.
Any help gratefully received!
Edit: I have enabled hidden folders in file station.
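If the goal is just to find where Docker (Container Station) keeps its data, asking the daemon directly is usually easier than hunting through File Station. A hedged check over SSH:

docker info --format '{{ .DockerRootDir }}'

Also worth noting: docker pull doesn't need to be run from any particular directory; the daemon stores the image under that data root regardless of your current folder.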
I was running a container in WSL2 Ubuntu and decided to install Docker Desktop (Win11) to have a GUI and "run at startup" behaviour, because I had to open WSL and run the container manually every time.
After installing Docker Desktop and enabling Ubuntu in settings, the container runs at startup, but it doesn't show up in either Docker Desktop or the Ubuntu CLI. I am unable to locate it via docker ps or docker compose ps.
I made a copy of the volumes at /var/lib/docker/volumes, reinstalled Docker Desktop and tried to work with the two contexts listed in WSL.
I am unable to find a solution via Google or ChatGPT. Can somebody help, please?
root@user:~# docker context ls
NAME DESCRIPTION DOCKER ENDPOINT ERROR
default * Current DOCKER_HOST based configuration unix:///var/run/docker.sock
desktop-linux Docker Desktop npipe:////./pipe/dockerDesktopLinuxEngine
root@usr:~# docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
root@user:~# docker context use desktop-linux
desktop-linux
Current context is now "desktop-linux"
root@user:~# docker ps
Failed to initialize: protocol not available
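Since the container still starts at boot but neither context shows it, it may be running under a separate Docker engine installed inside the Ubuntu distro (docker-ce) rather than Docker Desktop's engine; the "protocol not available" error is what you get when the desktop-linux context's Windows named pipe is used from inside Linux. A hedged way to check from the Ubuntu shell:

sudo service docker status    # is a native dockerd (docker-ce) running inside the distro?
sudo docker ps                # containers owned by that engine, independent of Docker Desktop
docker context use default    # inside the distro, the default unix-socket context is the one to use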
I am running several docker containers on separate docker hosts. Some of the hosts are in different physical locations with public internet in between.
What is the best way to link the two networks for file transfers between the Docker containers, so that traffic is protected and shielded from outside access?
My Setup
Server 1 Debian (OMV) - Docker Host1 -- docker container 1, 2,3
INTERNET
Server 2 Debian (OMV) Docker Host 2 - docker container 4,5,6
Potential Options:
Option 1: Establish a WireGuard link between Server 1 and 2, so the host networks are connected, then continue from there?
Option 2: Establish a direct Docker-to-Docker connection? Is this possible?
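Option 1 is the common approach: a WireGuard tunnel between the two hosts, with the containers either publishing their ports on the hosts' WireGuard IPs or simply talking to <wg-ip>:<port> of the other side. A hedged sketch of the host-side config; keys, endpoint, ports, and the 10.10.0.0/24 tunnel subnet are placeholders:

# /etc/wireguard/wg0.conf on Server 1
[Interface]
Address = 10.10.0.1/24
ListenPort = 51820
PrivateKey = <server1-private-key>

[Peer]
PublicKey = <server2-public-key>
Endpoint = server2.example.com:51820
AllowedIPs = 10.10.0.2/32

# containers on Server 1 can then reach a service on Server 2 at 10.10.0.2:<published-port>

A direct container-to-container link across the public internet isn't really a thing without some overlay in between; if you'd rather stay inside Docker tooling, Swarm's encrypted overlay networks (docker network create --driver overlay --opt encrypted) are the built-in alternative.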