/r/docker

Docker is an open-source project to easily create lightweight, portable, self-sufficient containers from any application. The same container that a developer builds and tests on a laptop can run at scale, in production, on VMs, bare metal, OpenStack clusters, public clouds and more.

/r/docker

216,794 Subscribers

1

How to 'use' an NFS volume in a container? Portainer vs YAML (trying to learn)

Hi there,

First and foremost, I'm an absolute beginner at this. I've been following various guides and trying to put together an "Arr" server. But I'm already stuck at setting up qBittorrent.

My NAS is a Terramaster. Docker is running on a Raspberry Pi.

I was tempted to use Portainer to make life simple, but I'm keen to learn how to do this via SSH using docker compose, etc.

I have 'connected' my NAS to Docker by creating a volume. When I use docker volume inspect, I get the following: pastebin

When I try to 'use' the volume with a container (qBit as an example) in Portainer, it seems quite simple and qBit connects to my NAS. I use this to add the volume to a container: imgur

Now, when I try to do this using docker compose + .env files, I just cannot get it to work.

  • My .env file is like this: pastebin.
  • My qbit yml looks like this: pastebin
  • The docker compose file looks like this: pastebin

So whilst the Portainer method works, my attempts at achieving this through YAML files don't seem to work.

I have tried the following:

  • in the .env file, I have updated $DATADIR="NAS_Entertainment", but that doesn't seem to work.
  • I've also updated the docker compose file to try and define the volumes using "volumes: NAS_Entertainment:", but this seems to create a new volume called "docker_NAS_Entertainment".

Apologies if I am completely mixing up terminology. Any guidance is appreciated.

Thank you!
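A note for readers hitting the same naming issue: Compose prefixes volumes it creates with the project name (hence "docker_NAS_Entertainment"). A minimal sketch, assuming the volume NAS_Entertainment was already created with docker volume create and that the qBittorrent mount path is a placeholder, is to mark the volume as external so Compose reuses it instead of creating a prefixed one:

```yaml
services:
  qbittorrent:
    image: lscr.io/linuxserver/qbittorrent:latest
    volumes:
      - NAS_Entertainment:/data   # mount the NFS-backed volume into the container

volumes:
  NAS_Entertainment:
    external: true   # reuse the pre-created volume; Compose won't prefix its name
```

Alternatively, a top-level volume with `name: NAS_Entertainment` has the same effect of suppressing the project-name prefix.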

6 Comments
2024/12/03
17:02 UTC

1

Winget or msiexec in Windows Docker image

Has anyone worked with Windows Docker images? I’ve noticed they don’t come with a package manager or even msiexec. How do you usually install necessary applications? Do you just COPY files and folders?
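Not a definitive answer, but one common pattern is to base the image on Server Core rather than Nano Server (Nano Server ships without msiexec or PowerShell), COPY the installer in, and run it silently. The installer path here is hypothetical:

```dockerfile
FROM mcr.microsoft.com/windows/servercore:ltsc2022
# Hypothetical installer; Server Core includes msiexec, Nano Server does not.
COPY installers/app.msi C:/temp/app.msi
RUN msiexec /i C:\temp\app.msi /qn /norestart
```

For tools with portable zip distributions, plain COPY of the extracted folder plus a PATH update is also common.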

4 Comments
2024/12/03
16:25 UTC

0

Docker not starting after upgrade

On my Windows 11 machine, I clicked to upgrade Docker Desktop some 45 minutes ago.

It failed with an error and now refuses to start, offering only the options of resetting to factory defaults or quitting.

I have raised an issue with the diagnostic id but not sure anything will come from that.

All my data, Docker, and compose files are in bind mounts, besides having backups.

I am looking for pointers from anyone who has resolved the same issue, and/or minimal steps to get back to where I was before this horrible event.

21 Comments
2024/12/03
14:53 UTC

0

Docker images list in Docker Desktop and the list from the Docker CLI are not the same

4 Comments
2024/12/03
13:52 UTC

6

My script is not executed when I run my container

Hello, I'm a freshman year student and I have an assignment that I've been sitting on for around 4 days and can't get done. I'm using Ubuntu Linux and need to create a container image using a Dockerfile. After starting the container, my name has to be printed using the figlet command, and after that my script has to be installed in the container and run. Then I have to show that my script ran properly after the container was started.
Figlet command works fine, but my script doesn't run after I try running my image.
here are the steps I do:
Dockerfile:

FROM ubuntu
RUN apt update && apt install -y figlet bash
COPY script.sh /root/script.sh
RUN chmod +x /root/script.sh
CMD bash -c "figlet 'my name' && /root/script.sh"

my script.sh:
#!/bin/bash
for x in $(seq 1 5)
do
  mkdir "directory$x"
done

for x in $(seq 1 5)
do
  amount=0
  for y in $(seq 1 10)
  do
    ind=$(shuf -i 1-1000 -n 1)
    echo "$ind" > "directory$x/file$y.txt"
    amount=$((amount + ind))
  done
  echo "directory$x amount: $amount" >> common.txt
done

then I write:
docker build -t image .
docker run -it image
docker run -it image bash

and I can't find the directories and files that had to be created, meaning the script didn't run.
what am I doing wrong?
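A note for readers: each `docker run` creates a brand-new container, and `docker run -it image bash` overrides CMD entirely, so the script never runs in that third container. The files from the second run live in a different, now-exited container, in CMD's working directory (`/`, since the Dockerfile sets WORKDIR after nothing), not in `/root`. The script's loop logic itself works, which can be confirmed outside Docker:

```shell
#!/bin/bash
# Run the same loop logic as script.sh in a scratch directory to confirm it
# creates 5 directories of 10 files each plus a 5-line common.txt
# (shuf is from GNU coreutils).
tmp=$(mktemp -d)
cd "$tmp"
for x in $(seq 1 5); do
  mkdir "directory$x"
  amount=0
  for y in $(seq 1 10); do
    ind=$(shuf -i 1-1000 -n 1)
    echo "$ind" > "directory$x/file$y.txt"
    amount=$((amount + ind))
  done
  echo "directory$x amount: $amount" >> common.txt
done
ls -d directory*
wc -l common.txt
```

To see the results in a single container, run the script and inspect in the same invocation, e.g. `docker run --rm image bash -c "/root/script.sh && ls /"`.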

6 Comments
2024/12/03
10:27 UTC

1

How to deploy Docker images on AWS EC2?

I have an app running locally on Docker with these services:

  • frontend with nextjs
  • backend with expressjs
  • database with postgresql with drizzle orm
  • pgadmin4 docker image
  • nginx server image

So I want to host it somewhere like AWS EC2, but it seems a bit complicated to configure and connect each service, make it go online, and link my domain to it. Thanks in advance!

EDIT: I have a compose file set up for the services
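Not a full answer, but since a compose file already exists, the most common minimal route is a single EC2 instance with Docker installed and the stack brought up there. Key path, hostname, and directory below are placeholders:

```shell
# Copy the project (compose file, configs) to the instance and start the stack.
scp -i my-key.pem -r ./myapp ubuntu@ec2-xx-xx-xx-xx.compute.amazonaws.com:~/
ssh -i my-key.pem ubuntu@ec2-xx-xx-xx-xx.compute.amazonaws.com \
  'cd ~/myapp && docker compose up -d'
```

From there, pointing the domain's DNS A record at the instance's Elastic IP and letting the existing nginx service terminate ports 80/443 covers the "link my domain" part.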

6 Comments
2024/12/03
08:53 UTC

0

Automatic Gluetun port forward for qBit in Compose through Dockge

I'm running one compose file with both Gluetun and qBit on TrueNAS Scale EE via Dockge, running flawlessly; zero issues with torrenting and port forwarding. As you know, when Gluetun boots up or returns an unhealthy check, it picks another random port to forward, which I then have to change in qBit.

Is there a way to have qBit detect the forwarded port and adjust itself appropriately? If possible I'd love to have this code within the compose file to keep it simple and easy. I can see in the terminal that any time the port gets forwarded by Gluetun, the port gets logged to a file:

INFO [port forwarding] writing port file /tmp/gluetun/forwarded_port

I also would like this change to be constantly updated during uptime to catch whenever Gluetun changes its port during an unhealthy check.

If this isn't possible through the compose, how could I get this to work within TrueNAS scale? All I have is Dockge on it running all my stacks.

services:
  gluetun:
    image: qmcgaw/gluetun
    container_name: gluetun
    restart: unless-stopped
    cap_add:
      - NET_ADMIN
    devices:
      - /dev/net/tun:/dev/net/tun
    ports:
      - 8888:8888/tcp # HTTP proxy
      - 8388:8388/tcp # Shadowsocks
      - 8388:8388/udp # Shadowsocks
      - 8080:8080 # qbit
      - 6881:6881 # qbit
      - 6881:6881/udp # qbit
    volumes:
      - /mnt/Swimming/Sandboxes/docker/gluetun/config:/gluetun
    environment:
      - TZ=Australia/Sydney
      - PUBLICIP_API=ipinfo
      - PUBLICIP_API_TOKEN=###########
      - VPN_SERVICE_PROVIDER=protonvpn
      - VPN_TYPE=openvpn
      - VPN_PORT_FORWARDING=on
      - OPENVPN_USER=############+pmp
      - OPENVPN_PASSWORD=###########################
      - UPDATER_PERIOD=24h
  qbittorrent:
    image: lscr.io/linuxserver/qbittorrent:latest
    container_name: qbittorrent
    environment:
      - PUID=3000
      - PGID=3000
      - TZ=Australia/Sydney
      - WEBUI_PORT=8080
      - TORRENTING_PORT=6881
    volumes:
      - /mnt/Swimming/Sandboxes/docker/qbittorrent/config:/config
      - /mnt/Swimming/MediaServer/downloads/torrents:/mediaserver/downloads/torrents
    restart: unless-stopped
    network_mode: service:gluetun
networks: {}
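One approach people use (not built into either image): share Gluetun's /tmp/gluetun directory as a volume, then run a small sidecar loop that watches the forwarded_port file and pushes changes to qBittorrent's WebUI API. This is a hedged sketch; the poll interval is arbitrary, and it assumes the WebUI allows localhost requests without authentication:

```shell
#!/bin/sh
# Poll Gluetun's port file and update qBittorrent's listen port when it changes.
PORT_FILE=/tmp/gluetun/forwarded_port
QBIT_URL=http://localhost:8080
last=""
while true; do
  port=$(cat "$PORT_FILE" 2>/dev/null)
  if [ -n "$port" ] && [ "$port" != "$last" ]; then
    # qBittorrent WebUI API v2: set the listening port
    curl -s "$QBIT_URL/api/v2/app/setPreferences" \
      --data "json={\"listen_port\":$port}" && last=$port
  fi
  sleep 60
done
```

Because qBit runs with network_mode: service:gluetun, a sidecar sharing that network namespace can reach the WebUI on localhost:8080.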
9 Comments
2024/12/03
06:23 UTC

0

Thoughts on Nala over apt/apt-get?

I've been working with and tweaking my shell environment that uses Debian as a base image. I wanted to know if anybody else uses Nala over Apt or Apt-get within their Docker files.

Ever since I installed it out of curiosity, it's been able to download many packages faster, though I suspect others would perceive it as an unnecessary performance boost and possibly bloat.

What are your thoughts on using Nala within a Docker environment that pulls in dozens of packages? Is it worthwhile? Should it even be considered in the final build image?

13 Comments
2024/12/03
06:21 UTC

0

unable to create containers using docker-compose

version: '3.7'

services:
  my-app:
    build: .
    ports:
      - 8080:8080
    networks:
      - s-network
    depends_on:
      - "mysql"

  mysql:
    image: mysql:latest
    ports:
      - 3307:3306
    environment:
      MYSQL_ROOT_USER: root
      MYSQL_ROOT_PASSWORD: root
      MYSQL_DATABASE: collegeproject
    networks:
      - s-network

networks:
  s-network:
    driver: bridge

Dockerfile

FROM openjdk:22-jdk
COPY /target/college.jar /app/college.jar
WORKDIR /app
CMD ["java", "-jar", "college.jar"]

application.properties

spring.application.name=collegeProject
spring.datasource.url=jdbc:mysql://mysql:3306/collegeproject
spring.datasource.username=root
spring.datasource.password=root
spring.datasource.driver-class-name=com.mysql.cj.jdbc.Driver
spring.jpa.properties.hibernate.dialect=org.hibernate.dialect.MySQL8Dialect
spring.jpa.hibernate.ddl-auto=update
spring.jpa.show-sql=true

Error:

org.hibernate.exception.JDBCConnectionException: unable to obtain isolated JDBC connection [Communications link failure

I am unable to create the Docker containers; please help me with this.
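A likely culprit worth noting here: `depends_on` only orders container startup; it does not wait for MySQL to actually accept connections, so the app can hit a communications link failure while MySQL is still initializing. A hedged sketch of a healthcheck-based wait (requires a Compose version that supports `condition`):

```yaml
services:
  mysql:
    image: mysql:latest
    healthcheck:
      test: ["CMD", "mysqladmin", "ping", "-h", "localhost", "-proot"]
      interval: 5s
      retries: 10
  my-app:
    build: .
    depends_on:
      mysql:
        condition: service_healthy  # start the app only once MySQL answers pings
```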

5 Comments
2024/12/03
03:24 UTC

0

Unifi docker-compose file not working

I have separate docker-compose files at the moment for the controller and the Mongo DB. I would like to make this into one docker-compose file.

Any ideas, why the following docker-compose file is not working?

The error I keep getting is:

ERROR: yaml.parser.ParserError: while parsing a block mapping
  in "./docker-compose.yml", line 1, column 1
expected <block end>, but found '<block mapping start>'
  in "./docker-compose.yml", line 40, column 3

I have put this in the init-mongo.js file:

db.getSiblingDB("unifi").createUser({user: "unifi", pwd: "PASSWORD", roles: [{role: "dbOwner", db: "unifi"}, {role: "dbOwner", db: "MONGO_DBNAME_stat"}]});

Docker-compose file:

version: "3.5"
services:
  unifi-network-application:
    image: lscr.io/linuxserver/unifi-network-application:latest
    container_name: unifi-network-application
    networks:
        docker-network:
          ipv4_address: 172.39.0.200 # IP address inside the defined range
          ipv6_address: 2a**:****:****:9999::200
    environment:
      - PUID=1000
      - PGID=1000
      - TZ=Europe/Amsterdam
      - MONGO_USER=unifi
      - MONGO_PASS=PASSWORD
      - MONGO_HOST=unifi-db
      - MONGO_PORT=27017
      - MONGO_DBNAME=unifi
      - MEM_LIMIT=2048 #optional
      - MEM_STARTUP=1024 #optional
    volumes:
      - /docker/unifi:/config
    depends_on:
      - unifi-db
    ports:
      - 8443:8443
      - 3478:3478/udp
      - 10001:10001/udp
      - 8080:8080
      - 1900:1900/udp #optional
      - 8843:8843 #optional
      - 8880:8880 #optional
      - 6789:6789 #optional
      - 5514:5514/udp #optional
    restart: unless-stopped
networks:
    docker-network:
        name: docker-network
        external: true

  unifi-db:
    image: docker.io/mongo:7.0
    container_name: unifi-mongodb
    networks:
        docker-network:
         ipv4_address: 172.39.0.201 # IP address inside the defined range
         ipv6_address: 2a**:****:****:9999::201
    volumes:
      - /docker/unifi-mongodb/db:/data/db
      - /docker/unifi-mongodb/init-mongo.js:/docker-entrypoint-initdb.d/init-mongo.js:ro
    restart: unless-stopped
networks:
    docker-network:
        name: docker-network
        external: true
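For reference, the parser error comes from the top-level `networks:` block appearing twice and interrupting the `services:` mapping: `unifi-db` at line 40 starts a new block after the first top-level `networks:` has already closed the services section. Both services must stay under one `services:` key, with a single `networks:` section at the end. Abridged skeleton (service bodies unchanged from above):

```yaml
version: "3.5"
services:
  unifi-network-application:
    # ...as above...
  unifi-db:
    # ...as above...
networks:
  docker-network:
    name: docker-network
    external: true
```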
7 Comments
2024/12/03
02:40 UTC

1

Docker.io unreachable

I'm trying to build an image, but the build process hangs at [internal] load metadata for docker.io/arm64v8/python:3.10-slim-buster. When I try to ping docker.io, it resolves the IP, but the request times out. I asked a friend to test the same ping at his place: same behavior. Does anybody else have the same issue or know what is going on?

Edit: I am using Docker Desktop version 4.36.0. I also cannot pull the hello-world image or the python:3.10-slim-bookworm image. I tried to pull the hello-world image on a Linux box and had no issue. I'm starting to think this is a Docker Desktop on Windows issue.

3 Comments
2024/12/02
21:03 UTC

4

Updated Docker Desktop, folder/files missing from home folder

I updated Docker Desktop for Windows. I had a folder in the Home folder with files in it that are no longer there. The containers that were built from those files are still working, but I'm not sure why the files disappeared. Did I do something wrong?

11 Comments
2024/12/02
20:26 UTC

2

Beginner Web App Deployment with Docker

I am looking to start hosting a web application of mine on an official web domain and need a little help. Right now, I have a full stack web application in JavaScript and Flask with a MySQL server. Currently, I run the website through ngrok with a free fake domain they create, but I am looking to buy a domain and run my app through that .com domain. I also have a Docker environment set up to run my app from an old computer of mine while I develop on my current laptop. What exactly would I need to run this website? I am thinking of buying the domain from porkbun or namecheap and then using GitHub and netlify to send my app code to the correct domain. Should I be using something with docker instead to deploy the app given I have a database/MySQL driven app? Should I use ngrok? Any help explaining what services and service providers I need to put in place between domain hosting and my Flask/JS app would be appreciated.

5 Comments
2024/12/02
19:23 UTC

2

Issues routing Pi-hole traffic to docker container

Hi,

I'd be really grateful for some advice on getting my IoT traffic routed to my Pi-hole Docker container, which I'm struggling with.

I have Docker installed on my Ubuntu host, which is on VLAN 200 (192.168.200.3); I am managing the containers via Portainer stacks. I have created a macvlan and set up a Pi-hole container with a dedicated IP on the macvlan network (192.168.200.0/24); the IP it has is 192.168.200.4. I want to allow traffic from my whole IoT network to go through the Pi-hole container. The IoT network is 192.168.20.0/24, and I have created a firewall rule on my UniFi UDM router to allow traffic from the IoT network to 192.168.200.4, the Pi-hole container. The traffic doesn't seem to be reaching the container.

Do I also need to allow IoT traffic to the Docker host on 192.168.200.3 for this to work? I'm not sure if I have the macvlan set up correctly.

I appreciate any advice.

Thank you

2 Comments
2024/12/02
15:53 UTC

1

Docker Engine on AlmaLinux 9

Is it possible to install Docker Engine only on Almalinux 9 on wsl2? I'm wanting to avoid Docker Desktop because of the licence.
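Yes; Docker Engine can be installed directly inside the WSL2 distro. AlmaLinux is RHEL-compatible, so the CentOS repository from the official install docs is the usual route (this assumes systemd is enabled in your WSL2 distro so the service can start):

```shell
sudo dnf -y install dnf-plugins-core
sudo dnf config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
sudo dnf -y install docker-ce docker-ce-cli containerd.io docker-compose-plugin
sudo systemctl enable --now docker
```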

4 Comments
2024/12/02
15:09 UTC

2

How to Manage Temporary Docker Containers for Isolated Builds?

Hi everyone,

I'm working on a project where I need to handle isolated build environments for individual tasks. Here's what I want to achieve:

  1. Each task/project gets its own Docker container.
  2. Inside each container, there's a temporary folder (e.g., build) where files from a cloud storage service (like S3) are copied locally.
  3. The build process involves running commands like npm install and executing the code within this folder.
  4. If a container is inactive (i.e., no requests) for more than an hour, it should automatically clean itself up to save resources.
  5. When a new request comes in for a project, it should either route to the existing container or spin up a new one if no container exists for that project.

I’ve written the compiler in Go, and the system uses containers to isolate builds. I’m wondering:

  • What’s the best way to efficiently manage these temporary containers and ensure proper cleanup?
  • How can I route requests to the right container or create a new one dynamically when needed?
  • Which platform would be best for publishing such a setup? Would Docker Hub or Google Cloud Run work better?

Any advice, insights, or relevant tools for orchestrating this kind of system would be greatly appreciated!

Thanks!
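For the route-or-create part of the question, a minimal sketch of the lookup at the Docker CLI level (the container naming scheme and build image are assumptions; the Docker SDK for Go exposes the same list/create/exec operations programmatically):

```shell
#!/bin/sh
# Start a per-project build container only if one isn't already running.
project="$1"
image="node:20"          # hypothetical build image with npm preinstalled
name="build-$project"
if [ -z "$(docker ps -q --filter "name=^${name}$")" ]; then
  # Label the container so an idle-reaper cron job can find and remove it later.
  docker run -d --name "$name" --label build-task "$image" sleep infinity
fi
# Run the task inside the (new or existing) container; /build is a placeholder.
docker exec "$name" sh -c 'cd /build && npm install'
```

For the cleanup side, recording a last-used timestamp (e.g. in a label-keyed file on the host) and letting a cron job `docker rm -f` containers idle for over an hour is a common pattern; Cloud Run would handle routing and idle scale-down for you, but gives you less control over the container lifecycle than self-managed Docker.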

4 Comments
2024/12/02
07:57 UTC

1

Consolidation for simplicity

Hello, I'm having issues with my containers currently; they are mostly out of date and all over the place. The main issue is the ones that I set up when I was still very new. I don't really want to have to remake them and potentially lose all the data in them, but the volumes all need to be in a better location.

I've tried downloading Docker Desktop, but I can't see a way to import existing containers? It also appears to slow everything down a lot!

I'd also like to just be able to click/run an update and have them all just kinda do it. I could do this with a big compose file I guess, but I need to move all the data first and not lose any config options.

Does anybody have any advice on how I can achieve this?

Edit: I’m running Ubuntu LTS

8 Comments
2024/12/02
05:43 UTC

1

Issues connecting to containers in local network

I've been having some pain accessing my services in my local network.

The system works perfectly on my main network; I'm able to connect from as many devices as I want. But when I try it on a new router, it does not work.

Is there any difference between the two routers? Yes: the one that works is connected to the internet, the other is not.

Why am I changing routers? I need to give a presentation and want to avoid any problems on a foreign network, so I'm bringing my own router.

Have I tried connecting the other router to the internet? Yes, but sadly my ISP only allows me to connect through the router they provided :/ so I can't establish a connection to the internet.

Has this worked for another person? Yes; this Docker container has been deployed and tested on 4 different networks.

I managed to deploy on localhost (outside Docker) and was able to connect from other hosts on the same network, so it's not a firewall issue.

Thanks for your help!

Here's my docker-compose

services:
  fastapi:
    build: .
    ports:
      - "8000:8000"
    env_file:
      - .env
    environment:
      - WATCHFILES_FORCE_POLLING=true
      - PYTHONUNBUFFERED=1
    volumes:
      - ./app:/app
    depends_on:
      - mqtt5
      - mongo
    restart: unless-stopped
    networks:
      - cleverleafy
      - default

  mqtt5:
    image: eclipse-mosquitto
    container_name: yuyo-mqtt5
    ports:
      - "1883:1883"
      - "9001:9001"
    user: "${UID}:${GID}"
    volumes:
      - ./mosquitto/config:/mosquitto/config
      - ./mosquitto/data:/mosquitto/data
    restart: unless-stopped

  mongo:
    image: mongo:latest
    container_name: mongodb
    ports:
      - "27017:27017"
    env_file:
      - .env
    environment:
      - MONGO_INITDB_ROOT_USERNAME=${MONGO_INITDB_USER}
      - MONGO_INITDB_ROOT_PASSWORD=${MONGO_INITDB_PASSWORD}
    volumes:
      - ./mongodb/mongo_data:/data/db
      - ./mongodb/init-mongo.js:/docker-entrypoint-initdb.d/init-mongo.js
    restart: unless-stopped

volumes:
  config:

networks:
  cleverleafy:
    name: cleverleafy
    driver: bridge
9 Comments
2024/12/02
01:39 UTC

0

Rebooted server. All containers and volumes gone. One service still running fine?

So I set up Docker and Portainer to run Crafty and host a Minecraft server, and after much ado I got everything functioning.

I wanted to mess with the hardware, so I shut it down, tried some stuff, and started it back up. All of my containers are gone. Weird thing... Crafty is still running. Portainer retained my stacks but nothing else, and I was able to re-fire my IP tunnel using the stack in Portainer before I realized that Portainer is also a container and no longer exists. So trying to reinstall (re-run) Portainer via the command line errors out because the ports are already reserved.

I think I should wipe everything and start over now that I have a pretty good grip on it, but how do I do that?

9 Comments
2024/12/02
01:11 UTC

3

Recent Docker update broke Tunnel Interfaces?

For context, I am running a few Debian 12.8 servers (6.1.0-28-amd64 kernel). I usually keep my servers updated with auto-updates scheduled weekly. Aside from advice NOT to do this...lol, I started having issues in the last few days on all my servers with this update and containers that use tunnel interfaces.

Specifically, let's start with Tailscale. On all of my servers it can no longer connect without running in privileged mode. The containers all have NET_ADMIN and NET_RAW and worked just fine previously. The error the logs spit out is: "CONFIG_TUN enabled in your kernel? `modprobe tun` failed with: modprobe: can't change directory to '/lib/modules': No such file or directory? It doesn't seem to be able to configure a tunnel interface." I have another container as well that can't create OpenVPN tunnel connections, on multiple servers (same image across them). Again, the fix after a few hours of troubleshooting was to re-run the containers with the --privileged flag.

I am a bit new to Docker/Linux, so apologies, but I have been running over 100 containers on various home lab servers for about a year now, so I'm getting my feet wet. Anyway, it seems like a Docker update broke the ability of containers, even with NET_ADMIN and NET_RAW capabilities, to create/modify tunnel interfaces. Any ideas on how to move forward without giving these containers elevated privileges? Thank you for your help/suggestions.

0 Comments
2024/12/01
23:56 UTC

0

How to run Windows based server applications on a ubuntu server

Hello, I am running an Ubuntu server and trying to create a Docker container that can run a Windows application with Wine or something similar. I'm looking to automate the process: have the app auto-start, auto-start an RDP server or something similar so the GUI can be controlled, and open the ports it requires.

The use case for this would be to run server applications that would typically run on Windows, but on Ubuntu. Problem is, I just don't quite know how to handle this task, so I wanted to ask here.
Is this a possibility?

Edit: I forgot to mention, the RDP part is for the applications that don't have a console, so they can only be used with a GUI.

15 Comments
2024/12/01
22:53 UTC

3

Container to stop other containers

I am wondering if there is a good container that can be configured to stop all containers properly on a schedule, then start them on a schedule.

Basically I am looking to stop them so I can back up the files that are on the host (persistent data), then start them again. Some services lock files, which then cannot be copied for backup.

Thanks
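A dedicated container isn't strictly needed for this; a host-side cron job wrapping docker compose is the simplest version of the stop-backup-start cycle. Paths and schedule below are placeholders:

```shell
#!/bin/sh
# Nightly backup script, e.g. run from cron: 0 3 * * * /usr/local/bin/backup-stack.sh
cd /srv/mystack || exit 1
docker compose stop                                    # release file locks
tar czf "/backups/mystack-$(date +%F).tar.gz" ./data   # archive persistent data
docker compose start                                   # bring services back up
```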

20 Comments
2024/12/01
20:44 UTC

1

Docker network issues

Hi! I'm dealing with a recurring problem with Docker networks. I run an nginx reverse proxy (SWAG) on my Arch box, with a public IP pointing to it. I used to have firewalld running fine with it a couple of years ago, until it didn't: firewalld stopped properly allowing containers to receive data from outside, and after weeks of trying to make it work I gave up and removed firewalld in favor of ufw, re-enabled Docker iptables by removing the custom /etc/docker/daemon.json, and allowed the ports I wanted manually.

Now, two years later, I have the same issue with ufw: my reverse proxy works when I access it directly with the domain and with localhost, but all other containers are unavailable. Rebooting makes everything work properly for a few minutes, and then it goes dark again. I tried running ufw-docker rules with no changes. I'll provide any configs required in the comments. Below are snippets of my docker-compose.yml running all containers related to the reverse proxy:

services:
  swag:
    image: lscr.io/linuxserver/swag
    container_name: swag
    cap_add:
      - NET_ADMIN
    environment:
      - PUID=1000
      - PGID=1000
      - TZ=${TZ}
      - URL=${URL}
      - SUBDOMAINS=wildcard
      - VALIDATION=dns
      - DNSPLUGIN={DNSPLUGIN}
      - ONLY_SUBDOMAIN=true
      - EMAIL=${DO_EMAIL}
      # - DOCKER_MODS=linuxserver/mods:swag-dashboard
    volumes:
      - ./swag:/config
    networks:
      local:
        ipv4_address: 172.18.0.2
    ports:
      - 443:443
      - 80:80
    restart: unless-stopped

  jellyfin:
    image: lscr.io/linuxserver/jellyfin:latest
    container_name: jellyfin
    networks:
      local:
        ipv4_address: 172.18.0.10
    environment:
      - DOCKER_MODS=linuxserver/mods:jellyfin-amd
      - PUID=1000
      - PGID=1000
      - TZ=${TZ}
      - JELLYFIN_PublishedServerUrl=${JELLYFIN_URL}
    volumes:
      - ./jellyfin:/config
      - /mnt/data/media:/media
    devices:
      - /dev/dri:/dev/dri
      - /dev/kfd:/dev/kfd
    restart: unless-stopped

networks:
  local:
    name: local
    driver: bridge
    ipam:
      config:
        - subnet: 172.18.0.0/16
          gateway: 172.18.0.1

All my containers connected to the reverse proxy have fixed IPs in the Docker network because I had an issue with an update where Docker stopped using the container name as an alias, but that works now.

  • fixed a typo
2 Comments
2024/12/01
11:45 UTC

25

Dockerizing dev environment

Hi everyone. Newbie here. I find the idea of dockerizing a development environment quite interesting, since it keeps my host machine tidy and free of multiple toolchains. So I did some research and ended up publishing some docs here: https://github.com/DroganCintam/DockerizedDev

While I find it (isolating the dev env) useful, I'm just not sure if this is a right use of Docker, whether it is good practice or an anti-pattern. What's your opinion?

35 Comments
2024/12/01
05:58 UTC

0

No internet on container

So I've been running dock-droid by sickcodes, and all of a sudden I stopped having internet access in the container.

I'm kinda new to this, so any idea how to diagnose and solve this? I have other containers that have internet access without any issues.

0 Comments
2024/12/01
05:26 UTC

1

permission and run as privileges noob question

I recently re-configured my Plex server / home lab and ended up creating a series of scripts to install everything. It was run as root, which is why (I suspect) I must use `sudo` or be `root` to run `docker compose [command]`.

I don't think this is the best practice, so I wanted to check in and get some help with correcting my setup.

My script created a new user `dockeruser`. Containers use `dockeruser`'s PUID and PGID as env variables. So I expect they are running as `dockeruser`.

The directories used as volumes in the containers are set with the following permissions:
`drwxr-xr-x    dockeruser  docker `
(docker is a group that contains both my personal user and `dockeruser`)

So I think the only problem is that sudo or root must be used to run `docker` commands. That doesn't seem appropriate. I should be able to run `docker compose` with my personal user.

Any help or corrections are appreciated
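For reference, access to the daemon is controlled by group ownership of the Docker socket, not by the PUID/PGID container variables. Since the personal user is already in a group named docker, it's worth checking that this is the same group that owns the socket and that the login session has actually picked the group up (note the Docker docs' caveat that docker-group membership is effectively root-equivalent access to the host):

```shell
ls -l /var/run/docker.sock   # should show root:docker with group read/write
id                           # confirm the current session lists the docker group
newgrp docker                # or log out and back in if the group was added after login
docker ps                    # should now work without sudo
```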

3 Comments
2024/12/01
02:28 UTC

6

Invisible docker containers for Minecraft servers

Hi, I'm new to Docker and I've run into an issue I don't really understand. Essentially I wanted to use Docker containers for my Minecraft servers and wanted them to start on server boot, so I added restart: unless-stopped.

Here is my docker-compose.yml:

version: '3'

services:
  # Vanilla server
  mc_vanilla:
    image: itzg/minecraft-server
    container_name: mc_vanilla
    ports:
      - "25565:25565"
    volumes:
      - /home/xerovesk/gameservers/minecraft/vanilla:/data
    restart: unless-stopped
    environment:
      - VERSION=1.21.1
      - EULA=TRUE
      - JAVA_OPTS=-Xmx3G -Xms1G



  mc_atm9:
    image: itzg/minecraft-server
    container_name: mc_atm9
    ports:
      - "25566:25565"
    volumes:
      - /home/xerovesk/gameservers/minecraft/atm9:/data
    restart: unless-stopped
    environment:
      - EULA=TRUE
    command: "startserver.sh"

Now, I don't know if I set this up properly, but on running docker-compose up -d both servers launch correctly. After testing the reboot process using sudo reboot, the servers do start up successfully. I am able to join the servers and they run fine. The problem is that the containers do not show up whenever I run docker ps -a.

I've tried closing down the containers by using sudo docker system prune -f (ChatGPT suggestion) and it outputted:

Deleted Containers: 
fc499e248a987c79a05740c789c09ebd1ae2d51e90996a5c39cc6abbbad28124 612bb078433617735ef92650339da0ee0fb172b18349c3ea5d45398a07f4e386

However, the servers are still up and I'm still able to join them. After repeating this command nothing happens and the servers are also still up.

I'm really not sure how to go about debugging this or fixing it. Have any of you experienced this before or see the problem?

Edit: the problem has been fixed. I was using the Docker that came with the Ubuntu installation; the fix was to install it from the official source.

8 Comments
2024/11/30
23:30 UTC

0

runr.sh - The set and forget CLI docker container update tool

Hello everyone!

If you use Docker, one of the most tedious tasks is updating containers. If you use 'docker run' to deploy all of your containers, the process of stopping, removing, pulling a new image, deleting the old one, and trying to remember all of your run parameters can turn a simple update for your container stack into an hours-long affair. It may even require use of a GUI, and I know for me I'd much rather stick to the good ol' fashioned command line.

That is no more! What started as a simple update tool for my own docker stack turned into a fun project I call runr.sh. Simply import your existing containers, run the script, and it easily updates and redeploys all of your containers! Schedule it with a cron job to make it automatic, and it is truly set and forget.

runr.sh start up message

I have tested it on both MacOS 15.2 and Fedora 40 SE, but as long as you have bash and a CLI it should work without issue.

Here is the Github repo page, and head over to releases to download the MacOS or GNU/Linux versions.

GitHub releases

I did my best to get the start up process super simple, and the Github page should have all of the resources you'll need to get up and running in 10 minutes or less. Please let me know if you encounter any bugs, or have any questions about it. This is my first coding project in a long time so it was super fun to get hands on with bash and make something that can alleviate some of the tediousness I know I deal with when I see a new image is available.

Key features:

- Easily scheduled with cron to make the update process automatic and integrative with any existing docker setup.

- Ability to set always-on run parameters, like '-e TZ=America/Chicago' so you don't need to type the same thing over and over.

- Smart container shut down that won't shut down the container unless a new update is available, meaning less unnecessary downtime.

smart shutdown

- Super easy to follow along, with multiple checks and plenty of verbose logs so you can track exactly what happened in case something goes wrong.

My future plans for it:

- Multiple device detection: easily deploy on multiple devices with the same configuration files and runr will detect what containers get launched where.

- Ability to detect if run parameters get changed, and relaunch the container when the script executes.

Please let me know what you think and I hope this can help you as much as it helps me!

3 Comments
2024/11/30
19:29 UTC

1

How to connect two containers on different Docker hosts within the same network?

Hi everyone! How do I make Docker containers communicate with each other across hosts? Both Docker instances run on Proxmox machines that are on the same local network, while the containers on the two machines run separately.

One machine (e.g., IP 192.168.100.x) hosts Nginx Proxy Manager and another machine (IP 192.168.100.x) runs a Nextcloud instance. I want to be able to manage communication between the services, and I do not want to install Nginx Proxy Manager on both instances.

I've read about solutions like Docker overlay networks or using a reverse proxy, but I'm unsure what would work best for my setup.

Any advice is welcome! Thanks!
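For the cross-host case specifically, Docker's built-in option is an attachable overlay network, which requires initializing Swarm mode on the hosts. A hedged sketch (the network and container names are placeholders):

```shell
# On the first Proxmox VM (becomes the Swarm manager):
docker swarm init
# On the second VM, run the `docker swarm join ...` command printed above.

# Create an overlay network that plain `docker run` containers can attach to:
docker network create --driver overlay --attachable proxy-net

# Attach each existing service to it on its own host:
docker network connect proxy-net nginx-proxy-manager
docker network connect proxy-net nextcloud
```

After that, Nginx Proxy Manager can reach Nextcloud by container name over the overlay; since both VMs already share a LAN, simply proxying to the second VM's LAN IP and published port is the lower-effort alternative.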

4 Comments
2024/11/30
19:02 UTC

7

File Caching and Container Memory – What Docker stats isn't telling you

Hey folks, I published a post about Docker stats and the misleading memory reporting when we have a file-cache intensive application like a database.

Any feedback or experiences from your side are more than welcome

0 Comments
2024/11/30
18:42 UTC
