/r/docker

Docker is an open-source project to easily create lightweight, portable, self-sufficient containers from any application. The same container that a developer builds and tests on a laptop can run at scale, in production, on VMs, bare metal, OpenStack clusters, public clouds and more.

/r/docker

218,405 Subscribers

1

Issue with Wordpress, might be docker-related

I'm running a local WordPress installation in docker containers, one for the Wordpress installation and one for the database. When I started the project yesterday, I was able to access the landing page on localhost:8000 as well as on 192.168.4.xx:8000 from other devices on my network. I then added a plugin that handles user authentication. The pages associated with the plugin work fine on localhost. If I try to access them from remote devices on the network, it looks as though the request never reaches the web server. I'm not sure if this is a Wordpress issue, a docker issue, or a 'my ISP does some stupid crap in my router' issue.

Any ideas would be appreciated. Thanks!

EDIT: Resolved. The issue is WordPress-related: the links switch from the IP to 'localhost' when the site is accessed from remote devices.
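
For anyone who lands here with the same symptom, the usual fix is to pin WordPress's home/site URL to the address the other devices actually use. A minimal compose-style sketch, assuming the official wordpress image (WORDPRESS_CONFIG_EXTRA is that image's documented hook for appending to wp-config.php; the address is a placeholder):

services:
  wordpress:
    image: wordpress:latest
    ports:
      - "8000:80"
    environment:
      # Pin the URLs WordPress generates for links and assets so they do not
      # fall back to "localhost" when another device loads the page.
      # Replace 192.168.4.xx with the host's actual LAN IP.
      WORDPRESS_CONFIG_EXTRA: |
        define('WP_HOME', 'http://192.168.4.xx:8000');
        define('WP_SITEURL', 'http://192.168.4.xx:8000');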

8 Comments
2024/12/21
20:34 UTC

0

I just ran my first container using Docker

3 Comments
2024/12/21
18:32 UTC

5

Tips for Deploying a Laravel App with Docker (Simplified Installation and Updates)

Hello everyone,

I have developed a Laravel application that uses PHP 8.1, Apache, and MySQL, and I would like to distribute this application to my clients in the simplest way possible. The goal is for clients to be able to install it on their own (or with minimal intervention from me) using Docker.

I’d like to know what the best practices or most common solutions are for:

  1. Creating a Docker configuration that includes Laravel, Apache, and MySQL, and is easy to use.
  2. Automating the initial installation, including steps like creating the .env file, running migrations, etc.
  3. Managing future software updates, making the process as simple as possible for both me and the clients.

If you have experience with similar challenges or suggestions on how to approach this, I’d be really grateful!

Thanks in advance
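
Not an authoritative recipe, but a rough sketch of the compose layout this usually ends up as, assuming an app image built from php:8.1-apache with the Laravel code baked in (image names, ports and credentials below are illustrative placeholders):

services:
  app:
    image: yourvendor/laravel-app:latest   # placeholder: your pre-built image based on php:8.1-apache
    ports:
      - "8080:80"
    environment:
      DB_HOST: db
      DB_DATABASE: laravel
      DB_USERNAME: laravel
      DB_PASSWORD: change-me
    depends_on:
      - db
    # An entrypoint script in the image could create .env from .env.example,
    # run `php artisan key:generate` and `php artisan migrate --force` on
    # first start, so clients only ever run `docker compose up -d`.
  db:
    image: mysql:8.0
    environment:
      MYSQL_DATABASE: laravel
      MYSQL_USER: laravel
      MYSQL_PASSWORD: change-me
      MYSQL_ROOT_PASSWORD: change-me-too
    volumes:
      - db-data:/var/lib/mysql

volumes:
  db-data:

Updates then reduce to publishing a new image tag and having the client run `docker compose pull && docker compose up -d`.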

2 Comments
2024/12/21
16:00 UTC

0

Why is there no redis documentation on docker compose?

There are no docs on docker compose at https://hub.docker.com/_/redis and it's really annoying. Am I just meant to know, or figure out the variables somehow?

For context on what I'm trying to do: it's just adding Redis to Nextcloud, but I have seen some other pages with poor documentation.
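
For the Nextcloud case specifically, the redis image itself needs essentially no configuration under compose; the wiring happens on the Nextcloud side. A minimal sketch, assuming the official nextcloud image (whose documented REDIS_HOST variable is what points it at the redis service):

services:
  nextcloud:
    image: nextcloud:latest
    ports:
      - "8080:80"
    environment:
      REDIS_HOST: redis        # the service name below; the image wires up Redis caching from this
    depends_on:
      - redis

  redis:
    image: redis:alpine
    restart: unless-stopped
    # No variables needed: Redis listens on 6379 by default and other services
    # on the same compose network reach it by its service name.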

10 Comments
2024/12/21
10:27 UTC

1

Disable docker in Allow in the background for Mac

Hi, I am new to Docker. I installed Docker Desktop (Apple silicon build) on my MacBook Air M1. In System Settings/General/Login Items/Allow in the Background, why can't I disable Docker in "Allow in the background"?

There are both Docker and Docker Inc listed. I can disable the second one, but the first is re-enabled when I close and reopen Settings. It's not causing any issues, it's not opening on startup, and it doesn't seem to actually be running in the background; I'm just curious why it behaves like this.

4 Comments
2024/12/21
04:50 UTC

0

If I install a Python package in a Docker container (shell), how does it stay installed outside of it?

Probably a trivial question, but I thought anything you install inside a container is saved only in the container. Why is it that when I install a package in a container, exit, and re-run the container, the package remains?

10 Comments
2024/12/21
03:40 UTC

5

Rebuilding docker-compose.yml file

So, in a moment of stupidity (always check what directory you're in first) I deleted my docker-compose.yml file. The containers are still running, and I have portainer running as well - is there a way to regenerate my file from command line or from portainer? I shouldn't need to make changes, but I'd feel better with the insurance of having the file there, just in case.

9 Comments
2024/12/20
21:11 UTC

2

Docker overlay network problem - ping works, http not

Hi everyone, I have a weird problem with a Docker overlay network.

I have a very small home-lab setup with two different machines (let's call them A and B) running different containers, all with compose files. That's precisely what I want/need at the moment, I'm not interested in HA or similar.

I sometimes need to talk with a docker container, running on machine A, from another container running on machine B. I can achieve this easily using a bridged network and publishing ports, something like curl http://machine_A:1234.

However, if I need to refer to a docker container running on the same host, I can directly use the container's hostname thanks to Docker's embedded DNS.

I wanted to achieve something similar but across the two different machines, and the first results that pop up are about using an overlay network. Therefore, I set up a two-node swarm, created the overlay network, and ran some test containers. I can correctly ping containers running on the other machine by using their hostname, however... if I try to establish an HTTP connection, it starts correctly but then hangs forever.

The containers have a small web-server for testing purposes. Therefore, curl <container_hostname>:1234 (where 1234 is the internal port of the container, not published to the host) works correctly if <container_hostname> is running on the same host (I see the HTML stream in the CLI), but if it's running on the other machine, the HTML stream is truncated at some point, and the command hangs.

I already tried some other solutions about similar problems, including:

  • ethtool -K <interface> tx off, with <interface> being the docker overlay network interface (correct?)
  • decreasing the MTU (I tried 1300, 1400 and 1450). However, ip link show inside the container already shows that the MTU for the overlay network is lowered by 50 bytes from the value I set / the default value. As far as I understand, this means the VXLAN overhead should already be accounted for...

I hope I gave enough context. Can someone help me understand the problem?

Thanks in advance!
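
For anyone hitting the same symptom (ping fine, TCP streams stall), the two usual suspects are VXLAN checksum offload and the MTU of the overlay path; the ethtool workaround is typically applied on the hosts' physical NICs carrying the VXLAN traffic between A and B, while the MTU can be pinned on the overlay network itself. A compose-style sketch of a test network, where the option name is the generic MTU driver option and 1400 is just a conservative value to experiment with:

services:
  web-a:
    image: nginx:alpine
    networks:
      - lab

networks:
  lab:
    driver: overlay
    attachable: true                             # allow plain (non-swarm-service) containers to join
    driver_opts:
      com.docker.network.driver.mtu: "1400"      # well below the physical path MTU, for testing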

4 Comments
2024/12/20
19:22 UTC

0

Migrating from docker containers on synology to windows mini pc

Looking for some advice if anyone can help please.

I currently run a number of docker containers on an ageing Synology DS218+ (sonarr, etc - mainly linuxserver containers). I've got about 20 containers running, all from a docker compose file with variables set up for the volume path, etc. It's become a bit slow and I've had the odd issue, so I was thinking of a mini PC to replace it and got a good deal on an Intel N100 mini PC.

I'm hoping to use the mini pc to run the docker containers and have it hard wired to the NAS which will still act as storage but with docker, network access, etc disabled. I've installed docker for windows but not quite sure where to go from here.

I've read that docker for windows can run linux containers so is this as simple as porting my docker compose file over to the mini pc, editing the volume path prefix in my compose file to point to the NAS folders then starting it up?

Any help would be greatly appreciated, thanks.
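
Broadly yes: Docker Desktop on Windows runs Linux containers inside its WSL2 VM, so the compose file itself ports over; the main thing that changes is how the NAS storage is reached. One hedged sketch of the storage side, using Docker's local volume driver with CIFS options to mount the share (hostnames, share names and credentials are placeholders, and it's worth verifying this mount style works from Docker Desktop's WSL2 backend):

services:
  sonarr:
    image: lscr.io/linuxserver/sonarr:latest
    volumes:
      - ${CONFIG_ROOT}/sonarr:/config    # local config on the mini PC, set via .env
      - media:/data                      # NAS share, mounted below

volumes:
  media:
    driver: local
    driver_opts:
      type: cifs
      device: "//nas-hostname/media"                        # placeholder SMB share
      o: "username=svc_docker,password=change-me,vers=3.0"  # placeholder credentials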

7 Comments
2024/12/20
19:20 UTC

2

Docker context with multi-stage is huge..?

Hey there! I posted in the past regarding optimizing building a cluster of NodeJS apps by leveraging pnpm's caching system (if you're not familiar with it: it basically stores the packages in a folder and symlinks them around instead of downloading them each time).

After doing some research, I decided to use a multi-stage building system, where I have a Dockerfile that builds a base image from Ubuntu and solves all the dependencies, such as:

  • apt packages
  • awscli stuff
  • node versions
  • all apps' dependencies (saving them in /pnpm-cache/<app-name>/node_modules)

This ensures all of the subsequent images only need to actually build the apps, without resolving dependencies, which are soft-linked into place. This proved to be fast and efficient (since I delete the unneeded deps before exporting the image).

I organize this using docker-compose and signaling that all apps depend on said base image, so that I can store them in different Dockerfiles.

The problem comes from Docker's build-context transfer time on the subsequent images; from my understanding, the context is loaded from the host directory and not from the base images that are used. I'm using a .Dockerignore file in the root of the context (in my case, the parent directory of the docker-compose.yaml), so that:

  • project: /project
  • docker files etc.: /project/docker/docker-compose.yaml
  • docker ignore: /project/.Dockerignore
  • apps: /project/src

Any clue as to why this might be the case? The context that is sent over seems suspiciously close to the size of /project, so I wonder if something is wrong with my ignore file or what...

To exclude folders, I just need to write "folder_name" in the .Dockerignore right? Will it exclude all <folder_name> also from subdirs or do I need to write **/<folder_name>? Or maybe I'm missing slashes to signal that they are dirs..?
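
One detail that often explains an unexpectedly large context: the ignore file is read from the root of the build context and must be named .dockerignore (all lowercase), and with compose the context is whatever the build section points at, not the compose file's own directory. A small sketch mirroring the layout described above (the Dockerfile path is illustrative):

services:
  app-one:
    build:
      context: ..                              # /project becomes the build context
      dockerfile: docker/app-one.Dockerfile    # placeholder path, relative to the context
# Ignore rules come from <context>/.dockerignore, i.e. /project/.dockerignore.
# A bare pattern like node_modules only matches at the context root;
# use **/node_modules to exclude it in every subdirectory.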

I'd also be happy to have other suggestions on how to speed up the process; I'm kind of new to this :')

Tysm, I love working with docker, it's such a cool tool :)

Hopefully this can also help other that have the same problem with optimizing NodeJS clusters.

4 Comments
2024/12/20
18:01 UTC

1

Multiple apparmor profiles

Is it possible to load AppArmor profiles for binaries in Docker just like on the host?

For example I want to allow network for ping, but restrict it for lzip. On my main system I can write different profiles for ping and for lzip. As far as I know I can apply only one profile for docker.

1 Comment
2024/12/20
17:11 UTC

0

The Right Way: Provisioning a Virtual Machine in Vagrant (With Website Deployment Locally)

3 Comments
2024/12/20
17:05 UTC

1

What book or resource do you recommend to better understand Docker at a low level?

I've read most of "Docker Deep Dive". It's OK but not actually very low-level. I'd be interested in a book that did more to explain Docker in the context of namespaces and cgroups, for example, or that otherwise did more to explore the technology and make you see where there might be limitations.

1 Comment
2024/12/20
16:54 UTC

1

IPv6 on default bridge...what am I doing wrong?

I've messed with Docker quite a bit on my old server that was running unraid, now I've spun up a proxmox instance and am banging my head against the wall attempting to enable ipv6.

Previously I set up docker in an lxc using one of tteck's helper scripts and was able to enable ipv6 on the bridge by adding the specified lines to the daemon.json based on this video.

Now I've spun up another lxc and am attempting to use compose, but nothing I do seems to affect the bridge. I've added the same lines to the daemon.json file, and modified the compose file to (in my understanding) enable ipv6, yet any network inspect I run still leaves "EnableIPv6": false

Here's what the daemon.json looks like

{ 
    "ipv6": true,
    "fixed-cidr-v6": "fd00:0:0:0:1::/80"
}

And what the compose looks like

networks:
  default:
    driver: bridge
    enable_ipv6: true
    ipam:
      config:
        - subnet: fd00:0:0:0:1::/80

Now I've tried many combinations of including subnet & gateway etc, but nothing I do seems to change anything. I know that IPv6 issues w/ Docker are persistent to say the least, but I figured I might as well ask if there's something obvious that I might be missing.

I'm doing all of this in the hope of adding a matter server (which requires IPv6) to my home assistant stack. Thanks!
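
One thing that trips people up with exactly this symptom: the daemon.json settings only configure the default docker0 bridge (and need a dockerd restart to apply), whereas a compose file creates its own project-scoped network, so the inspect has to target that network (usually <project>_default) rather than `bridge`. It's also worth double-checking which Compose is in use, since the legacy python docker-compose did not support enable_ipv6 with version: "3.x" files, while Compose V2 does. A sketch of a network to test with (the subnet is a placeholder ULA range, separate from the daemon.json one):

services:
  test:
    image: busybox
    command: ip -6 addr      # should list an address from the subnet below

networks:
  default:
    driver: bridge
    enable_ipv6: true
    ipam:
      config:
        - subnet: fd00:0:0:2::/64    # placeholder ULA subnet

After `docker compose up`, `docker network inspect <project>_default` is the network that should report "EnableIPv6": true.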

0 Comments
2024/12/20
16:28 UTC

0

WebGL with Puppeteer in Docker

Hey everyone,
I’m trying to get WebGL running under Puppeteer inside a Docker container. I’m working on a Node.js application, using Hono, that uses Puppeteer to take screenshots of a Three.js canvas. Everything works fine locally, but once I run the code inside Docker, I start getting Error creating WebGL context

What I’ve Tried:

  1. Forcing Software Rendering:
    • Setting ENV LIBGL_ALWAYS_SOFTWARE=1 in my Dockerfile to make Mesa choose a software renderer.
    • Installed a bunch of Mesa and GL-related libraries (libgl1-mesa-dri, libosmesa6, etc.).
  2. Different Puppeteer Launch Flags:
    • Tried --use-gl=swiftshader and --use-gl=egl.
    • Removed --disable-gpu in case it was blocking software fallback.
    • Ran fully headless (no DISPLAY), and tried Xvfb with non-headless mode as well.
  3. No DISPLAY Variable:
    • Made sure not to set DISPLAY if I’m going fully headless, since having it set when no real display exists can confuse Chromium.
  4. Chromium vs. Chrome:
    • Attempted both the Debian chromium package and google-chrome-stable.
    • Ensured versions were up-to-date.

Has Anyone Got WebGL Working with Puppeteer in Docker?

Any guidance, tips, or proven configurations would be hugely appreciated. I’ve been going in circles for days and would love to hear from anyone who’s solved a similar problem!

Here's my puppeteer setup

    const browser = await puppeteer.launch({
      headless: true,
      executablePath: process.env.PUPPETEER_EXECUTABLE_PATH || 'chromium',
      args: ['--no-sandbox', '--disable-setuid-sandbox'],
    })
    const page = await browser.newPage()

    await page.goto(url, { waitUntil: 'networkidle0' })

    const canvasImage = await page.evaluate(() => {
      const canvas = document.querySelector('canvas')
      if (!canvas) {
        throw new Error('No canvas element found')
      }
      return canvas.toDataURL('image/png')
    })

    await browser.close()

Here's my Dockerfile

FROM node:20-bullseye AS base

RUN apt-get update && apt-get install -y --no-install-recommends \
    fonts-liberation \
    libasound2 \
    libatk-bridge2.0-0 \
    libatk1.0-0 \
    libcups2 \
    libdrm2 \
    libgbm1 \
    libgtk-3-0 \
    libnspr4 \
    libnss3 \
    libx11-xcb1 \
    libxcomposite1 \
    libxdamage1 \
    libxrandr2 \
    libxinerama1 \
    libxi6 \
    xdg-utils \
    libu2f-udev \
    libxshmfence1 \
    libglu1-mesa \
    mesa-utils \
    libgl1-mesa-dri \
    libgl1-mesa-glx \
    libosmesa6 \
    libxrender1 \
    chromium \
    && apt-get clean && rm -rf /var/lib/apt/lists/*

WORKDIR /app

FROM base AS builder
COPY package*json tsconfig.json src ./
RUN npm ci && npm run build && npm prune --production

FROM base AS runner
WORKDIR /app
ENV PUPPETEER_SKIP_CHROMIUM_DOWNLOAD=true
ENV PUPPETEER_EXECUTABLE_PATH=/usr/bin/chromium

RUN groupadd --system nodejs && useradd --system --gid nodejs hono

COPY --from=builder /app/node_modules /app/node_modules
COPY --from=builder /app/dist /app/dist
COPY --from=builder /app/package.json /app/package.json

USER hono
EXPOSE 9001

CMD ["node", "/app/dist/index.js"]
0 Comments
2024/12/20
16:19 UTC

54

Docker Swarm: WHY??

Sorry this is more of a rant, but I'm in charge of maintaining a legacy product for the big company I work for (who I don't want to name, but it rhymes with "Snapple." It's not Snapple.)

The entire app was created and deployed using Docker Swarm. The use case for Swarm is supposed to be light "clusters" that don't really justify the bigger lift of larger orchestration systems like Kubernetes.

But in a combination of Not Invented Here Syndrome and just plain laziness, this entire system I support -- which includes multiple databases, a separate control plane, Redis, CRDB, and a zillion more moving parts -- is all in Swarm. Despite the fact that this system I inherited is clearly better suited to something like k8s, it's all in Swarm.

As a result, the hoops I have to jump through to deploy this thing (especially in China where there are... a lot of very carefully thought out security restrictions because, well, China...) are ridiculous. Where I could have predictable, incremental deployments with k8s, the deployment for this tool is... just a mess of custom scripts, makefiles, and basically tribal knowledge that the creator of the system -- of course -- has now moved on from, leaving literally nobody who knows how it works.

And before you excoriate not-Snapple too much, I'm a dev contractor with ~30 years of experience so I can say this with some authority: it's the same f*cking thing everywhere. You get all these prima donna devs who

This isn't really a rant about Swarm; it seems... fine for smaller systems. And I'm sure you can build bigger, more complex systems with it -- my project is a case in point. But like with so many things software development related, the people building it (who built it long after k8s was basically "the norm" in container orchestration) felt like they could reinvent the wheel better than basically the entire world. What, because you work at not-Snapple? The breathtaking hubris...

No matter how smart you are, resist this belief. You can't beat the wisdom of the crowd, especially in things like software development. There aren't that many real "ninjas" out there, just a bunch of working schlubs like me and, I'd reckon, readers of this forum.

When I'm architecting a new system, I strive to make it boring. Unless there's a very compelling reason, deciding to "color outside the lines" (say, implement your own TLS ciphersuite, or this case...) never, ever ends well in software development.

Thank you for letting me rant. I love Docker, except for its new, extractive business model.

As you were.

34 Comments
2024/12/20
15:55 UTC

0

Docker container not accessible from Subdomain

I've done what troubleshooting I can, but my knowledge is pretty limited, so here we are.

I'm running a VPS with Debian. It's a fresh install, and I installed docker and docker-compose from apt (docker reports its version as 20.10.24+dfsg1).

I've attempted to set up an instance of Kitchen Owl, using the docker-compose.yml from its docs. The only change I made was the port, since I'm planning on using a subdomain.

Running docker-compose, everything comes up okay, and there's nothing in the docker logs that suggests any errors as far as I can tell.

I have a reverse proxy set up with nginx, and am using certbot. The server entry for the container is:

server {
    index index.html index.htm index.nginx-debian.html;
    server_name [subdomain.domain.com]; # managed by Certbot

    location / {
        proxy_pass http://localhost:[port];
    }

    listen [::]:443 ssl; # managed by Certbot
    listen 443 ssl; # managed by Certbot
    ssl_certificate /etc/letsencrypt/live/[subdomain]/fullchain.pem; # managed by Certbot
    ssl_certificate_key /etc/letsencrypt/live/[subdomain]/privkey.pem; # managed by Certbot
    include /etc/letsencrypt/options-ssl-nginx.conf; # managed by Certbot
    ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem; # managed by Certbot
}

server {
    if ($host = [subdomain.domain.com]) {
        return 301 https://$host$request_uri;
    } # managed by Certbot

    listen 80;
    listen [::]:80;
    server_name [subdomain.domain.com];
    return 404; # managed by Certbot
}

Pointing a browser to the subdomain, it just hangs trying to load, with no error from the server. Running curl localhost:[port] from the VPS does the same thing. On the other hand, pointing a browser to http://[domain]:[port] (so not the subdomain, and without SSL) works.

I'm using iptables, which has a rule for port 80, and even stopping it entirely hasn't fixed anything.

I don't know what else I can try or where else to look for what the problem might be, so any help is appreciated!
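
One pattern that produces exactly this split (working on http://[domain]:[port], hanging on localhost) is an IPv4/IPv6 mismatch on the loopback: `localhost` may resolve to ::1 while the published port is only reachable over IPv4. A hedged sketch of pinning the publish address on the compose side, paired with proxy_pass to 127.0.0.1 instead of localhost (service name, image and port are placeholders for the KitchenOwl setup):

services:
  kitchenowl:
    image: tombursch/kitchenowl:latest     # placeholder: whatever image the KitchenOwl docs specify
    ports:
      - "127.0.0.1:8080:8080"              # publish only on the IPv4 loopback; nginx proxies to it

With that in place, the location block would use proxy_pass http://127.0.0.1:[port]; so nothing depends on how localhost resolves.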

4 Comments
2024/12/20
15:37 UTC

1

Some volume data not showing up after a backup restore, but after stopping the container, copying the data again, and restarting it, the data is there. Anyone know why?

I have come across an interesting issue. I have Uptime Kuma running in a docker container; I backed up the volume data and copied it over to a new server, then ran the docker compose file (attached below) and it started up fine, but without the heartbeat data, though it had the host data. I stopped the container, copied the data again, started the container, and the heartbeat data is now there. Why do you think this is? It's also strange that the hostname data was there, meaning the backed-up data was copied to the right place.

docker compose file:

uptime-kuma:
  container_name: uptime-kuma
  image: louislam/uptime-kuma:latest
  restart: unless-stopped
  environment:
    - UPTIME_KUMA_DISABLE_FRAME_SAMEORIGIN=true
  ports:
    - "3002:3001"
  volumes:
    - ./uptime-kuma:/app/data

5 Comments
2024/12/20
08:54 UTC

4

dm - Docker Manager

Wrote a very simple docker manager in bash. I find it useful. Others might too. Let me know what you think.

https://github.com/rogirwin/dm

15 Comments
2024/12/20
01:44 UTC

15

Running multiple VPNs in separate containers for unique IPs—best practices?

I’m working on a setup where I run multiple VPN clients inside Linux-based containers (e.g., Docker/LXC) on a single VM, each providing a unique external IP address. I’d then direct traffic from a Windows VM’s Python script through these container proxies to achieve multiple unique IP endpoints simultaneously.

Has anyone here tried a similar approach or have suggestions on streamlining the setup, improving performance, or other best practices?

-----------------------

I asked ChatGPT, and it suggested this. I'm unsure if it's the best approach or if there's a better one. I've never used Linux before, which is why I'm asking here. I really want to learn if it solves my issue:

  1. Host and VM Setup:
    • You have your main Windows Server host running Hyper-V.
    • Create one Linux VM (for efficiency) or multiple Linux VMs (for isolation and simplicity) inside Hyper-V.
  2. Inside the Linux VM:
    Why a proxy? Because it simplifies routing. Each container’s VPN client will give that container a unique external IP. Running a proxy in that container allows external machines (like your Windows VM) to access the network over that VPN tunnel.
    • Use either Docker or LXC containers. Each container will run:
      • A VPN client (e.g., OpenVPN, WireGuard, etc.)
      • A small proxy server (e.g., SOCKS5 via dante-server, or an HTTP proxy like tinyproxy)
  3. Network Configuration:
    Make sure the firewall rules on your Linux VM allow inbound traffic to these proxy ports from your Windows VM’s network.
    • Make sure the Linux VM’s network is set to a mode where the Windows VM can reach it. Typically, if both VMs are on the same virtual switch (either internal or external), they’ll be able to communicate via the Linux VM’s IP address.
    • Each container will have a unique listening port for its proxy. For example:
      • Container 1: Proxy at LinuxVM_IP:1080 (SOCKS5)
      • Container 2: Proxy at LinuxVM_IP:1081
      • Container 3: Proxy at LinuxVM_IP:1082, and so forth.
  4. Use in Windows VM:
    For example, if you’re using Python’s requests module with SOCKS5 proxies via requests[socks]:

      import requests

      # Thread 1 uses container 1’s proxy
      session1 = requests.Session()
      session1.proxies = {
          'http': 'socks5://LinuxVM_IP:1080',
          'https': 'socks5://LinuxVM_IP:1080'
      }

      # Thread 2 uses container 2’s proxy
      session2 = requests.Session()
      session2.proxies = {
          'http': 'socks5://LinuxVM_IP:1081',
          'https': 'socks5://LinuxVM_IP:1081'
      }

      # and so forth...

    • On your Windows VM, your Python code can connect through these proxies. Each thread you run in Python can use a different proxy endpoint corresponding to a different container, thus a different VPN IP.
  5. Scaling:
    • If you need more IPs, just spin up more containers inside the Linux VM, each with its own VPN client and proxy.
    • If a single Linux VM becomes too complex, you can create multiple Linux VMs, each handling a subset of VPN containers.

In Summary:

  • The Linux VM acts as a “router” or “hub” for multiple VPN connections.
  • Each container inside it provides a unique VPN-based IP address and a proxy endpoint.
  • The Windows VM’s Python code uses these proxies to route each thread’s traffic through a different VPN tunnel.

This approach gives you a clean separation between the environment that manages multiple VPN connections (the Linux VM with containers) and the environment where you run your main application logic (the Windows VM), all while ensuring each thread in your Python script gets a distinct IP address.
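
Translated into plain Docker terms, the "VPN client + proxy" pair per IP is usually expressed by running the proxy inside the VPN container's network namespace, so its traffic can only leave through that tunnel. A compose-style sketch of one such pair (both image names are placeholders, not recommendations; any OpenVPN/WireGuard client image and any SOCKS5 server image could fill the roles, with the pair duplicated per exit IP on a different host port):

services:
  vpn1:
    image: your-vpn-client-image        # placeholder: an OpenVPN or WireGuard client container
    cap_add:
      - NET_ADMIN                       # required to create the tunnel interface
    devices:
      - /dev/net/tun
    volumes:
      - ./vpn1:/etc/vpn:ro              # placeholder: this endpoint's provider config
    ports:
      - "1080:1080"                     # the proxy below is reached through this container's namespace

  proxy1:
    image: your-socks5-proxy-image      # placeholder: e.g. a dante-server based image
    network_mode: "service:vpn1"        # share vpn1's network stack, so traffic exits via its tunnel
    depends_on:
      - vpn1

A second pair (vpn2/proxy1 duplicated as proxy2, publishing 1081 on vpn2) gives the second endpoint, and the Windows-side requests sessions above then just point at the different host ports.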

https://preview.redd.it/zxc2mb92ew7e1.png?width=1387&format=png&auto=webp&s=dd8dc0fa30dc445b92b6a07781973e8f561fc793

8 Comments
2024/12/20
01:01 UTC

0

Docker on Macbook pro (Apple M4 Pro)

I have a new MacBook Pro with an M4 Pro chip. I've installed multiple versions of Docker Desktop for Mac (Apple silicon), but when booting it immediately crashes with the following message:

running engine: waiting for the Docker API: engine linux failed to run: running VM: qemu exited unexpectedly: exit status 1

I couldn't find much about it so I hope anybody here has a solution. Any input is much appreciated!

5 Comments
2024/12/19
22:10 UTC

0

Docker image in snap

At work we are moving our IoT devices over to Ubuntu Core. The downside is that everything must be installed via Snap. I have a Docker image of the software we run. Could someone direct me on how to build this image into a Snap package?

5 Comments
2024/12/19
21:43 UTC

1

bind mount -- update of files on host is "variable"

I have two volumes, er, bind mounts on a docker container. One contains static files the other contains a sqlite db. The db updates fine when the app in the container updates it, *or* when I update the db externally (from the host). The static files *stay* static (any changes to the files on the host are ignored, unless I rebuild the container). Is this expected behavior?

Here are the steps I have taken to troubleshoot:

  • run curl --verbose on the web server inside the container
  • returns "Last-Modified" of yesterday
  • update the file on the host (bind mount:rw for good measure)
  • run curl --verbose on the web server inside the container
  • still returns "Last-Modified" of yesterday
  • stop and remove container
  • re-run `docker run` command with same --mount params
  • run curl --verbose on the web server inside the container
  • now it returns the correct "Last-Modified" date of today

So there is caching going on somewhere, but as far as I can tell, not on the client side (curl wouldn't use caching, right?).

5 Comments
2024/12/19
20:47 UTC

0

files not sync

Hi, I am using Docker Desktop (v4.36.0) with WSL2 (Ubuntu).
While I am editing files in WSL (with PhpStorm), the changes don't show up in the container.
The files are mounted as volumes (driver: bridge).
Any help will be much appreciated.
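
For what it's worth, the usual culprit with Docker Desktop + WSL2 is where the project lives: file-change events propagate reliably only when the bind-mounted source sits inside the WSL2 distro's own filesystem (e.g. under the Ubuntu home directory), not under /mnt/c. A minimal sketch of that kind of mount (image and paths are placeholders):

services:
  php:
    image: php:8.1-apache                      # placeholder image
    volumes:
      # Bind mount from inside the WSL2 distro, e.g. /home/<user>/project,
      # rather than a Windows path such as /mnt/c/Users/<user>/project.
      - /home/youruser/project:/var/www/html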

4 Comments
2024/12/19
18:40 UTC

1

codeserver in NAS Synology

Hi, please help. I'm using codeserver on a Synology NAS and I could access the /volume1/web directory, but now I only see the system in codeserver and I can't access the volume1 directory... Before that, I stopped all Stacks and edited in Portainer

volumes:
  - /volume1/docker/

to

volumes:
  - /volume1/git/

and then moved the affected directories. Everything works for me, except codeserver. Do you know what could be the cause? I tried removing everything and creating a new stack.

2 Comments
2024/12/19
16:41 UTC

10

Need help with Docker-compose

So I'm hoping someone smarter than me could offer some insight. I've been following a Docker course for the past few weeks, and we have an assignment that I just can't solve.
We get three files: an html, a dockerfile, and a docker-compose.yml. All three files are in the same directory. The dockerfile uses Windows IIS as an image to build a webserver, and the webpage should be available at http://localhost:123
We need to alter the dockerfile (only the dockerfile!), so that it uses mcr.microsoft.com/windows/servercore:ltsc2019 instead of mcr.microsoft.com/windows/servercore/iis:windowsservercore-ltsc2019 . The result needs to be the same: the html should be shown at http://localhost:123 .

This is the html:

<!DOCTYPE html>
<head>
    <title>Firstname</title>
</head>
    <body>
        <h1>Lastename</h1>
    </body>
</html>

This is the .yml:

services:
  iisserver:
    image: iis-website:latest
    build:
      context: .
      dockerfile: Dockerfile
    ports:
      - "123:80"
    networks:
      - webnet
networks:
  webnet:
    driver: nat

This is the original dockerfile:

# Use the IIS-base-image
FROM mcr.microsoft.com/windows/servercore/iis:windowsservercore-ltsc2019
# Copy the website files to the IIS wwwroot map
COPY ./index.html C:/inetpub/wwwroot
# Expose port 123 for webtraffic
EXPOSE 123

I've tried many different things, but none seemed to work:

# escape=`

# Use Windows Server Core 2019 as base image
FROM mcr.microsoft.com/windows/servercore:ltsc2019

# Install IIS
RUN powershell -Command Install-WindowsFeature -Name Web-Server

# Copy the website files to the IIS wwwroot map
COPY ./index.html C:/inetpub/wwwroot/

# Expose port 123 for webtraffic
EXPOSE 80

# Start IIS and keep the container running
CMD ["powershell", "-Command", "Start-Service w3svc; while ($true) { Start-Sleep -Seconds 60 }"]
11 Comments
2024/12/19
15:53 UTC

3

How to set a container to use 2nd NIC

I have two NICs installed on the host.

If I run ifconfig -a I can see that I have enp3s0 and enp5s1 up and running.

How can I set a qbittorrent container to only use enp5s1 or at least show that as an option in the interfaces option within qbit?

Do I need to create a docker network and if so which type?

When creating the container I can specify the interface, but the logs show an error that it is unavailable/non-existent. Substituting enp5s1 for eth1 doesn't work either.

Do I need to do anything special on the host?
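
One way to tie a container to a specific physical interface is a macvlan (or ipvlan) network whose parent is that NIC; the container then gets its own address on that segment instead of picking between host interfaces. A sketch, with the linuxserver qbittorrent image as an example and the subnet/gateway as placeholders for whatever network enp5s1 is actually on:

services:
  qbittorrent:
    image: lscr.io/linuxserver/qbittorrent:latest
    networks:
      - nic2

networks:
  nic2:
    driver: macvlan
    driver_opts:
      parent: enp5s1                 # the second NIC on the host
    ipam:
      config:
        - subnet: 192.168.50.0/24    # placeholder: the subnet enp5s1 sits on
          gateway: 192.168.50.1      # placeholder

Inside any container the host's enp5s1 name doesn't exist (the container only sees its own eth0), which is why entering the host NIC name in qBittorrent's interface setting reports it as unavailable.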

5 Comments
2024/12/19
12:35 UTC

0

Docker libssl.so.3: cannot open shared object file

I've been trying to set up my pipeline; however, I cannot get the container to run. I get the following error no matter what I try:

    /direct_server: error while loading shared libraries: libssl.so.3: cannot open shared object file: No such file or directory

I don't have any openssl dependencies.

I've tried musl with scratch and got the same error.

What am I doing wrong here?

Dockerfile

    # Use the Rust image for building
    #FROM rust:latest as builder
    FROM rust:bookworm as builder
    
    # Set offline mode to prevent runtime preparation
    ENV SQLX_OFFLINE=true
    
    # Install necessary dependencies and git
    RUN apt-get update && apt-get install -y \
        libssl-dev \
        pkg-config \
        build-essential \
        git && \
        apt-get clean
    
    # Use secrets to pass GitHub token securely
    RUN --mount=type=secret,id=RDX_GITHUB_TOKEN \
        git clone https://$(cat /run/secrets/RDX_GITHUB_TOKEN)@github.com/acct/infrastructure.git infrastructure && \
        git clone https://$(cat /run/secrets/RDX_GITHUB_TOKEN)@github.com/acct/database.git database
    
    # Copy the source code
    COPY . .
    
    # Copy the `.sqlx` folder (generated by cargo sqlx prepare)
    COPY .sqlx .sqlx
    
    # Update paths in Cargo.toml to use relative paths
    RUN sed -i 's|{ path = "../infrastructure" }|{ path = "infrastructure" }|' Cargo.toml
    RUN sed -i 's|{ path = "../database" }|{ path = "database" }|' Cargo.toml
    
    # Build the Rust binary
    RUN cargo build --release
    
    # Minimal runtime image
    FROM debian:bookworm-slim
    
    # Install necessary runtime libraries (including libssl3) and verify installation
    RUN apt-get update && apt-get install -y \
        libssl3 \
        libssl-dev \
        pkg-config \
        ca-certificates && \
        apt-get clean && \
        ldconfig && \
        ls -l /usr/lib/x86_64-linux-gnu/libssl.so.3
    
    # Copy the built binary from the builder stage
    COPY --from=builder /target/release/direct_server /direct_server
    
    # Set the entrypoint
    ENTRYPOINT ["/direct_server"]
8 Comments
2024/12/19
12:11 UTC

1

Docker and firewall

What's the best option for a firewall on a server where Docker is running: iptables or ufw? I know that Docker overrides iptables rules; is it the same for ufw? Thanks in advance.

10 Comments
2024/12/19
12:01 UTC

0

What is Docker? The Complete Beginner’s Guide to Docker Concepts

3 Comments
2024/12/19
10:49 UTC
