/r/LXD


LXD is a container "hypervisor" and a new user experience for LXC.

It has two main components:

  • The system-wide daemon (lxd), which exports a REST API locally and, if enabled, remotely.
  • The command line client (lxc), a simple yet powerful tool for managing LXC containers on both local and remote container hosts.

www.linuxcontainers.org

The LXD sub-reddit is NOT intended to be a support forum!

To ask LXD support-related questions, please visit the LXD Discuss Forum, where you can communicate with the devs and other users.

We feel it's important to keep support-related questions and answers in one location. Thank you.

What is LXD's relationship with LXC? LXD isn't a rewrite of LXC; in fact, it builds on top of LXC to provide a new, better user experience. Under the hood, LXD uses LXC through liblxc and its Go binding to create and manage the containers.

It's basically an alternative to LXC's tools and distribution template system with the added features that come from being controllable over the network.
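As an illustration of that last point (not part of the original sidebar text), exposing a host's API over the network and driving it from another machine looks roughly like this; the remote name and address below are placeholders:

```shell
# On the container host: expose the LXD REST API over HTTPS
lxc config set core.https_address "[::]:8443"

# On a client machine: register the host as a remote and manage it
lxc remote add myserver 192.0.2.10      # prompts to trust the server certificate
lxc launch ubuntu:22.04 myserver:web1   # create a container on the remote host
lxc list myserver:
```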

/r/LXD

2,264 Subscribers

0

CI/CD Tool

Is there a CI/CD tool to build custom container/KVM images? As of now I use bash scripts to create the images.
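For context, the bash-script approach mentioned above typically boils down to something like the sketch below; the base image, alias, and provision.sh script are illustrative, not taken from the post:

```shell
#!/bin/bash
# Sketch of a scripted image build: launch a base image, provision it,
# then publish and export the result as a reusable image tarball.
set -euo pipefail

lxc launch ubuntu:22.04 build-tmp
lxc exec build-tmp -- cloud-init status --wait    # wait for first boot to finish
lxc file push provision.sh build-tmp/root/provision.sh
lxc exec build-tmp -- bash /root/provision.sh
lxc stop build-tmp
lxc publish build-tmp --alias my-custom-image
lxc image export my-custom-image .                # tarball lands in the current directory
lxc delete build-tmp
```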

3 Comments
2024/04/15
08:55 UTC

0

Issue installing NVIDIA driver on my container due to Ubuntu specific Nvidia driver on my host machine

Hello everyone, I am setting up a container with a shared graphics card and the NVIDIA drivers installed. My host machine is running Ubuntu 22.04 and I'm trying to run an Ubuntu 20.04 container. As far as I know, in order to get the NVIDIA drivers to work, you have to share the graphics card (no problem there), then install exactly the same NVIDIA driver version as on the host machine, without the kernel modules.

Normally this poses no problem and works correctly, but the drivers on my host machine were installed using the Ubuntu utility. It installed version 550.54.15, which according to https://www.nvidia.co.uk/download/driverResults.aspx/222921/en-uk is a driver build specifically packaged for Ubuntu 22.04. Looking in the NVIDIA driver archives at https://www.nvidia.com/Download/Find.aspx?lang=en-us (I have an RTX 4060 notebook GPU), I can't find the version installed on my host machine, only version 550.54.14, which of course doesn't work when I try to install it in my container because it differs from the host version.

I therefore tried to install version 550.54.15, the build designed for Ubuntu 22.04, by downloading the .deb file from the first link. The installation went without a hitch, but even though the package is installed correctly, the drivers don't seem to be present in the container: the nvidia-smi command doesn't exist.

Have you ever found yourself in this kind of situation? Do you have any idea what I could try in order to get working drivers in my container? Thanks in advance.
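Not an answer from the thread, but one alternative worth noting: LXD has an nvidia.runtime option that passes the host's NVIDIA userspace libraries into the container at start time, instead of installing a matching driver inside it. A rough sketch, assuming the host has the NVIDIA container runtime library (libnvidia-container) installed and using a placeholder container name c1:

```shell
# Let LXD inject the host's NVIDIA userspace driver into the container
# instead of installing a matching driver package inside it.
lxc config device add c1 gpu gpu        # share the GPU with the container
lxc config set c1 nvidia.runtime true   # mount the host userspace driver at start
lxc restart c1
lxc exec c1 -- nvidia-smi               # should now report the host driver version
```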

1 Comment
2024/04/10
23:20 UTC

1

Docker with Linux 6.x kernel (container or VM?)

I want to use Docker with the latest 6.x kernel in an LXD container.

I recently came across a post (sorry, I don't have the link) which basically said that overlay2 is now supported for Docker on a ZFS backend. So how can I make sure that, security- and performance-wise, it's the same? Or should I go with an LXD VM instead? (I'm a bit hesitant to go the second route.)

As far as I know, regardless of whether it runs in a container or a VM, Docker uses AppArmor/SELinux to enforce some rules, plus kernel namespaces and cgroups for security and resource control. So will the Docker install already be secure even without all the isolation that comes with a traditional VM?

Thanks for your time.
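Not part of the question, but for reference, the commonly cited baseline configuration for running Docker inside an LXD container is roughly the following sketch; the container name docker1 is a placeholder:

```shell
# Typical instance options used when nesting Docker inside an LXD container
lxc launch ubuntu:22.04 docker1 \
  -c security.nesting=true \
  -c security.syscalls.intercept.mknod=true \
  -c security.syscalls.intercept.setxattr=true
lxc exec docker1 -- sh -c "apt-get update && apt-get install -y docker.io"
```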

1 Comment
2024/04/08
14:31 UTC

5

How To Backup & Restore an LXD Container or VM - excerpt from a forum thread

There was a post some time ago by Stephane Graber which outlined his steps to back up and restore LXD containers or VMs.

The thread: https://discuss.linuxcontainers.org/t/backup-the-container-and-install-it-on-another-server/463/13

Assume you have a container or VM called "cn1" (or "vm1").

To back up cn1 as an image tarball, execute the following:

  1. lxc snapshot cn1 backup
  2. lxc publish cn1/backup --alias cn1-backup
  3. lxc image export cn1-backup .   <<== Note the "." (i.e., export to the current directory)
  4. lxc image delete cn1-backup

This will put the image tarball (.tar.gz) in your current directory. Note that it will be named something like:

87affa4b9f197667a500d4171abc8a5fcc347d16ad38b39965102f8936b96570.tar.gz

You can rename that .tar.gz to something more meaningful like "ubuntu-container.tar.gz".

To restore and create a container from it, you can then do:

  1. lxc image import TARBALL-NAME --alias cn1-backup
  2. lxc launch cn1-backup new-cn1   <<== # "new-cn1" = any container name
  3. lxc image delete cn1-backup

lxc ls should now show a container named "new-cn1" running.
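Worth noting (not part of the quoted thread): newer LXD releases also ship built-in backup commands that skip the publish/export steps entirely:

```shell
# Built-in export/import path available in newer LXD releases
lxc export cn1 cn1-backup.tar.gz   # full instance backup to a tarball
lxc import cn1-backup.tar.gz       # restore it on this or another host
```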

1 Comment
2024/03/24
14:25 UTC

2

LXD Storage Pools Moving Containers
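The post above is a link post with no text body; for reference, moving an instance to a different storage pool usually looks something like the sketch below (instance and pool names are placeholders, and newer LXD releases may also support a direct lxc move --storage):

```shell
# Relocate a stopped instance onto another storage pool, then restore its name
lxc stop c1
lxc copy c1 c1-tmp --storage newpool   # copy onto the target pool
lxc delete c1
lxc move c1-tmp c1                     # rename back to the original name
lxc start c1
```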

0 Comments
2024/03/16
04:18 UTC

2

LXD won't start after Ubuntu 22.04 reboot

Hi! I restarted my system, and the lxd service doesn't start. I have lxc version 4.0.9 (migrated a few months ago from 3.0.3). I tried to stop/start the service, but no luck... Running the lxc info command, I'm getting this message:

Error: Get "http://unix.socket/1.0": dial unix /var/snap/lxd/common/lxd/unix.socket: connect: connection refused

Result of journalctl -u snap.lxd.daemon command:

Mar 09 15:02:27 ip-10-184-35-230 lxd.daemon[15848]: Error: Failed initializing storage pool "lxd": Required tool 'zpool' is missing

Mar 09 15:02:28 ip-10-184-35-230 lxd.daemon[15707]: => LXD failed to start

Mar 09 15:02:28 ip-10-184-35-230 systemd[1]: snap.lxd.daemon.service: Main process exited, code=exited, status=1/FAILURE

Mar 09 15:02:28 ip-10-184-35-230 systemd[1]: snap.lxd.daemon.service: Failed with result 'exit-code'.

Mar 09 15:02:28 ip-10-184-35-230 systemd[1]: snap.lxd.daemon.service: Scheduled restart job, restart counter is at 5.

Mar 09 15:02:28 ip-10-184-35-230 systemd[1]: Stopped Service for snap application lxd.daemon.

Mar 09 15:02:28 ip-10-184-35-230 systemd[1]: snap.lxd.daemon.service: Start request repeated too quickly.

Mar 09 15:02:28 ip-10-184-35-230 systemd[1]: snap.lxd.daemon.service: Failed with result 'exit-code'.

Mar 09 15:02:28 ip-10-184-35-230 systemd[1]: Failed to start Service for snap application lxd.daemon.

This is the result of zpool status:

NAME                                       STATE    READ WRITE CKSUM
lxd                                        ONLINE      0     0     0
  /var/snap/lxd/common/lxd/disks/lxd.img   ONLINE      0     0     0

Any advice?
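Not an answer from the thread, but a few generic checks that may help narrow down a "Required tool 'zpool' is missing" startup error with the snap (standard commands, outcome not guaranteed):

```shell
# Generic diagnostics for the missing-zpool startup failure
lsmod | grep zfs                         # is the ZFS kernel module loaded?
sudo modprobe zfs                        # try loading it if it is not
sudo snap logs lxd -n 50                 # recent daemon output from the snap
sudo systemctl restart snap.lxd.daemon   # retry once the module is available
```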

10 Comments
2024/03/09
15:12 UTC

1

Splitting config of device between profiles and instances

Hey All,

Is it possible to split the config of a device between a profile and an instance?

For example, say I have a profile assigned to a bunch of instances, with a device eth0:

eth0:
  name: eth0
  network: lxdbr0
  type: nic

Is there some method to assign, per instance, just the following:

eth0:
  ipv4.address: 10.38.194.(whatever)

without resorting to IP reservations or anything along those lines?
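For what it's worth (not from the thread), the usual way to do this per instance is lxc config device override, which copies the profile's device into the instance and changes only the keys you pass; the instance name and address below are placeholders:

```shell
# Copy the profile's eth0 device into the instance, overriding only ipv4.address
lxc config device override c1 eth0 ipv4.address=10.38.194.10
lxc config device show c1   # eth0 now appears as an instance-local device
```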

1 Comment
2024/03/06
16:06 UTC

12

Open sourced LXD / Incus Image server

Hi Everyone

Given that LXD will be losing all access to the community images in May, I decided that it was important for me to invest some time into building / open sourcing an image server to make sure we can continue to access the images we need.

Sandbox Server

The MVP (sandbox server) is already up and running. It's located here: https://images.opsmaru.dev

The production server will be up soon. The difference is that the sandbox server receives more updates and may be more unstable than production; other than that, they're exactly the same.

Try it out

You can try it out by using the following command:

LXD

lxc remote add opsmaru https://images.opsmaru.dev/spaces/922c037ec72c5cc4d7a47251 --public --protocol simplestreams 

The above uses a URL that will expire after 30 days. This is to let the community try it out.
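Once the remote is added, launching from it should presumably follow the usual remote syntax, e.g. (the alias is taken from the listing further down; the instance name is a placeholder):

```shell
lxc image list opsmaru:                    # browse what the remote offers
lxc launch opsmaru:alpine/3.19 my-alpine   # create an instance from one of its images
```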

What happens after it expires?

We are developing a UI where you will be able to sign up and issue your own URL/token for remote images.

You will be able to issue tokens that never expire if you want. You will also be able to specify which client (LXD / Incus) the token is for. The feed will be generated for your specified client to avoid issues.

You don’t have the images I need

We will add new images if there is demand; please open an issue or a pull request here.

We currently support the following architectures:

  • x86_64 (amd64)
  • aarch64 (arm64)

We will consider adding more if there is demand.

The bulk of the build system is done. I'm using a fork here which pushes to the sandbox server.

You can see the successful build action here.

When will the UI be done?

This is a high priority for us, so: soon. I'm working on cleaning up the MVP UI to get this into production, hopefully by the beginning of March.

Can I self-host the image server?

Yes, you will be able to self-host the image server if you want. We will provide instructions and an easy guide to help you do this.

How it Works

We have a basic architecture diagram here:

https://preview.redd.it/qq7ow6s7ggkc1.png?width=2814&format=png&auto=webp&s=02baf7a1d5d2056a2842e4a8460f3be80e409e15

In Action

Here is an example of it in action:

```shell
lxc image list opsmaru:
+----------------------------+--------------+--------+---------------------------------+--------------+-----------+---------+-------------------------------+
|            ALIAS           | FINGERPRINT  | PUBLIC |           DESCRIPTION           | ARCHITECTURE |   TYPE    |  SIZE   |          UPLOAD DATE          |
+----------------------------+--------------+--------+---------------------------------+--------------+-----------+---------+-------------------------------+
| alpine/3.16 (3 more)       | d4e280b3b850 | yes    | alpine 3.16 arm64 (20240221-14) | aarch64      | CONTAINER | 2.28MiB | Feb 21, 2024 at 12:00am (UTC) |
+----------------------------+--------------+--------+---------------------------------+--------------+-----------+---------+-------------------------------+
| alpine/3.16/amd64 (1 more) | 4fbbab01353e | yes    | alpine 3.16 amd64 (20240221-14) | x86_64       | CONTAINER | 2.50MiB | Feb 21, 2024 at 12:00am (UTC) |
+----------------------------+--------------+--------+---------------------------------+--------------+-----------+---------+-------------------------------+
| alpine/3.17 (3 more)       | 8edf37df13ec | yes    | alpine 3.17 arm64 (20240221-14) | aarch64      | CONTAINER | 2.70MiB | Feb 21, 2024 at 12:00am (UTC) |
+----------------------------+--------------+--------+---------------------------------+--------------+-----------+---------+-------------------------------+
| alpine/3.17/amd64 (1 more) | 099f83764a67 | yes    | alpine 3.17 amd64 (20240221-14) | x86_64       | CONTAINER | 2.93MiB | Feb 21, 2024 at 12:00am (UTC) |
+----------------------------+--------------+--------+---------------------------------+--------------+-----------+---------+-------------------------------+
| alpine/3.18 (3 more)       | 7c31777227b0 | yes    | alpine 3.18 arm64 (20240221-14) | aarch64      | CONTAINER | 2.75MiB | Feb 21, 2024 at 12:00am (UTC) |
+----------------------------+--------------+--------+---------------------------------+--------------+-----------+---------+-------------------------------+
| alpine/3.18/amd64 (1 more) | 37062029ee44 | yes    | alpine 3.18 amd64 (20240221-14) | x86_64       | CONTAINER | 2.94MiB | Feb 21, 2024 at 12:00am (UTC) |
+----------------------------+--------------+--------+---------------------------------+--------------+-----------+---------+-------------------------------+
| alpine/3.19 (3 more)       | e44e496455f5 | yes    | alpine 3.19 arm64 (20240221-14) | aarch64      | CONTAINER | 2.72MiB | Feb 21, 2024 at 12:00am (UTC) |
+----------------------------+--------------+--------+---------------------------------+--------------+-----------+---------+-------------------------------+
| alpine/3.19/amd64 (1 more) | b392f4461aaf | yes    | alpine 3.19 amd64 (20240221-14) | x86_64       | CONTAINER | 2.92MiB | Feb 21, 2024 at 12:00am (UTC) |
+----------------------------+--------------+--------+---------------------------------+--------------+-----------+---------+-------------------------------+
| alpine/edge (3 more)       | 34b71a8b87ab | yes    | alpine edge arm64 (20240221-14) | aarch64      | CONTAINER | 2.72MiB | Feb 21, 2024 at 12:00am (UTC) |
+----------------------------+--------------+--------+---------------------------------+--------------+-----------+---------+-------------------------------+
| alpine/edge/amd64 (1 more) | 4d7c0a086c41 | yes    | alpine edge amd64 (20240221-14) | x86_64       | CONTAINER | 2.93MiB | Feb 21, 2024 at 12:00am (UTC) |
+----------------------------+--------------+--------+---------------------------------+--------------+-----------+---------+-------------------------------+
```

3 Comments
2024/02/24
03:47 UTC

6

Docker compose equivalent for lxc/lxd

Is there a way to represent LXC containers as code and automate setup? This is one of the killer Docker features for me personally, and I am wondering if there is an equivalent.
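One common substitute, for reference (not from the thread), is to keep provisioning as code in an LXD profile carrying cloud-init user data and launch instances from it; the profile name and package list below are made up:

```shell
# Declarative-ish setup: a profile holding cloud-init user data, applied at launch.
# (Older LXD releases use the user.user-data key instead of cloud-init.user-data.)
lxc profile create webapp
cat << 'EOF' | lxc profile edit webapp
config:
  cloud-init.user-data: |
    #cloud-config
    packages:
      - nginx
description: provisioning-as-code example
devices: {}
EOF
lxc launch ubuntu:22.04 web1 --profile default --profile webapp
```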

13 Comments
2024/01/22
23:02 UTC
