/r/LXD


LXD is a container "hypervisor" & new user experience for LXC.

Its main components are:

* The system-wide daemon (lxd), which exports a REST API locally and, if enabled, remotely.
* The command-line client (lxc), a simple, powerful tool for managing LXC containers on local and remote container hosts.

www.linuxcontainers.org

The LXD sub-reddit is NOT intended to be a support forum!

To ask LXD support-related questions, please visit the LXD Discuss forum, where you can communicate with the developers and others about support questions.

We feel it's important to keep support-related questions and answers in one location. Thank you.

What is LXD's relationship with LXC? LXD isn't a rewrite of LXC; in fact, it builds on top of LXC to provide a new, better user experience. Under the hood, LXD uses LXC through liblxc and its Go binding to create and manage containers.

It's basically an alternative to LXC's tools and distribution template system with the added features that come from being controllable over the network.
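As a small illustration of that daemon/client split (a sketch using standard LXD commands; the endpoint shown is the stock instances path), the same REST API the lxc client drives can also be queried directly:

# "lxc query" sends a raw request to the lxd daemon's REST API over the
# local unix socket; here it lists instances.
lxc query /1.0/instances

# The equivalent high-level client command:
lxc list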


2,301 Subscribers


Copying container to another server fails on second copy/--refresh

To cut a really long story short, I'm trying to copy a container from one server to another, both using an encrypted ZFS backend pool ("encpool"):

ubuntu@lxd-server:~$ lxc launch ubuntu:24.04 c1 -s encpool
Creating c1
Starting c1
ubuntu@lxd-server:~$ lxc stop c1
ubuntu@lxd-server:~$ lxc copy c1 lxd-backup: -s encpool 
ubuntu@lxd-server:~$ lxc copy c1 lxd-backup: -s encpool --refresh
Error: Failed instance creation: Error transferring instance data: Failed migration on target: Failed creating instance on target: Failed receiving volume "c1": Problem with zfs receive: ([exit status 1 write |1: broken pipe]) cannot receive new filesystem stream: zfs receive -F cannot be used to destroy an encrypted filesystem or overwrite an unencrypted one with an encrypted one

At this point, the c1 container's storage on the backup server is completely lost. So it's a fairly nasty issue.

Surely I can't be the only one having this issue? Both servers run Ubuntu 22.04 with LXD 6.1. I posted a bug report, but it got me thinking: since this seems like such a common operation and it's a fairly simple setup, it must be just me hitting this issue.
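A possible interim workaround, assuming a full (non-incremental) re-copy is acceptable for this container: remove the broken copy on the backup server and copy again without --refresh.

# Hedged workaround sketch, not a fix for the --refresh failure itself:
# discard the broken target copy, then re-send the whole container.
lxc delete lxd-backup:c1
lxc copy c1 lxd-backup: -s encpool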

10 Comments
2024/11/05
00:40 UTC


LXD to LXD host on one NIC, everything else on another?

I have two LXD hosts (not three, so I don't think I can cluster them). I added each to the other as a remote, and I want `lxc copy/move` traffic to go over the 25GbE direct connection while all other traffic (the remote API for clients and internet access from the containers) runs on a separate 10GbE NIC.

Has anyone gotten two-node clustering working so I could set `cluster.https_address` to the 25GbE interface and `core.https_address` to the 10GbE one? Or is there some other way?

The current config is two hosts with basically the same setup: a 1GbE NIC and a dual-port 25GbE NIC. 25GbE port 0 is directly attached to the other host with an address in `10.25.0.0/24`, and port 1 is connected to a 10GbE switch on `10.10.0.0/24`. The hope was that any time I needed anything copied between hosts (`scp` or `lxc move/copy`) I could do it over the 25GbE link, and then have the containers expose their services over the 10GbE.

I have all the physical interfaces slaved to Linux bridges, and the 10GbE further uses VLAN tagging to isolate services.

So far the VLANs seem to work, and the 25GbE works within the containers (I have an Elasticsearch cluster connecting over the fast network)... I just can't figure out how to make `lxc move/copy` go over the fast interconnect.
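One approach that doesn't need clustering, sketched under the assumption that the peer's 25GbE address is 10.25.0.2 (hypothetical) and that the daemon may listen on all interfaces: register the remote using its 25GbE IP, so `lxc copy/move` connects over the direct link, while clients keep using the 10GbE address.

# On each host: have the daemon listen on every address (or bind it to the
# 25GbE address specifically if you prefer).
lxc config set core.https_address :8443

# On host A: add host B as a remote via its 25GbE IP (10.25.0.2 is hypothetical).
lxc remote add peer-fast 10.25.0.2

# Copies/moves addressed to that remote then travel over the direct 25GbE link.
lxc copy mycontainer peer-fast: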

13 Comments
2024/10/29
23:14 UTC


Can't resize containers volumes

I'm going crazy.

I have a .img file that contains the ZFS pool for my containers. The containers can't resize any partitions because there are no partitions; ZFS manages everything.

The pool file is 70 GiB and no limits are set on any container, but there is no way to increase the volume size.

I followed all the steps to expand the default storage pool in LXD, but I was unable to resize the nginx-stream container as expected. I expanded the storage file, updated the ZFS pool settings, and checked the LXD profile, but the container still shows a reduced size.

Steps Taken:

1.	Expanded the storage file to 70GB.

2.	Enabled automatic expansion for the ZFS pool.

3.	Verified and confirmed the size of the ZFS pool.

4.	Checked the LXD profile to ensure the size is set to 70GB.

5.	Verified the space inside the container (df -h).

Errors Encountered:

•	The command zpool online -e default did not work as expected and returned a “missing device name” error.

lxc storage list

+------------+--------+-----------------------------------------------+-------------+---------+---------+
|    NAME    | DRIVER |                    SOURCE                     | DESCRIPTION | USED BY |  STATE  |
+------------+--------+-----------------------------------------------+-------------+---------+---------+
| containers | zfs    | /var/snap/lxd/common/lxd/disks/containers.img |             | 0       | CREATED |
+------------+--------+-----------------------------------------------+-------------+---------+---------+
| default    | zfs    | /var/snap/lxd/common/lxd/disks/default.img    |             | 17      | CREATED |
+------------+--------+-----------------------------------------------+-------------+---------+---------+

truncate -s 70G /var/snap/lxd/common/lxd/disks/default.img (No error message if successful)

zpool status default

  pool: default
 state: ONLINE
  scan: scrub repaired 0B in 00:00:50 with 0 errors on Sat Aug 17 13:11:56 2024
config:

        NAME                                          STATE     READ WRITE CKSUM
        default                                       ONLINE       0     0     0
          /var/snap/lxd/common/lxd/disks/default.img  ONLINE       0     0     0

errors: No known data errors


zpool list default

NAME      SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP    HEALTH  ALTROOT
default  69.5G  16.4G  53.1G        -         -     5%    23%  1.00x    ONLINE  -

zpool set autoexpand=on default (No error message if successful)

lxc profile show default

name: default
description: Default LXD profile
config: {}
devices:
  eth0:
    name: eth0
    network: lxdbr0
    type: nic
  root:
    path: /
    pool: default
    size: 70GB
    type: disk
used_by:
- /1.0/instances/lxd-dashboard
- /1.0/instances/nginx-stream
- /1.0/instances/Srt-matrix1
- /1.0/instances/Docker
- /1.0/instances/nginx-proxy-manager

lxc exec nginx-stream -- df -h

Filesystem      Size  Used Avail Use% Mounted on
/dev/root       8.0G  1.5G  6.5G  20% /

zpool online -e default

missing device name
usage:
        online [-e] <pool> <device> ...
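For what it's worth, the "missing device name" error is because `zpool online -e` wants the device as well as the pool; the device path is the loop-backed .img file shown in the `zpool status` output above. A sketch of the two things that look worth trying here (the instance name comes from the post; everything else is an assumption about this setup, not a confirmed fix):

# zpool online -e needs both the pool and the device; for a file-backed pool
# the device is the .img path listed in "zpool status default".
sudo zpool online -e default /var/snap/lxd/common/lxd/disks/default.img

# The container only sees an 8 GiB root, so check whether the instance carries
# its own root device that overrides the profile's 70GB size...
lxc config device show nginx-stream

# ...and set the size at the instance level (use "override" instead of "set"
# if the root device currently comes only from the profile).
lxc config device set nginx-stream root size=70GB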

0 Comments
2024/08/17
15:26 UTC
