/r/coreos
CoreOS is a new Linux distribution that has been rearchitected to provide features needed to run modern infrastructure stacks.
Hey, I am trying to use CoreOS, and because my server has very little RAM I wanted to add a swap partition to my OS. My Ignition file looks like this: https://pastebin.com/raw/bvZf3apQ. It boots and installs everything I define, but it uses the default partition layout and overrides mine. I have only found sources on how to enable swap with a swap file, but that is not recommended for XFS & Btrfs filesystems, so I would still need to modify the default partition layout anyway. Does anyone have an idea how I could create a swap partition through the Ignition file?
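For reference, a minimal Butane sketch of the documented approach: declare the swap partition under storage.disks and give it a swap filesystem with a generated mount unit. The partition number, labels, and sizes below are illustrative and must match your actual layout; this is a guess at what the pastebin config is missing, not a drop-in fix.

```yaml
variant: fcos
version: 1.5.0
storage:
  disks:
    - device: /dev/disk/by-id/coreos-boot-disk   # install-time alias for the boot disk
      wipe_table: false
      partitions:
        - number: 4            # the existing root partition; cap it so space is left over
          label: root
          size_mib: 16384      # example: 16 GiB root
          resize: true
        - label: var-swap      # new partition created in the freed space
          size_mib: 4096       # example: 4 GiB swap
  filesystems:
    - device: /dev/disk/by-partlabel/var-swap
      format: swap
      wipe_filesystem: true
      with_mount_unit: true    # generates the systemd swap unit automatically
```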
I'm running k3s on a cluster of six CoreOS VMs. The version of CoreOS running on them is "Fedora CoreOS 39.20240309.3.0". uname -a returns:
Linux k0 6.7.7-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Fri Mar 1 16:53:59 UTC 2024 x86_64 GNU/Linux
Is it possible to hold the kernel version back at something much earlier? The reason I'm asking is that the mssql container image is having trouble with kernel 6.7. Until that bug gets resolved I'd like to get my mssql container running in my cluster again.
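For what it's worth, the usual FCOS approach is not to hold the whole OS back but to replace just the kernel packages with an older build from Koji via an rpm-ostree override. The version below is only an example; substitute the build you actually need.

```shell
# Replace the running kernel with an older Koji build (version is an example).
# The override persists across automatic updates until you run "override reset".
sudo rpm-ostree override replace \
  https://kojipkgs.fedoraproject.org/packages/kernel/6.6.13/200.fc39/x86_64/kernel-{,core-,modules-,modules-core-,modules-extra-}6.6.13-200.fc39.x86_64.rpm
sudo systemctl reboot
```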
I've been trying to install k3s on FCOS in a qemu-kvm virtual machine. Any rpm-ostree-dependent operations seem to fail on account of a "read-only filesystem". Yeah, I know it's a read-only filesystem, but I thought rpm-ostree operations were supposed to precede the mounting of the fs somehow (not sure how it works under the hood?). Anyway, here's the relevant portion of my Butane config:
systemd:
  units:
    - name: "rpm-ostree-install-k3s-dependencies.service"
      enabled: true
      contents: |
        [Unit]
        Description=Install k3s dependencies
        Wants=network-online.target
        After=network-online.target
        Before=zincati.service
        ConditionPathExists=|!/usr/bin/kubectl
        ConditionPathExists=|!/usr/share/selinux/packages/k3s.pp

        [Service]
        Type=oneshot
        RemainAfterExit=yes
        ExecStart=rpm-ostree install --apply-live --allow-inactive --assumeyes kubectl k3s-selinux

        [Install]
        WantedBy=multi-user.target
This is the failure I see:
core@localhost:~$ journalctl -u rpm-ostree-install-k3s-dependencies
Feb 11 15:25:35 localhost.localdomain systemd[1]: Starting rpm-ostree-install-k3s-dependencies.service - Install k3s dependencies...
Feb 11 15:25:35 localhost.localdomain rpm-ostree[1351]: error: Updating deployment: GDBus.Error:org.gtk.GDBus.UnmappedGError.Quark._g_2dio_2derror_2dquark.Code21: Read-only file system
Feb 11 15:25:35 localhost.localdomain systemd[1]: rpm-ostree-install-k3s-dependencies.service: Main process exited, code=exited, status=1/FAILURE
Feb 11 15:25:35 localhost.localdomain systemd[1]: rpm-ostree-install-k3s-dependencies.service: Failed with result 'exit-code'.
Feb 11 15:25:35 localhost.localdomain systemd[1]: Failed to start rpm-ostree-install-k3s-dependencies.service - Install k3s dependencies.
I've searched for specifics around this error, but everything I've found has only been adjacent and seemingly for different causes. Curious if anyone knows why I would be experiencing this issue?
P.S. Does it help to know that I'm using coreos-installer iso customize to build the Ignition file into the ISO? I don't have access to qemu-kvm or virt-install on the platform I'm deploying the VM to.
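One possible explanation, offered as a guess: if coreos-installer iso customize embedded the config for the live environment (e.g. via --live-ignition) rather than for the installed system (--dest-ignition), the unit runs on the live ISO, whose ostree deployment is immutable, and rpm-ostree fails exactly like this. A hedged guard that skips the unit on live systems, relying on the /run/ostree-live marker that FCOS live images create:

```ini
[Unit]
# /run/ostree-live exists only on live (ISO/PXE) systems; skip there,
# since the live deployment cannot take new rpm-ostree layers
ConditionPathExists=!/run/ostree-live
```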
Are you passionate about Linux and Unix? 🐧
Do you want to connect with like-minded individuals, from beginners to experts? 🧠
Then you've found your new home. We're all about fostering meaningful connections and knowledge sharing.
🤔 Why We Exist: At the heart of our community is a shared love for Linux and Unix. We're here to connect with fellow enthusiasts, regardless of where you are on your journey, and create a space where our shared passion thrives.
🤨 How We Do It: We foster a welcoming environment where open conversations are the norm. Here, you can share your experiences, ask questions, and deepen your knowledge alongside others who are equally passionate.
🎯 What We Offer:
🔹 Engaging Discussions: Our discussions revolve around Linux and Unix, creating a hub of knowledge-sharing and collaboration. Share your experiences, ask questions, and learn from each other.
🔹 Supportive Environment: Whether you're a newcomer or a seasoned pro, you'll find your place here. We're all about helping each other grow. Our goal is to create a friendly and supportive space where everyone, regardless of their level of expertise, feels at home.
🔹 Innovative Tools: Explore our bots, including "dlinux," which lets you create containers and run commands without leaving Discord—a game-changer for Linux enthusiasts.
🔹 Distro-Specific Support: Our community is equipped with dedicated support channels for popular Linux distributions, including but not limited to:
Arch Linux
CentOS
Debian
Fedora
Red Hat
Ubuntu
Why Choose Us? 🌐
Our server aligns perfectly with Discord's guidelines and Terms of Service, ensuring a safe and enjoyable experience for all members. 🧐 📜 ✔️
Don't take our word for it—come check it out yourself! 👀
Join our growing community of Linux and Unix enthusiasts today; let's explore, learn, and share our love for Linux and Unix together. 🐧❤️
See you on the server! 🚀
When I'm shutting down my Intel NUC with CoreOS installed, it always gets stuck at this line:
kauditd_printk_skb: 53 callbacks suppressed
74.8671241 systemd-shutdown[1]: Waiting for process: 5453 (s6-suscan), 5436 (s6-suscan), ........
Can anyone help me get rid of this deadlock?
I've tried waiting a long time to see if it comes to an end, but I don't think it does.
Fedora CoreOS produced the following error in the boot manager (GRUB) after a QEMU backup snapshot:
"Minimal BASH-like line editing is supported. For the first word, TAB lists possible command completions. Anywhere else TAB lists possible device or file completions."
Will the snapshot corrupt the partition?
Hi all,
I'm new to CoreOS, and new to Docker (and containers in general).
I have an ESXi hypervisor server which I use for traditional VMs, but I want to run a Minecraft server for my niece to use. Years ago, I would run a MC server within Windows, but times have changed and I want to understand Docker! But... to understand Docker, I feel that I should get to grips with a lightweight Linux OS to run Docker on, hence me finding out about CoreOS. Before anyone mentions it, I know about vSphere Integrated Containers, but I'm using a free licence for ESXi so VICs aren't an option.
Now, the problem I hope somebody can help with. I've set up a new VM using the OVA file from the CoreOS website, and it's booted normally. I know a lot of you will know what I'm about to say... there's a required login which I do not have. I've read that I should set up this user account with something called Butane? My question is... how the hell do I configure this when I don't have a user account for the machine, and I don't have direct access to the OS files because it was set up with an OVA file? I've seen suggestions that I add an 'autologin' command to GRUB, but the GRUB on my machine looks different from all the examples I've seen online.
This all seems way too complicated and the barrier to entry feels huge to me as a newbie, so any help would be greatly appreciated.
I'm tempted to just install Debian instead, but wanted to learn something new...
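For what it's worth, the usual path on VMware is to write a small Butane config, compile it to Ignition, and hand it to the VM through guestinfo properties before the first boot. The SSH key below is a placeholder:

```yaml
# Butane config: give the default "core" user an SSH key so you can log in
variant: fcos
version: 1.5.0
passwd:
  users:
    - name: core
      ssh_authorized_keys:
        - ssh-ed25519 AAAA... replace-with-your-public-key
```

Compile with `butane config.bu > config.ign`, base64-encode the result, and set it as the `guestinfo.ignition.config.data` vApp property (with `guestinfo.ignition.config.data.encoding` set to `base64`). Ignition only runs on first boot, so a VM that has already booted once needs to be recreated for this to take effect.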
Hey everybody, can someone help me to directly mount an NFS volume into a Docker container? I get "Permission denied" using the following compose file:
---
version: "3.2"
# version: "2.1"
services:
  jellyfin:
    image: jellyfin/jellyfin:latest
    container_name: jellyfin
    privileged: true
    #user: 1000:1000
    #group_add:
    #  - "107"
    network_mode: "host"
    #devices:
    #  - /dev/dri:/dev/dri
    ## VAAPI Devices (examples)
    #  - /dev/dri/renderD128:/dev/dri/renderD128
    #  - /dev/dri/card0:/dev/dri/card0
    environment:
      - PUID=1000
      - PGID=1000
      - TZ=Europe/Berlin
      - JELLYFIN_PublishedServerUrl=192.168.178.55 #optional
    volumes:
      - /var/home/core/dvol/jellyfin_config:/config:Z
      - /var/home/core/dvol/jellyfin_cache:/cache:Z
      #- nfs_media:/data/media:Z
      - type: volume
        source: nfs_media
        target: /data/media
        volume:
          nocopy: true
    ports:
      - 8096:8096
      - 8920:8920 #optional
      - 7359:7359/udp #optional
      - 1900:1900/udp #optional
    restart: unless-stopped
volumes:
  nfs_media:
    driver_opts:
      type: "nfs"
      o: "addr=192.168.178.57,nolock,soft,rw"
      device: ":/mnt/tank/tank/media"
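A guess, since "Permission denied" on an SELinux-enforcing host is often a labeling problem rather than a UID problem: NFS cannot store SELinux xattrs, so the :Z relabel approach does not apply to the NFS volume, but the whole mount can be labeled at mount time with a context= option. A sketch, keeping the other options as-is:

```yaml
volumes:
  nfs_media:
    driver_opts:
      type: "nfs"
      # context= assigns a container-accessible SELinux label to the whole
      # mount, since NFS cannot store per-file labels
      o: "addr=192.168.178.57,nolock,soft,rw,context=system_u:object_r:container_file_t:s0"
      device: ":/mnt/tank/tank/media"
```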
Hi there,
is there a way to stop and start the whole Docker daemon for the purpose of backing up all volumes, preventing Docker services from changing files during the backup?
systemctl stop docker
systemctl start docker
could basically work, but it seems that docker.socket immediately keeps everything alive.
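That matches Docker's socket activation: any client touching the API socket restarts the daemon unless the socket unit is stopped as well. A sketch of the backup window:

```shell
# Stop the socket too, otherwise any API access revives dockerd
sudo systemctl stop docker.socket docker.service
# ... back up /var/lib/docker/volumes here ...
sudo systemctl start docker.socket docker.service
```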
I switched to CoreOS very recently, and I'm not a professional user regarding file permissions etc.
I shifted all of my bind-mounted Docker volumes to the CoreOS folder
/var/home/core/dvol
The owner is core:core and the chmod is 777 (for testing purposes).
Most of my containers do not start due to permission problems.
Portainer is running.
Does anyone have good advice? It's my first time running Docker on a restricted system like CoreOS.
One of my Portainer stacks looks like this below, using PUID and PGID 1000:
services:
  homer:
    image: b4bz/homer:latest
    container_name: homer
    environment:
      - PUID=1000
      - PGID=1000
    volumes:
      - "/var/home/core/dvol/homer_config:/www/assets"
    ports:
      - 8081:8080
    restart: unless-stopped
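A likely culprit on CoreOS is SELinux rather than file permissions: chmod 777 does not help, because SELinux denies the access before classic permissions are even consulted. Adding a :Z (private) or :z (shared) relabel flag to the bind mount usually fixes it; a sketch of the same stack:

```yaml
services:
  homer:
    image: b4bz/homer:latest
    container_name: homer
    environment:
      - PUID=1000
      - PGID=1000
    volumes:
      # :Z relabels the host directory so the container's SELinux context can use it
      - "/var/home/core/dvol/homer_config:/www/assets:Z"
    ports:
      - 8081:8080
    restart: unless-stopped
```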
I am currently testing a VM running CoreOS on my TrueNAS SCALE box.
Everything is fine so far, but I was not able to mount an NFS share from TrueNAS SCALE into CoreOS.
Using the mount command I get the response "No route to host".
At the moment I am relatively confident that the root cause is that TrueNAS only allows the user named root to mount the share, but the CoreOS privileged user is named "core".
Does anyone have good advice for me on how to solve this?
I decided to try CoreOS for hosting containers out a while back and ended up running a few (~6) on it.
I now want to set up some NFS mounts for a rootless Podman container, and I want to use systemd unit files to automount them as needed. But it doesn't seem like it's possible to modify an existing installation like that. Am I missing something?
It makes sense if you look at CoreOS from a "containerization" point of view. Everything is ephemeral as possible, etc.
But, I have a bunch of info present on the server that I'd like to preserve, and rebuilding with the new unit files will wipe that out. I can back it up, but if I can modify the installation, that would be easier.
As a side note, maybe I'm just not using CoreOS right? From an overall methodology, I mean. Maybe I should have more NFS mounts and use those for dev work so I can just build/test/wipe installs like the containers they host. Is there any documentation/info besides the main Fedora site that might be helpful?
Thanks!
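For what it's worth, /etc is writable on FCOS: Ignition only applies on first boot, but nothing stops you from dropping systemd units into /etc/systemd/system on a running system without reprovisioning. A sketch of an NFS mount/automount pair (server name, export, and mount point are placeholders; note the unit file name must match the Where= path):

```ini
# /etc/systemd/system/var-mnt-media.mount
[Unit]
Description=Mount NFS media share

[Mount]
What=nas.example.lan:/export/media
Where=/var/mnt/media
Type=nfs
Options=vers=4,soft

[Install]
WantedBy=multi-user.target

# /etc/systemd/system/var-mnt-media.automount
[Unit]
Description=Automount NFS media share

[Automount]
Where=/var/mnt/media

[Install]
WantedBy=multi-user.target
```

Then `systemctl daemon-reload && systemctl enable --now var-mnt-media.automount` activates the on-demand mount.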
I'm trying to use FCOS as a VM on the Proxmox hypervisor in my homelab. IMHO this should be a great way to have a secure and reproducible environment. In order to persist the container volume data to disk, I chose VirtIO-FS so the data lives directly on the host. But passing SELinux xattr metadata to the VM doesn't work well.
So: is it possible to use rootless Podman without the :z trickery and without having to worry about missing permissions inside the container?
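One commonly mentioned workaround, assuming the reduced isolation is acceptable for a homelab: turn off SELinux label separation per container, so no :z relabeling is attempted at all. Paths and image below are placeholders:

```shell
# label=disable runs the container without SELinux label separation;
# files keep their host labels and no :z/:Z relabeling is needed
podman run --rm --security-opt label=disable \
  -v /mnt/virtiofs/data:/data \
  docker.io/library/alpine:latest ls /data
```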
I've been trying to build a prototype edge device using some PC Engines boards and FCOS. Each edge device will use containerized Open vSwitch to manage the physical ports, as well as the containers' virtual ports. One of the physical ports will be left as a dedicated management port, where OVS will connect over a WireGuard VPN to a central controller in the cloud.
All other ports are bonded and trunked to a physical switch for containers to use. Configs are pushed out and managed using Ansible.
I managed to get everything running using containers from https://github.com/servicefractal/ovs, but I can’t figure out the best way to get container interfaces automatically created and attached to the OVS bridge. I created a custom OVS CNI plug-in using bash scripts, but it’s not ideal. The interfaces are created when containers get spun up, but the bash handler doesn’t seem to fire when they are shut down. It’s also problematic to match up the interface names with what’s in the OVS database afterwards. As a result, I have all of these stale OVS ports that don’t get deleted. Is there a native OVS CNI plug-in that I can use?
I am trying to figure out where to host Fedora Core OS. It should be easy to start, have a good api, and support resizing, both up and down. What do you recommend?
Here is the official list.
https://docs.fedoraproject.org/en-US/fedora-coreos/
Click on "Provisioning Machines" on the left-hand side.
Exoscale makes it quite easy to do; they automatically generate the Ignition file. But they are twice as expensive. Vultr was not too bad: all I had to provide was an Ignition file, though it's quite difficult to debug, as there are no error messages; it just fails. Most of the others are quite difficult to use: you have to first enter some shell commands, or download an image. No thank you.
Digital Ocean advertises 55 seconds to start, but no longer includes it in their list of supported OSes. You have to do something complex.
CloudSigma looks interesting. Easy to configure. Great API.
RESIZING
Vultr lets me grow my CoreOS server, but if they have to move it, the IP changes. No good.
Exoscale looks like they do it right. But they are expensive.
CloudSigma: I could not find the API.
Of course Linode lets me scale up or down, and they preserve the IP, but they do not support CoreOS.
Any advice anyone?
I am almost tempted to bring in a fiber optic cable, and host it on a home server.
This is sort of a duplicate question to one I posted on the coreos message boards as I wanted to broaden the potential input on this. The text of that post is as follows:
I understand that installing any packages for software is discouraged in favor of containers, however, I think I may be working within a unique situation. Please correct me if I’m wrong.
I have an installation of CoreOS in an air-gapped, bare-metal environment where I am attempting to install OpenShift. Installing OpenShift in an environment with no internet access requires the creation of a mirror repository, which is moved over the network boundary on physical media for the OpenShift installation to pull images from.
The problem that I'm having is this: the steps I took to create the registry on the network-facing machine required `apache2-utils`, specifically the `htpasswd` program, to establish the registry authorization. Now that I have transported the registry over to the new environment, it appears that I also need htpasswd installed on this machine in order to set up the new registry into which to upload the mirrored registry images and its corresponding authorizations.
If someone thinks there is a better solution, please let me know. Otherwise my question is this: What is the best way to package up a tool like apache2-utils and its dependencies, in order to transport and install it on a coreOS machine after the coreOS installation? Is it possible to do this with the toolbox dnf?
Some additional info/updates:
Thanks very much!
Hi guys, I need the CoreOS ISO. I'm trying to get it from the official website and not finding it. Can someone provide me with the link? Thank you very much.
Hello.
I'm trying to set up a bare-metal Kubernetes environment. For simplicity I'm now trying just a 1-node bootkube controller; however, my machine doesn't have access to the internet, so etcd-member is unable to connect to https://quay.io/v2/. The network looks as follows:
Router (192.168.0.1) <--> (192.168.0.10) master node (172.26.1.10) <--> (172.26.1.100) node1
I added a static route on the router to 172.26.1.0/24 via 192.168.0.10. On node1 I added 192.168.0.0/24 via 172.26.1.10. The provisioning node has two interfaces in different networks and IPv4 forwarding on. Dnsmasq runs on the 172.26.1.10 interface; I can check DNS resolution with dig and it works correctly. I am able to ping both interfaces of the master node, and that's all. How do I configure networking to access the internet?
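A hedged possibility: even with the static route, the upstream router may only NAT its own 192.168.0.0/24 subnet to the internet, in which case masquerading 172.26.1.0/24 on the dual-homed master node can bridge the gap. The interface name below is a placeholder for the 192.168.0.10 side:

```shell
# On the master node: forward and masquerade traffic from the internal subnet
sysctl -w net.ipv4.ip_forward=1
iptables -t nat -A POSTROUTING -s 172.26.1.0/24 -o eth0 -j MASQUERADE
```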
I'm new to CoreOS so any help would be highly appreciated.
My server is done for. Not sure why. Getting a long list of error messages and then it wants to enter emergency shell or reboot in 5. Doesn't really matter what the error is as long as I can recover my volumes. That's all I need. How can I do that from the Emergency Shell?
They're supposed to be in /var/lib/docker/volumes, right?
But there's no /lib in /var ... and there's no /home either.... Where am I? Are my volumes gone, or is something else afoot?
Please help a noob out!
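A hedged sketch of what usually helps here: in an initramfs emergency shell the real root filesystem, once mounted, sits under /sysroot, which would explain the missing /var/lib and /home. Device names below are examples:

```shell
# The real root (if it mounted before the failure) lives at /sysroot
ls /sysroot/var/lib/docker/volumes

# If /sysroot is empty, mount the ROOT partition manually first
mount /dev/sda9 /sysroot        # sda9 = ROOT on a default Container Linux disk layout
ls /sysroot/var/lib/docker/volumes
```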
This is my easy way to decide between Fedora CoreOS and Flatcar (who are carrying on the original CoreOS development). Is Fedora CoreOS a rolling release?
I noticed that the download page of Fedora CoreOS currently has a 31 prefix, but the transition isn't anywhere near complete yet (1 September 2020).
Exploring my options for migrating a commercial k8s cluster off of coreos
Did you stand up a parallel env? swap nodes with a different OS version?
I have tried to install Fedora CoreOS on both a virtual and a physical machine, and I get the following error:
GPT: Alternate GPT header not at the end of the disk
I thought it was down to the virtual machine, so I installed it on bare metal and got the same error. Any ideas?
Hi, I am new to Kubernetes and have been using minikube. Can someone help me with setting up authentication using Dex and GitHub as the IdP?
Thank-you in advance.
Hi,
since rkt is being discontinued, a worthy alternative is required, which is why I'm wondering if there is anything similar that plays nicely with systemd (Docker doesn't really) and supports something like rkt's pods. If it matters, I'm just running a little hobby setup, so no scaling is required.
Hi,
Do you know where I can find good sources about the progress of Fedora CoreOS getting released? I can't find any "progress bar" with items which have to be resolved before release.
As far as I understand, the release of FCOS is an important milestone for the release of OKD 4 (upstream OpenShift 4). That's why I'm searching for it.
Thanks and greetings,
Josef