/r/linuxadmin
Oracle Linux servers that have the SentinelOne agent installed and use Ksplice for updates get the following error:
Ksplice was unable to install this update because your running kernel has been modified from the version provided by your vendor. Please contact Oracle support for help resolving this issue.
Has anyone come across this issue / found a solution?
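For anyone triaging the same thing: Ksplice refuses to patch a kernel it considers modified, and an EDR agent that loads its own kernel module is a plausible culprit. Two quick checks (illustrative; the module name to grep for is a guess):
# a non-zero taint value means out-of-tree or unsigned modules are loaded
cat /proc/sys/kernel/tainted
# look for the agent's module among the loaded ones
lsmod | grep -i sentinel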
I am attempting to set up Landscape on my home network to test managing my machines prior to deploying it at work. However, I am being prompted to enter a domain. Unfortunately, I don't have a domain on my home network. Can anyone advise of a workaround for this?
Hi all! I have an issue with migrating the OS files from one disk to another. To give you a quick overview, I'm running VMs in Proxmox, each having its own ZFS-emulated disk, as below:
rpool/data/vm-102-disk-0 93.4G 663G 93.4G -
rpool/data/vm-102-disk-1 56K 663G 56K -
Because at some point, instead of resizing the sda2 partition, I created additional partitions; now thin provisioning is not working, and the backups take 100GB when in reality they should take ~20GB.
[root@plex ~]# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda 8:0 0 100G 0 disk
├─sda1 8:1 0 1G 0 part /boot
├─sda2 8:2 0 9G 0 part
│ ├─centos-root 253:0 0 97G 0 lvm /
│ └─centos-swap 253:1 0 1G 0 lvm [SWAP]
├─sda3 8:3 0 10G 0 part
│ └─centos-root 253:0 0 97G 0 lvm /
└─sda4 8:4 0 80G 0 part
└─centos-root 253:0 0 97G 0 lvm /
sdb 8:16 0 32G 0 disk
[root@plex ~]# vgs
VG #PV #LV #SN Attr VSize VFree
centos 3 2 0 wz--n- <98.99g 1016.00m
[root@plex ~]# lvs
LV VG Attr LSize Pool Origin Data% Meta% Move Log Cpy%Sync Convert
root centos -wi-ao---- <97.00g
swap centos -wi-ao---- 1.00g
What I'm trying to achieve is to create two more partitions on sdb: sdb1 for /boot and sdb2 for LVM, and then have / and swap on LVs there. All good so far, except that regenerating the initramfs and grub fails, because it tries to find the old centos-root LV that I'm trying to get away from. I want to regenerate the initramfs and grub only for the root LV and swap on sdb.
Please see my notes below, and thank you for taking the time to help. P.S. It has to be done this way, if possible.
parted /dev/sdb mklabel msdos
parted /dev/sdb mkpart primary xfs 1M 1024M
parted /dev/sdb mkpart primary 1024M 100%
pvcreate /dev/sdb2
vgcreate rhel /dev/sdb2
lvcreate -L 25G -n root rhel
lvcreate -L 2G -n swap rhel
mkfs.xfs /dev/sdb1
mkfs.xfs /dev/mapper/rhel-root
mkswap /dev/mapper/rhel-swap
mkdir -p /mnt/oldsystem
mkdir -p /mnt/oldsystem/boot
mkdir -p /mnt/newsystem
mkdir -p /mnt/newsystem/boot
mount /dev/mapper/centos-root /mnt/oldsystem
mount /dev/sda1 /mnt/oldsystem/boot
mount /dev/mapper/rhel-root /mnt/newsystem
mount /dev/sdb1 /mnt/newsystem/boot
rsync -avx /mnt/oldsystem/ /mnt/newsystem/
mount --bind /dev /mnt/newsystem/dev
mount --bind /proc /mnt/newsystem/proc
mount --bind /sys /mnt/newsystem/sys
chroot /mnt/newsystem
dracut --add lvm -f /boot/initramfs-$(uname -r).img $(uname -r) # this is where it fails
grub2-install /dev/sdb
grub2-mkconfig -o /boot/grub2/grub.cfg
# tried using this but it fails also
GRUB_DISABLE_OS_PROBER=true grub2-mkconfig -o /boot/grub2/grub.cfg
# update fstab
blkid
UUID=192f49c5-ba16-40e1-8800-4bf6776ea962 / xfs defaults 0 0
UUID=919067a3-5a9f-4740-b335-33d9457dd35c /boot xfs defaults 0 0
UUID=7cab3235-2803-484d-af15-760ff7b9518e swap swap defaults 0 0
exit
umount /mnt/newsystem/dev
umount /mnt/newsystem/proc
umount /mnt/newsystem/sys
umount /mnt/newsystem/boot
umount /mnt/newsystem
# shut down the VM
# in Proxmox, change the hard disk boot priority to the new disk and boot
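If anyone hits the same dracut failure: my guess is that the kernel command line inside the chroot still points at the old centos VG (CentOS 7 puts rd.lvm.lv=centos/root and rd.lvm.lv=centos/swap in GRUB_CMDLINE_LINUX), so dracut and grub keep looking for it. A sketch of the fix under that assumption, run inside the chroot before the dracut step:
# point the kernel cmdline at the new VG/LVs
sed -i 's#rd.lvm.lv=centos/root#rd.lvm.lv=rhel/root#; s#rd.lvm.lv=centos/swap#rd.lvm.lv=rhel/swap#' /etc/default/grub
# make sure /etc/fstab in the chroot uses the new UUIDs from blkid, then regenerate
dracut --force --add lvm /boot/initramfs-$(uname -r).img $(uname -r)
# if dracut still complains about the old root, a non-hostonly image sidesteps its probe of the running system
dracut --force --no-hostonly --add lvm /boot/initramfs-$(uname -r).img $(uname -r)
grub2-install /dev/sdb
grub2-mkconfig -o /boot/grub2/grub.cfg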
I'm not a Linux admin; alas, I've been handed some admin tasks, and I'm finding it hard to find decent documentation on best practices.
What would a best-practice approach be when making Linux machine images (and also Docker images) for locking down libraries?
Say, for example, that for compliance reasons it's paramount that the IT department releases a 'golden image' containing only approved libraries. These images are then released to devs so they can install their software and further process the image for customer release.
Do you run a hash check on libraries after the devs are done?
Check the signing of binaries on the final image somehow?
Do you lock it down in some user-level way that lets devs experiment without hindering them?
A custom apt mirror/proxy that only allows certain packages?
Do you lock down the devs themselves? (I reeaaaally don't want to do this.)
Any thoughts or ideas you could share?
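On the hash-check idea specifically, a minimal sketch (file locations and manifest name are illustrative): record hashes of the approved libraries on the golden image, then verify the final image against that manifest.
# on the golden image: record a baseline of the approved shared libraries
find /usr/lib /lib -type f -name '*.so*' -exec sha256sum {} + | sort -k 2 > golden-libs.sha256
# on the final image: fail if any approved library was altered or removed
sha256sum --quiet -c golden-libs.sha256 || echo 'library drift detected'
On Debian-based images, dpkg -V (or the debsums package) performs a similar integrity check against the package database.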
I am planning to go for the LPIC certification. Would Ubuntu be a good starting distro for the learning path, or what would your recommendations be? Thank you in advance.
Hi,
I am trying to find the source of this message in /var/log:
Nov 14 14:14:20 etfxsp-ob-874 NetworkManager[1744]: <info> [34245280.4964] device (usb0): interface index 87 renamed iface from 'usb0' to 'enp0s20f0u9u4c4'
Nov 14 14:14:22 etfxsp-ob-874 kernel: cdc_ether 1-8.2:2.0 enp0s20f0u9u4c4: renamed from usb0
It's not in the network device list.
# lsusb
Bus 002 Device 001: ID 134c:0003 Linux Foundation 3.0 root hub
Bus 001 Device 004: ID 323c:dd02 Dell Inc.
Bus 001 Device 001: ID 1f3d:0002 Linux Foundation 2.0 root hub
I tried looking in the udev rules and could not find any entry for it.
Can anyone point me in the right direction? Thanks in advance.
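Two diagnostics that may help (the second assumes the interface still exists under /sys). For what it's worth, enp0s20f0u9u4c4 is the predictable name systemd gives a USB CDC Ethernet gadget, and on Dell machines that is often the iDRAC's internal virtual NIC:
# show every interface the kernel knows about, including ones that are down
ip -d link show
# ask udev what it knows about the device behind the name
udevadm info /sys/class/net/enp0s20f0u9u4c4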
Hey guys, how can I do this? I knew the way before, but I forgot.
sda      8:0    0  3.5T  0 disk
├─sda1   8:1    0    1G  0 part /boot/efi
├─sda2   8:2    0    1G  0 part /boot
└─sda3   8:3    0  1.2T  0 part
On sda3 I'm using LVM.
I'm using RHEL 8.10.
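Guessing from the sizes (a 3.5T disk with only 1.2T partitioned) that the goal is to grow sda3 and the LVM stack into the unused space, a sketch under that assumption (the VG/LV names are placeholders; check yours with vgs and lvs):
growpart /dev/sda 3                        # grow the partition (cloud-utils-growpart package)
pvresize /dev/sda3                         # let LVM see the new space
lvextend -r -l +100%FREE /dev/myvg/root    # grow the LV; -r grows the filesystem too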
Hello, I'm considering a job change, so I have been scouting for open Linux sysadmin opportunities in my corner of the world. Most of the traditional Linux roles I have seen so far are in 'high performance computing' and 'trading systems'.
What kinds of questions should I expect during technical interviews for these kinds of roles? The job descriptions didn't reveal much difference from the usual 'sysadmin' role, aside from keywords such as 'high performance computing' and 'trading systems', and a few familiar terms like InfiniBand, network bonding, and some proprietary software for workload scheduling.
Thanks in advance.
I have a pfSense VM running on both VMware Workstation and Proxmox. Everything seems fine: on both setups, the WAN interface receives an IP from the local home router (using auto-bridge), and the LAN is configured. However, there's a difference in how I can access the pfSense web configurator:
I can't figure out what difference in networking behavior between VMware Workstation and Proxmox is causing this. I would like to access the pfSense web configurator from the local PC (host machine) itself in the Proxmox setup, just like in VMware Workstation.
I started with the Linux OS, then moved to the Linux command line, then Bash scripting. Then I learnt web servers (Apache HTTP Server/NGINX). Then I went to Docker and Kubernetes, and here is where I felt I was lacking and missing something. It has been a year and I still don't quite get Docker and Kubernetes, which leads me to the conclusion that I am missing some prerequisites.
With everything beyond Docker and Kubernetes, I am completely off track.
So I want to know: what is it I'm missing? Is it YAML? Is it Ansible?
Goshkan is a transparent TLS and HTTP proxy that operates on all 65535 ports, with domain regex whitelisting, payload inspection, low memory usage, and a REST API for managing domain filters.
Hey,
I'm looking for a monitoring solution for two Ubuntu servers. It seems to me there are a lot of different solutions, and I'm getting a bit lost. I'm looking to monitor things such as basic hardware usage, user logins and commands, open ports, security...
We use Entra ID a lot. I wonder if it's worth monitoring those servers with Azure Arc & Azure Monitor for simplicity's sake. It seems rather cheap for two servers. We also already use Defender for all our endpoints (except those servers).
What do you use for monitoring? Do Azure and Defender work well with Linux servers?
Forgive the ignorance, please correct anything that is wrong or fill in any gaps I'm missing.
As I understand it, you use a configuration management system like Ansible, Chef, or Puppet for the day-to-day management of your systems: updating software, firewall rules, etc. Before we can think about that, though, we have to mention provisioning tools like Terraform or OpenTofu, which initialize the virtual systems that then get managed by your config management system. My main question is: what happens before that point? I recognize that a lot of the time that responsibility is schlepped off to the cloud providers and your provisioning tool just interacts with them, but what about companies that have on-prem resources? How are those bare-metal systems bootstrapped? I imagine those companies aren't manually installing OSes prior to using a provisioning tool? The only thing I can think of would be something like booting the bare-metal servers from a PXE server containing a customized image. Am I off base?
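You're not off base: PXE booting into an unattended installer (kickstart, preseed, autoinstall) is the classic pattern, and tools like Foreman, MAAS, or Tinkerbell wrap exactly that workflow. A minimal PXE menu entry might look like this; paths, the distro, and the kickstart URL are all illustrative:
# /var/lib/tftpboot/pxelinux.cfg/default
DEFAULT install
LABEL install
  KERNEL images/rhel9/vmlinuz
  APPEND initrd=images/rhel9/initrd.img inst.ks=http://10.0.0.5/ks.cfg
The DHCP server points machines at the TFTP server, the kernel fetches the kickstart, and the box installs itself; the provisioning tool takes over once SSH is up.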
Hi,
How do two processes communicate via an SSH stream?
Well, I'm speaking of rsync via SSH. With this simple command:
rsync -avz user@address:/home/user ./backup
rsync creates an SSH session, and on the other side "rsync --server ..." is executed, which waits for protocol commands. But how does that actually work? How can the two processes communicate with each other via SSH?
To understand this, I created a simple Python script that tries to read data sent from the other side of the connection, simply by reading stdin; if it finds the "test" command, it should print a string. Here is the code:
import sys

for line in sys.stdin:
    if line[:-1] == "exit":
        exit(0)
    elif line[:-1] == "test":
        print("test received")
Running 'ssh user@address "pythonscript.py"', it does not work: there is no output from the script, as if it were not able to read from the SSH connection. Maybe the script should not read from stdin but from another "source"? I don't know...
I tried using ssh -t, which creates a pseudo-terminal, and with that method I can send commands/data to my script.
Another way I found is an SSH tunnel (port forwarding), which lets two programs talk via network sockets.
But I can't understand how rsync communicates with the server side via SSH. Is something being piped, or what? I tried strace, but it produces a huge output of what rsync and ssh do.
Any tips/help/suggestion will be appreciated.
Thank you in advance.
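For the record, here is what appears to be going on (based on how OpenSSH and rsync document their plumbing): rsync forks ssh as a child process with pipes attached to its stdin/stdout, and ssh carries those two byte streams to the remote command's stdin/stdout. No pseudo-terminal is involved, which is also why the script above seems dead: without a tty, the remote Python's stdout is block-buffered, so print output never leaves the buffer; printing with flush=True (or running python3 -u) makes the reply come back. The same plumbing can be driven by hand:
# the remote script's stdin/stdout are connected to ssh's stdin/stdout
printf 'test\nexit\n' | ssh user@address './pythonscript.py'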
I am looking for the Udemy course that is best for the RHCSA EX200 exam.
Please let me know of any course or material I should refer to for this exam.
Hi,
I'm using rsync + Python to perform backups using hardlinks (the --link-dest option of rsync). I mean: I run a first full backup, and the following backups run with --link-dest. It works very well: it does not create hardlinks against the original copy, but against the previous backup, and so on.
I'm dealing with the statement "using rsync with hardlinks, you will have a hardlink farm".
What are the drawbacks of having a "hardlink farm"?
Thank you in advance.
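For anyone unfamiliar with the pattern, a minimal sketch (paths are illustrative):
# first full backup
rsync -a /data/ /backups/2024-01-01/
# next run: unchanged files become hardlinks into the previous snapshot,
# so every snapshot looks complete but only changed files consume space
rsync -a --link-dest=/backups/2024-01-01/ /data/ /backups/2024-01-02/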
Hi. I wasn't sure which subreddit would be most appropriate, and where there might be enough users to get some insight, but I'll try my luck here!
For quick context: I'm a developer, with broad rather than deep experience. I've maintained and developed infrastructure, both cloud and non-cloud ones. Mainly with Linux servers.
So, the issue: One client noticed they had to restart our product every few days, as it ran out of file handles.
In subsequent load tests, we noticed that under some traffic patterns, some sockets and their associated connections are left in TIME_WAIT on one side, while on the other side the connection stays in ESTABLISHED. While ESTABLISHED, that side sends keepalive ACK packets, and each one resets the TIME_WAIT MSL timer.
I was a little surprised to find that the TIME_WAIT timer resets on traffic. It seems this is hard-coded behavior in the Linux kernel and cannot be modified.
On further testing, the trigger seems to be SYN cookies being enabled; this article describes what looks like the same problem: https://medium.com/appsflyerengineering/the-story-of-the-tcp-connections-that-refused-to-die-ee1726615d29
We can fix this for now by disabling SYN cookies and/or tuning the keepalive values, but it led me to another realization: couldn't a misbehaving client, whether due to a bug or deliberately as a form of DoS attack, create a similar situation on purpose?
I suppose the question, then, is: are there fairly standard ways of, e.g., cleaning up sockets in the active-close state when file handles are close to exhaustion? What kinds of strategies are common for dealing with these sorts of situations?
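For reference, these are the knobs involved; the values below are illustrative, not recommendations:
# disable SYN cookies (the trigger in our tests, per the linked article)
sysctl -w net.ipv4.tcp_syncookies=0
# tighten keepalives so a half-dead peer is detected and torn down sooner
sysctl -w net.ipv4.tcp_keepalive_time=300
sysctl -w net.ipv4.tcp_keepalive_intvl=30
sysctl -w net.ipv4.tcp_keepalive_probes=3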
Hi all, I recently installed Ubuntu Server 24.04.1 LTS on an old computer and can't seem to connect to GitHub at all. I can't use SSH or HTTPS. DNS seems to be working fine, because the IP address it resolves responds when I ping it from other computers.
I'm using NetworkManager, as that was the only way I could get my old WiFi card to work.
Here's a screenshot of my firewall status:
Thanks in advance for any help.
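A few checks that separate DNS, routing, and port filtering (github.com also serves SSH on port 443 via ssh.github.com, which helps when outbound 22 is blocked):
# can we reach GitHub's HTTPS and SSH ports at all?
nc -vz github.com 443
nc -vz github.com 22
# GitHub's SSH test endpoint; success prints a greeting and exits
ssh -T git@github.com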
Hi,
Just wanted to ask: I have a 100GB OS disk with an XFS filesystem. Here is the setup:
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda 8:0 0 100G 0 disk
├─sda1 8:1 0 500M 0 part /boot/efi
├─sda2 8:2 0 1G 0 part /boot
└─sda3 8:3 0 98.5G 0 part
├─Vol00-LVSlash 253:0 0 20G 0 lvm /
├─Vol00-LVHome 253:2 0 10G 0 lvm /home
├─Vol00-LVLog 253:3 0 10G 0 lvm /var/log
└─Vol00-LVVar 253:4 0 10G 0 lvm /var
/dev/sda3 still has 48.5GB of free space, and all filesystems use less than 25% of their space.
Is it possible to clone this to a 50GB or 60GB disk? If not, what are my options?
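One constraint worth knowing up front: XFS filesystems cannot be shrunk. But the LVs here only total 50G of the 98.5G PV, so one route is to shrink the PV and the partition rather than any filesystem, then clone. A sketch under the assumption that pvs -v shows no extents allocated beyond the new boundary (pvmove can relocate them if there are); take a backup first:
pvresize --setphysicalvolumesize 52G /dev/sda3   # shrink the PV inside sda3
parted /dev/sda resizepart 3 55GB                # then shrink sda3 itself to match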
Hi, excuse me if this is a noob question but I never had to deal with something like this.
My server (Debian 12) has two network cards, and since we are having issues with one of them after a PVE kernel upgrade, we need to test through the other one. The second card, a Realtek, does not get an interface name: I have enp6s0 for the Intel one, but nothing for the other. I can configure networks, but I have never had to deal with a NIC that has no interface name. I am unsure whether this is a hardware or BIOS problem, or some missing configuration.
# lspci | grep "Ethernet"
06:00.0 Ethernet controller: Intel Corporation I211 Gigabit Network Connection (rev 03)
07:00.1 Ethernet controller: Realtek Semiconductor Co., Ltd. RTL8111/8168/8411 PCI Express Gigabit Ethernet Controller (rev 1a)
# hwinfo --short --netcard
network:
enp6s0 Intel I211 Gigabit Network Connection
Realtek RTL8111/8168/8411 PCI Express Gigabit Ethernet Controller
# lshw -C network -short
H/W path Device Class Description
==================================================================
/0/100/1.2/0/3/0 enp6s0 network I211 Gigabit Network Connection
/0/100/1.2/0/4/0.1 network RTL8111/8168/8411 PCI Express Gigabit Ethernet Controller
What can I do to bring the interface up and get a name assigned to it?
Thanks.
(Edited to clarify the question)
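One check worth running: if no kernel driver is bound to the card, no netdev is ever created, which would match lshw listing it without a Device column (r8169 is the usual in-kernel driver for the RTL8111/8168 family; treat this as a starting point, not a diagnosis):
# is any driver bound to the Realtek NIC?
lspci -k -s 07:00.1
# if "Kernel driver in use" is missing, try loading the usual one and watch the log
modprobe r8169
dmesg | tail -n 20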
Hi,
I use gocryptfs to encrypt my backups and recently found CryFS, which seems like good software. I have tried it a bit, but not enough for a proper comparison. It seems fast, like gocryptfs. It does not reveal file sizes, because it stores everything in fixed-size "blocks", but it creates many more files than gocryptfs. gocryptfs, on the other hand, updates its encrypted files in place as data changes, so when syncing to a cloud service I could end up resyncing a very big chunk of data for a single file modification. Other differences don't come to mind.
Do you use CryFS, and in what ways is it better than gocryptfs?
Thank you in advance
I set the screen to auto-lock in the settings (employer workstation, it's required), but most of my work is still in the terminal, and the screen lock seems to ignore me typing and running things there. I have to jiggle the mouse every so often or the screen blanks and locks.
I'm using the default GNOME/Wayland for the workstation. Is there a setting buried somewhere in /etc that the screen lock uses to determine what inputs constitute "activity"?
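Not an answer to why keyboard input is being ignored (under Wayland, GNOME's idle tracking lives in mutter, not in anything under /etc), but for reference these are the knobs GNOME consults:
# seconds of idle before the screen blanks; 0 disables blanking entirely
gsettings get org.gnome.desktop.session idle-delay
gsettings set org.gnome.desktop.session idle-delay 900
# whether, and how soon after blanking, the lock engages
gsettings get org.gnome.desktop.screensaver lock-enabled
gsettings get org.gnome.desktop.screensaver lock-delay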
Revolutionize Your DevOps Workflow! 💥
Tired of drowning in unstructured text data? 🌊 Introducing Nushell and Jc, two game-changing tools that will transform the way you work with data! 🔥
Nushell: The Modern Marvel 🤖 A shell whose built-in commands emit structured data instead of plain text. 💡 Say goodbye to tedious text processing!
Jc: The JSON Converter 📈 Converts legacy Linux command output into JSON format. Simplify complex tasks and collaborate more effectively! 🤝
Benefits Are Endless! 🌈
Gain efficiency, simplify scripting, improve collaboration, and reduce errors with Nushell and Jc.
Read the Full Article Here: https://cloudnativeengineer.substack.com/p/powerful-command-line-tools-for-devops 📄
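A taste of what this looks like in practice (the jc invocation follows its documented dig parser; the jq filter assumes at least one answer record):
# jc turns classic command output into JSON you can query with jq
dig example.com | jc --dig | jq -r '.[0].answer[0].data'
# in Nushell, ls already returns a table you can filter structurally
ls | where size > 1mb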
I've been working in tech as a support engineer for (almost) two years, and now I feel like doing a project in CI/CD.
In my current company, CI/CD isn't implemented; it's done manually. (I feel like that's the case, but I am not sure, lol.)
I know code is put in GitLab, then it's built in Jenkins, then it's pushed to a Harbor image registry, then it's deployed on Kubernetes. (That's all I know as a support engineer, since the DevOps team does everything.)
I'd like someone to guide me through a complete end-to-end CI/CD project. I'd be grateful if you could recommend some paid courses from any platform, as learning by doing projects is the best way to learn.
Edit: I just installed Jenkins on my Linux server. Now I want to write some small code, host it on a self-hosted GitLab server (on the same Linux server), then do CI with Jenkins.
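The flow described above (GitLab → Jenkins → Harbor → Kubernetes) boils down to stages like the following; hostnames and the app name are placeholders, and in Jenkins each line would become a step in a pipeline job:
git clone http://gitlab.local/me/myapp.git && cd myapp          # checkout
docker build -t harbor.local/library/myapp:$BUILD_NUMBER .      # build the image
docker push harbor.local/library/myapp:$BUILD_NUMBER            # push to the registry
kubectl set image deployment/myapp myapp=harbor.local/library/myapp:$BUILD_NUMBER   # deploy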