/r/linuxadmin
Do I just mount the NFS directory at /mnt/maildir and set the mail location to /mnt/maildir, or are there additional configurations needed?
sudo mount -t nfs -o sec=krb5 mailnfsstorage.com:/var/nfs/share /mnt/maildir
mail_location = maildir:/mnt/maildir
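For context, here is a minimal sketch of the NFS-related Dovecot settings that usually go along with that mail_location (these are assumptions beyond the original post; check the Dovecot NFS documentation for your version):
# 10-mail.conf (sketch; the %u suffix is an assumption for per-user maildirs)
mail_location = maildir:/mnt/maildir/%u
mmap_disable = yes        # don't mmap index files that live on NFS
mail_fsync = always       # flush writes so other NFS clients see them
mail_nfs_storage = yes    # flush NFS caches for mail files when needed
mail_nfs_index = yes      # same for index files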
I can't snmpwalk from a remote server. A local snmpwalk works. There is no routing issue, no firewall between the servers, and no local firewalls. It doesn't even answer a host in the same subnet.
The snmpd service is bound to 0.0.0.0:161/udp:
[root@phone snmp]# netstat -tulpn | grep snmpd
tcp 0 0 127.0.0.1:199 0.0.0.0:* LISTEN 1406689/snmpd
udp 0 0 0.0.0.0:161 0.0.0.0:* 1406689/snmpd
Command used on the remote server:
snmpwalk -v2c -c public x.x.x.x
snmpd.conf:
agentAddress udp:161
rocommunity public
tcpdump only shows the requests; snmpd does not send replies.
[root@phone ~]# tcpdump -i any port 161
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on any, link-type LINUX_SLL (Linux cooked v1), capture size 262144 bytes
16:56:17.685107 IP 192.168.0.1.52935 > 192.168.0.2.snmp: GetNextRequest(25)
16:56:18.686072 IP 192.168.0.1.52935 > 192.168.0.2.snmp: GetNextRequest(25)
16:56:19.687226 IP 192.168.0.1.52935 > 192.168.0.2.snmp: GetNextRequest(25)
16:56:20.688093 IP 192.168.0.1.52935 > 192.168.0.2.snmp: GetNextRequest(25)
16:56:21.689301 IP 192.168.0.1.52935 > 192.168.0.2.snmp: GetNextRequest(25)
16:56:22.690175 IP 192.168.0.1.52935 > 192.168.0.2.snmp: GetNextRequest(25)
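A debugging sketch (not from the original post): run snmpd in the foreground with only this config and check for host-level drops such as reverse-path filtering or SELinux denials:
snmpd -f -Lo -C -c /etc/snmp/snmpd.conf   # foreground, log to stdout, ignore other config files
sysctl net.ipv4.conf.all.rp_filter        # strict rp_filter can drop replies on multi-homed hosts
getenforce                                # if SELinux is enforcing, check audit.log for snmpd denials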
Title. I am running Postgres 15, by the way. I just wanted to ask the experienced folks here whether it actually matters. Would this non-default configuration cause any issues?
I could change it back to the default, but that would probably incur downtime, since I assume I would have to restart the running DB service. Any suggestions?
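As a rough check (a sketch, not specific to whatever parameter the title refers to), pg_settings shows which non-default parameters need a restart versus only a reload:
# context = 'postmaster' means the setting only takes effect after a restart;
# most other contexts can be applied with pg_reload_conf() or systemctl reload
psql -c "SELECT name, setting, context, pending_restart FROM pg_settings WHERE source NOT IN ('default', 'override');"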
I have a Debian server running on VMware. I'm running low on space on a data partition. I want to expand the partition but have a couple of questions. The results of lsblk:
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda 8:0 0 150G 0 disk
└─sda1 8:1 0 150G 0 part /
sdb 8:16 0 60G 0 disk
└─sdb1 8:17 0 60G 0 part /home
sdc 8:32 0 190G 0 disk
├─sdc1 8:33 0 165G 0 part /var/domain/data
└─sdc2 8:34 0 25G 0 part [SWAP]
sr0 11:0 1 1024M 0 rom
Results of fdisk on /dev/sdc:
Disk /dev/sdc: 190 GiB, 204010946560 bytes, 398458880 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0x1c16eed6
I have to expand the /dev/sdc1 partition, but the SWAP partition starts right after it. My process was going to be:
1. Increase the size of the virtual disk (/dev/sdc) from the vSphere interface.
2. parted /dev/sdc and then resizepart 1 100%
3. resize2fs /dev/sdc1
Would the above work? Or do I need to first execute swapoff /dev/sdc2, then use fdisk to delete /dev/sdc2, resize /dev/sdc1, create the swap partition again using fdisk, initialize it using mkswap /dev/sdc2, and turn swap back on using swapon /dev/sdc2?
If I turn swap off, would the system crash? During off hours it uses around 3 GB of swap space. Also, do I have to use a live CD for this?
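For what it's worth, a rough sketch of the delete-and-recreate-swap variant (assumptions: /dev/sdc holds no other partitions, the filesystem on sdc1 is ext4 and can be grown while mounted, and swap is referenced in /etc/fstab; since / lives on sda, no live CD should be needed):
swapoff /dev/sdc2
parted /dev/sdc          # interactively: rm 2, resizepart 1 <new end>,
                         #   mkpart primary linux-swap <new end> 100%, quit
partprobe /dev/sdc
resize2fs /dev/sdc1      # grow the mounted ext4 filesystem to the new partition size
mkswap /dev/sdc2         # note: this assigns a new UUID
swapon /dev/sdc2
# update /etc/fstab if swap is referenced by UUID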
Hello,
I don't know if this is the right sub.
I need to deploy Debian to multiple fresh machines with unformatted SSDs. (I have one machine already formatted with everything installed.)
How can I do that quickly, with the least manual intervention?
Thanks for the help.
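One common low-touch approach is a preseeded network install; a minimal sketch (all values are placeholders, not taken from the original post) that the installer can fetch with auto url=http://<server>/preseed.cfg:
d-i debian-installer/locale string en_US.UTF-8
d-i keyboard-configuration/xkb-keymap select us
d-i netcfg/choose_interface select auto
d-i mirror/http/hostname string deb.debian.org
d-i mirror/http/directory string /debian
d-i passwd/root-password-crypted password <crypted-hash>
d-i partman-auto/method string regular
d-i partman-auto/choose_recipe select atomic
d-i partman-partitioning/confirm_write_new_label boolean true
d-i partman/choose_partition select finish
d-i partman/confirm boolean true
d-i partman/confirm_nooverwrite boolean true
d-i pkgsel/include string openssh-server
d-i grub-installer/only_debian boolean true
d-i finish-install/reboot_in_progress note
Replicating the package set and configuration from the already-built machine can then be scripted over SSH; cloning the disk of the reference machine (for example with Clonezilla) is the other common route.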
I have a couple of VPSes with a small SSD (8 to 20 GB) for the OS and a bigger HDD for storage (2 TB or more).
I usually install AlmaLinux 9 with LUKS FDE via the graphical installer. When it comes to storage, I select both disks and choose automatic partitioning.
The installer creates LVM that spreads across both disks: something like /boot on the SSD at 1 GB, / at 35 GB spread between the remaining SSD and some of the HDD, and /home on the HDD.
Is this OK, or should I partition the SSD and HDD manually? If the latter, what would be the recommended partitioning strategy?
I prefer LUKS-based full disk encryption on all the storage.
What's the best approach?
Thanks.
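One manual layout people often use is one LUKS container and one volume group per disk, so / never spans onto the HDD; a rough sketch (device names and sizes are assumptions, and the second container needs an /etc/crypttab entry or keyfile to unlock at boot):
# SSD: small unencrypted /boot partition, rest as LUKS -> vg_os -> /
cryptsetup luksFormat /dev/vda2
cryptsetup open /dev/vda2 crypt_os
pvcreate /dev/mapper/crypt_os
vgcreate vg_os /dev/mapper/crypt_os
lvcreate -l 100%FREE -n root vg_os
# HDD: LUKS -> vg_data -> /home (or a general data LV)
cryptsetup luksFormat /dev/vdb1
cryptsetup open /dev/vdb1 crypt_data
pvcreate /dev/mapper/crypt_data
vgcreate vg_data /dev/mapper/crypt_data
lvcreate -l 100%FREE -n home vg_data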
Hi, moving from CentOS 7 to RHEL 9 I've noticed this:
In CentOS 7 I have the main interface with an IP plus multiple floating IPs (for convenience let's call them ip3/ip4).
ip3 and ip4 receive external requests, and there's a rule like this:
centos 7 rule : rule family="ipv4" destination address="ip3" forward-port port="80" protocol="tcp" to-port="8089"
This works fine; the request is correctly handled on ip3.
In RHEL 9 the request to ip3 is handled by the main IP and not by ip3, so I have to add this firewalld rule:
rhel rule : rule family="ipv4" destination address="ip3" forward-port port="80" protocol="tcp" to-port="8089" to-addr="ip3"
Is there a reason for this? I mean, the firewalld versions are 0.6 and 1.2. Is there a difference in how the two versions handle these requests, or am I missing a configuration?
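For reference, a sketch of adding the RHEL 9 variant persistently (the zone name is an assumption):
firewall-cmd --permanent --zone=public --add-rich-rule='rule family="ipv4" destination address="ip3" forward-port port="80" protocol="tcp" to-port="8089" to-addr="ip3"'
firewall-cmd --reload
The jump from firewalld 0.6 to 1.x also changed the default backend from iptables to nftables, which is the usual suspect for behavioural differences like this.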
I got my first sysadmin job ~6 months ago.
Everyone in our IT department hates Linux for some reason, so we're primarily a Windows shop. Full Azure environment too. Nothing on-prem anymore.
So I happily volunteered to take ownership of the Linux servers since no one else likes Linux and the previous guy who owned them quit.
We only have 4 debian servers though.
One of them is a log parser for MS Defender and the other three are for one BI app we use: BIDev, BItest and BIprod.
The existing infra is already set up for them so I don't really have anything to do with them other than ensure uptime and patch them once a month.
We don't have anything like kubernetes or ansible set up. No business justification to do so with only 4 servers.
I know enough Linux to pass an entry-level cert like CompTIA Linux+, but not the RHCSA, so I was hoping to learn something here, but that doesn't seem likely.
I'm implementing a bare metal restore method for my laptop (ReaR) and - well, the title says it all.
What do you exclude from your backup?
My laptop is Debian 12 in case that matters, but the question is meant more in a generic way.
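Not an authoritative list, but a sketch of the kind of excludes that often go in /etc/rear/local.conf (paths are examples only; ReaR already excludes some pseudo-filesystems by default):
BACKUP_PROG_EXCLUDE=( "${BACKUP_PROG_EXCLUDE[@]}"
  '/tmp/*' '/var/tmp/*' '/var/cache/*' '/var/crash/*'
  '/home/*/.cache/*' '/home/*/Downloads/*' '/swapfile' )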
So, I have installed Postgres with the package manager and it does Postgres stuff. One of those things is a cron job that creates an automatic backup of the database. Now I would like to upload that backup file to another location (using rclone in this case). I know I can do it, but should I do it?
Or in other words: should I give a user that was created automatically for a specific job an extra task, or should I create a new user for this?
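If you go the separate-user route, a sketch of what that might look like (the user name, dump path, and rclone remote are all hypothetical):
useradd --system --create-home --shell /usr/sbin/nologin pg-upload
setfacl -R -m u:pg-upload:rX /var/backups/postgresql   # read-only access to the existing dumps
setfacl -d -m u:pg-upload:rX /var/backups/postgresql   # and to future files in that directory
# crontab -u pg-upload -e:
#   30 3 * * * rclone copy /var/backups/postgresql remote:pg-backups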
I have a problem with ID mapping in Proxmox 8.2 (fresh install). I know that on the host I had to look at these two files.
I think I can use the ID 165536 or 165537 to map my user "santiago" in the container to the same-named user on my host. In the container, I executed 'id santiago', which returns: uid=1000(santiago) gid=1000(santiago) groups=1000(santiago),27(sudo),996(docker)
So, in my container I set up this configuration:
[...]
mp0: /spatium-s270/mnt/dev-santiago,mp=/home/santiago/coding
lxc.idmap: u 1000 165536 1
lxc.idmap: g 1000 165536 1
But the error I get is:
lxc_map_ids: 245 newuidmap failed to write mapping "newuidmap: uid range [1000-1001) -> [165536-165537) not allowed": newuidmap 5561 1000 165536 1
lxc_spawn: 1795 Failed to set up id mapping.
__lxc_start: 2114 Failed to spawn container "100"
TASK ERROR: startup for container '100' failed
Please help. I'm losing my mind.
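For reference, a sketch of the complete mapping pattern this usually needs (assuming an unprivileged container started by root, and that the two host files mentioned above are /etc/subuid and /etc/subgid; the error means root is not allowed to map the 165536 range, so those files need to cover it):
# /etc/subuid and /etc/subgid on the host must include the IDs used below, e.g.:
#   root:100000:65536
#   root:165536:65536
# container config: map the whole 0-65535 range, with uid/gid 1000 singled out
lxc.idmap: u 0 100000 1000
lxc.idmap: g 0 100000 1000
lxc.idmap: u 1000 165536 1
lxc.idmap: g 1000 165536 1
lxc.idmap: u 1001 101001 64535
lxc.idmap: g 1001 101001 64535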
I never knew this was possible, but I found two systems in my network that have identical UUIDs. The question now is: is there an easy way to change the UUID returned by dmidecode?
I've been using that UUID as a unique identifier in our asset system, but if I can find two systems with identical UUIDs, that throws a wrench in the whole system and I'll have to find a different way of doing it.
TIA
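A small sketch of the identifiers in play (no claim that the SMBIOS UUID itself can be changed without vendor tooling):
dmidecode -s system-uuid               # the duplicated SMBIOS UUID
cat /sys/class/dmi/id/product_serial   # serial number, another hardware-level key
cat /etc/machine-id                    # per-install ID; regenerate with systemd-machine-id-setup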
Hello,
I've created the following filter in syslog-ng:
filter f_not_dns {
not match("1.1.1.1:53" value("MESSAGE"));
not match("1.0.0.1:53" value("MESSAGE"));
not match("8.8.8.8:53" value("MESSAGE"));
not match("8.8.4.4:53" value("MESSAGE"));
not match("172.16.50.246:53" value("MESSAGE"));
not match("208.67.222.222:53" value("MESSAGE"));
not match("208.67.220.220:53" value("MESSAGE"));
not match("[2620:119:35::35]:53" value("MESSAGE"));
not match("[2620:119:53::53]:53" value("MESSAGE"));
not match("[2606:4700:4700::1001]:53" value("MESSAGE"));
not match("[2606:4700:4700::1111]:53" value("MESSAGE"));
not match("[2001:4860:4860::8844]:53" value("MESSAGE"));
not match("[2001:4860:4860::8888]:53" value("MESSAGE"));
};
and then created a log block:
log {
source(s_network);
filter(f_not_dns);
destination(d_qfiber);
};
It seems that I can't filter IPv6 addresses, since I keep seeing them in the log:
Oct 25 23:22:19 172.16.50.1 firewall,info forward: in:vLAN50-Main out:WAN-HOTNet, connection-state:new src-mac ma:c0:ad:dr:es:s0, proto UDP, [2a00:0000:0000:0:ffff:ffff:ffff:ffff]:47173->[2001:4860:4860::8844]:53, len 68
Any idea why?
Thank you!
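A guess, sketched below: match() defaults to PCRE, so the square brackets in the IPv6 patterns are parsed as a character class rather than literal brackets, while the dotted IPv4 patterns still happen to match. Two possible forms to try (verify the type()/flags() options against your syslog-ng version):
not match("\\[2001:4860:4860::8844\\]:53" value("MESSAGE"));                               # escape the brackets for PCRE
not match("[2001:4860:4860::8844]:53" value("MESSAGE") type(string) flags(substring));     # or plain substring matching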
Seamless!
My homelab BIND DNS master is up and running after two major OS upgrades, thanks to following this guide. I had my doubts, given past failures with in-place upgrades, but this time the process was surprisingly smooth and easy.
What a start to the weekend!
Hi.
I am having trouble locating where my disk space is disappearing to. Since the beginning of the month about 70 GB (2% of 3.6 TB) has disappeared. You can see from the graph that it's probably some logs, but nowhere on the drive is there a directory that takes up more than 3 GB, except for one, but there the file size doesn't change.
The systemd journal is limited to 1 GB, so that's not it.
The only directory with a size larger than 3 GB is the qemu virtual machine disk directory. However, the size of the disk files does not change.
I also checked for open descriptors for deleted files, but again - that's not it.
I'm running out of ideas on how to go about this, perhaps you can suggest something?
Here is some df and du output:
# df -h
Filesystem Size Used Avail Use% Mounted on
udev 16G 0 16G 0% /dev
tmpfs 3.2G 1.0M 3.2G 1% /run
/dev/mapper/LVM_group-root 3.6T 3.3T 159G 96% /
tmpfs 16G 0 16G 0% /dev/shm
tmpfs 5.0M 0 5.0M 0% /run/lock
/dev/md0 462M 108M 326M 25% /boot
/dev/sda1 93M 5.9M 87M 7% /boot/efi
/dev/sdb1 220G 11G 197G 6% /mnt/ssd
tmpfs 3.2G 0 3.2G 0% /run/user/0
# du -shx /*
0       /bin
108M    /boot
0       /dev
6.2M    /etc
24K     /home
0       /initrd.img
0       /initrd.img.old
0       /lib
0       /lib64
16K     /lost+found
8.0K    /media
8.0K    /mnt
4.0K    /opt
0       /proc
752K    /root
1.0M    /run
0       /sbin
4.0K    /srv
0       /sys
40K     /tmp
3.1G    /usr
3.3T    /var
0       /vmlinuz
0       /vmlinuz.old
# du -shx /var/*
2.1M    /var/backups
404M    /var/cache
3.3T    /var/lib
4.0K    /var/local
0       /var/lock
1.1G    /var/log
4.0K    /var/mail
4.0K    /var/opt
0       /var/run
20K     /var/spool
20K     /var/tmp
# du -shx /var/lib/*
135M    /var/lib/apt
8.0K    /var/lib/aspell
8.0K    /var/lib/dbus
4.0K    /var/lib/dhcp
24K     /var/lib/dictionaries-common
30M     /var/lib/dpkg
24K     /var/lib/emacsen-common
1.4M    /var/lib/fail2ban
12K     /var/lib/grub
3.4M    /var/lib/ispell
3.3T    /var/lib/libvirt
8.0K    /var/lib/logrotate
4.0K    /var/lib/machines
4.0K    /var/lib/man-db
4.0K    /var/lib/misc
4.0K    /var/lib/os-prober
28K     /var/lib/pam
28K     /var/lib/polkit-1
4.0K    /var/lib/portables
4.0K    /var/lib/private
4.0K    /var/lib/python
12K     /var/lib/sgml-base
4.0K    /var/lib/shells.state
22M     /var/lib/smartmontools
8.0K    /var/lib/sudo
4.0K    /var/lib/swtpm-localca
456K    /var/lib/systemd
100K    /var/lib/ucf
8.0K    /var/lib/vim
16K     /var/lib/xml-core
# du -shx /var/lib/libvirt/*
4.0K    /var/lib/libvirt/boot
3.3T    /var/lib/libvirt/images
132K    /var/lib/libvirt/qemu
4.0K    /var/lib/libvirt/sanlock
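One thing worth checking, given that /var/lib/libvirt/images holds nearly everything: sparse or qcow2 images grow in allocated blocks even though the file size shown by ls stays constant. A sketch of how that might be verified (the image name is hypothetical):
cd /var/lib/libvirt/images
ls -lh                               # apparent file sizes (won't change for a sparse/qcow2 image)
du -h --apparent-size *; du -h *     # compare apparent size vs blocks actually allocated
qemu-img info vm-disk.qcow2          # "virtual size" vs "disk size" (actual allocation)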
Testing with a TEAMGROUP MP34 4TB Gen 3 nvme:
(in contrast to Why dm-integrity is painfully slow?)
The documentation states that "bitmap mode can in theory achieve full write throughput of the device", but it might not catch errors in case of a crash. It seems to me that if you're not using ZFS/Btrfs, you might as well use dm-integrity in bitmap mode, even with its imperfect protection.
Test code:
integritysetup format --sector-size 4096 --integrity-bitmap-mode --integrity xxhash64 /dev/nvme0n1p1
integritysetup open --integrity-bitmap-mode --integrity xxhash64 /dev/nvme0n1p1 integrity_device
pvcreate /dev/mapper/integrity_device
vgcreate vg_integrity /dev/mapper/integrity_device
lvcreate -l 100%FREE -n lv_integrity vg_integrity
mkfs.xfs /dev/vg_integrity/lv_integrity
mount /dev/vg_integrity/lv_integrity /mnt/testdev
dd if=/dev/zero of=/mnt/testdev/test.dat bs=1G count=10 oflag=direct
dd if=/mnt/testdev/test.dat of=/dev/null bs=1G iflag=direct
I also tried adding LUKS on top (not using the integrity flags in cryptsetup, since it doesn't include options for hash type or bitmap mode) and got
There are also integrity options for lvcreate/lvmraid, like --raidintegrity, --raidintegrityblocksize, --raidintegritymode, and --integritysettings, which can at least use bitmap mode, and I think we can set the hash to xxhash64 with --integritysettings internal_hash=xxhash64 per the dm-integrity tunables.
One thing I'm unclear on is whether I can convert a single linear logical volume that already has integrity to raid1 with lvconvert, using the raid-specialized integrity flags. Unfortunately, I don't think lvcreate lets you create a degraded raid1 with a single device (mdadm can do this).
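For reference, a sketch of the raid1-with-integrity variants (flag availability depends on the LVM version; whether a linear LV that already carries a dm-integrity layer underneath can be converted directly is exactly the open question above, so this only shows the documented lvcreate/lvconvert paths):
# create a raid1 LV with integrity in bitmap mode from the start
lvcreate --type raid1 -m1 --raidintegrity y --raidintegritymode bitmap -L 100G -n lv_test vg_integrity
# or add integrity to an existing raid1 LV
lvconvert --raidintegrity y --raidintegritymode bitmap vg_integrity/lv_test
# convert a plain linear LV to raid1 (integrity would then be added afterwards)
lvconvert --type raid1 -m1 vg_integrity/lv_linear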
Should I disable a module in the SELinux policy if it is not being used, like sendmail or telnet for example? Or does it not matter? Or is it considered best practice for hardening?
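If you do decide to disable them, a sketch of the commands involved (module names vary by distribution and policy):
semodule -l | grep -E 'sendmail|telnet'   # list installed policy modules
semodule -d sendmail                      # disable a module; semodule -e sendmail re-enables it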
I've been building a cross-platform collection of productivity CLI utilities with these categories:
| command | description |
|-------------|-----------------------------------------------------------|
| aid http | HTTP functions |
| aid ip | IP information / scanning |
| aid port | Port information / scanning |
| aid cpu | System cpu information |
| aid mem | System memory information |
| aid disk | System disk information |
| aid network | System network information |
| aid json | JSON parsing / extraction functions |
| aid csv | CSV search / transformation functions |
| aid text | Text manipulation functions |
| aid file | File info functions |
| aid time | Time related functions |
| aid bits | Bit manipulation functions |
| aid math | Math functions |
| aid process | Process monitoring functions |
| aid help | Print this message or the help of the given subcommand(s) |
https://github.com/Timmoth/aid-cli
It's mostly something I've been building for fun but I hope others might find some of the features useful!
I would like to know how to find a server that allows me to install a Python application that needs to open the Chrome browser to open my website and perform some daily tests as if I were a user browsing it.
I have the entire system running locally, but whenever my connection drops or the power goes out, the system crashes, and when I'm not at home I can't restart it; the computer also slows down so I can't do other tasks. So I want to move this to an online server, but I don't know what requirements to look for.
I know it needs to be Linux (Ubuntu) with PHP and Python 3.11, but it also needs this user interface, and when I start talking to support no one understands what I'm talking about; when I read about servers' specs I can't find anything about it either.
I have the instructions for installing it locally (command line), so I believe it is the same as installing on a server, but the normal server for my website (Hostgator) doesn't have this.
I found some tutorials, but I'm not sure yet which server to choose that allows me to enable this, or whether there is one that already comes with it enabled to make my work easier, as I'm inexperienced with this, but I'm trying to learn because I can't afford to hire a professional to do it. I'm familiar with the classic Linux LAMP apache/php/mysql/wordpress server, with cPanel, and even with WHM (multiple cPanel accounts), root and the command line, but Python and GUIs are new to me.
https://phoenixnap.com/kb/how-to-install-a-gui-on-ubuntu
https://serverspace.io/support/help/almalinux-install-gnome/
https://wiki.crowncloud.net/?How_to_install_GUI_on_centos7
https://cloudzy.com/blog/install-gui-on-centos-7/
I don't know if it's allowed here, but if anyone can directly indicate the name of 1 or 2 servers that have this so I can compare and choose the best cost-benefit, I'd be very grateful.
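Not a hosting recommendation, but on most generic Ubuntu VPSes the GUI can be avoided entirely by running the browser headless or under a virtual framebuffer; a sketch (package names vary by release, and the script name is an assumption for your setup):
sudo apt-get install -y xvfb chromium-browser chromium-chromedriver
# headless, no display needed at all:
chromium-browser --headless --disable-gpu --dump-dom https://example.com > /dev/null
# or, if the application insists on opening a visible browser window:
xvfb-run -a python3.11 daily_tests.py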
I'm trying to do an autofs mount inside each local home directory, like /home/*/cifs, mounting a CIFS share. In principle it works fine, at least if I do a direct mount on /- with a static sun-format map.
However, I'd like to use a dynamic map in the form of a program map that echoes sun-format lines. This method works just fine for my indirect mounts.
However, autofs doesn't even try to run the program at startup for the direct mount.
If I run the program map in the shell and redirect everything into the static map file, it works. The folders are created and I can cd into them just fine, as it should be. So I know the format output by the program is correct.
I didn't find any explicit statement, on what feels like the whole internet, saying "program maps are not allowed in direct mounts". But am I correct to assume that, well, it just is that way and I should stop searching?
$ cat auto.master.d/nethomes.autofs
# uncomment one OR the other
/- /etc/auto.nethomes --timeout=300
#/- /etc/auto.nethomes.static --timeout=300
$ ls -la /etc/auto.nethomes*
-rwxr-xr-x. 1 root root 564 23. Okt 18:30 /etc/auto.nethomes
-rw-r--r--. 1 root root 339 23. Okt 18:28 /etc/auto.nethomes.static
$ cat /etc/auto.nethomes.static
/home/userA/cifs -fstype=cifs,rw,dir_mode=0700,file_mode=0600,sec=krb5i,vers=3.0,domain=OUR.AD,uid=64201234,cruid=64201234,user=userA ://home.muc.loc/home/userA
/home/userB/cifs -fstype=cifs,rw,dir_mode=0700,file_mode=0600,sec=krb5i,vers=3.0,domain=OUR.AD,uid=64201235,cruid=64201235,user=userB ://home.muc.loc/home/userB
$ automount -m
autofs dump map information
===========================
global options: none configured
Mount point: /-
source(s):
instance type(s): program
map: /etc/auto.nethomes
no keys found in map
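One way to see whether the program map is even executed for the /- mount point is to run the daemon in the foreground with debugging enabled (a sketch):
systemctl stop autofs
automount -f -v -d    # foreground, verbose, debug; watch for the direct map being read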
Hi friends, I don't know if this is the right place for this post, but I'm looking for an opportunity to get hands-on experience in the Linux administration field. I've prepared for the RHCSA and am currently preparing for the RHCE. If there are any recruiters or hiring managers here with internship or entry-level opportunities, please let me know.
I bought an Optiplex 3060 SFF and upgraded it with two 2TB HDDs to use as my new homeserver and am kinda overwhelmed and confused about redundancy options.
I will run all kinds of docker containers like Gitea, Nextcloud, Vaultwarden, Immich etc. and will store a lot of personal files on the server. OS will be Debian.
I plan to back up to an external drive once a week and perform automatic encrypted backups with Borg or Restic to a Hetzner StorageBox. I want to use some RAID1-ish setup, i.e. mirror the drives, as an extra layer of protection, so that the server can tolerate one of the two drives failing. The two HDDs are the only drives in the server, and I would like to be able to boot off either one in case the other dies. I also want to be able to easily check whether there is corrupt data on a drive.
What redundancy solution would you recommend for my situation, and, specifically, do you think ZFS's error correction is of much use/benefit for me? How much of an issue is silent data corruption generally? I do value the data stored on the server a lot. How would the process of replacing one drive differ between ext4 software RAID1 and ZFS?
I have a lot of experience with Linux in general, but am completely new to ZFS and it honestly seems fairly complicated to me. Thank you so much in advance!
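On the drive-replacement question, a rough side-by-side sketch (device and pool names are assumptions):
# mdadm RAID1 (ext4 on top), failed member /dev/sdb1 in /dev/md0:
mdadm --manage /dev/md0 --fail /dev/sdb1
mdadm --manage /dev/md0 --remove /dev/sdb1
# partition the replacement disk to match, then:
mdadm --manage /dev/md0 --add /dev/sdb1
cat /proc/mdstat                  # watch the full-disk resync
# ZFS mirror in a pool named "tank":
zpool status tank                 # also reports checksum errors (silent corruption)
zpool replace tank /dev/sdb /dev/sdc
zpool scrub tank                  # periodic end-to-end integrity check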