/r/linuxadmin

Photograph via snooOG

users voted

Expanding Linux SysAdmin knowledge

GUIDE to /r/linuxadmin:

  • Please consider that a new submission must help Linux SysAdmins
  • General blog/news/review posts belong in /r/linux
  • Articles/tutorials that simply reiterate what's in a manpage or a README, without adding significant value, are not useful
  • Inflammatory material doesn't help anyone but trolls

/r/linuxadmin aims to be a place where Linux SysAdmins can come together to get help and to support each other.

Related reddits:

Footnote:

Talk realtime on IRC at #/r/linuxadmin @ Freenode.

/r/linuxadmin

205,622 Subscribers

1

Project ideas for junior

As the title suggests, what projects can I do so I can enhance my skills in this field? Recently I had my first ever interview, it was for Junior Linux Admin position, and I’m pretty sure I failed it. Now I want to build something so I am more confident in myself and what I’m capable to do.

I was thinking about to build DOS/DDOS detection script, and something similar about this topic. Another idea of mine was to set up some kind of web server. And yes, I am using Linux😅. I want to switch to Arch (currently Ubuntu), so I’m trying setting it up on virtual machines not to break anything down.

Currently I'm working on message-exchange application over blockchain in Java. It is nothing major but helps me understand how devices are connected to each other and how they work/communicate.

What and how shall I start? All the help is welcome. Thank you🙏🏼

0 Comments
2024/05/03
19:10 UTC

1

PAM permission denied for ADS user

Edit:

Seems I got it working!
So i was reading from https://github.com/neutrinolabs/xrdp/issues/906

Adding the following two lines to sssd.conf solved it for me:

ad_gpo_access_control = enforcing
ad_gpo_map_remote_interactive = +chrome-remote-desktop

So I'm trying to get chrome-remote-destop working for ADS users. The local users are working fine but when I try to start the agent for the ADS user I get the following:

$ systemctl status chrome-remote-desktop@someaduser.service
(...)
May 03 18:12:12 nixgw01 (-desktop)[4946]: pam_sss(chrome-remote-desktop:account): Access denied for user someaduser: 6 (Permission denied)
May 03 18:12:12 nixgw01 (-desktop)[4946]: PAM failed: Permission denied
May 03 18:12:12 nixgw01 (-desktop)[4946]: chrome-remote-desktop@someaduser.service: Failed to set up PAM session: Operation not permitted
May 03 18:12:12 nixgw01 (-desktop)[4946]: chrome-remote-desktop@someaduser.service: Failed at step PAM spawning /opt/google/chrome-remote-desktop/chrome-remote-desktop: Operation not permitted
May 03 18:12:12 nixgw01 systemd[1]: chrome-remote-desktop@someaduser.service: Main process exited, code=exited, status=224/PAM
May 03 18:12:12 nixgw01 systemd[1]: chrome-remote-desktop@someaduser.service: Failed with result 'exit-code'.

The AD user can normally login through SSH.

I suspect the problem is in this part in pam.d

$ cat /etc/pam.d/chrome-remote-desktop
# Copyright 2012 The Chromium Authors
# Use of this source code is governed by a BSD-style license that can be
# found in the LICENSE file.

@include common-auth
@include common-account
@include common-password
session [success=ok ignore=ignore module_unknown=ignore default=bad] pam_selinux.so close
session required pam_limits.so
@include common-session
session [success=ok ignore=ignore module_unknown=ignore default=bad] pam_selinux.so open
session required pam_env.so readenv=1
session required pam_env.so readenv=1 user_readenv=1 envfile=/etc/default/locale

$ cat /etc/pam.d/common-account
(...)
# here are the per-package modules (the "Primary" block)
account [success=1 new_authtok_reqd=done default=ignore]        pam_unix.so
# here's the fallback if no module succeeds
account requisite                       pam_deny.so
# prime the stack with a positive return value if there isn't one already;
# this avoids us returning an error just because nothing sets a success code
# since the modules above will each just jump around
account required                        pam_permit.so
# and here are more per-package modules (the "Additional" block)
account sufficient                      pam_localuser.so
account [default=bad success=ok user_unknown=ignore]    pam_sss.so
# end of pam-auth-update config

Here is my sssd.conf:

# cat /etc/sssd/sssd.conf

[sssd]
domains = ad.domain.net
config_file_version = 2
services = nss, pam

[domain/ad.domain.net]
default_shell = /bin/bash
krb5_store_password_if_offline = True
cache_credentials = True
krb5_realm = AD.DOMAIN.NET
realmd_tags = manages-system joined-with-adcli
id_provider = ad
fallback_homedir = /home/%u@%d
ad_domain = ad.domain.net
use_fully_qualified_names = False
ldap_id_mapping = False
access_provider = ad
0 Comments
2024/05/03
16:33 UTC

1

Looking for a tutorial, ldap for ssh

Looking for a good tutorial to integrate ssh host based access with ldap using keys or certs?

0 Comments
2024/05/03
15:44 UTC

22

How do you secure passwords in bash scripts

How do you all secure passwords in bash scripts in 2024? I was reading about "pass", but found that its discontinued with epel repository.

I would like to understand and implement the best practices. Please advise

44 Comments
2024/05/03
14:51 UTC

15

Streamline SSH access to hosts

I have tired of SSH keys

I'm looking for an elegant way that will allow me to centrally manage SSH access to all our Linux hosts.

What preferred method is recommended ?

57 Comments
2024/05/03
11:13 UTC

2

Need help setting up quota system for users on Ubuntu

Hey everyone,

I'm looking to set up a quota system for each user on my Ubuntu system, and I could use some guidance.

I've been trying to enable quotas following various online tutorials, but I seem to be encountering some issues. I've edited the /etc/fstab file to include the necessary options (usrquota and grpquota), remounted the filesystem, initialized the quota database, and enabled quotas, but when I run quotacheck, it doesn't seem to detect the quota-enabled filesystem.

My goal is to enforce disk quotas for individual users to ensure fair resource allocation and prevent any single user from consuming excessive disk space.

Could someone please provide step-by-step instructions or point me to a reliable guide for setting up quotas for each user on Ubuntu? Any help or advice would be greatly appreciated!

Thank you in advance!

0 Comments
2024/05/03
10:07 UTC

0

Does exists the driver for qemu / vmware-svga for Linux ?

Hello.

I've virtualized Debian 12 on Windows 11 with qemu for Windows. The parameters that I've used to launch the vm are the following ones :

qemu-system-x86_64.exe -machine q35 -cpu kvm64,hv_relaxed,hv_time,hv_synic -m 8G \ 
-device vmware-svga,id=video0,vgamem_mb=16,bus=pcie.0,addr=0x1 \ 
-audiodev dsound,id=snd0 -device ich9-intel-hda -device hda-duplex,audiodev=snd0 \ 
-hda "I:\Backup\Linux\Debian.img" -drive file=\\.\PhysicalDrive5 \ 
-drive file=\\.\PhysicalDrive6 -drive file=\\.\PhysicalDrive8 \ 
-drive file=\\.\PhysicalDrive11 -drive file=\\.\PhysicalDrive12 \ 
-rtc base=localtime -device usb-ehci,id=usb,bus=pcie.0,addr=0x3 \ 
-device usb-tablet -device usb-kbd -smbios type=2 -nodefaults \ 
-netdev user,id=net0 -device e1000,netdev=net0,id=net0,mac=52:54:00:11:22:33 \ 
-device ich9-ahci,id=sata -bios "I:\OS\vms\qemu\OVMF_combined.fd"

Adding "-device vmware-svga,id=video0,vgamem_mb=16,bus=pcie.0,addr=0x1" to the qemu / Debian parameters will cause it won't boot. Debian VM freezes before reaching the login prompt.

I'm sure that I should install the vmware-svga driver inside the vm,but I'm not able to find it.

Does it exists ? In FreeBSD it exists and it works well.

5 Comments
2024/05/02
09:13 UTC

7

Why "openssl s_client -connect google.com:443 -tls1" fails (reports "no protocol available" and sslyze reports that google.com accepts TLS1.0?

I need to test for TLS1.0 and TLS1.1 support in a system (with RHEL 7 and RHEL 8) where I am not able to install any additional tools and has no direct internet access, so I'm trying to use only the existing openssl. I'm validating the process in another system where I can install tools and have internet access, running

openssl s_client -connect google.com:443 -tls1

I have this result:

CONNECTED(00000003)

40374A805E7F0000:error:0A0000BF:SSL routines:tls_setup_handshake:no protocols available:../ssl/statem/statem_lib.c:104:

---

no peer certificate available

But if I run

sslyze google.com

I get the following result:

COMPLIANCE AGAINST MOZILLA TLS CONFIGURATION

--------------------------------------------

Checking results against Mozilla's "MozillaTlsConfigurationEnum.INTERMEDIATE" configuration. See https://ssl-config.mozilla.org/ for more details.

google.com:443: FAILED - Not compliant.

* tls_versions: TLS versions {'TLSv1', 'TLSv1.1'} are supported, but should be rejected.

* ciphers: Cipher suites {'TLS_RSA_WITH_AES_256_CBC_SHA', 'TLS_RSA_WITH_3DES_EDE_CBC_SHA', 'TLS_RSA_WITH_AES_128_CBC_SHA', 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA', 'TLS_RSA_WITH_AES_128_GCM_SHA256', 'TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA', 'TLS_RSA_WITH_AES_256_GCM_SHA384'} are supported, but should be rejected.

Why sslyze reports that TLSv1 and TLSv1.1 are supported on google.com website and openssl s_client -connect google.com:443 -tls1 reports there is no support for TLSv1.0 (and also no support for TLSv1.1)?

Is there any other way to use openssl to validate TLS version support in a server that reports a result similar to sslyze?

Thanks!

11 Comments
2024/05/02
09:01 UTC

3

Use the same DNS for each link with Netplan

0 Comments
2024/05/01
21:26 UTC

1

Giving file permissions to an installed service

Hello,
I'm pretty new to Linux.
My server is running Debian 12 with just the command line.

I would like to know how to give a service file permissions, Specifficaly i want to give sftpgo.service permission to upload and download all files and folder in all files and folder. Now when i try to do that through the SFTPGo web client panel it says:
For example:

Unable to create directory "/home/test": permission denied

or

Unable to write file "/home/test.pdf": permission denied

All help apprieciated :)

11 Comments
2024/05/01
15:56 UTC

7

Kerberos issues, pointers for right direction appreciated

I would like to ask for some pointers from you guys on how to fix/debug/chase my issues with my Hadoop kerberos setup, as my logs are getting spammed with this error in any combination of hostnames in my cluster:

2024-04-26 12:22:09,863 WARN SecurityLogger.org.apache.hadoop.ipc.Server: Auth failed for doop3.myDomain.tld:44009 / 192.168.0.164:44009:null (GSS initiate failed) with true cause: (GSS initiate failed)

Introduction ::

I am messing around with on-premises stuff as I kind of miss it, working in cloud.

So how about creating a more or less full on-premises data platform based on Hadoop and Spark, and this time do it *right* with kerberos? Sure.

While Kerberos is easy with AD, I haven't used it in Linux. So this will be fun.

The Problem ::

Actually starting the Hadoop cluster. The Hadoop Kerberos configuration is taken from Hadoops own security guide: https://hadoop.apache.org/docs/r3.4.0/hadoop-project-dist/hadoop-common/SecureMode.html

The Kerberos settings are from various guides, and man pages.

This will focus on my namenode and datanode #3. The error is the same on the other datanodes, this is just, what I'm taking as examples.

When I start the namenode, the services actually goes up, and on namenode I get this positive entry:

2024-04-24 15:53:16,407 INFO org.apache.hadoop.security.UserGroupInformation: Login successful for user hdfs/nnode.myDomain.tld@HADOOP.KERB using keytab file hdfs.keytab. Keytab auto renewal enabled : false

And on the datanode, I get a similar one:

2024-04-26 12:21:07,454 INFO org.apache.hadoop.security.UserGroupInformation: Login successful for user dn/doop3.myDomain.tld@HADOOP.KERB using keytab file hdfs.keytab. Keytab auto renewal enabled : false

And after a couple of minutes I get hundreds of these 2 errors on all nodes:

2024-04-26 12:22:09,863 WARN SecurityLogger.org.apache.hadoop.ipc.Server: Auth failed for doop3.myDomain.tld:44009 / 192.168.0.164:44009:null (GSS initiate failed) with true cause: (GSS initiate failed)



2024-04-26 12:21:14,897 WARN org.apache.hadoop.ipc.Client: Couldn't setup connection for dn/doop3.myDomain.tld@HADOOP.KERB to nnode.myDomain.tld/192.168.0.160:8020 org.apache.hadoop.ipc.RemoteException(javax.security.sasl.SaslException): GSS initiate failed

And here an... Error? from the kerberos server log:

May 01 00:00:27 dc.myDomain.tld krb5kdc[1048](info): TGS_REQ (2 etypes {aes256-cts-hmac-sha1-96(18), aes128-cts-hmac-sha1-96(17)}) 192.168.0.164: ISSUE: authtime 1714514424, etypes {rep=aes256-cts-hmac-sha1-96(18), tkt=aes256-cts-hmac-sha384-192(20), ses=aes256-cts-hmac-sha1-96(18)}, dn/doop3.myDomain.tld@HADOOP.KERB for nn/nnode.myDomain.tld@HADOOP.KERB

It doesn't say error, listed as 'info', yet has 'ISSUE' within it.

Speaking of authtime, all servers have set up to use the KDC as NTP server, so that time drift should not be an issue.

Configuration ::

krb5.conf on KDC:

# To opt out of the system crypto-policies configuration of krb5, remove the
# symlink at /etc/krb5.conf.d/crypto-policies which will not be recreated.
includedir /etc/krb5.conf.d/
[logging]
default = FILE:/var/log/krb5libs.log
kdc = FILE:/var/log/krb5kdc.log
admin_server = FILE:/var/log/kadmind.log
[libdefaults]
dns_lookup_realm = false
dns_lookup_kdc = false
ticket_lifetime = 8766h
renew_lifetime = 180d
forwardable = true
default_realm = HADOOP.KERB
[realms]
HADOOP.KERB = {
kdc = dc.myDomain.tld
admin_server = dc.myDomain.tld
}
[domain_realm]
.myDomain.tld = HADOOP.KERB
myDomain.tld = HADOOP.KERB
nnode.myDomain.tld = HADOOP.KERB
secnode.myDomain.tld = HADOOP.KERB
doop1.myDomain.tld = HADOOP.KERB
doop2.myDomain.tld = HADOOP.KERB
doop3.myDomain.tld = HADOOP.KERB
mysql.myDomain.tld = HADOOP.KERB
olap.myDomain.tld = HADOOP.KERB
client.myDomain.tld = HADOOP.KERB

krb5.conf on clients, only change is log location, really:

# To opt out of the system crypto-policies configuration of krb5, remove the
# symlink at /etc/krb5.conf.d/crypto-policies which will not be recreated.
includedir /etc/krb5.conf.d/
[logging]
default = FILE:/var/log/krb5libs.log
kdc = FILE:/var/log/krb5kdc.log
admin_server = FILE:/var/log/kadmind.log
[libdefaults]
dns_lookup_realm = false
dns_lookup_kdc = false
ticket_lifetime = 8766h
renew_lifetime = 180d
forwardable = true
default_realm = HADOOP.KERB
[realms]
HADOOP.KERB = {
kdc = dc.myDomain.tld
admin_server = dc.myDomain.tld
}
[domain_realm]
.myDomain.tld = HADOOP.KERB
myDomain.tld = HADOOP.KERB
nnode.myDomain.tld = HADOOP.KERB
secnode.myDomain.tld = HADOOP.KERB
doop1.myDomain.tld = HADOOP.KERB
doop2.myDomain.tld = HADOOP.KERB
doop3.myDomain.tld = HADOOP.KERB
mysql.myDomain.tld = HADOOP.KERB
olap.myDomain.tld = HADOOP.KERB
client.myDomain.tld = HADOOP.KERB

Speaking of log locations, nothing is created in the folder on the clients, despite having permissions to do so:

# ls -la /var/log/kerberos/
total 4
drwxrwxr--   2 hadoop hadoop    6 Apr 22 22:08 .
drwxr-xr-x. 12 root   root   4096 May  1 00:01 ..

Klist of the namenodes keytab file, that is referenced in configuration:

# klist -ekt /opt/hadoop/etc/hadoop/hdfs.keytab
Keytab name: FILE:/opt/hadoop/etc/hadoop/hdfs.keytab
KVNO Timestamp           Principal
---- ------------------- ------------------------------------------------------
   2 04/26/2024 11:42:29 host/nnode.myDomain.tld@HADOOP.KERB (aes256-cts-hmac-sha384-192)
   2 04/26/2024 11:42:29 host/nnode.myDomain.tld@HADOOP.KERB (aes128-cts-hmac-sha256-128)
   2 04/26/2024 11:42:29 host/nnode.myDomain.tld@HADOOP.KERB (aes256-cts-hmac-sha1-96)
   2 04/26/2024 11:42:29 host/nnode.myDomain.tld@HADOOP.KERB (aes128-cts-hmac-sha1-96)
   2 04/26/2024 11:42:29 host/nnode.myDomain.tld@HADOOP.KERB (camellia256-cts-cmac)
   2 04/26/2024 11:42:29 host/nnode.myDomain.tld@HADOOP.KERB (camellia128-cts-cmac)
   2 04/26/2024 11:42:29 host/nnode.myDomain.tld@HADOOP.KERB (DEPRECATED:arcfour-hmac)
   2 04/26/2024 11:42:29 host/doop3.myDomain.tld@HADOOP.KERB (aes256-cts-hmac-sha384-192)
   2 04/26/2024 11:42:29 host/doop3.myDomain.tld@HADOOP.KERB (aes128-cts-hmac-sha256-128)
   2 04/26/2024 11:42:29 host/doop3.myDomain.tld@HADOOP.KERB (aes256-cts-hmac-sha1-96)
   2 04/26/2024 11:42:29 host/doop3.myDomain.tld@HADOOP.KERB (aes128-cts-hmac-sha1-96)
   2 04/26/2024 11:42:29 host/doop3.myDomain.tld@HADOOP.KERB (camellia256-cts-cmac)
   2 04/26/2024 11:42:29 host/doop3.myDomain.tld@HADOOP.KERB (camellia128-cts-cmac)
   2 04/26/2024 11:42:29 host/doop3.myDomain.tld@HADOOP.KERB (DEPRECATED:arcfour-hmac)
   2 04/26/2024 11:42:29 nn/nnode.myDomain.tld@HADOOP.KERB (aes256-cts-hmac-sha384-192)
   2 04/26/2024 11:42:29 nn/nnode.myDomain.tld@HADOOP.KERB (aes128-cts-hmac-sha256-128)
   2 04/26/2024 11:42:29 nn/nnode.myDomain.tld@HADOOP.KERB (aes256-cts-hmac-sha1-96)
   2 04/26/2024 11:42:29 nn/nnode.myDomain.tld@HADOOP.KERB (aes128-cts-hmac-sha1-96)
   2 04/26/2024 11:42:29 nn/nnode.myDomain.tld@HADOOP.KERB (camellia256-cts-cmac)
   2 04/26/2024 11:42:29 nn/nnode.myDomain.tld@HADOOP.KERB (camellia128-cts-cmac)
   2 04/26/2024 11:42:29 nn/nnode.myDomain.tld@HADOOP.KERB (DEPRECATED:arcfour-hmac)
   2 04/26/2024 11:42:29 dn/doop3.myDomain.tld@HADOOP.KERB (aes256-cts-hmac-sha384-192)
   2 04/26/2024 11:42:29 dn/doop3.myDomain.tld@HADOOP.KERB (aes128-cts-hmac-sha256-128)
   2 04/26/2024 11:42:29 dn/doop3.myDomain.tld@HADOOP.KERB (aes256-cts-hmac-sha1-96)
   2 04/26/2024 11:42:29 dn/doop3.myDomain.tld@HADOOP.KERB (aes128-cts-hmac-sha1-96)
   2 04/26/2024 11:42:29 dn/doop3.myDomain.tld@HADOOP.KERB (camellia256-cts-cmac)
   2 04/26/2024 11:42:29 dn/doop3.myDomain.tld@HADOOP.KERB (camellia128-cts-cmac)
   2 04/26/2024 11:42:29 dn/doop3.myDomain.tld@HADOOP.KERB (DEPRECATED:arcfour-hmac)

I naively tried to add entries for both VMs im currently talking about in the same keytab as they are mentioning each other. No difference.

Each principal is created like this, with a change of the last part for each entry obvs:

add_principal -requires_preauth host/nnode.myDomain.tld@HADOOP.KERB

On each principal in the keytab file on both mentioned VMs i run a kinit like this:

kinit -l 180d -r 180d -kt hdfs.keytab host/doop3.myDomain.tld@HADOOP.KERB

Final notes ::

I set lifetime and renewal to 180 days, as I don't always boot my server every day, and should make it easier to no have to re-init stuff. Probably not what the security team in a real production environment would be happy for.

I disable pre-auth, as I got in the kerberos logs an error, that the account needed to pre-auth, but I never found out how to actually do that.... Security guys might not be impressed by that *either*.

In my krb5.conf file, I increased ticket_lifetime = 8766h and renew_lifetime = 180d, to a year, and ~half a year. Within the max limits of the Kerberos documentation, but longer that default, again, as I would like to everything still work, after the VMs are not turned on for a few months.

When I run a kinit I do it on several accounts, as I have seen that in other guides. So first as the hadoop user, then as the root user, and finally as the hdfs user. In that order.

Not sure it is right.

All Hadoop users are in the group 'hadoop'. As I use Kerberos in my Hadoop cluster, the data nodes will be started as root in order to claim the low range ports that requires root privileges, and then use the application jsvc to handle over the process to what would normally be the account running the node, the hdfs account. And it does.

But I still not sure if kinit'ing so much is necessary.

I have found several links with this issue. Many is like 'Oh you should just run the kinit again' or other suggestions like 'just recreate the keytab and it works'. I have done these things several times, but not found an actual solution.

Any help is much appreciated.

EDITS:

I have tried to disable ipv6, as many threads says it helps. It does not for me.

SELinux is disabled as well.

7 Comments
2024/05/01
00:21 UTC

0

Micron 5100 SSD Trim Not Trimming?

I routinely make compressed full block based backups of a couple of Micron 5100 ECO SSDs that are NTFS formatted. In order to minimize the size of these backups I always manually run a trim on them via the command prompt (defrag X: /L) before doing the backup cause the trim should replace deleted data with zeroes which obviously compress well. However, I've been noticing that the size of these backups is growing even though the size of the content isn't which is strange. So I decided to run a test where I wrote about 100gb of data, deleted it, and then manually trimmed the data before creating a backup. Strangely the backup was 20GB larger than expected. It's like 80GB was correctly trimmed but 20GB wasn't. Anyone have any clue where and how to even start troubleshooting this? I'm well versed with Linux and I'm pretty sure the solution will require it which is why I'm asking the question here although in this case I am dealing with a NTFS filesystem that is normally connected to a Windows 10 machine.

10 Comments
2024/04/30
19:28 UTC

47

I learned a new command last night: mysqldumpslow

Mysqldumpslow is a tool to summarize slow query logs. I had been grepping and manually searching through them like a schmuck all these years

5 Comments
2024/04/30
08:32 UTC

3

MYSQL - Got Error 2003 When Running mysqldump

Hi,
I am running an automation in Crontab to dump databases from the remote server.

I have a crontab that runs mysqldump on each database.

I will explain the steps I am running in my crontab:

  1. I export every database that I have to a Txt file.
  2. I am dumping each database in a loop on this text file.
  3. in the middle of dumping, I got this error: I can't connect to MySQL server on '<IPV4>' (111) when trying to connect, and the dumping stops to backup and creates a file without size.

I tried a lot of things to resolve this error but failed.

For example, I tried reconfiguring things like 'connect_timeout' and 'wait_timeout.'
Also, I tried to put at the end of the loop a sleep command to wait until opening a new session to the DB, and it's not successful. It still doesn't back up the entire DB with the appropriate size very well.
If I dumpping a DB without the loop, it works fine.

My dump command is:

"mysqldump -u <user> --password='<pass>' -h <IPV4> --quick --skip-lock-tables --routines <db> 2> <path>dump_error_log.txt > <path>db.SQL"

Could someone please help me to fix this issue?

It's very urgent for us, and I am pretty stuck!

Thanks for all!

7 Comments
2024/04/30
07:29 UTC

1

How do I get a log message using rsyslog to be sent to a another user?

I used :omusrmsg but it’s still not being sent to the user.

2 Comments
2024/04/29
20:01 UTC

15

How do you guys make your Linux CVs?

Haven't updated my CV in 6 years, but now is the time.

Is there a CV example you guys are using?

Is everyone generating their own format and tweaking it every once in a while?

Anybody willing to share one to take some ideas?

Thanks!

28 Comments
2024/04/29
16:08 UTC

0

Monitoring Linux Authentication Logs with Vector & Better Stack

0 Comments
2024/04/29
15:47 UTC

2

Removing default repos on Kickstart.

I've managed to get OL9 provisioning from Foreman using a bootdisk method, and in %post I'm using the General Registration curl command with a self-maintained subscription-manger repo for OL9 to install from. The kickstart seems to go through fine, and the system registers with the correct Content View, however it also adds Oracle Linux public repositories. So when packages all update at the end of the provisioning, the latest packages are being pulled from the internet, rather than the Content View I've set up in Foreman.

I posted out to the Foreman Community as well, but just to ask a wider audience and see if I can get an answer sooner, I've posted here as well. I'll update if I get an answer elsewhere though. Does anyone know how to configure which repos are added during the provision?

0 Comments
2024/04/29
15:02 UTC

5

Alternative to Termius on Linux

I love Termius on Windows, it does both SSH and SFTP in a really good and clean way. However on Linux you either have to use their .deb version (im on Fedora) or the Snap version which is just terrible (crashing when opening files in sftp etc.).

Is there any alternative to Termius that works great on Linux? All I need is a program that combines both SSH and SFTP in one clean and easy to use application.

28 Comments
2024/04/29
13:06 UTC

10

SSSD: How to limit Service restart attempts (dependencies are causing infinite attempts) / Failing a service AND its dependencies?

Hello,

I've found a bit of an issue with SSSD, whereby if there is a typo in the config and SSSD fails to load, the unit will forever attempt to restart, therefore never finishing the boot process for the system.

It's more of a just-in-case thing, but I would like to limit the number of unit restart attempts as SSSD is not a requirement for the systems it's configured on, but should be considered optional.

I have tried adding the following lines to /etc/sssd/sssd.conf but this didn't work:

[Service]
StartLimitIntervalSec=5
StartLimitBurst=3

The service still attempts to restart infinitely as it is a dependency of others:

https://preview.redd.it/drujzclr2exc1.png?width=1183&format=png&auto=webp&s=08c0708def5f6b222499c7e138606bb0f868162a

Is there a way to fail all these dependencies if the SSSD service fails to load after X attempts, or am I a bit SOL here?

It should be noted that I am only doing this in case the config syntax is incorrect. If the daemon fails to connect to a particular LDAP server then SSSD gracefully fails to load anyway and the system still boots. I know the typical solution is "test your configs", but sometimes things slip through, and the solution to this could be useful to know in other situations too!

9 Comments
2024/04/29
09:41 UTC

3

389-DS with Apach Directory Studio

Hello there!

Im not having luck authenticating from an remote host onto my 389 LDAP server using the Apache DS browser.

The server is running the initial configs sugested in the documentation. it looks like this (minus the obfuscations for privacy reasons):

[general]

config_version = 2

[slapd]

root_dn = cn=Directory Manager

root_password = ****

[backend-userroot]

sample_entries = yes

suffix = dc=****, dc=com,

Im trying to authenticate with username "root" and the 'root_password', with no sucess. I get authentication errors, as if the credentials were invalid.

Should i create an user and bind the Directory Manager cn to it instead?

2 Comments
2024/04/29
05:54 UTC

0

Problems with installing crush FTP

I'm running Debian 12 with just the command line. I need help with installing Crush FTP because the one line link dosent work for me. Please help me on how to do it ;-)

7 Comments
2024/04/28
11:05 UTC

6

OOM killing fio benchmark

Hi, I am currently trying to test some ZFS configurations with fio but the OOM is killing the fio read test on some of the configs such as a 4 disk raidz2, a 4 disk raidz3 and a 6 disk raidz3. Weirdly it doesn't kill the same test in something like a 6 disk raidz2. The fio command being used is below:

fio --name=read --rw=read --size=256m --bs=4k --numjobs=16 --iodepth=16 --ioengine=libaio --runtime=60 --time_based --end_fsync=1

The system has 2GiB of memory and I am doing a 4Gb read test so that the disks are being hit and not the memory.

Does anyone know why the OOM would be killing the fio process for some of the configs but not the others? Apologies if this is a stupid question, am still trying to learn about storage.

8 Comments
2024/04/27
20:16 UTC

1

How would I log ONLY unsuccessful attempts into auth.log?

Hi I want to configure logging for authentication attempts but I only want logging for the unsuccessful attempts. From most of my research, I see that you can only do logging based on the priority set in the configuration file.

5 Comments
2024/04/26
21:44 UTC

Back To Top