/r/saltstack
Salt is an open source tool to manage your infrastructure via remote execution and configuration management.
Feel free to ask questions here, or in the Discord community at https://discord.gg/GC5U3SEF
Salt is a powerful remote execution manager that can be used to administer servers in a fast and efficient way.
Salt allows commands to be executed across large groups of servers. This means systems can be easily managed, but data can also be easily gathered. Quick introspection into running systems becomes a reality.
Remote execution is usually used to set up a certain state on a remote system. Salt addresses this problem as well, the salt state system uses salt state files to define the state a server needs to be in.
Between the remote execution system, and state management Salt addresses the backbone of cloud and data center management.
See the wiki page for more links and other resources.
You can only get Salt 3006 or newer on the Broadcom site. Where are the packages for the older versions? This is having a horrific effect on our faith in using Salt going forward.
Did anyone have archive mirrors of the previous salt versions?
How would ANYONE in Broadcom think this was a good idea?
Why should ANYONE continue using Salt?
I have to mirror Salt's repos for various reasons, but Broadcom is using JFrog Artifactory (or whatever it's called) instead of a standard repository structure.
Any insight on how to rclone from there?
Or am I stuck mirroring it myself with createrepo before my pulp server pulls it?
I’m rebuilding my homelab and learning SaltStack as well. I want to automate everything but there is one thing that bothers me and I haven’t found a solution in the docs.
Let's say that I need a proxy server, but it depends on a DNS resolver. The DNS resolver, in turn, depends on the proxy server to install Unbound.
Is it possible to do something like this, and how?
If someone is willing to point to some “production ready” examples on GitHub, I would be thankful.
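For what it's worth, circular dependencies usually dissolve if you split each service into install/run phases so the requisite graph becomes linear. A hedged sketch with invented state names (assuming package installs only need working repos, not each other's services; if the Unbound package can only be fetched *through* the proxy, you'd bootstrap with a temporary resolver or a direct mirror first):

```yaml
# Phase 1: install both packages; neither install needs the other's service
unbound_pkg:
  pkg.installed:
    - name: unbound

proxy_pkg:
  pkg.installed:
    - name: squid

# Phase 2: start the resolver first, then the proxy that depends on it
unbound_service:
  service.running:
    - name: unbound
    - require:
      - pkg: unbound_pkg

proxy_service:
  service.running:
    - name: squid
    - require:
      - pkg: proxy_pkg
      - service: unbound_service
```

The key idea is that "A depends on B" at the *service* level rarely means A's *package* depends on B, so the cycle only exists if you model each server as one monolithic state.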
I currently have a /srv/salt/base/top.sls
that looks like:
base:
  '*':
    - motd
    - lnav
Now, I have a state called myteam-ssh-keys that should be targeted to minions having a specific grain (managed_by) equal to a specific value (myteam).
How can I update the top.sls to apply myteam-ssh-keys only to the targeted minions?
The overall goal is to end up putting a cron job that runs salt '*' state.apply regularly to keep the minions in sync.
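A hedged sketch of grain-based targeting in the top file (match: grain is the documented top-file matcher; the grain name and value are the ones described above):

```yaml
base:
  '*':
    - motd
    - lnav
  'managed_by:myteam':
    - match: grain
    - myteam-ssh-keys
```

With this in place, a plain state.apply (highstate) only delivers myteam-ssh-keys to minions whose managed_by grain equals myteam.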
I'm trying to use Salt lgpo.set to configure the Windows 'Attack Surface Reduction Rules'. This setting requires a list with values. I have successfully configured other lists without values, e.g.:
Local_Policies:
  lgpo.set:
    - computer_policy:
        Access this computer from the network:
          - Administrators
          - Remote Desktop Users
How do I include values in the list items?
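I can't verify the exact element names for the ASR policy, but in win_lgpo, list policies whose items carry values are generally expressed as a mapping (item: value) rather than a plain list. A hedged sketch — the policy element name and rule GUIDs below are illustrative placeholders, and the real names come from the ADMX template:

```yaml
asr_rules:
  lgpo.set:
    - computer_policy:
        'Configure Attack Surface Reduction rules':
          'Set the state for each ASR rule':
            'rule-guid-1': '1'
            'rule-guid-2': '2'
```

Running lgpo.get_policy_info against the policy name should reveal the actual element names and expected value types before committing to a state.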
well, what the title says. If I have passwords or keys defined in `/etc/salt/master` do they have to be in plain text? I'm trying to define external pillar source using hashicorp vault, which works pretty well, but in a master config file I need to define the app role secret id. I would rather the secret id not be in scm.
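One option: the master config is merged from include files, so the secret can live in a root-only file that never enters SCM. A sketch (the master already honours default_include for master.d/*.conf; the filename and the exact vault key layout are assumptions that depend on your vault module version):

```yaml
# /etc/salt/master -- tracked in SCM, no secrets; the explicit include
# is redundant if default_include already covers master.d/*.conf
include:
  - /etc/salt/master.d/vault-secrets.conf
```

Then /etc/salt/master.d/vault-secrets.conf holds only the vault auth section with the AppRole secret_id, owned by root with mode 0600 and listed in .gitignore.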
Hi everyone,
I'm new to Salt and could use some help. I created a state to manage the folder C:\ProgramData\Microsoft\Windows\Start Menu\. I want all files in it cleaned out so that only the file created by the cria_atalho state remains. The first run works correctly, but if I then create files manually in that folder, re-running the state doesn't remove them. The master reports that there were no changes to the folder. Can anyone tell me what I'm doing wrong?
remove.arquivos:
  file.directory:
    - name: 'C:\\ProgramData\\Microsoft\\Windows\\Start Menu\\'
    - clean: True
    - require:
      - cria.atalho

cria.atalho:
  file.managed:
    - name: 'C:\\ProgramData\\Microsoft\\Windows\\Start Menu\\atalho.lnk'
    - source: 'salt://win/atalhos/atalho.lnk'
    - source_hash: 43808f02b6f82eb7b68906bec8cfa7be
Thanks.
Hey all, does anyone use SaltStack to streamline Aerospike configuration management for different clusters at your workplace/org?
Would love to hear what your approach is to deploying Aerospike configuration dynamically for different Aerospike clusters using SaltStack.
Need ideas to streamline configuration management while setting up a new cluster.
I am having an issue where I cannot communicate with my salt minions from master even though they have their salt key accepted and the salt service is installed and running.
When I try to run test.ping I get an error "Minion did not return. [Not Connected]"
To resolve this I often have to remove the minion keys and reinstall the minion with a new key. Surely there has to be a better solution for this, or maybe my Salt configuration is wrong?
I have a situation whereby, on a VMware-based virtual machine, when I check whether Secure Boot is enabled using mokutil it says it is, but the efi-secure-boot grain says Secure Boot isn't enabled.
When I check the VM's firmware configuration in vCenter, it's configured to use EFI (not BIOS) and Secure Boot is ticked.
This seems to be the case across my entire estate of approx. 20 Debian- and Ubuntu-based VMs.
root@host:~ # mokutil --sb-state
SecureBoot enabled
root@host:~ # sudo salt-call grains.item efi efi-secure-boot
local:
----------
efi:
True
efi-secure-boot:
False
Anyone else experiencing the same thing?
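One way to narrow this down is to read the SecureBoot EFI variable directly, which is essentially what mokutil does; if the raw variable says enabled while the grain says disabled, the grain's data source (not the firmware) is suspect. A hedged sketch (the 4-byte attribute header is standard efivarfs layout; the path parameter exists only so it's testable):

```python
from pathlib import Path

def secure_boot_enabled(efivars="/sys/firmware/efi/efivars"):
    """Return True if the SecureBoot EFI variable reports enabled.

    efivarfs entries begin with a 4-byte attributes header; the
    SecureBoot payload is a single byte, where 1 means enabled.
    """
    for var in Path(efivars).glob("SecureBoot-*"):
        data = var.read_bytes()
        return len(data) >= 5 and data[4] == 1
    # No SecureBoot variable present at all (e.g. BIOS boot)
    return False
```

Comparing this against the grain on an affected VM would show whether Salt is reading a different source (some releases inspected dmesg or mokutil output instead of efivars, which behaves differently on VMware firmware).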
hey all, I'm trying to solve a problem using SaltStack:
Let's say we have Aerospike clusters used across different teams in the company. When a team needs to create a new Aerospike cluster or make changes to an existing one, they create a new folder under salt:// specific to that cluster, add the relevant hosts, config, and namespaces, and then spin up or modify the cluster.
ex: config.sls, host.yml, install.sls, etc.
The problem is that every cluster gets its own folder, so folders keep multiplying and it's cumbersome. How do I improve this? Using Salt pillar? And how do I optimise it?
Is there really no built-in way to parse a URL in Salt or Jinja?
Python has urlparse, and Ansible has urlsplit.
Yes, I know I can cobble this together in many ways, but I'd expect salt or jinja to have a simple call that puts a URL in an array of parts or even a dictionary.
Am I missing something?
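As far as I know there's no built-in urlparse filter, but a tiny custom execution module exposes Python's urllib.parse to your templates. A sketch — the module and function names are my own choice:

```python
# _modules/urltools.py -- drop into your file_roots, sync to minions
# with saltutil.sync_modules, then call from Jinja as
# salt['urltools.split'](...)
from urllib.parse import urlsplit

def split(url):
    """Break a URL into a dictionary of its component parts."""
    p = urlsplit(url)
    return {
        "scheme": p.scheme,
        "netloc": p.netloc,
        "hostname": p.hostname,
        "port": p.port,
        "path": p.path,
        "query": p.query,
        "fragment": p.fragment,
    }
```

Usage in a template would then look like {{ salt['urltools.split']('https://example.com:8443/api')['hostname'] }}.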
What is the best way to upgrade OS (Debian in my case) on the minions?
I use pkg.uptodate in my state, but for some reason it does not install all the available packages.
(apt update/upgrade shows packages available for upgrade, like linux-image or kernel headers.)
Any tip or what am I missing?
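One likely culprit: on Debian, a plain upgrade holds back packages whose upgrade would install new dependencies, and new kernel images are the classic case. pkg.uptodate accepts dist_upgrade on apt-based systems, which lifts that restriction; a minimal sketch:

```yaml
os_fully_upgraded:
  pkg.uptodate:
    - refresh: True        # apt-get update first
    - dist_upgrade: True   # behave like dist-upgrade, so new kernels install
```

This mirrors the difference between apt-get upgrade and apt-get dist-upgrade on the command line.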
Is it possible to use salt to add/update/delete A, PTR, etc. records with AD based DNS?
I want DNS changes to be tied to salt deploying or terminating servers as well as using Salt to automate reoccurring hygiene activities.
Any examples would be awesome.
TIA
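I'm not aware of a built-in Salt module for AD-integrated DNS records, so one pragmatic route is shelling out to the DnsServer PowerShell module (from RSAT) on a Windows minion. A hedged sketch — Add-DnsServerResourceRecordA and its -CreatePtr switch are real cmdlet features, but the zone, host, and address below are placeholders:

```yaml
web01_a_record:
  cmd.run:
    - name: >
        Add-DnsServerResourceRecordA -ZoneName "corp.example.com"
        -Name "web01" -IPv4Address "10.0.0.50" -CreatePtr
    - shell: powershell
    - unless: >
        Resolve-DnsName -Name web01.corp.example.com -Type A -ErrorAction Stop
```

The unless guard keeps the state idempotent; tying it to server deploy/terminate would just mean including states like this (and a matching Remove-DnsServerResourceRecord call) in the relevant orchestration.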
So, I have a specific problem to solve and have been advised to look at salt as a possible tool to solve it.
I've spent the last two hours reading documentation and setting up a master and a windows minion, but now I'm a bit stuck.
In hindsight, I'm not sure I need the master, but I might play around with it at some point if I end up solving my problem with salt and using it more.
Anyway, so here's what I actually want to accomplish:
The plan is to use Packer to build monthly images that will be used to deploy remote desktop session hosts. There are about 40 different "profiles" (I know, we're trying to cut that down, but licensing and very different workloads make it a bit of a pain). So part of the build would be installing the required applications.
At this stage I'm having packer upload the required installation files to the image, running the installations in the required order and then deleting the installation files.
I was hoping to use salt for this. I'm not sure salt is the right tool. Anything requiring use of github is a no-go. That's both disallowed by policy and actively blocked in the firewall. Any installation files need to be fetched from a local repository as we have custom packaged applications hosted on SMB-shares.
My hope would be that I could during my packer build just make a call similar to "install Developer Desktop packages" and there would be a role (not sure what it's called in salt?) that then lists everything that needs to be installed and in what order. Bonus if it can fetch it from a self-hosted repository. I can do https if I have to, but if smb isn't an option I'd rather just have packer upload the files at build time. But then I also need to keep track of the roles in packer so it knows which files to upload...
Is salt a good fit? I've been trying to find a good solution for this, but everywhere I look most tools are focused on cloud and using services hosted on the internet like git, which infosec will shut down instantly. I need something that can run on-prem with no outside dependencies. The machines also only need to be provisioned, not managed. The image will never be booted up once it's built, the workers will be clones of the image and non-persistent. As in they reboot daily and all written data is discarded and the workers revert to a clean clone. We would then rebuild the images monthly in order to apply patches and updates.
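For what it's worth, Salt fits the on-prem constraint well: the master's own fileserver serves salt:// URLs, so installers can live on the master (or a local file_roots directory in masterless mode) and nothing ever touches the internet; and the "role" concept maps to a state file or top-file target. A hedged sketch of one such role, with invented package names and paths:

```yaml
# developer_desktop.sls -- each install requires the previous step,
# so ordering is explicit; 'creates' keeps re-runs idempotent
fetch_vscode:
  file.managed:
    - name: C:\Temp\vscode-setup.exe
    - source: salt://installers/vscode-setup.exe

install_vscode:
  cmd.run:
    - name: C:\Temp\vscode-setup.exe /VERYSILENT /NORESTART
    - creates: C:\Program Files\Microsoft VS Code\Code.exe
    - require:
      - file: fetch_vscode
```

Packer would then just run salt-call state.apply developer_desktop (masterless with local file_roots, or against your master), and the file fetching, ordering, and cleanup all live in Salt rather than in the Packer template.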
Looking to apply the CIS hardening guidelines to our windows 10 systems via a salt state
Has anyone attempted this with salt?
The list is enormous
I see the SaltStack community Slack workspace was deleted. If I want to chat, do I go back to IRC, or does SaltStack no longer do that?
Thanks!
I've been trying to learn how this works, and I must be missing something. Does anyone see where I'm going wrong?
/etc/salt/minion:
master: salt.mydomain.com
startup_states: 'sls'
sls_list:
  - my_startup_state
log_level: debug
On my Master:
/srv/salt/my_startup_state:

/test.txt:
  file.managed:
    - makedirs: true
    - contents: |
        # This is a salt managed file.
        This is a test file!
If I run sudo salt-call state.apply my_startup_state from the minion it applies, but after a server or service restart it does not.
Ideas and suggestions welcome!
I am trying to make an HTTP API call from Salt, to an HTTPS URL with a self-signed SSL certificate.
Something like this:
module.run:
  - name: http.query
  - url: "https://{{ apiurl }}/api/v1"
  - method: POST
  - verify_ssl: False
  - headers:
      Content-Type: "application/json"
      Authorization: "Basic {{ apiuser }}:{{ apipass }}"
  - decode: True
  - status: 200
It seems like it's still trying to verify the certificate despite the verify_ssl setting being set to false.
Function: module.run
Name: http.query
Result: True
Comment: Module function http.query executed
Started: 11:18:05.838276
Duration: 19.7 ms
Changes:
----------
ret:
----------
error:
[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: unable to get local issuer certificate (_ssl.c:1129)
The certificate is self-signed so it is completely understandable that the certificate verify is failing... but I said not to verify it so why is it trying to verify it still?
I was unable to find any more settings to change to further tell it to ignore the SSL validity... I was able to find a bug report about this issue with no outcome: https://github.com/saltstack/salt/issues/39755
So does the verify_ssl setting just, like, not work? What is the point of it then?
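One cheap experiment worth trying: http.query supports multiple HTTP backends (tornado is the default), and verify_ssl handling has historically differed between them. A hedged sketch forcing the requests backend, with the same parameters as above:

```yaml
api_call:
  module.run:
    - name: http.query
    - url: "https://{{ apiurl }}/api/v1"
    - method: POST
    - backend: requests
    - verify_ssl: False
    - headers:
        Content-Type: "application/json"
    - decode: True
    - status: 200
```

If the requests backend honours verify_ssl while the default doesn't, that would corroborate the linked bug being backend-specific rather than the option being dead.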
Somewhat of a Salt noob here still... I've completed some PluralSight training on it, but I am finding the syntax a bit confusing still.
I wrote a Bash script which outputs a large dataset in JSON format, and I want to parse the data in Salt.
{% set tags = [
{'tag': 'tag1', 'category': 'category1'},
{'tag': 'tag2', 'category': 'category2'},
{'tag': 'tag3', 'category': 'category3'},
] %}
This is working code that I tested with, and gives me the type of array I need in Salt. It looks to me like it's basically already JSON...
So since my Bash script outputs a JSON array, I tried to do:
{% set tags = salt['cmd.run']('bash /path/to/my/script.sh') %}
This didn't seem to work because from Salt's perspective, "tags" was just a giant string. Fair enough. So I started looking for ways to "convert" the data type, I thought this might do the trick: https://docs.saltproject.io/en/latest/ref/serializers/all/salt.serializers.json.html
{% set bash_output = salt['cmd.run']('bash /path/to/my/script.sh') %}
{% set tags = salt.serializers.json.deserialize(bash_output) %}
But, sadly this didn't work.
Rendering SLS failed: Jinja variable 'salt.utils.templates.AliasedLoader object' has no attribute 'serializers'; line 7
ChatGPT kept trying to get me to do this:
{% set bash_output = salt['cmd.run']('bash /path/to/my/script.sh') %}
{% set tags = salt['json.loads'](bash_output) %}
This also wasn't working.
Rendering SLS failed: Jinja variable 'salt.utils.templates.AliasedLoader object' has no attribute 'json.loads'; line 7
Am I like missing a module I need to parse JSON or something?
Or am I doing this totally wrong? lol
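For what it's worth, Salt ships a load_json Jinja filter that does exactly this string-to-data conversion; a sketch using the same command as above:

```jinja
{% set tags = salt['cmd.run']('bash /path/to/my/script.sh') | load_json %}
{# tags is now a real list of dicts, not a giant string #}
{% for t in tags %}
{{ t.tag }} -> {{ t.category }}
{% endfor %}
```

There's a matching load_yaml filter as well, so if the script's output is ever valid YAML rather than strict JSON, that variant covers it too.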
Wondering how people manage their network config via Salt.
I'm curious how people use Salt to manage NetworkManager and especially its route syntax.
Unlike sysconfig, NM places routes inside the actual iface config file, i.e.:
root@host:system-connections $ cat bond0.nmconnection
##############################################################
## This file is managed by SALTSTACK - Do not modify manually
##############################################################
[connection]
id=bond0
connection.stable-id=mac
type=bond
interface-name=bond0
[ethernet]
mac-address=00:0x:xx:x3:x1:x1
[bond]
miimon=100
mode=active-backup
[ipv4]
address1=192.168.38.69/28,192.168.38.65
method=manual
never-default=true
route1=89.34.184.0/24,192.168.38.65,100
route2=31.3.4.64/28,192.168.38.65,100
route3=41.3.4.65/32,192.168.38.65,100
route4=42.3.4.80/30,192.168.38.65,100
route5=87.3.64.64/28,192.168.38.65,100
route6=123.40.107.0/24,192.168.38.65,100
..etc
I had to script up a custom Jinja processor that reads in a YAML config for each host and generates an NM static file.
So, for example, if host1 has this route YAML:
# RHEL9 routes
p1p1:
  192.168.38.17:
    - 120.43.166.167/32   # my route 1
    - 120.43.166.170/32   # my route 2
    - 120.43.166.23/32    # my route 3
    - 120.43.166.78/32 [metric=200, initcwnd=500]   # custom route with diff metric and custom congestion window option
the Jinja processor generates an NM static file that looks like this:
cat /etc/NetworkManager/system-connections/p1p1.nmconnection
### PTP, Mktdata
[connection]
id=p1p1
type=ethernet
interface-name=p1p1
connection.stable-id=mac
[ethernet]
mac-address=xxxxxxx
[ipv4]
address1=192.168.18.20/28,192.168.18.17
method=manual
may-fail=false
never-default=true
route1=120.43.166.167/32,192.168.18.17,100
route2=120.43.166.170/32,192.168.18.17,100
route3=120.43.166.23/32,192.168.18.17,100
route4=120.43.166.78/32,192.168.18.17,200
route4_options=initcwnd=500
NM is a real pain to work with for static config via any kind of config-management system. Wondering if there's a better way to do this.
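The route-expansion step above can be done in plain Python (e.g. as a custom Salt module feeding a template) instead of heavy Jinja. A hedged sketch: it takes a gateway and the per-interface destination list from the YAML, and emits the routeN= / routeN_options= lines; the bracketed [metric=..., key=value] syntax is my guess at the convention shown above:

```python
import re

def nm_routes(gateway, dests, default_metric=100):
    """Expand 'CIDR [opt=val, ...]' strings into NM routeN= lines."""
    lines = []
    for i, dest in enumerate(dests, 1):
        m = re.match(r"(\S+)(?:\s*\[(.*)\])?", dest)
        cidr, opts = m.group(1), m.group(2)
        metric = default_metric
        extra = []
        if opts:
            for opt in (o.strip() for o in opts.split(",")):
                key, val = opt.split("=")
                if key == "metric":
                    metric = int(val)       # metric goes into the route line itself
                else:
                    extra.append(f"route{i}_options={key}={val}")
        lines.append(f"route{i}={cidr},{gateway},{metric}")
        lines.extend(extra)
    return lines
```

A template then only needs to join the returned lines under [ipv4], which keeps the Jinja side nearly logic-free.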
I want to be able to purge all files that are not managed in any /etc/something.d/ directory (sshd, tmpfiles, rsyslog, etc.)
The reason for that is to make sure no unmanaged files linger and cause unexpected configs to be loaded. For instance someone manually created a file, or a file managed by Salt became unmanaged, but wasn't removed.
In Ansible I do it like this (as an example):
# Create a file with the week number
- name: create diffie-hellman parameters
  openssl_dhparam:
    path: /etc/dovecot/dhparams/{{ ansible_date_time.year }}-{{ ansible_date_time.weeknumber }}.pem
    size: 2048
    mode: "0600"
  notify: restart dovecot

# Create a list of all files, but exclude the file we just created
- name: find old diffie-hellman parameters
  find:
    paths: /etc/dovecot/dhparams/
    file_type: file
    excludes: "{{ ansible_date_time.year }}-{{ ansible_date_time.weeknumber }}.pem"
  register: found_dh_params

# Delete all files that were found, except the newly created file
- name: delete old diffie-hellman parameters
  file:
    path: "{{ item.path }}"
    state: absent
  loop: "{{ found_dh_params['files'] }}"
  loop_control:
    label: "{{ item.path }}"
Is something like this easily possible in Salt? Just checking whether someone has already thought this through and is willing to share. Otherwise I'll have to see if I can replicate it myself; I guess it's not impossible.
Or maybe there is a native Salt method for exactly these use cases? Any experienced Salt engineers out there?
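Salt's closest native equivalent is file.directory with clean: True: anything in the directory that isn't referenced by a state in the directory state's requisites gets removed. A hedged sketch using sshd_config.d as the example (file names are placeholders):

```yaml
# The managed drop-in; anything else in the directory will be purged
hardening_conf:
  file.managed:
    - name: /etc/ssh/sshd_config.d/50-hardening.conf
    - source: salt://ssh/50-hardening.conf

# clean: True removes files not kept alive via the require list
sshd_config_dir:
  file.directory:
    - name: /etc/ssh/sshd_config.d
    - clean: True
    - require:
      - file: hardening_conf
```

This replaces the whole find/exclude/delete dance from the Ansible version with one requisite relationship, at the cost of having to list every managed file in the directory state's requires.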
I was wondering if there are any development efforts with Salt and Kubernetes. Ansible has some modules around Kubernetes (https://docs.ansible.com/ansible/latest/collections/kubernetes/core/k8s_drain_module.html), and while trying to stay within our Salt environment, I'm trying to figure out how I can easily drain, cordon, patch/update, reboot a node, verify the node is healthy, and then move on to the next one in a systematic manner. We currently have quite a few nodes. I guess I am curious whether anyone is managing their Kubernetes environment with Salt?
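I'm not aware of first-party Salt Kubernetes drain/cordon modules, so one pragmatic pattern is a Salt orchestration that shells out to kubectl for the cluster operations and uses regular states for the patching. A hedged sketch for a single node (targets, node names, and the patching SLS are placeholders; in practice you'd generate these steps per node):

```yaml
# orch/roll_node1.sls -- run with: salt-run state.orchestrate orch.roll_node1
drain_node1:
  salt.function:
    - name: cmd.run
    - tgt: k8s-control01
    - arg:
      - kubectl drain node1 --ignore-daemonsets --delete-emptydir-data

patch_node1:
  salt.state:
    - tgt: node1
    - sls: patching
    - require:
      - salt: drain_node1

uncordon_node1:
  salt.function:
    - name: cmd.run
    - tgt: k8s-control01
    - arg:
      - kubectl uncordon node1
    - require:
      - salt: patch_node1
```

The requisites give you the one-node-at-a-time sequencing; a health check (e.g. kubectl wait --for=condition=Ready) could slot in between patch and uncordon as another salt.function step.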
We have the --output-diff option, that's nice, helps to unclutter the sls run output.
We can put the "state_output_diff: true" in the config file, that's even better for everyday life.
Imagine we have "state_output_diff: true" in our config file; is there a command-line option that can turn the default "display all states" behaviour back on?
hi all, trying to figure out the best way to do this,
i have custom runners and custom exec modules
i have a common.py in my custom runners dir that contains custom functions that are shared across all my objects, ie things like slack_notify(), send_email(), check_syntax(), etc
trying to figure out how i can reference this "common" file in my custom exec modules like "sudo.py"
it works from runners, i.e., in my custom runner I can import it like this:
from common import send_email
but in the exec "sudo" module, I tried importing like this:
from _runners.common import send_email
and
from common import send_email
and it can't find the file:
minion1:
'sudo' __virtual__ returned False: No module named 'common'
What's the proper way to share functions across custom objects?
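The usual home for shared helpers is a _utils/ directory: sync it with saltutil.sync_utils (or sync_all) and call the helpers through the __utils__ dunder from both execution modules and runners, rather than a plain Python import (the import only happened to work from the runners dir because the file sat next to the caller on sys.path). A sketch with an invented helper function:

```python
# salt://_utils/common.py
def format_alert(host, message):
    """Shared helper, reachable from modules and runners via __utils__."""
    return "[{}] {}".format(host, message)
```

Then, after syncing, a custom execution module like _modules/sudo.py would call it as __utils__['common.format_alert'](__grains__['id'], 'access denied'), and a runner can use the same lookup, so functions like slack_notify() or send_email() live in exactly one place. (Note that recent Salt releases have been moving away from the __utils__ loader toward ordinary salt.utils-style imports, so check the docs for your version.)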