/r/bash
A subreddit dedicated to Bash scripting. Now complete with a Discord Server.
Content must be Bash related. This rule is interpreted generously; general shell scripting content is mostly accepted. However, the post should not be specific to another shell.
No reposts. This is meant with regards to content, not just “the same link was submitted earlier” – it’s okay to resubmit an old link in some new context (e. g. because you’d like to discuss another part of it, or because something has changed since the last time it was submitted, or because the link was updated since then). Links from the sidebar count as having been submitted already, so posting them without new context is also considered a repost.
You can choose one of these four flairs for your post:
If you don’t flair your post, the moderators will set the most appropriate flair.
/r/unix – for everything Unix
Other Shells: /r/zsh, /r/fishshell, /r/oilshell, /r/batch
BashGuide – A Bash guide for beginners.
Beginner's Guide to Command Line – A crash course for some common unix and shell commands. Update 2022-01-14: Course is currently being rewritten
Google's Shell Style Guide – Reasonable advice about code style.
Explainshell - Explain complex shell operations.
ShellCheck – Automatically detects problems with shell scripts.
BashFAQ – Answers most of your questions.
BashPitfalls – Lists the common pitfalls beginners fall into, and how to avoid them.
(Archived) The Bash-Hackers Wiki – Extensive resource.
#bash – IRC channel on Libera. The main contributors of the BashGuide, BashFAQ, BashPitfalls and ShellCheck hang around there.
#!/bin/bash
# vim: foldmethod=marker
function romanToArabic {
    local input=$1
    local result=0
    local currChar=""
    local currValue=0
    local prevValue=0
    for ((i = 0; i < ${#input}; i++)); do
        currChar="${input:i:1}"
        case $currChar in
            "I") currValue=1 ;;
            "V") currValue=5 ;;
            "X") currValue=10 ;;
            "L") currValue=50 ;;
            "C") currValue=100 ;;
            "D") currValue=500 ;;
            "M") currValue=1000 ;;
            *) continue ;;
        esac
        # Comment{{{
        # For numbers such as IV, the first iteration takes the else
        # branch (there is no prevValue yet), so 1 is added to result.
        # On the second iteration the if branch runs, and we subtract
        # twice the previous value: once to undo the earlier addition,
        # and once more because it should have counted as negative.
        # }}}
        if ((prevValue < currValue)); then
            result=$((result + currValue - 2 * prevValue))
        else
            result=$((result + currValue))
        fi
        prevValue="$currValue"
    done
    echo "$result"
}
if [[ -z "$1" ]]; then
    echo "Usage: $0 <inputFile_or_romanNumerals>"
    exit 1
fi
if [[ -f "$1" ]]; then
    inputFile="$1"
    while IFS= read -r line; do
        # Convert each run of Roman numerals without eval'ing file content
        out="" rest=$line
        while [[ $rest =~ ^([^IVXLCDM]*)([IVXLCDM]+)(.*)$ ]]; do
            out+="${BASH_REMATCH[1]}$(romanToArabic "${BASH_REMATCH[2]}")"
            rest=${BASH_REMATCH[3]}
        done
        echo "$out$rest"
    done < "$inputFile" > "$inputFile.tmp"
    mv "$inputFile.tmp" "$inputFile"
    echo "Roman numerals converted in $inputFile"
else
    romanNumerals="$1"
    arabicNumber=$(romanToArabic "$romanNumerals")
    echo "Roman numerals '$romanNumerals' converted to: $arabicNumber"
fi
I store an array files
containing a list of file names that will later be used for further processing (the files need to be absolute paths since I reference them elsewhere). For example, I want to determine the minimum number of mkdir -p
arguments needed to re-create the directories these files belong to.
My file names don't contain newlines, but they should still be NUL-delimited as good practice. I have the following, but the last line fails with warning: command substitution: ignored null byte in input
, presumably because NUL characters can't be stored in a shell string:
# Store files in 'files' array
while IFS= read -r -d '' f; do
files+=("$f")
done < <(fd --print0 --base-directory "$rootdir" . "$rootdir" )
# TODO determine minimum amount of directories needed as arguments for mkdir -p
dirname -z "$(printf "%s\0" "${files[@]}" | sort -zu )" | tr '\0' '\n'
Anyway, a solution is dirname -z -- "${files[@]}" | sort -zu | xargs -0 mkdir -p --
but I'm more curious about the general approach to similar problems with handling NUL-delimited items, since it is prevalent in scripting in general:
Is the above with xargs -0
the go-to solution whenever you want to pass NUL-delimited items as arguments? Should every command involved use -print0
, -z
, etc., and if an application doesn't support that, do you have to convert with something like the while loop above? In most of my scripts I assumed filenames don't contain newline characters, so I never needed xargs, since most applications assume items are space- or newline-delimited. Should a dependency on xargs be avoided, or is it prevalent and useful enough in general scripting to use liberally?
What would be a (reasonably) Bash (or maybe even POSIX) way to accomplish the same thing?
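For reference, a minimal sketch of the NUL-safe pipeline end to end (assuming GNU coreutils and bash 4.4+ for mapfile -d; find stands in for fd, and the demo paths are made up). Bash variables can't hold NUL bytes, which is exactly why the NULs should only ever exist on the streams between commands:

```shell
#!/usr/bin/env bash
# Build the array without a manual read loop: mapfile -d '' splits on NUL,
# -t drops the delimiter from each entry.
rootdir=${1:-.}
mapfile -t -d '' files < <(find "$rootdir" -type f -print0)

# Every stage stays NUL-delimited via its own -z/-0 flag.
dirname -z -- "${files[@]}" | sort -zu | xargs -0 mkdir -p --

# For display only, translate NUL to newline at the very end:
dirname -z -- "${files[@]}" | sort -zu | tr '\0' '\n'
```

The general pattern is: keep items NUL-delimited on every pipe, and only convert (via mapfile/while-read or tr) at the boundary where a tool or a shell variable needs something else.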
Raining in the Linux Terminal
I have created this script because I always play rain sounds while working, and I thought it would be relaxing to have a rain of characters. Feel free to improve and modify the script :)
Thank you all, and I hope you enjoy it!
#!/bin/bash

# Display help message
show_help() {
    echo "Usage: $0 [density] [character] [color code] [speed]"
    echo "  density    : Set the density of the raindrops (default 3)."
    echo "  character  : Choose the raindrop character (default '/')."
    echo "  color code : ANSI color code for the raindrop (default 37 for white)."
    echo "  speed      : Choose speed from 1 (slowest) to 5 (fastest)."
    echo
    echo "Example: $0 5 '@' 32 3"
}

# Function to clear the screen and hide the cursor
initialize_screen() {
    clear
    tput civis  # Hide cursor
    height=$(tput lines)
    width=$(tput cols)
}

# Function to reset terminal settings on exit
# (defined before the trap is set, so it already exists when a signal arrives)
cleanup() {
    tput cnorm  # Show cursor
    clear
}

# Declare an associative array to hold the active raindrops
declare -A raindrops

# Function to place a raindrop at a random position
place_raindrop() {
    local x=$((RANDOM % width))
    local speed=$((RANDOM % (5 - speed_range + 1) + 1))  # Speed adjustment
    raindrops[$x]=0,$speed
}

# Function to move raindrops
move_raindrops() {
    clear  # Always clear the screen for each frame
    # Place new raindrops randomly based on the specified density
    for ((i = 0; i < density; i++)); do
        place_raindrop
    done
    # Print the raindrops and update their positions
    for x in "${!raindrops[@]}"; do
        IFS=, read -r y speed <<< "${raindrops[$x]}"
        tput cup "$y" "$x"
        echo -en "\e[${color}m${rain_char}\e[0m"  # Use specified color and character
        # Move the raindrop down at its speed
        if ((y + speed < height)); then
            raindrops[$x]=$((y + speed)),$speed
        else
            unset "raindrops[$x]"  # Remove the raindrop when it reaches the bottom
        fi
    done
}

# Check if help is requested
if [[ "$1" == "-h" || "$1" == "--help" ]]; then
    show_help
    exit 0
fi

# Initialize the screen
initialize_screen

# Set variables from command-line arguments
density=${1:-3}      # Default density is 3
rain_char=${2-'/'}   # Default character is '/'
color=${3:-'37'}     # Default color is white (37)
speed_range=${4:-3}  # Default speed range is 3 (1 slowest, 5 fastest)

# Main loop to animate raindrops; cleanup restores the terminal on exit
trap cleanup SIGINT SIGTERM EXIT
while true; do
    read -t 0.1 -n 1 key
    if [[ $key == "q" ]]; then
        break
    fi
    move_raindrops
done
Looking for recommendations for a programming language that can replace bash (i.e. is easy to write) for scripts. It's a loaded question, but I want to learn a language that is useful for sysadmin and devops-related work. My only "programming" experience is shell scripts, for the most part, since I started using Linux.
One can only do so much with shell scripts alone. Can a programming language like Python or Go be used liberally to replace shell scripts? Currently, if I need a script I go with POSIX sh simply because it's the lowest common denominator, and if I need arrays or anything fancier I use Bash. I feel like, perhaps by their nature, shell scripts tend to have cryptic syntax that is at least sometimes unintuitive or inconsistent with what you would expect (more so with POSIX-compliant scripts, of course).
At what point do you move on from a bash script to e.g. Python/Go? Typically shell scripts just involve simple logic, calling external programs to do the meat of the work. Does the performance aspect typically come into play in the decision to use a non-scripting language (for lack of a better term)?
I think people will generally recommend Python because it's versatile and used in many areas of work (I assume it's almost pseudocode for some people), but it's considered "slow" (whatever that means; I'm not a programmer yet) and a PITA with its environments. That's why I'm considering Go: it's relatively performant (not that it matters for replacing shell scripts, but it might be useful for projects where performance is a concern). For home sysadmin use, at least, portability isn't a concern.
Any advice and thoughts are much appreciated. It should be evident I don't really know what I'm looking for, other than wanting to pick up programming and develop it into a marketable skill. My time is currently spent learning Linux, and I feel like I have spent enough time on shell scripts and would like to use tools that are capable of turning into real projects. I'm sure Python, Go, or whatever other recommended language is a decent gateway to sysadmin and devops, but I'm looking for a clearer picture of a reasonable path and goals for self-learning.
Much appreciated.
P.S. I don't mean to make an unfair comparison or suggest such languages should replace Bash, just that they can, for the sake of versatility (I mean, no one's using Java/C for such tasks), and are probably a good starting point for learning a language. Just curious what others experienced with Bash recommend as a useful skill to develop further.
$ echo one two | read A B && echo A is $A
A is
$
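What's happening: every element of a pipeline runs in a subshell, so read assigns A in a child process that exits immediately, and the parent shell's A stays empty. A sketch of two common workarounds (lastpipe only takes effect when job control is off, i.e. in scripts, not at an interactive prompt):

```shell
#!/usr/bin/env bash
# 1. Avoid the pipe entirely: feed read from a here-string.
read -r A B <<< "one two"
echo "A is $A"          # A is one

# 2. Run the last pipeline element in the current shell (bash 4.2+).
shopt -s lastpipe
echo one two | read -r C D
echo "C is $C"          # C is one
```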
So I am trying to get the data from an accounts.csv file. The data looks like this:
id,location_id,name,title,email,department
1,1,Susan houston,Director of Services,,
2,1,Christina Gonzalez,Director,,
3,2,Brenda brown,"Director, Second Career Services",,
and I get like this:
id,location_id,name,title,email,department
1,1,Susan Houston,Director of
Services,shouston@abc.com
,
2,1,Christina
Gonzalez,Director,cgonzalez@abc.com
,
3,2,Brenda
Brown,"Director,bbrown@abc.com
,
But here is the thing: if two generated emails come out the same, I want location_id inserted into them, so if there are two emails like "shouston@abc.com", both of them should look like "shouston<location_id>@abc.com".
here is the script:
#!/bin/bash

# Check if the correct number of arguments is provided
if [ "$#" -ne 1 ]; then
    echo "Usage: $0 accounts.csv"
    exit 1
fi

# Check if the input file exists
if [ ! -r "$1" ]; then
    echo "File $1 not found!"
    exit 1
fi

# Function to process each line of the input file
function process_line() {
    IFS=',' read -r -a fields <<< "$1"
    id="${fields[0]}"
    location_id="${fields[1]}"
    name="${fields[2]}"
    position="${fields[3]}"
    # Format name: first letter uppercase, rest lowercase
    formatted_name=$(echo "$name" | awk '{print toupper(substr($1,1,1)) tolower(substr($1,2)) " " toupper(substr($NF,1,1)) tolower(substr($NF,2))}')
    # Format email: lowercase first letter of first name, full lowercase surname, followed by @abc.com
    formatted_email=$(echo "$name" | awk '{print tolower(substr($1,1,1)) tolower($NF)}')
    formatted_email+="@abc.com"
    # Check if the email already exists
    if [[ "${emails[@]}" =~ "$formatted_email" ]]; then
        # If the email exists, append location_id
        formatted_email="${formatted_email%%@*}${location_id}@abc.com"
    else
        # If the email doesn't exist, add it to the array
        emails+=("$formatted_email")
    fi
    # Output the formatted line
    echo "${id},${fields[1]},${formatted_name},${position},${formatted_email},"
}

# Initialize array to store processed emails
declare -a emails

# Copy the header from the input file to accounts_new.csv
head -n 1 "$1" > accounts_new.csv

# Process each line (excluding the header) of the input file and append to accounts_new.csv
tail -n +2 "$1" | while IFS= read -r line || [ -n "$line" ]; do
    if [ -n "$line" ]; then
        process_line "$line"
    fi
done >> accounts_new.csv

echo "Processing completed. Check accounts_new.csv for the updated accounts."

# Ensure the output file exists and is readable
output_file="accounts_new.csv"
if [ -r "$output_file" ]; then
    echo "File $output_file created successfully."
else
    echo "Error: Failed to create $output_file."
    exit 1
fi
The problem is that the duplicate check does its job, but the first occurrence never gets the location_id. For example, if there are 3 identical emails, only the last 2 of them get the location_id, not the first one, and I want all of them to have it.
The problem is probably here, and I would appreciate the help:
# Check if the email already exists
if [[ "${emails[@]}" =~ "$formatted_email" ]]; then
    # If the email exists, append location_id
    formatted_email="${formatted_email%%@*}${location_id}@abc.com"
else
    # If the email doesn't exist, add it to the array
    emails+=("$formatted_email")
fi
Sorry if the explanation or the code quality is bad.
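A single pass can't fix the first occurrence retroactively, so one way is two passes: count the generated emails first, then rewrite every member of a duplicate group. A rough sketch (naive comma splitting, so quoted fields like "Director, Second Career Services" are not handled; the file name accounts.csv and the @abc.com domain are taken from the post):

```shell
#!/usr/bin/env bash
declare -A count
lines=()

# Pass 1: generate each email and count how often it occurs.
while IFS=',' read -r id loc name rest; do
    first=${name%% *}
    last=${name##* }
    email=$(tr '[:upper:]' '[:lower:]' <<< "${first:0:1}${last}")@abc.com
    count[$email]=$(( ${count[$email]:-0} + 1 ))
    lines+=("$id,$loc,$name,$rest,$email")
done < <(tail -n +2 accounts.csv)

# Pass 2: any email seen more than once gets its row's location_id,
# including the very first occurrence.
for line in "${lines[@]}"; do
    email=${line##*,}
    if (( ${count[$email]} > 1 )); then
        loc=$(cut -d, -f2 <<< "$line")
        line="${line%,*},${email%%@*}${loc}@abc.com"
    fi
    printf '%s\n' "$line"
done
```

Because the counting finishes before any output is written, the first duplicate is already known to be a duplicate when it is printed.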
I have a doubt: I am not very clear on IFS
from what I have been reading.
Why does the following happen? If, for example, I do this:
string=alex:joe:mark && while IFS=":" read -r var1; do echo "${var1}"; done < <(echo "${string}")
why does the output print the whole value of the string
variable (alex:joe:mark) instead of only the first field, which would be alex according to the defined IFS
of : ?
On the other hand, if I run this:
string=alex:joe:mark && while IFS=":" read -r var1 var2; do echo "${var1}"; done < <(echo "${string}")
that is, exactly the same but giving read
a second variable, then echo "${var1}"
prints only the first field, alex.
Could you explain how IFS
works exactly so I can understand it correctly? The truth is I have read about it on several sites, but it is still not clear to me.
Thank you very much in advance
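The key detail: read splits on IFS, but the last variable named always receives everything that remains of the line. With a single variable there is nothing to split into, so it gets the whole line. A quick sketch:

```shell
#!/usr/bin/env bash
string=alex:joe:mark

# One variable: no splitting is visible; var1 gets the entire line.
IFS=":" read -r var1 <<< "$string"
echo "$var1"        # alex:joe:mark

# Two variables: one split happens; the last variable soaks up the rest.
IFS=":" read -r var1 var2 <<< "$string"
echo "$var1"        # alex
echo "$var2"        # joe:mark

# Three variables: fully split.
IFS=":" read -r a b c <<< "$string"
echo "$a $b $c"     # alex joe mark
```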
When iterating through items (like files) that might contain spaces or other funky characters, this can be handled by delimiting them with a null character (e.g. find -print0
) or by emptying the IFS variable ( while IFS= read -r
), right? How do the two methods compare, or do you need both? I don't think I've ever needed to modify IFS, even temporarily, in my scripts; -print0
or equivalent seems more straightforward, assuming IFS is specific to shell languages.
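They solve different problems and are usually combined: -print0 with read -d '' makes the item delimiter NUL (so even newlines inside names survive), while IFS= only stops read from trimming leading and trailing whitespace within each item. A sketch with deliberately nasty names (Linux filesystem assumed):

```shell
#!/usr/bin/env bash
dir=$(mktemp -d)
touch "$dir/ leading space.txt"     # IFS= preserves the leading space
touch "$dir/"$'two\nlines.txt'      # -d '' survives the embedded newline

count=0
while IFS= read -r -d '' f; do
    count=$((count + 1))
    printf '<%s>\n' "$f"
done < <(find "$dir" -type f -print0)
echo "$count files"                 # 2 files, both names intact
```

Drop the IFS= and the leading space is stripped; drop the -d ''/-print0 pair and the newline name is split into two bogus items.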
I need a command palette for the CLI in bash... please help. (Not the marker tool.)
This question was asked on stackoverflow but I still can't quite figure out how to write the command. I want to find files with a specific name, and sort by date modified or just return the most recently modified. All the files I am looking for have the same name, but are in different directories.
find -name 'filename'
returns all the options, I just want the most recently modified one
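With GNU find, one approach is to print each file's epoch mtime next to its name, sort numerically, and keep the last entry (assumes no newlines in the file names):

```shell
# %T@ is the modification time in seconds since the epoch.
find . -name 'filename' -printf '%T@ %p\n' | sort -n | tail -n 1 | cut -d' ' -f2-
```

Swap tail -n 1 for plain sort -rn | head -n 1 if you want the whole list newest-first instead of just the single newest match.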
Hello, I have the following question and I cannot solve it. I would like to know if the following can be done using sed, and how. I need someone to explain exactly how address patterns and capture groups work within sed, so that I can write a regular expression that matches a string of text within a capture group and then use it in the substitution to add text after or before that group.
In this case, I have a script that contains this string in several lines of the script:
$(dig -x ${ip} +short)
this command substitution is inside an echo -e “”
the issue is that I would like to add, everywhere $(dig -x ${ip} +short) appears, the following,
simply after +short and before the closing parenthesis:
2>/dev/null || ${ip}
so would there be any way to use sed to add that string after +short?
I have tried something like this, but it gives an error when I run it:
sed '/dig -x .* +short/s/...\1 2>/dev/null || ${ip}/g' script.sh
I did it this way because, from what I have read, capture groups are defined using (), but by default sed treats the substrings of the regular expression as capture groups, so (.*) would be the first capture group. I used ...\1 as a placeholder for .* to say that after it the following string has to go: 2>/dev/null || ${ip}
My understanding is probably wrong.
The truth is that I am quite lost with how the tool works in these cases, and I would appreciate your help. Thanks in advance.
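A sketch of one way to do it with GNU sed: capture everything up to +short, then re-emit the capture with the fallback inserted before the closing parenthesis. The ${ip} in the replacement is written into the script as literal text, which is why the whole program sits in single quotes; @ is used as the s-command delimiter so the || needs no escaping. The demo file stands in for the real script.sh:

```shell
# Demo input standing in for the real script.sh
printf '%s\n' 'echo -e "$(dig -x ${ip} +short)"' > /tmp/script.sh

# Group 1 is everything up to and including +short; \) outside the
# group matches the literal closing parenthesis being pushed back.
sed -E 's@(\$\(dig -x \$\{ip\} \+short)\)@\1 2>/dev/null || ${ip})@g' /tmp/script.sh
```

Note that $, (, ), {, } and + all need backslashes in the pattern because they are regex metacharacters under -E; once the output looks right, add -i to edit the file in place.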
Disclaimer: Completely new and mostly clueless about anything Linux related
I am using a python script to process some files and create some hard links of a large number of files. This might not be the most efficient but it works for my use case. My script compiles directory and file names into the hard link command ln {source} {dest} along with the respective mkdirs where needed. And what I do is execute it in the shell. I am running OMV 7.05, linux 6.1.0-20 kernel. I run and generate all link commands on my Win10 laptop and ssh into my omv machine to execute the commands.
Most of the link commands execute with no problem but when a filename contains a special character like quotes or exclamation marks, it does not work. Here is a sample command:
ln "/srv/dev-disk-by-uuid-5440592e-75e4-455f-a4b6-2f2019e562fa/Data/TorrentDownloads/TR_Anime/Mr Magoo 2018/Mr Magoo S01 720p HMAX WEB-DL DD2.0 x265-OldT/Mr Magoo_S01E76_Free the Rabbit!.nfo" "/srv/dev-disk-by-uuid-5440592e-75e4-455f-a4b6-2f2019e562fa/Data/Media/Anime/Mr Mgoo/Mr Magoo S01 720p HMAX WEB-DL DD2.0 x265-OldT/Mr Magoo_S01E76_Free the Rabbit!.nfo"
it says bash: !.nfo: event not found
I have tried escaping the special character like Mr Magoo_S01E76_Free the Rabbit\!.nfo
and Mr Magoo_S01E76_Free the Rabbit\\!.nfo (idk why but i just tried)
and it says
ln: failed to access '/srv/dev-disk-by-uuid-5440592e-75e4-455f-a4b6-2f2019e562fa/Data/TorrentDownloads/TR_Anime/Mr Magoo 2018/Mr Magoo S01 720p HMAX WEB-DL DD2.0 x265-OldT/Mr Magoo_S01E76_Free the Rabbit\!.nfo': No such file or directory
I've also tried encasing just the filename, or the word the Rabbit!
in single quotes, like ln "/srv/d....Mr Magoo_S01E76_Free the'Rabbit!'.nfo" ...
with the same result.
Same goes for single or double quotes, commas, and iirc dashes too, and this occurs irrespective of file type. The only way I got it to work is manually going in and removing the special character from the filename, which is near impossible to do for hundreds of files.
Is there any way I can make this work? I can adjust my script on my own; I just need a way to make the link command work with the special characters.
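What's biting you is interactive history expansion: in an interactive bash, ! is special even inside double quotes, which is where the "event not found" error comes from. Single quotes, or turning the feature off, both work. A sketch with a demo tree standing in for your real paths:

```shell
#!/usr/bin/env bash
# Demo files standing in for the real media paths
dir=$(mktemp -d)
mkdir -p "$dir/src" "$dir/dst"
touch "$dir/src/Free the Rabbit!.nfo"

# Option 1: turn history expansion off (it only exists interactively)
set +H
ln "$dir/src/Free the Rabbit!.nfo" "$dir/dst/Free the Rabbit!.nfo"

# Option 2: single quotes; ! is never special inside them
ls "$dir/dst/"'Free the Rabbit!.nfo'
```

A simpler route for your batch case: save the generated ln commands to a file and run it with bash commands.sh; non-interactive shells never perform history expansion, so no escaping is needed at all.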
Hi! As title implies — I’m wanting to build a custom GPS app using only low-level code (bash and C, specifically). My requirements are to:
Is this possible? I have seen GRASS GIS, but want to know if there are any tools that are a bit more in tune with what I want. This project will be on a Raspberry Pi (and not the only code running), so ideally it can't take a whole lot of memory.
Thanks in advance!
( (seq 11 19; seq 21 29 >&2;) 2>&1 1>&11 11>&- | cat &> cat.txt 11>&- ) 11>&1
I just want to document on the internet the real way to pipe stderr to a command while still sending stdout to stdout, without using <(process) >(substitution).
I want to document it because I see people suggesting ways (https://unix.stackexchange.com/questions/404286/communicate-backwards-in-a-pipe) to get the job done, but nobody ever mentions how to *just* pipe stderr to a command without side effects.
For some reason, this works: bash -i 1>& /dev/tcp/127.0.0.1/8080 0>&1 2>&1 However, this doesn't: bash -i 0>& /dev/tcp/127.0.0.1/8080 1>&0 2>&0 I just inverted the order. Why doesn't it work? I had this doubt years ago, but it doesn't seem to leave my mind, so here it is :).
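For the record, the usual fd-swap for piping only stderr looks like this: park the real stdout on a spare descriptor, point the pipeline's stderr at the pipe, and point its stdout back at the saved descriptor. (As for the inverted /dev/tcp version: my understanding, stated as an assumption, is that 0>& opens the socket write-only on fd 0, so bash can no longer read commands from it, whereas the working order opens it once and duplicates a usable fd.)

```shell
#!/usr/bin/env bash
# fd 3 = saved copy of the real stdout
exec 3>&1

# Inside the braces: stderr -> pipe, stdout -> saved stdout.
# Only "err" flows through tr; "out" goes straight to the terminal.
{ echo out; echo err >&2; } 2>&1 1>&3 3>&- | tr 'a-z' 'A-Z'

exec 3>&-   # close the spare descriptor
```

The 3>&- closes in both places keep the spare fd from leaking into the child processes.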
TL;DR: What command will return a list or count of all commands spawned from the current script? Ideally it would include the actual commands running, eg: aws ec2 describe-instances ...
I have a script that pulls data from multiple AWS accounts across multiple regions. I've implemented limited multi-threading but I'm not sure it's working exactly as intended. The part in question is intended to get a count of the number of processes spawned by the script:
$( jobs -r -p | wc -l )
jobs
shows info on "processes spawned by the current shell" so I suspect it may not work in cases where a new shell is spawned, as in when using pipes. I'm also not sure if -r causes it to miss processes (aws-cli) waiting on a response from AWS.
Each AWS command takes a while to run, so I let it run 2 less than the number of cores in parallel. Here's an example of it and the rest of the code/logic:
list-ec2(){
    local L_PROFILE="$1"
    local L_REGION="$2"
    [[ $( jobs -r -p | wc -l ) -ge ${PARALLEL} ]] && wait -n
    aws ec2 describe-instances --profile "${L_PROFILE}" --region "${L_REGION}" > "${L_OUT_FILE}" &
}

PROFILES=( account1 account2 account3 account4 )
REGIONS=( us-east-1 us-east-2 us-west-1 us-west-2 )
PARALLEL=$(( $( nproc ) - 2 ))  # number of cores - 2

for PROFILE in "${PROFILES[@]}" ; do
    for REGION in "${REGIONS[@]}" ; do
        list-ec2 "${PROFILE}" "${REGION}"
    done
done
I have a handful of similar scripts, some with multiple layers of functions and complexity. I've caught some of them spawning more than ${PARALLEL} number of commands so I know something's wrong.
I've also tried pgrep -P $$
but I'm not sure that's right either.
Ideally I'd like a command that returns a list of all processes running within the current script including their command (eg: aws ec2 describe-instances ...) so I can filter out file-checks, jq commands, etc. OR - a better way of implementing controlled multi-threading in bash.
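One pattern that avoids counting altogether is to let wait -n (bash 4.3+) block until any one job exits, and only count jobs in the same shell that launched them; jobs genuinely only sees children of the current shell, so the & must not be buried in a pipeline or subshell. A sketch with sleep standing in for the aws call:

```shell
#!/usr/bin/env bash
PARALLEL=3

task() { sleep 0.2; }       # stand-in for `aws ec2 describe-instances ...`

for i in {1..9}; do
    # Block while the job table is full; wait -n returns as soon as
    # any single background job finishes.
    while (( $(jobs -rp | wc -l) >= PARALLEL )); do
        wait -n
    done
    task "$i" &
done
wait                        # drain the stragglers
echo "all done"
```

For a full process list (grandchildren included, e.g. aws behind a pipe), jobs won't help; something like ps --forest -o pid,cmd -g $$ or pstree -p $$ shows the whole tree, but for throttling purposes the jobs count above is usually what you want.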
Hello everyone,
In Linux, writes are cached in memory, and files are only guaranteed to be fully on the device after a sync or after the partition is unmounted. This might explain why many graphical tools deliver unsatisfactory performance when writing files to USB flash drives. To address this issue, I have developed a compact script which, thus far, has performed effectively.
#!/bin/bash

declare -r MOUNT_POINT="/media/flashdrive"

# Define sudo command or alternative for elevated privileges
SUDO="sudo"

# Check for sudo access at the start if a sudo command is used
if [[ -n "$SUDO" ]] && ! "$SUDO" -v &> /dev/null; then
    echo "Error: This script requires sudo access to run." >&2
    exit 1
fi

# Function to check for required commands
check_dependencies() {
    local dependencies=(lsblk mkdir rmdir mount umount cp du grep diff rsync sync blkid mkfs.exfat)
    local missing=()
    for cmd in "${dependencies[@]}"; do
        if ! command -v "$cmd" &> /dev/null; then
            missing+=("$cmd")
        fi
    done
    if [[ ${#missing[@]} -ne 0 ]]; then
        echo "Error: Required commands not installed: ${missing[*]}" >&2
        exit 1
    fi
}

# Function to safely sync and unmount the device
safe_unmount() {
    local device="$1"
    if mount | grep -qw "$device"; then
        echo "Syncing device..."
        sync
        echo "$device is currently mounted, attempting to unmount..."
        "$SUDO" umount "$device" && echo "$device unmounted successfully." || { echo "Failed to unmount $device."; return 1; }
    fi
}

# Function to mount drive
ensure_mounted() {
    local device="$1"
    if ! mount | grep -q "$MOUNT_POINT"; then
        echo "Mounting $device..."
        "$SUDO" mkdir -p "$MOUNT_POINT"
        "$SUDO" mount "$device" "$MOUNT_POINT" || { echo "Failed to mount $device."; exit 1; }
    else
        echo "Device is already mounted on $MOUNT_POINT."
    fi
}

# Function to copy files or directories safely
copy_files() {
    local source="$1"
    local destination="$2"
    local dest_path="$destination/$(basename "$source")"
    if [[ -d "$source" ]]; then
        echo "Copying directory $source to $destination using 'cp -r'..."
        "$SUDO" cp -r "$source" "$dest_path" && echo "$source has been copied."
    else
        echo "Copying file $source to $destination using 'cp'..."
        "$SUDO" cp "$source" "$dest_path" && echo "$source has been copied."
    fi
    # Verify copy integrity
    if "$SUDO" du -b "$source" && "$SUDO" du -b "$dest_path" && "$SUDO" diff -qr "$source" "$dest_path"; then
        echo "Verification successful: No differences found."
    else
        echo "Verification failed: Differences found!"
        return 1
    fi
}

# Function to copy files or directories using rsync
rsync_files() {
    local source="$1"
    local destination="$2"
    echo "Copying $source to $destination using rsync..."
    "$SUDO" rsync -avh --no-perms --no-owner --no-group --progress "$source" "$destination" && echo "Files copied successfully using rsync."
}

# Function to check filesystem existence
check_filesystem() {
    local device="$1"
    local blkid_output
    blkid_output=$("$SUDO" blkid -o export "$device")
    if [[ -n "$blkid_output" ]]; then
        echo "Warning: $device has existing data:"
        echo "$blkid_output" | grep -E '^(TYPE|PTTYPE)='
        echo "Please confirm to proceed with formatting:"
        return 0
    else
        return 1
    fi
}

# Function to format the drive
format_drive() {
    local device="$1"
    echo "Checking if device $device is mounted..."
    safe_unmount "$device" || return 1
    # Check existing filesystems or partition tables
    if check_filesystem "$device"; then
        read -p "Are you sure you want to format $device? [y/N]: " confirm
        if [[ $confirm != [yY] ]]; then
            echo "Formatting aborted."
            return 1
        fi
    fi
    echo "Formatting $device..."
    "$SUDO" mkfs.exfat "$device" && echo "Drive formatted successfully." || echo "Formatting failed."
}

# Function to display usage information
help() {
    echo "Usage: $0 OPTION [ARGUMENTS]"
    echo
    echo "Options:"
    echo "  -c, -C SOURCE_PATH DEVICE   Mount DEVICE and copy SOURCE_PATH to it using 'cp'."
    echo "  -r, -R SOURCE_PATH DEVICE   Mount DEVICE and copy SOURCE_PATH to it using 'rsync'."
    echo "  -l, -L                      List information about block devices."
    echo "  -f, -F DEVICE               Format DEVICE."
    echo
    echo "Examples:"
    echo "  $0 -C /path/to/data /dev/sdx   # Copy /path/to/data to /dev/sdx after mounting it using 'cp'."
    echo "  $0 -R /path/to/data /dev/sdx   # Copy /path/to/data to /dev/sdx after mounting it using 'rsync'."
    echo "  $0 -L                          # List all block devices."
    echo "  $0 -F /dev/sdx                 # Format /dev/sdx."
}

# Process command-line arguments
case "$1" in
    -C | -c)
        check_dependencies
        ensure_mounted "$3"
        copy_files "$2" "$MOUNT_POINT"
        safe_unmount "$MOUNT_POINT"
        "$SUDO" rmdir "$MOUNT_POINT"
        ;;
    -R | -r)
        check_dependencies
        ensure_mounted "$3"
        rsync_files "$2" "$MOUNT_POINT"
        safe_unmount "$MOUNT_POINT"
        "$SUDO" rmdir "$MOUNT_POINT"
        ;;
    -L | -l)
        lsblk -o NAME,MODEL,SERIAL,VENDOR,TRAN
        ;;
    -F | -f)
        check_dependencies
        format_drive "$2"
        ;;
    *)
        help
        ;;
esac
Hi! I need to read from a FIFO file: a stream of SNMP traps arrives in the FIFO, and I need to read and process them sequentially. So I've created a while (true) loop that reads lines from the FIFO and processes the output. The problem is that the machine's CPU goes up to 100% while the script runs. I don't know if I should put, for example, a sleep 3 in the script. Should it read all the lines of the FIFO file, or could it be that it doesn't read all the lines?
Thanks, and sorry for my English!
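The 100% CPU usually comes from read returning EOF instantly once the writer closes its end of the FIFO, so a bare while true; do read ... loop spins. Opening the FIFO blocks until a writer appears, so re-opening it on each pass removes the busy loop without any sleep. A runnable sketch with a stand-in writer (in production, wrap the redirected while-read in while true; do ...; done so the FIFO is re-opened after each writer departs):

```shell
#!/usr/bin/env bash
fifo=$(mktemp -u)                 # demo path; yours comes from snmptrapd
mkfifo "$fifo"

# Stand-in writer: two traps, then it closes the FIFO.
( echo "trap one"; echo "trap two" ) > "$fifo" &

# The open below blocks until a writer shows up, and read blocks while
# the writer keeps the FIFO open, so the loop never spins on EOF.
while IFS= read -r line; do
    printf 'processing: %s\n' "$line"
done < "$fifo"

rm -f "$fifo"
```

No lines are lost this way: the reader drains everything the writer sent before the open blocks again waiting for the next writer.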
Bash works on many kinds of processors and operating systems. When I execute 'ls' it works on both Windows and Linux, even though the two use completely different file systems. So who implements the features of bash?
Is bash just a specification that each OS / motherboard manufacturer implements according to the spec?
Hi, I use the bash terminal, and I found by trial that the command ls -d */ is the way to see only the dirs inside another dir, excluding the files. Do you know another command to filter only the dirs? Thank you and regards!
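For comparison, a few equivalent ways to list only directories (demo directory is made up):

```shell
#!/usr/bin/env bash
cd "$(mktemp -d)"                        # demo area
mkdir sub1 sub2 && touch file1

ls -d -- */                              # glob: sub1/ sub2/ (errors if no dirs)
find . -mindepth 1 -maxdepth 1 -type d   # also catches hidden directories
printf '%s\n' */                         # pure shell, one per line
```

The find variant is the most script-friendly: it doesn't depend on glob behavior and can be made NUL-safe with -print0.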
Hello, so I did a search of r/bash for "what is an argument" and got a lot of posts about modifying arguments, but what I noticed is that I couldn't find any explanation of what an argument actually is, so I wanted to take this moment to ask.
What is an argument in bash? What does an argument mean?
thank you
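In short, an argument is any word you type after a command's name; the command receives the list and decides what each word means. Inside a script, the arguments show up as the positional parameters $1, $2, and so on. A small demo (the file name show-args.sh is made up):

```shell
#!/usr/bin/env bash
# Write a tiny script that reports its own arguments...
cat > /tmp/show-args.sh <<'EOF'
#!/bin/bash
echo "got $# arguments"
echo "first:  $1"
echo "second: $2"
EOF

# ...and call it with two arguments. Quoting makes "two words" ONE argument.
bash /tmp/show-args.sh hello "two words"
# got 2 arguments
# first:  hello
# second: two words
```

So in ls -l /tmp, both -l and /tmp are arguments to ls; options are just arguments that the command chooses to interpret as switches.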
Question: what is a "shell language" in the context of other programming languages?
I keep hearing the term "shell language", but when I google it I just get "shell script". People keep using the term "shell language" as if it's somehow different in the context of other programming languages.
Any ideas?
thank you
Hi,
I made a bash script called "AppendDate.sh" which simply appends the modification date to the filenames of any drag-and-dropped files. Since I can't drag and drop files directly onto .sh files, I am using a launcher to run the script indirectly.
The launcher works if I use an absolute path to the script combined with $1 for the dropped file(s). But I would like to use a relative path in the launcher instead, so that the solution is more "portable".
On other internet pages, I have read that an Exec command like the following should work:
sh -e -c "exec \"\$(dirname \"\$0\")/AppendDate.sh\"" %k
But this isn't working for me, no matter where I try to add $1 (or \$1).
Any ideas?
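One sketch, under the assumption that this is a .desktop launcher sitting next to the script: per the Desktop Entry spec, %k expands to the location of the .desktop file itself and %F to the dropped files, so the Exec line can be something like Exec=bash -c 'd=$(dirname "$1"); shift; exec "$d/AppendDate.sh" "$@"' bash %k %F. Whether your desktop environment passes %F this way is worth verifying; the shell part of it can be simulated directly:

```shell
#!/usr/bin/env bash
# Simulate what the launcher does at drop time: the desktop file path
# arrives as $1 (%k) and the dropped files follow (%F). All paths here
# are demo stand-ins.
dir=$(mktemp -d)
printf '%s\n' '#!/bin/bash' 'echo "would rename: $1"' > "$dir/AppendDate.sh"
chmod +x "$dir/AppendDate.sh"
touch "$dir/launcher.desktop" "$dir/photo.jpg"

# dirname of %k locates the script's folder; shift drops %k so only the
# dropped files are forwarded as "$@".
bash -c 'd=$(dirname "$1"); shift; exec "$d/AppendDate.sh" "$@"' bash \
    "$dir/launcher.desktop" "$dir/photo.jpg"
```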
I want to know the best resource to learn bash scripting. Also, for cyber security, which one is better: bash or zsh?
Hello, I'm trying to understand what the difference between a relative path and an absolute path is in the bash shell.
I did a reddit search of r/bash and found this:
https://www.reddit.com/r/bash/comments/4aam9w/can_someone_tell_me_the_difference_between/
but I'm not really understanding what they are talking about in the context of the bash shell.
Can anyone give me examples of the difference between an absolute path and a relative path that I can actually use in my shell, so I can get a handle on the concept myself?
thank you
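The short version: an absolute path starts at the root (/) and names the same file from anywhere; a relative path is resolved against whatever your current directory ($PWD) happens to be. A demo you can paste into a shell (the /tmp/pathdemo directory is made up):

```shell
#!/usr/bin/env bash
mkdir -p /tmp/pathdemo && cd /tmp/pathdemo
touch notes.txt

ls /tmp/pathdemo/notes.txt     # absolute: works from any directory
ls notes.txt                   # relative: only works while in /tmp/pathdemo
ls ./notes.txt                 # . means "the current directory"
ls ../pathdemo/notes.txt       # .. means "the parent directory"
```

If you cd /, the absolute form still works but ls notes.txt fails, because the relative name is now resolved against / instead of /tmp/pathdemo.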
I am pretty new to bash and learning to script. I am learning how to use the jq tool to parse a JSON file and access the elements character by character.
My code works fine when I hard-code the item to be "DOG"
and my for loop is
for entry in $(echo "$json_data" | jq '.[] | select(.[] | contains("D"))'); do
where the key comes out to be 2, but when I access the character dynamically with ${item:$j:1} it never enters the for loop at all. Could someone help me understand this?
for entry in $(echo "$json_data" | jq '.[] | select(.[] | contains("${item:$j:1}"))'); do
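The single quotes around the jq program stop the shell from expanding ${item:$j:1}, so jq literally searches for the text ${item:$j:1}. Rather than splicing shell text into the program, pass the character in as a jq variable with --arg. A simplified sketch (flat string array standing in for your real data):

```shell
#!/usr/bin/env bash
item="DOG" j=0
json_data='["DOG","CAT"]'      # stand-in for the real file

# $c inside the program is a jq variable, safely carrying the shell value.
echo "$json_data" | jq --arg c "${item:$j:1}" '.[] | select(contains($c))'
# -> "DOG"
```

This also avoids quoting-injection problems that arise from building jq programs via string concatenation.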
$ seq 100000 | { head -n 4; head -n 4; }
1
2
3
4
499
3500
3501
3502
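What you're seeing is read buffering: the first head pulls a large block from the pipe, prints 4 lines, and discards the rest, so the second head resumes at whatever byte offset the pipe happens to be at. On a seekable input, GNU head puts the unused bytes back (it seeks to the logical position after line 4), so the split is clean:

```shell
#!/usr/bin/env bash
# Same two heads, but reading a regular file instead of a pipe:
# GNU head seeks back to just after line 4, so the second head
# continues exactly at line 5.
seq 100000 > /tmp/nums
{ head -n 4; head -n 4; } < /tmp/nums    # prints 1 through 8
```

Pipes aren't seekable, which is why the pipe version jumps to an arbitrary point mid-stream.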