/r/linux_programming
Everything related to GNU/Linux/UNIX/POSIX (system) programming and tools.
Hello, if possible could someone give a noob's guide on how to turn my USB stick into a way to unlock my disk password? I'm currently dual-booting Windows 11 and Qubes OS, and I want a way to get into Qubes OS while still keeping some protection on it. At the moment it takes 1-3 minutes to get something up and going on Qubes, while it would be easier to just launch Windows and get in within 20-30 seconds. I tried asking an AI to help me make the USB key, but I don't think I did it right.
Edit: Because I'm bad at writing and people can't always read what I'm typing, I asked an AI to restate the message above.
AI: The user is seeking guidance on how to use a USB stick as a key to unlock their Qubes OS disk password, aiming to streamline the boot process while maintaining security. They are currently dual-booting Windows 11 and Qubes OS but find the latter's startup time significantly longer, and their previous attempt to set up a USB key using AI assistance was unsuccessful.
I write software to monitor the health of computer systems, and I now get to port this software to Linux!
On macOS, I am using the proc_pid_rusage function to get information about running processes. On Linux, I know I can get the same information by reading the /proc/${PROCESS_ID}/stat file, BUT my daemon will need to parse the text in those files to convert strings to integers (and the kernel has to convert integers to strings first!). Is there a more direct API I can call on Linux to access process stats from within (for example) C code? What does top do on Linux?
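For what it's worth, top itself just parses /proc; the closest thing to a binary per-process API is the netlink taskstats interface, which takes considerably more setup. The parsing is less painful than it sounds. A minimal sketch, assuming you want utime/stime (fields 14 and 15 per proc(5)); read_proc_times is a hypothetical helper name:

#include <stdio.h>
#include <string.h>
#include <sys/types.h>

/* Returns 0 on success. utime/stime are in clock ticks; divide by
   sysconf(_SC_CLK_TCK) for seconds. */
int read_proc_times(pid_t pid, unsigned long *utime, unsigned long *stime)
{
    char path[64], buf[1024];
    snprintf(path, sizeof path, "/proc/%d/stat", (int)pid);

    FILE *fp = fopen(path, "r");
    if (!fp)
        return -1;
    size_t n = fread(buf, 1, sizeof buf - 1, fp);
    fclose(fp);
    buf[n] = '\0';

    /* Field 2 (comm) may itself contain spaces and ')', so scan from
       the last ')'; the fields after it start with state (field 3). */
    char *p = strrchr(buf, ')');
    if (!p)
        return -1;
    if (sscanf(p + 2, "%*c %*d %*d %*d %*d %*d %*u %*u %*u %*u %*u %lu %lu",
               utime, stime) != 2)
        return -1;
    return 0;
}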
Hi, I'm looking for a way to receive a message or notification on Android from a Debian server.
Example:
I need a script that checks whether a file exists in a directory and, if it does, sends me a notification on Android.
Tox seems like the best option; being P2P, it goes directly. But I can't find an app in apt that supports working with commands or scripts. I tried toxic, but it doesn't work for automating a message.
It's basically about receiving a "yes" or "no" message on Android from Debian using tools available in the official Debian Stable and F-Droid repositories.
Thanks
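Setting the transport question aside, the detection half of this doesn't have to poll. A hedged sketch of the file-watching part in C using inotify, in case it helps frame what the script needs to do; the directory /srv/watched and the printf hand-off are placeholders for whatever actually sends the message:

#include <stdio.h>
#include <sys/inotify.h>
#include <unistd.h>

int main(void)
{
    int fd = inotify_init1(0);
    if (fd < 0) { perror("inotify_init1"); return 1; }

    /* Watch the directory for files being created or moved in. */
    if (inotify_add_watch(fd, "/srv/watched", IN_CREATE | IN_MOVED_TO) < 0) {
        perror("inotify_add_watch");
        return 1;
    }

    char buf[4096] __attribute__((aligned(__alignof__(struct inotify_event))));
    ssize_t len = read(fd, buf, sizeof buf);  /* blocks until an event arrives */
    for (char *p = buf; p < buf + len; ) {
        struct inotify_event *ev = (struct inotify_event *)p;
        if (ev->len > 0)
            printf("yes: %s appeared\n", ev->name);  /* hand off to the notifier here */
        p += sizeof(*ev) + ev->len;
    }
    return 0;
}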
I've been programming in Linux for years and always tacitly assumed the .tv_sec field from clock_gettime(CLOCK_REALTIME) was exactly equivalent to the value returned by time(). When some code of mine started acting oddly I determined it was because the code had made that assumption; but they are consistently different by one second. Yet both are described as seconds since Epoch.
My approach for ages has been to call time() and feed that to localtime() and now you know what time it is. But now I have two clocks so I don't know what time it is. There are situations where I really want clock_gettime() for the nanosecond field, but still need to produce a correct localtime() result.
Can someone explain best practice?
EDIT: for the curious:
Linux lachesis 4.15.0-99-generic #100-Ubuntu SMP Wed Apr 22 20:32:56 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux
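One plausible explanation on a kernel of that vintage: time() is served by the kernel's coarse, tick-granularity clock, so right around a second boundary it can still report the previous second while CLOCK_REALTIME has already rolled over. The practical fix is to take a single clock_gettime(CLOCK_REALTIME) reading and derive both the calendar time and the nanoseconds from it, never mixing it with a separate time() call. A minimal sketch:

#include <stdio.h>
#include <time.h>

int main(void)
{
    struct timespec ts;
    if (clock_gettime(CLOCK_REALTIME, &ts) != 0) {
        perror("clock_gettime");
        return 1;
    }

    /* Derive the calendar time from the same reading that supplies
       the nanoseconds, so the two can never disagree. */
    struct tm tm;
    localtime_r(&ts.tv_sec, &tm);

    char buf[32];
    strftime(buf, sizeof buf, "%Y-%m-%d %H:%M:%S", &tm);
    printf("%s.%09ld\n", buf, ts.tv_nsec);
    return 0;
}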
Ok so I have this status-script written in python for i3bar that, among other things, checks the network usage.
It uses the following function to look that up:
from time import sleep, time_ns


def network_usage(interface_name: str) -> str:
    def read_counters() -> tuple[int, int, int]:
        # The kernel keeps cumulative byte counters per interface.
        with open(f"/sys/class/net/{interface_name}/statistics/rx_bytes") as fp_rx, \
             open(f"/sys/class/net/{interface_name}/statistics/tx_bytes") as fp_tx:
            return time_ns(), int(fp_rx.read()), int(fp_tx.read())

    pol1_ts, rx_bytes_old, tx_bytes_old = read_counters()
    sleep(0.25)
    pol2_ts, rx_bytes_new, tx_bytes_new = read_counters()

    elapsed_s = (pol2_ts - pol1_ts) / 10**9
    # Note: these are bytes per second; anything that displays bits per
    # second will show roughly 8x these numbers.
    rx_bytes_ps: float = (rx_bytes_new - rx_bytes_old) / elapsed_s
    tx_bytes_ps: float = (tx_bytes_new - tx_bytes_old) / elapsed_s

    def byte_parser(x: float) -> str:
        if x < 1000:
            return f"{round(x)} BPS"
        if x < 1000**2:
            return f"{round(x / 1000)} KBPS"
        if x < 1000**3:
            return f"{round(x / 1000**2)} MBPS"
        return f"{round(x / 1000**3)} GBPS"

    return f"in {byte_parser(rx_bytes_ps)} - out {byte_parser(tx_bytes_ps)}"
Now, what I noticed is that the output of this function is always about a factor of 10 off from the network usage Steam shows (e.g. if my script outputs 3 MBPS, Steam shows something like 27.8 MBps).
So is my math just wrong? (If so, how exactly? Because I'm stuck.) Or do the [r|t]x_bytes counters work differently than this function assumes?
Sorry for the noobish question, thanks in advance!
(if it matters I'm using Debian 12.something)
Hi, I want to call a function that lives in a proprietary module from a GPL module.
If I do EXPORT_SYMBOL(function) in the proprietary module and call that function from the GPL module, I get the error below:
<GPL_file>: module using GPL-only symbols uses symbols from proprietary module <proprietary_module_name>
I'm looking for any legal way to access the function from the GPL module.
Thanks in advance.
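As far as I can tell, this is modpost working as designed: a module that imports GPL-only symbols may not also import symbols from a non-GPL module, and using EXPORT_SYMBOL rather than EXPORT_SYMBOL_GPL in the proprietary module doesn't change that. What the check keys on is the exporter's MODULE_LICENSE declaration. A minimal sketch of the exporting side (my_shared_function is a made-up name):

#include <linux/init.h>
#include <linux/module.h>

/* Hypothetical function to share with other modules. */
int my_shared_function(int arg)
{
	return arg * 2;
}
EXPORT_SYMBOL(my_shared_function);

static int __init demo_init(void) { return 0; }
static void __exit demo_exit(void) { }
module_init(demo_init);
module_exit(demo_exit);

/* modpost ties the exports to this declaration: with a non-GPL string
 * here, any module that also uses GPL-only symbols refuses to link
 * against them. */
MODULE_LICENSE("GPL");

So the clean options are relicensing the exporting module under a GPL-compatible license, or moving the function into a GPL module; schemes that merely defeat the license check are generally regarded as circumventing the GPL rather than a legal way around it.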
What am I doing wrong?
struct can_filter g_socket_can_filter[CAN_MAX_FILTER_INDICES];
size_t g_socket_can_filter_count;  /* number of slots actually populated */

void
socket_can_filter_set
(uint8_t index, struct can_filter *filter)
{
  if (CAN_MAX_FILTER_INDICES > index)
  {
    g_socket_can_filter[index] = *filter;
    if ((size_t)index + 1u > g_socket_can_filter_count)
      g_socket_can_filter_count = (size_t)index + 1u;
    /* Filters are OR'd together, and an all-zero entry (can_mask == 0)
       matches every frame, so pass only the populated slots rather
       than the whole array. */
    (void)setsockopt(g_socket_can_file_descriptor, SOL_CAN_RAW, CAN_RAW_FILTER,
                     g_socket_can_filter,
                     g_socket_can_filter_count * sizeof(struct can_filter));
  }
  return;
}
I can confirm that socket_can_filter_set() is being called with the appropriate parameters to filter for extended IDs of 0x10 with a mask of 0x3F. But then, when I send traffic for extended ID 0x0A on the bus, that traffic makes it past this filter. This filter is the only thing in g_socket_can_filter[]; despite the array having CAN_MAX_FILTER_INDICES elements, only element 0 is in use. Is there a default-drop behaviour I should be setting somewhere?
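Two things seem worth checking here. First, the original setsockopt call handed the kernel the entire array: SocketCAN filters are OR'd together, and an all-zero entry (can_mask == 0) matches every frame, so the unused slots turned the filter into accept-all; the version above now passes only the populated slots. Second, for extended-frame matching the filter itself likely needs CAN_EFF_FLAG in both fields, since the kernel compares (frame->can_id & can_mask) == (can_id & can_mask). A sketch using the values from the post (install_extended_filter is a made-up wrapper):

#include <linux/can.h>

void install_extended_filter(void)
{
    /* Without CAN_EFF_FLAG in both fields, the EFF bit is masked out
       and standard frames with the same low ID bits match as well. */
    struct can_filter filter = {
        .can_id   = 0x10 | CAN_EFF_FLAG,
        .can_mask = 0x3F | CAN_EFF_FLAG,
    };
    socket_can_filter_set(0, &filter);
}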
I am writing an application overlay, a program that is mostly transparent and mostly unclickable. I have tested it a bunch on x11/gnome but users report errors to me with other desktop environments / display servers / compositors / window managers. It seems to be because each one of those is configured differently, especially the wayland compositors. Does anyone have experience writing automated tests for situations like this?
Hi all, I'm a comp sci student with most of my experience in C++ but a decent amount of experience with the command line, and I'm looking to get into shell scripting to learn software development better, especially at a root level.
Any advice or resources I can use to learn more would be appreciated!
TLDR: looking to learn shell scripting, unsure where to start
Attempting to increase the volume level of samples produces clipping before it produces audio as loud as VLC or Audacity. I have to turn system volume to 100% (from 30%) to hear anything.
I know there's a multiplication, but also some technique to avoid clipping. My samples come from a 16-bit PCM WAV file. It's only a problem with my app. VLC behaves properly.
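A common culprit: if the gain multiply happens in 16-bit arithmetic, anything past ±32767 wraps around, which sounds like harsh distortion long before the signal is actually loud; players like VLC typically mix in floating point and only convert at the end. A minimal sketch, assuming int16 PCM: widen, scale, then saturate (apply_gain is a made-up helper):

#include <stddef.h>
#include <stdint.h>

/* Scale 16-bit PCM samples by `gain`, saturating instead of wrapping.
   Doing the multiply in a wider type avoids wrap-around distortion;
   the clamp then bounds genuine overshoot. */
static void apply_gain(int16_t *samples, size_t count, float gain)
{
    for (size_t i = 0; i < count; i++) {
        int32_t v = (int32_t)((float)samples[i] * gain);
        if (v > INT16_MAX) v = INT16_MAX;
        if (v < INT16_MIN) v = INT16_MIN;
        samples[i] = (int16_t)v;
    }
}

A hard clamp still distorts once samples genuinely exceed full scale, so past a certain gain you need a limiter or overall normalization rather than a bigger multiplier.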
The rootfs was working fine with the previously used uImage.
U-Boot 2018.09-00002-g0b54a51eee (Sep 10 2018 - 19:41:39 -0500), Build: jenkins-github_Bootloader-Builder-65
CPU : AM335X-GP rev 2.1
I2C: ready
DRAM: 512 MiB
No match for driver 'omap_hsmmc'
No match for driver 'omap_hsmmc'
Some drivers were not found
Reset Source: Power-on reset has occurred.
RTC 32KCLK Source: External.
MMC: OMAP SD/MMC: 0, OMAP SD/MMC: 1
Loading Environment from EXT4...
** Unable to use mmc 0:1 for loading the env **
Board: BeagleBone Black
<ethaddr> not set. Validating first E-fuse MAC
BeagleBone Black:
BeagleBone: cape eeprom: i2c_probe: 0x54:
BeagleBone: cape eeprom: i2c_probe: 0x55:
BeagleBone: cape eeprom: i2c_probe: 0x56:
BeagleBone: cape eeprom: i2c_probe: 0x57:
Net: eth0: MII MODE
cpsw, usb_ether
Press SPACE to abort autoboot in 2 seconds
board_name=[A335BNLT] ...
board_rev=[00C0] ...
switch to partitions #0, OK
mmc0 is current device
SD/MMC found on device 0
switch to partitions #0, OK
mmc0 is current device
Scanning mmc 0:1...
Found /extlinux/extlinux.conf
Retrieving file: /extlinux/extlinux.conf
119 bytes read in 2 ms (57.6 KiB/s)
1: Yocto
Retrieving file: /uImage
5334112 bytes read in 336 ms (15.1 MiB/s)
append: root=PARTUUID=a607f020-02 rootwait console=ttyS0,115200
Retrieving file: /am335x-boneblack.dtb
67160 bytes read in 6 ms (10.7 MiB/s)
Image Name: Linux-6.11.0-04557-g2f27fce67173
Created: 2024-09-18 10:21:14 UTC
Image Type: ARM Linux Kernel Image (uncompressed)
Data Size: 5334048 Bytes = 5.1 MiB
Load Address: 80000000
Entry Point: 80000000
Verifying Checksum ... OK
Booting using the fdt blob at 0x88000000
Loading Kernel Image ... OK
Loading Device Tree to 8ffec000, end 8ffff657 ... OK
Starting kernel ...
Then it reboots, again and again.
I want to make a custom Linux distribution based on Debian. I am trying to build it with live-build and the Calamares installer, but there are many errors while building it. Please guide me step by step.
I want to make a cross platform drawing app that can take input from a drawing tablet, including pen pressure. Most libraries I would use for similar projects don't expose pen pressure in their APIs (SDL2, GLFW, SFML, etc.). As a result I'm considering doing window creation, OpenGL context creation, and input handling using the native platform APIs.
At this point I need to choose between using X11 or Wayland for my Linux version (I'll probably add the other eventually), and the available documentation is pushing me towards Wayland. X11 and the XInput2 extension are very poorly documented. Meanwhile, Wayland's protocols for drawing tablets are very nicely documented and well defined. The only thing keeping me from just jumping into Wayland is the number of people I could keep from using my app since (as far as I can tell) X11 is still used by the vast majority of Linux users.
Is there a better way forward? Should I start with Wayland? X11? Neither?
Vim Racer is a speed test for Vim! My goal with it is to help people learn new commands and navigate faster. It's similar to Vim golf, but the focus is speed and you can play it online.
The idea is to build something like this:
GPU virtualization that lets you run GPU apps locally while the code actually runs in the cloud, keeping your data local.
Functionality:
vGPU is a virtualization layer for a GPU. The vGPU forwards GPU (e.g. CUDA) instructions to the remote GPU-Coordinator. The GPU-Coordinator distributes the instructions to multiple real GPUs and sends the results back to the vGPU, which sends them to the local app. The advantage is your private data never leaves your network in plain form; only actual GPU instructions (CUDA instructions) are sent over the wire, encrypted with TLS.
I know it will be slow, but in cases where the data flow is small compared to processing time it could be a reasonable compromise for the security it gives you.
Also, because instructions are distributed across multiple GPUs when possible, it could in some cases offer better performance than running locally.
schema https://github.com/radumarias/rvirt-gpu/blob/main/website/resources/schema2.png
implementation ideas https://github.com/radumarias/rvirt-gpu/wiki/Implementation
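One common way to prototype such an interception layer on Linux is an LD_PRELOAD shim around the CUDA driver API: the shim sees each call before the real driver does, which is exactly where serialization to a remote coordinator would slot in. A speculative sketch that only logs cuMemAlloc_v2; the forwarding protocol, and the caveat that newer runtimes fetch entry points via cuGetProcAddress and can bypass such shims, are left aside:

#define _GNU_SOURCE
#include <dlfcn.h>
#include <stdio.h>

typedef int CUresult;                 /* the driver API's enum result, int-compatible */
typedef unsigned long long CUdeviceptr;

CUresult cuMemAlloc_v2(CUdeviceptr *dptr, size_t bytesize)
{
    static CUresult (*real)(CUdeviceptr *, size_t);
    if (!real)
        real = (CUresult (*)(CUdeviceptr *, size_t))dlsym(RTLD_NEXT, "cuMemAlloc_v2");

    fprintf(stderr, "intercepted cuMemAlloc_v2(%zu bytes)\n", bytesize);
    /* A real vGPU layer would serialize this call and ship it to the
       GPU-Coordinator here instead of invoking the local driver. */
    return real(dptr, bytesize);
}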
I need to implement dice rolls for different-sized dice that are cryptographically secure, with sampling bias eliminated. I'm using getrandom(), which draws from the same pool as /dev/urandom; both /dev/urandom and /dev/random are, from what I've read, cryptographically secure on desktop PCs. I use a threshold to get rid of sampling bias. Is this the correct way to do it?
#include <climits>
#include <stdexcept>
#include <sys/random.h>  // getrandom()

unsigned long Dice::roll(unsigned long max_value)
{
    if (max_value == 0) {
        return 0;  // no valid range if max_value is 0
    }
    // Note: max_value == ULONG_MAX would overflow `range` to 0 here.
    unsigned long random_value;
    unsigned long range = max_value + 1;
    // Reject draws at the very top of the range so that every residue
    // modulo `range` is equally likely (removes modulo bias).
    unsigned long threshold = ULONG_MAX - (ULONG_MAX % range);

    do {
        ssize_t result = getrandom(&random_value, sizeof(random_value), 0);
        if (result != sizeof(random_value)) {
            // Handle error, for example, by throwing an exception
            throw std::runtime_error("Failed to get random value");
        }
    } while (random_value >= threshold);

    return random_value % range;  // add one when used for 1 to n size rolls
}
Not because of GCC itself, but because my build hard-codes include paths to GCC's headers. Every time a new version of GCC appears, I have to manually update the header path in every project. The same also happens where clang searches for headers for completion, symbol lookup, and so on.
For example, today GCC changed from 14.1.1 to 14.2.1, and the path to its headers changed with it. Now all my builds fail unless I change header paths in several places. Is there some way to avoid this? Can I figure out where the current GCC headers are in an automated way?
After a one-year refactor, the new CrossDB is born and open sourced. More features will be added.
Source Code
https://github.com/crossdb-org/CrossDB
Document
A proof-of-concept log monitoring solution built with a microservices architecture and containerization, designed to capture logs from a live application acting as the log simulator. This solution delivers actionable insights through dashboards, counters, and detailed metrics based on the generated logs. Think of it as a very lightweight internal tool for monitoring logs in real time. All the core infrastructure (e.g. ECS, ECR, S3, Lambda, CloudWatch, subnets, VPCs, etc.) is deployed on AWS via Terraform.
There are some Linux internals/deployment specifics within the ECS module of the Terraform config in the project's GitHub repo below... if any of you want to take a look and provide feedback, that'd be great!
Feel free to take a look and give some feedback on the project :) https://github.com/akkik04/Trace
Hey everyone,
I’m at a crossroads and could really use some advice from this community.
I’ve been working on system tools and applications in Python for a while, but I’m realizing that I’ll eventually need to switch to a compiled language. My long-term goals involve some pretty low-level work, such as:
I’m not really into high-level stuff—it doesn’t appeal to me as much as getting deep into the system does.
Here’s where I’m stuck: I’m trying to choose the right programming language for these tasks, but I’m torn between a few options:
I’d appreciate any suggestions or insights from those of you who have experience in these areas. What would you recommend based on my goals? Any resources would be super helpful.
I recently bought a Nineplus AX1800 USB 3.0 adapter that runs a Mediatek MT7961u chipset. I've read on several different forums that people have been able to run this chipset on Linux, but somehow I'm not installing these drivers correctly, and I think it has to do with following different directions from 30 different people and not quite comprehending everything. I'm not new to Linux, but I'm no Linux wizard. I have the firmware on a USB drive, so I'd be elated and forever grateful if anyone runs this chipset and doesn't mind helping me out.
I'm running a Lenovo Ideapad 330s, dual-booting Win 11 and Parrot OS 6.2 Security Edition (lorikeet)
There may or may not be a ZJ in it for you.
I have the following rules in a wireguard docker container:
docker exec wireguard sh -c "
# Clear existing rules
iptables -F
iptables -t nat -F
iptables -X
iptables -t nat -X
# Set up new rules
iptables -t nat -A PREROUTING -d ${WIREGUARD_IP} -j DNAT --to-destination 10.10.10.2
iptables -t nat -A POSTROUTING -s 10.18.0.0/16 -o wg0 -j MASQUERADE
iptables -t nat -A POSTROUTING -s 10.10.10.0/24 -o eth0 -j MASQUERADE
iptables -A FORWARD -i eth0 -o wg0 -j ACCEPT
iptables -A FORWARD -i wg0 -o eth0 -j ACCEPT
# Ensure IP forwarding is enabled
echo 1 > /proc/sys/net/ipv4/ip_forward
"
The container eth0 is at 10.18.0.2. The wireguard interface wg0 is at 10.10.10.1. Data is forwarded from eth0 to wg0 and I see it on the client side.
Data received by the wireguard container (10.18.0.2) can come from various containers at 10.18.1.0, 10.18.2.0, etc. The ports, however, will be unique, which is key for my application. On the client side, I only care about the ports. When the client-side app responds, though, it sends the reply to the wireguard connection with the correct port, but the IP needs to be switched back to the correct container (10.18.1.0, 10.18.2.0, etc.). How can I achieve this, and is it possible? Thanks.
I wrote some end-to-end tests for a microservice we are developing. I am trying to use Testcontainers to launch a container from inside a container, to spin up a test database and run the tests.
The thing is, Windows makes things like this so, so difficult. It is mind-blowing how much harder something can get just because I am on Windows.
I really feel like I am fighting with my laptop constantly through the whole dev cycle. I am getting tired of it.
Should I just dump Windows and fully embrace Linux?
Hi, I need some help. I've noticed that you can set a startup password just for the Linux terminal, to be able to use commands. Does anyone know how?