/r/DSP
Anything related to digital signal processing (DSP), including image and video processing (papers, books, questions, hardware, algorithms, news, etc.)
Interesting technical papers or articles are particularly welcome!
I’m an undergraduate CS student and would like to learn more about the fundamentals of DSP.
Hi guys,
I have an Excel sheet from a vibration monitor with columns of timestamps and particle velocities. I want to perform an FFT to get the data as frequencies and amplitudes. I have tried using Excel packages and also coding it in Python to compute and plot the FFT, but I can't make any sense of the results. Am I trying to do something impossible here because vibration signals include so much noise? Thanks in advance for any help and replies.
Best regards
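For reference, here is a minimal Python sketch of the usual recipe, assuming the sheet has columns named "time" (in seconds) and "velocity" (hypothetical names) and roughly uniform sampling:

```python
# Minimal sketch: FFT of evenly sampled vibration data.
# Assumes columns "time" (seconds) and "velocity" (hypothetical names).
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt

df = pd.read_excel("vibration.xlsx")          # hypothetical file name
t = df["time"].to_numpy()
v = df["velocity"].to_numpy()

dt = np.mean(np.diff(t))                      # sample period; the FFT assumes uniform sampling
v = v - np.mean(v)                            # remove DC so it doesn't dominate the plot

spectrum = np.fft.rfft(v)
freqs = np.fft.rfftfreq(len(v), d=dt)
amplitude = 2.0 * np.abs(spectrum) / len(v)   # single-sided amplitude scaling

plt.plot(freqs, amplitude)
plt.xlabel("Frequency (Hz)")
plt.ylabel("Amplitude")
plt.show()
```

If the timestamps are not evenly spaced, a plain FFT will give misleading results; resampling onto a uniform grid first is the usual fix.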
I've been trying to research how to literally hook up the GPI/O on a DSP and start using it, but there are no videos about it, or even a name for the cables that are used to hook up the GPI/O ports on a DSP. I feel like I am missing something obvious, any help?
On a Blu-100 DSP: https://bssaudio.com/en-US/products/blu-100#product-thumbnails-2
On the back there are logic inputs and outputs. What kind of wires are those? Are they just regular power wires, or some special connector?
I made a spectrum analyzer with C# and recently created a repo on github. I wonder if this will be useful in your workflow.
Feel free to give feedback, fork, modify, or use any part of the code.
Hello!
I wanted to learn more about DSP for audio, so I worked on implementing DSP algorithms running in real-time on an FPGA. For this learning project, I have implemented a flanger and a pitch shifter. In the video, you can see and hear both the flanger and pitch shifter in action.
With white noise as input, it is noticeable that flanging creates peaks and valleys in the spectrum. In the PYNQ Jupyter notebook, the delay length and oscillator period are changed over time.
The pitch shifter is a bit trickier to get to sound right, and there is plenty of room for improvement. I implemented it in the time domain using a delay line and varying the delay over time, which shifts the pitch via the Doppler effect. However, since the delay line is finite, reaching its end causes an abrupt jump back to the beginning, leading to distortion. To mitigate this, I used two read pointers at different locations in the delay line and cross-faded between the two taps. I experimented with various types of cross-fading (linear, energy-preserving, etc.), but the distortion and clicking remained audible.
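For anyone curious, here is a rough Python sketch of the two-read-pointer idea described above (this is not the FPGA implementation, just the concept; the window length and triangular fade are arbitrary choices):

```python
# Rough sketch: time-domain pitch shifter with two read pointers on a delay line,
# cross-faded so each pointer is silent at the moment it wraps.
import numpy as np

def pitch_shift(x, ratio, win=2048):
    """ratio > 1 shifts up, < 1 shifts down; win is the sweep length in samples."""
    y = np.zeros(len(x), dtype=float)
    rate = 1.0 - ratio          # how fast the read delay drifts per output sample
    phase = 0.0
    for n in range(len(x)):
        d1 = phase % win                      # delay of tap 1
        d2 = (phase + win / 2) % win          # tap 2, half a sweep later
        g1 = 1.0 - abs(2.0 * d1 / win - 1.0)  # triangular fade, zero at the wrap points
        g2 = 1.0 - abs(2.0 * d2 / win - 1.0)
        i1 = int(n - d1) if n - d1 >= 0 else 0
        i2 = int(n - d2) if n - d2 >= 0 else 0
        y[n] = g1 * x[i1] + g2 * x[i2]
        phase += rate
    return y
```

For example, pitch_shift(x, 2 ** (4 / 12)) gives roughly a +4 semitone shift.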
The audio visualization, shown on the right side of the screen, is built with the Plotly/Dash framework; I wanted the plots to be interactive (zooming in, changing axis ranges, etc.), which it handles well.
For this project, I am using a PYNQ-Z2 board. One of the major challenges was rewriting the VHDL code for the I2S audio codec. The original design mismatched the sample rate (48 kHz) and the LRCLK (48.828125 kHz), leading to a duplicated sample roughly every 58 samples. I don't know whether this was an intentional design choice or a bug. This mismatch caused significant distortion; I measured an increase in THD by a factor of 20, so it was worth fixing. Doing so required completely changing the design, defining a separate clock for the I2S part, and doing a clock domain crossing between the AXI and I2S clocks.
I understand that dedicated DSP chips are more efficient and better suited for these tasks, and an FPGA is overkill. However, as a learning project, this gave me valuable insights. Let me know if you have any thoughts, feedback, or tips. Thanks for reading!
Hans
I am currently making a pulse-doppler radar system in matlab similar to this except I'm doing most of it from scratch: https://www.mathworks.com/help/phased/ug/doppler-shift-and-pulse-doppler-processing.html
I used a sine wave sin(2*pi*f*t) as the transmitted signal and added white Gaussian noise to it using awgn. Using a periodogram on the noiseless version of the signal recovers the signal's Doppler frequency, so I can calculate the velocity. But whenever I add noise, the output becomes unpredictable. To deal with this, I am using a matched filter, and it accurately finds the range of the object the radar has detected from the transmitted signal.
My problem is following the matched filter with the periodogram using the range that was detected, just like in the link I sent. Using the range bins within the signal's pulse width, the periodogram puts the largest power at 0 Hz. If I go outside of this pulse width, the periodogram just returns a flat line. I need the maximum power in the periodogram to land at the object's Doppler frequency, but I don't know how to make the received signal reflect that.
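In case a reference point helps, here is a small self-contained Python sketch (not MATLAB, with made-up PRF/target numbers, and assuming complex I/Q matched-filter output) of the usual slow-time step: take the detected range bin across all pulses and run the spectrum there, sampled at the PRF. Subtracting the mean of that slow-time vector is one common way to stop stationary returns from pinning the peak at 0 Hz.

```python
# Sketch of pulse-Doppler slow-time processing: one range bin across pulses.
import numpy as np

prf = 10e3                    # pulse repetition frequency (assumed)
num_pulses = 64
fd_true = 1.2e3               # Doppler of the simulated target

# Fake matched-filter output: num_pulses x num_range_bins, target in bin 37
rng = np.random.default_rng(0)
rx = 0.1 * (rng.standard_normal((num_pulses, 100)) + 1j * rng.standard_normal((num_pulses, 100)))
pulse_idx = np.arange(num_pulses)
rx[:, 37] += np.exp(1j * 2 * np.pi * fd_true * pulse_idx / prf)

range_bin = np.argmax(np.abs(rx).sum(axis=0))              # detected range bin
slow_time = rx[:, range_bin] - rx[:, range_bin].mean()     # remove DC / stationary clutter

spec = np.fft.fftshift(np.fft.fft(slow_time))
fd_axis = np.fft.fftshift(np.fft.fftfreq(num_pulses, d=1.0 / prf))
print("estimated Doppler:", fd_axis[np.argmax(np.abs(spec))], "Hz")
```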
Hello, greetings to everyone.
I am a sound engineer, and I’m passionate about audio equipment, especially Eurorack systems, effects gear, and synthesizers. As a hobby, I would love to design my own hardware, both analog and digital. I have studied many concepts related to this, such as microcontrollers, DSP, electronics, and programming, but all in a very general way. I would like to ask for recommendations on courses, books, or tools to help me get started. Thank you!
I've been researching and have discovered Daisy as a foundation to start with, along with STM microcontrollers. However, I'd like to go deeper and truly understand this field. I need help organizing all these ideas and figuring out where to start.
Would somebody be able to help explain why there is still a tone at the fundamental after frequency mixing? The 10-bit quantized signal is mixed with a floating-point tone, both at the same frequency of 2.11 MHz. After mixing, there is the tone at 2*fin = 4.22 MHz, DC content, and some residual remaining at the fundamental of 2.11 MHz.
Edit: why is the uploaded image being removed?
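For what it's worth, a quick numerical sketch of the identity involved: multiplying two tones at the same frequency gives DC plus a 2*fin term, and a small DC offset on the quantized input (one possible culprit, purely an assumption on my part) leaks a residual back at the fundamental:

```python
# cos(a)*cos(a) = 1/2 + 1/2*cos(2a); a DC offset on one input leaks back to fin.
import numpy as np

fs, fin, n = 100e6, 2.11e6, 10000          # 211 whole cycles in 10000 samples (coherent)
t = np.arange(n) / fs
tone = np.cos(2 * np.pi * fin * t)

quantized = np.round(tone * 511) / 511     # crude 10-bit quantizer
quantized_offset = quantized + 1e-3        # same, plus a tiny DC offset

def level(x, f):
    """Single-bin DFT magnitude at frequency f (rough, unwindowed)."""
    return abs(np.sum(x * np.exp(-2j * np.pi * f * t))) / len(x)

for name, q in [("no offset", quantized), ("with DC offset", quantized_offset)]:
    mixed = q * tone
    print(name, "| DC:", round(level(mixed, 0), 4),
          "| fin:", round(level(mixed, fin), 6),
          "| 2*fin:", round(level(mixed, 2 * fin), 4))
```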
Pretty much the title. I understand that impulse response files are typically only 2 kilobytes or so, but in something like a digital synthesizer the filter section usually has two or three variable parameters -- cutoff frequency, resonance, and sometimes cutoff slope as well -- which could add up if there's a unique impulse response file for every change in the parameters. I suspect that a limited number of impulse response files could be used to create a deeper resolution of parameter changes by implementing a weighted spectral morph between two impulse response files before convolving the audio signal though. But maybe I'm waaay off with either approach and something else entirely is employed?
If it helps at all, I was looking at this page's FIR tools when I started wondering about this.
Also, if anyone has any recommendations for books, terminology, etc. to look into I'd appreciate it.
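To make the morphing idea concrete, here is a small Python sketch of what was described above: blend two stored FIR impulse responses by a weight and convolve (just an illustration of the idea, not a claim that synths actually do it this way):

```python
# Sketch: weighted blend of two stored FIR impulse responses, then convolution.
import numpy as np
from scipy.signal import fftconvolve, firwin

fs = 48000
ir_a = firwin(257, 500, fs=fs)       # "stored" low-pass IR, cutoff 500 Hz
ir_b = firwin(257, 2000, fs=fs)      # "stored" low-pass IR, cutoff 2 kHz

def morphed_filter(x, weight):
    """weight in [0, 1]: 0 -> ir_a, 1 -> ir_b, in between a linear blend."""
    ir = (1.0 - weight) * ir_a + weight * ir_b
    return fftconvolve(x, ir, mode="same")

noise = np.random.default_rng(0).standard_normal(fs)
y = morphed_filter(noise, 0.3)
```

One caveat: linearly blending two impulse responses blends the two frequency responses, which is not the same as smoothly sweeping the cutoff between them, so the in-between settings may not behave like a true intermediate filter.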
Hello! I recently got my hands on a Playstation eye which has a linear 4-microphone array.
I want to try to use it to learn some Beamforming and DOA estimation, but I have no clue about the microphone spacing. Does anybody here have any information about it?
I'm doing some sound programming in C and can't wrap my head around how to do sample rate conversion. I'm trying to convert a 44100 Hz signal into a 48000 Hz signal. I feel like I'm getting really close, but I get a lot of noise.
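As a reference point (in Python rather than C, just to sanity-check results against): 48000/44100 reduces to 160/147, so a polyphase resampler upsamples by 160, low-pass filters, and downsamples by 147. The noise you hear is often the missing anti-imaging/anti-aliasing low-pass in a hand-rolled converter.

```python
# Rational-ratio resampling 44100 -> 48000 with a polyphase filter.
import numpy as np
from scipy.signal import resample_poly

fs_in, fs_out = 44100, 48000
t = np.arange(fs_in) / fs_in
x = np.sin(2 * np.pi * 440.0 * t)          # one second of a 440 Hz test tone

y = resample_poly(x, up=160, down=147)     # the low-pass interpolation filter is built in
print(len(x), "->", len(y))                # 44100 -> 48000 samples
```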
I've currently built an OFDM system in MATLAB that can transmit bits over an audio channel (at the Tx I export a .wav file, play it on a speaker, record it with my phone, and send a .wav file back to the Rx). I've used a bunch of standard OFDM techniques: synchronization, 8-PSK, pilot signaling, etc.
How could I extend this design using a microcontroller and RF transceiver? I want to get experience implementing this in C/C++ and working over a more precise channel.
Hello! I have 4 short recorded impact sounds (about 0.20 seconds each) and I would like to perform spectral analysis on them to compare and contrast the clips. I know only the bare minimum of digital signal processing and am kind of lost on how to do this aside from making a spectrogram. What do I do after that? How do I actually go about it? The analysis doesn't have to be too deep, but I should be able to tell whether two sounds are more similar to or different from each other. Any Python libraries, resources, or advice? I'm not sure where to even start or what I need to code. I would like to use Python for this analysis.
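One simple way to get started, as a sketch with assumed file names: compare the averaged magnitude spectra of the clips with a cosine similarity.

```python
# Sketch: compare impact sounds by the shape of their averaged power spectra.
import numpy as np
import soundfile as sf
from scipy.signal import welch

files = ["impact1.wav", "impact2.wav", "impact3.wav", "impact4.wav"]  # hypothetical names
spectra = []
for path in files:
    x, fs = sf.read(path)
    if x.ndim > 1:
        x = x.mean(axis=1)                   # mix down to mono
    f, pxx = welch(x, fs=fs, nperseg=1024)   # averaged power spectrum
    spectra.append(pxx / np.linalg.norm(pxx))

# Pairwise cosine similarity: closer to 1 means more similar spectral shape
for i in range(len(files)):
    for j in range(i + 1, len(files)):
        print(files[i], "vs", files[j], round(float(np.dot(spectra[i], spectra[j])), 3))
```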
Hello, I'm currently an undergraduate computer engineering student and I'm interested in becoming a digital signal processing engineer. As the choice for my master's approaches, I'm wondering which master's program I should go for. The university I'm attending and plan on pursuing my master's at has several programs, and I think I've narrowed it down to either their Signal Processing & Machine Learning track or their Embedded Systems track. My university also has a communications master's, but it has a lot of focus on analog, so I've dismissed it.
The course overview for the Signal Processing track doesn't really seem to have anything specifically targeted at digital signal processing. So my uncertainty comes from the fact that I've heard several times that a DSP engineer with good hardware skills is highly valued, particularly in the context of implementing DSP algorithms on an FPGA. The Embedded Systems track has a lot of focus on FPGA programming but doesn't touch on signal processing at all. I can take 3 signal processing classes as my electives, but I'm also interested in learning about AI and implementing it on an FPGA for things like processing EEG headset data and other bio-signals.
Looking at these tracks, what would you recommend in this context, and what should I spend time learning on my own outside of school if I go with one option or the other? Or should I just find a different university that has a more targeted master's program? I'm open to the idea of transferring, but I'm struggling to find a more targeted program, and there are a handful of small-ish reasons why it may be preferable to stay at my current university.
Also, slightly tangential, but what are some good projects/project areas that an ambitious computer engineer undergrad who is comfortable programming can pursue that would look great on their resume in the context of DSP positions and internships?
I am pursuing my BE in Electronics and Communication and am a newbie to signal processing. It seems really interesting and I want to get deeper into it. Can I get suggestions for some good beginner-friendly resources and advice to start with signal processing?
Also, what are the career options in this domain?
A common question for younger engineers is: which DSP classes should I take? I wrote a blog post, with an emphasis on an RF career path, in an attempt to help answer that question. I describe classes to take and decisions to make at the undergraduate and graduate levels. The short version is that the later you are in your schooling, the more flexibility you will have in choosing courses. It is also worth noting that I have a personal bias towards algorithm design and software implementation rather than hardware. I hope this helps answer some questions.
https://www.wavewalkerdsp.com/2024/11/01/what-dsp-classes-should-i-take/
Hi.
I am doing a project mostly for learning. I want to use Python to detect some power quality parameters, but then I came across the topic of transients.
This is from Fluke:
"What are voltage transients? A transient voltage is a temporary unwanted voltage in an electrical circuit that range from a few volts to several thousand volts and last micro seconds up to a few milliseconds"
I have some questions.
First about the electrical implementation of these devices:
1) How fast does the sampling rate on power quality monitoring devices need to be to capture transients?
2) How do the devices protect themselves from the high voltages induced by transients?
Second, about the algorithm I want to implement:
1) Is there any way to get real-time logs from power quality meter systems without owning such a device?
2) If it is not possible to get logs, what is the best way to simulate voltage and current signals with common power disturbances? (See the sketch below.)
3) What is the minimum amount of data suggested to start processing (half cycle, one cycle, etc.)?
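On question 2, here is a minimal Python sketch (all parameters assumed) that synthesizes a 50 Hz voltage with a sag and an oscillatory transient, the kind of test signal a detector could be fed while you don't have real logs:

```python
# Sketch: synthetic 50 Hz mains voltage with a sag and an oscillatory transient.
import numpy as np

fs = 100_000                     # 100 kHz sampling (assumed; fast enough for ms-scale events)
t = np.arange(0, 1.0, 1 / fs)    # one second
v = 230 * np.sqrt(2) * np.sin(2 * np.pi * 50 * t)   # nominal 230 Vrms, 50 Hz

# Voltage sag: 40 % drop between 0.30 s and 0.45 s
sag = np.where((t >= 0.30) & (t < 0.45), 0.6, 1.0)
v *= sag

# Oscillatory transient at 0.70 s: 2 kV peak, 5 kHz ringing, ~1 ms decay
mask = t >= 0.70
v[mask] += 2000 * np.exp(-(t[mask] - 0.70) / 1e-3) * np.sin(2 * np.pi * 5000 * (t[mask] - 0.70))
```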
Thanks.
I have specifications for an upsampling filter chain on an ASIC and need recommendations for a more efficient design approach.
The filtering happens after upsampling, with the input sampling rate of f_s. The low-pass filter requirements are:
Passband ripple: 0.01
Stopband attenuation: 86 dB
Assumptions (normalized frequencies based on the sampling frequency):
Cutoff frequency: wc = 0.6 * pi
Stopband edge: ws = 0.37 * pi
Note: wc + ws != pi
Given these constraints, using a half-band FIR filter is not optimal.
Question 1: What filter structure would be more efficient for these specifications than a half-band filter?
Question 2: Is using the least squares algorithm a good choice for calculating filter coefficients, or is there a better approach? Thanks in advance for your insights!
Question 3:
If I have a chain of upsampling filters that collectively upsample the input data by a factor of 12 in several stages, requiring the cascading of multiple upsampling filters, how can I simulate that in Python to verify if the output signal meets my requirements?
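On question 3, a rough Python sketch of how one might simulate a multi-stage interpolator and inspect the output spectrum (the 2 x 2 x 3 stage split, filter lengths, and test tones here are placeholders, not your actual specifications):

```python
# Sketch: cascade of upsampling stages (x2, x2, x3 = x12) with per-stage anti-imaging filters.
import numpy as np
from scipy.signal import firwin, upfirdn, welch

fs_in = 1.0e6
stages = [2, 2, 3]                        # overall upsampling factor 12

# Test signal with a low tone and a tone near the band edge
n = 4096
t = np.arange(n) / fs_in
x = np.sin(2 * np.pi * 0.05e6 * t) + 0.5 * np.sin(2 * np.pi * 0.18e6 * t)

fs = fs_in
y = x
for up in stages:
    # Per-stage anti-imaging low-pass: keep the original band, cutoff near the old Nyquist
    h = firwin(127, 0.9 * (fs / 2), fs=fs * up)
    y = upfirdn(up * h, y, up=up)         # scale by "up" to preserve amplitude
    fs *= up

# Inspect the output spectrum for residual images above the original Nyquist
f, pyy = welch(y, fs=fs, nperseg=2048)
print("output rate:", fs, "Hz; worst image level:",
      10 * np.log10(pyy[f > fs_in / 2].max() / pyy.max()), "dB")
```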
Long story short: my control theory professor was a grumpy douche who made me hate the subject with a passion, and I've been avoiding it like the plague ever since.
Any quick and dirty source to relearn the subject? I feel like I'm missing out on a lot of stuff.
I don't quite understand how convolving an audio buffer with an impulse response can sound so convincing and artefact-free.
As I understand it, most if not all convolution processes in audio use FFT-based convolution, meaning the frequency content of the signal is constrained to a fixed set of frequency bins. Yet this doesn't seem to come across in the sound at all.
ChatGPT suggests it's because human perception is limited enough not to notice any minor differences, but I'm not at all convinced, since FFT-processed audio reconstructions never sound quite right. Is it because it retains the phase information, or something like that?
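A quick numerical check that may help: with proper zero-padding (which overlap-add/overlap-save convolvers use), FFT-based convolution is not an approximation at all; it produces the same samples as time-domain convolution, down to floating-point rounding.

```python
# FFT convolution vs. direct convolution: identical up to numerical precision.
import numpy as np
from scipy.signal import fftconvolve, oaconvolve

rng = np.random.default_rng(0)
x = rng.standard_normal(48000)        # one second of noise as a stand-in for audio
h = rng.standard_normal(2048)         # a stand-in impulse response

direct = np.convolve(x, h)            # plain time-domain convolution
via_fft = fftconvolve(x, h)           # one big FFT
via_ola = oaconvolve(x, h)            # overlap-add, as real-time convolvers do it

print(np.max(np.abs(direct - via_fft)))   # tiny: numerically identical
print(np.max(np.abs(direct - via_ola)))
```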
Hello,
I am new to DSP and SigmaStudio and have a problem with my project.
I built a two-way speaker with a Wondom JAB 5 module.
Got it running with basic functions like a limiter, gain, EQ and so on.
I want to use 3 of the 5 possible external pots:
aux adc_0 = Volume
aux adc_1= Bass
aux adc_2 = Treble
The volume control works, but when I try to control the tone (I looked up some tutorials), the pots don't work correctly.
For example, I want to boost 100 Hz with a pot. I found this routing on the internet, but it doesn't work for me. Sounds like shit when activated, and the pot doesn't have an effect over its full turning range.
I attached some screenshots of my build. Maybe you guys can help me out :)
Hi DSP community! I am attempting to rewrite the code for a book called ThinkDSP, a book written to teach the fundamentals of DSP to Python programmers. I would like to rewrite the code and examples in Haskell (a purely functional programming language). Let me know if you are interested in contributing! I'll post the GitHub here: https://github.com/kellybrower/ThinkYouADSPForGreatGood and you can find the original post on r/haskell here: https://www.reddit.com/r/haskell/comments/1gpln0v/comment/lwuk6rp/?context=3
Does anyone know of or have a chart or diagram that shows the Fourier series, DTFT, discrete Fourier series, DFT, and z-transform, with all the definitions, how they are related, some relevant properties, and what they are used for?
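Not a chart, but a quick summary of the definitions from memory (worth double-checking against a textbook), which at least shows how they connect:

Fourier series (continuous time, periodic with period T): [; x(t) = \sum_k c_k e^{j 2\pi k t/T} ;] with [; c_k = \frac{1}{T}\int_T x(t) e^{-j 2\pi k t/T}\,dt ;] -- a periodic time signal has a discrete spectrum.

DTFT (discrete time, aperiodic): [; X(e^{j\omega}) = \sum_{n=-\infty}^{\infty} x[n] e^{-j\omega n} ;] -- a discrete time signal has a continuous spectrum that is periodic in [; \omega ;] with period [; 2\pi ;].

Discrete Fourier series / DFT (discrete and periodic, or finite length N): [; X[k] = \sum_{n=0}^{N-1} x[n] e^{-j 2\pi k n/N} ;] -- the DFT values are samples of the DTFT at [; \omega_k = 2\pi k/N ;].

z-transform: [; X(z) = \sum_n x[n] z^{-n} ;] -- the DTFT is the z-transform evaluated on the unit circle [; z = e^{j\omega} ;], and the z-transform is the tool for poles, zeros, and stability of discrete-time systems.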
Hello,
I'm looking for book recommendations to learn software-defined radio. I already have experience with SDR, but I've learned by practicing with GNU Radio. While that led me to understand which functions I should use and what I can adjust to improve performance, the theory behind many of these topics is almost a mystery to me.
And so on; I think you get the idea. I am in a strange situation where I know more than I understand, so I get the basics of DSP, but the advanced stuff is magic to me.
I'm interested in satellite communications (and especially how to develop ground segment software), so I'd like books explaining carrier synchronisation, symbol timing recovery, Viterbi decoding, maximum likelihood, residual carrier vs. suppressed carrier, all this kind of stuff.
Also, I'd love a book which summarizes the state-of-the-art for ground segment SDR. Feel free to recommend different books for this.
Note that I will experiment in MATLAB, Python, or C++ while reading this book or these books, so if there's a ton of maths, that's not a problem.
And finally, I'd welcome any other advice, especially from people who were in the same situation as me.
Hi all,
Newbie here!
I am trying to understand a paper where, for numerical stability reasons, the author approximates the derivative of a periodic function using centered finite differences:
[; \frac{\partial f}{\partial x} \approx \frac{f(x+\Delta x) - f(x-\Delta x)}{2\Delta x} ;]
In his paper, he obtains
[; \frac{\partial f}{\partial x} \approx -\sum_k \hat f(k) \exp{(-i h k x)}\frac{i\sin(hk\Delta x)}{\Delta x} ;]
with [; h = \frac{2\pi}{N} ;]
Could anyone point me to how this result is obtained?
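In case it helps: assuming the paper expands f on the grid with the synthesis convention [; f(x) = \sum_k \hat f(k) e^{-ihkx} ;] (sign conventions vary between papers, so this is a guess), substituting into the centered difference gives

[; f(x+\Delta x) - f(x-\Delta x) = \sum_k \hat f(k)\, e^{-ihkx}\left(e^{-ihk\Delta x} - e^{ihk\Delta x}\right) = -\sum_k \hat f(k)\, e^{-ihkx}\, 2i\sin(hk\Delta x) ;]

using [; e^{i\theta} - e^{-i\theta} = 2i\sin\theta ;]. Dividing by [; 2\Delta x ;] then gives exactly the quoted result.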
In an experiment I created a water wave with a single frequency and recorded the amplitude of the wave using sensors. Of course, the experimental data has noise and such. I will perform an FFT on the time history of the wave to find the peak amplitude and corresponding frequency. I will later use that peak amplitude to calculate some other value (k_i), for which I would like to make error bars. I need to know the uncertainty in the amplitude so I can propagate it to find the uncertainty in k_i.
Usually, I find the uncertainty for the amplitude by looking at the time history and computing the standard error of the mean. Then I use the mean amplitude for my later calculations. Since I am now getting this amplitude from the tallest peak of the FFT plot, where would its uncertainty come from?
I am a CS student specializing in AI, finishing my BE and my M.Sc. at the end of this year, and I am embarking on a 6-month internship at CEA Leti (Grenoble, France) for a classification project based on physiological signals. I want to know: is this internship worth taking? Will it open the path to other future jobs, and what could they be? I really want to know, because I don't know how I can use all my experience in physiological signal processing (I did 2 internships in the same domain before this one). Please leave your suggestions. What roles can I apply for specifically? Data scientist? ML engineer? Pursue a master's degree or PhD...? PS: I did some MLOps projects, but my resume mainly contains time-series-based projects.