/r/DSP
Anything related to digital signal processing (DSP), including image and video processing (papers, books, questions, hardware, algorithms, news, etc.)
Interesting technical papers or articles are particularly welcome!
I work for a company with expertise in electromagnetics physics, specifically dealing with time-domain simulation software. We routinely need to transform our results into the frequency domain, which we do by performing FFTs, but we often need to manipulate our data to get valid frequency data, and our knowledge of digital signal processing is somewhat lackluster. For example, we may want to do more with windowing to address some of the peculiarities of our simulation results, and we would like to learn more about it. Or we may need to interpolate and/or decimate very unevenly spaced data sets without accidentally messing up the frequency-domain results.
I am looking for an online educational resource that can be used for internal training. A lot of the courses I find are very theoretical. For example, this MIT course is certainly rigorous, but we probably don't need to learn the intricacies of the z-transform to properly apply a flat-top window to our data sets. On the other end of the spectrum (pun intended) are many courses on YouTube/Coursera and the like, which might touch on subjects more directly applicable to our needs, but come with a lot of uncertainty about their quality.
Could anybody recommend any course which strikes a good balance between theoretical knowledge and practical use, which can be followed online? Free or not. I can provide more details of what we are looking for if my description was too vague.
Thank you very much in advance!
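To make the windowing example concrete, here is the kind of operation we mean, sketched in Python with made-up numbers (the tone frequency, amplitude, and sample rate are all arbitrary). A flat-top window trades frequency resolution for an amplitude readout that stays accurate even when the tone falls between FFT bins:

```python
import numpy as np
from scipy.signal import get_window

fs = 1000.0                                # sample rate, Hz (made-up example value)
t = np.arange(0, 1.0, 1 / fs)
x = 2.0 * np.sin(2 * np.pi * 123.4 * t)    # amplitude-2 tone landing between bins

w = get_window("flattop", len(x))
# Coherent-gain correction so the spectral peak reads the true amplitude
cg = w.sum() / len(w)
X = np.fft.rfft(x * w) / (len(x) * cg)

peak_amplitude = 2.0 * np.abs(X).max()     # close to 2.0 despite the off-bin tone
```

With a rectangular window the same off-bin tone would read noticeably low (scalloping loss); the flat-top window's very flat main lobe is what makes it the usual choice for amplitude measurement.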
I am trying to get a visual representation of how the Doppler effect destroys the orthogonality of the subcarrier and understand this effect.
I plotted three orthogonal subcarriers in MATLAB:
st = linspace(-10,10,1000);        % time axis, -10 to 10
y1 = sinc(st);                     % subcarrier 1
y2 = sinc(st);                     % subcarrier 2 (drawn at an integer offset)
y3 = sinc(st + ((pi/2)/4000));     % subcarrier 3 with a small frequency offset
figure
hold on
plot(st+1, y1, 'r', st+2, y2, 'b', st, y3, 'r--');
grid; xlim([-8 11]);
hold off
To observe the effect of the Doppler shift, should I only add a shift for one of the subcarriers or will all subcarriers be shifted?
max_doppler_shift = 5;
fd = linspace(-max_doppler_shift, max_doppler_shift, 3);
% Plot the sinc functions with each Doppler shift applied
figure;
hold on;
for i = 1:3
    y11 = sinc(st - fd(i));
    y21 = sinc(st - fd(i));
    y31 = sinc(st + ((pi/2)/4000) - fd(i));
    plot(st+1, y11);
    plot(st+2, y21);
    plot(st, y31);
end
hold off;
grid on;
MATLAB just plots the same three sinc functions shifted along the axis; I don't see any effect of the shifting on their orthogonality.
My goal is to reproduce the following figure:
S. Ahmed and H. Arslan, "Evaluation of frequency offset and Doppler effect in terrestrial RF and in underwater acoustic OFDM systems," MILCOM 2008 - 2008 IEEE Military Communications Conference, San Diego, CA, USA, 2008, pp. 1-7, doi: 10.1109/MILCOM.2008.4753547
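The orthogonality loss in that figure can also be checked numerically rather than only visually. A Python sketch (np.sinc is the normalized sinc, matching MATLAB's sinc; the 0.1 shift is an arbitrary Doppler/CFO offset expressed as a fraction of the subcarrier spacing):

```python
import numpy as np

# Fine grid standing in for continuous time; wide so the integrals converge
t = np.linspace(-50, 50, 100001)
dt = t[1] - t[0]

s0 = np.sinc(t)                      # subcarrier at offset 0
s1 = np.sinc(t - 1)                  # neighbour at exactly one subcarrier spacing
eps = 0.1                            # Doppler shift as a fraction of the spacing
s1_shifted = np.sinc(t - 1 - eps)    # the same neighbour, frequency-shifted

ortho = np.sum(s0 * s1) * dt         # ~0: integer-spaced sincs are orthogonal
ici = np.sum(s0 * s1_shifted) * dt   # nonzero: inter-carrier interference
```

So to see the effect, it is enough to shift one subcarrier relative to the others: the inner product between a shifted and an unshifted sinc becomes nonzero, which is exactly the inter-carrier interference the paper illustrates.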
Hey there!
I am learning signal processing and currently working on a project that involves time series total electron content data extracted from GPS/GNSS measurements.
I have seen a few spikes in the signal before and after the Japan earthquake, which seems like an interesting project to work on.
Since I am completely new to DSP, I would like to know how a DSP engineer would approach the data analysis and extract meaningful insights.
This morning I calculated the continuous wavelet transform using MATLAB.
Could you suggest some other methods I could apply to this data?
I am thinking of training a NARX model on the past three months of data to see how the forecast compares with actual values.
But I am highly interested in getting suggestions from experienced people and implementing them in MATLAB or Python.
I am looking for more of a workflow
Thank you
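One concrete workflow step that often comes first is detrend-then-bandpass. A hedged Python sketch on synthetic stand-in data (the 30 s cadence, the few-mHz band, and every amplitude here are assumptions for illustration, not properties of your data set):

```python
import numpy as np
from scipy.signal import butter, filtfilt

# Synthetic stand-in for a 30 s cadence TEC series
fs = 1 / 30.0                                   # samples per second
t = np.arange(0, 6 * 3600, 30.0)                # six hours of samples
rng = np.random.default_rng(0)
trend = 10.0 + 2e-4 * t                         # slow diurnal-style drift
burst = np.where((t > 1.0e4) & (t < 1.2e4),     # hypothetical perturbation
                 0.3 * np.sin(2 * np.pi * 4e-3 * (t - 1.0e4)), 0.0)
tec = trend + burst + 0.02 * rng.standard_normal(t.size)

# Step 1: remove the slow trend (here a simple linear fit)
tec_detr = tec - np.polyval(np.polyfit(t, tec, 1), t)

# Step 2: band-pass around the few-mHz band, suppressing everything else
b, a = butter(4, [1e-3, 8e-3], btype="bandpass", fs=fs)
filtered = filtfilt(b, a, tec_detr)
```

After this, the spectrogram/CWT you already computed becomes much easier to read, because the large slow trend no longer dominates the low-frequency end.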
Hey all,
Not sure if this is the right forum for this, if you think there is a better sub out there I can delete this and post there. Thanks!
Wondering if anyone has any knowledge of the specific ways that DAWs handle digital clipping and audio bit manipulation internally. I'm working on a pet project right now that is a simple DAW with a GUI frontend and an audio engine backend. The application is able to stream in 24-bit audio (only, for now) and monitor it with VU meters. So I have a few (I think) simple questions about handling overflow and manipulating audio volume internally.
Any experience or advice anyone has related to this sort of thing would be very helpful. Thanks!
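For what it's worth, one common pattern (not necessarily what any particular DAW does) is to convert fixed-point input to floating point at the engine boundary, mix and apply gain with effectively unlimited headroom, and clip only when converting back out, so integer wrap-around never happens inside the engine. A Python sketch of that idea:

```python
import numpy as np

FULL_SCALE = float(2 ** 23)          # 24-bit signed full scale

def int24_to_float(x_int):
    """Map 24-bit signed samples to roughly [-1.0, 1.0) floats for internal mixing."""
    return np.asarray(x_int, dtype=np.float64) / FULL_SCALE

def float_to_int24(x):
    """Clip to the legal range only at the output stage, then re-quantize."""
    clipped = np.clip(x, -1.0, 1.0 - 1.0 / FULL_SCALE)
    return np.round(clipped * FULL_SCALE).astype(np.int32)

# Internal float mixing: gain can push past full scale without wrapping
a = int24_to_float([4_000_000])      # ~0.48 of full scale
loud = a * 4.0                       # exceeds the 24-bit range, floats don't care
out = float_to_int24(loud)           # hard-clips to +full-scale instead of wrapping
```

Hard clipping at the output is the simplest choice; many engines substitute a soft-clip/limiter there, but the "float inside, clip at the edge" structure is the same.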
Usually whenever I add reverb to something, I then end up EQing both the reverb and the dry channel separately, which is a pain for many reasons, including having to use busses, and making AB testing trickier.
I'm wondering if there's a plugin or just an algorithm that combines EQ and reverb. So, something where you can manipulate curves over the frequency domain, not just for dry channel amplitude, but also reverb amplitude and reverb decay time.
As far as I can see, nothing yet allows for different decay times for different frequencies.
So I'm trying to apply a Zoom FFT as described here:
https://flylib.com/books/en/2.729.1/the_zoom_fft.html
I think I get the idea: we multiply the sampled input so that its frequency spectrum is shifted, moving the frequency range of interest closer to 0 Hz; then we can apply an FFT to the shifted (and filtered) data, and since the range we care about is now near 0 Hz, we don't need as large an FFT to cover it.
But I was wondering why is the frequency shift specified so that the range of interest becomes centered at 0? I thought the DFT was only defined for positive frequencies, but it seems like this would shift some frequencies of interest to be in the negative frequency domain?
Also, can I instead just shift the frequency range of interest so that the minimum frequency of interest is shifted to 0 Hz, then do low pass filtering and an FFT to do the zoom fft? Or is there a reason why shifting the midpoint of the frequency range of interest to 0 Hz is better?
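In case it helps to see the whole pipeline, here is a minimal Python sketch of a zoom FFT as I understand it from that article (complex mix to center the band at 0 Hz, low-pass, decimate, small FFT; every number is made up):

```python
import numpy as np
from scipy.signal import firwin

fs = 48000.0
n = 1 << 14
t = np.arange(n) / fs
x = np.sin(2 * np.pi * 10_100.0 * t)         # tone inside a 10-11 kHz band of interest

f_center = 10_500.0                           # midpoint of the band of interest
# 1) complex mix: shifts f_center down to 0 Hz (band becomes -500..+500 Hz)
mixed = x * np.exp(-2j * np.pi * f_center * t)
# 2) low-pass to the band's half-width, then keep every 32nd sample
lp = firwin(129, 600.0, fs=fs)
zoomed = np.convolve(mixed, lp, mode="same")[::32]
# 3) a small FFT now covers only the zoomed band, with fine resolution
spec = np.fft.fftshift(np.fft.fft(zoomed))
freqs = np.fft.fftshift(np.fft.fftfreq(zoomed.size, 32 / fs)) + f_center
peak = freqs[np.argmax(np.abs(spec))]
```

On the negative-frequency worry: after the complex mix the signal is complex-valued, so its spectrum is no longer symmetric, and the DFT bins above N/2 genuinely represent the negative-frequency half of the band; nothing is lost. Centering the band (rather than putting its lower edge at 0 Hz) also means the anti-alias low-pass only needs a cutoff of half the band's width, which permits the largest decimation factor; shifting the band edge to 0 works too, but needs twice the low-pass bandwidth for the same band.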
I basically want to create a filter that can add noise to a synthesised voice to make it sound like it was recorded in a bedroom. Usually with DSP you want to filter noise out, but I want to add it back in; these AI voices sound way too clean.
Thinking some basic reverb, maybe some quantization to lower the quality, but what else can I do to design the synthesised voice with a hard-coded, classic DSP algorithmic approach?
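To make that concrete, here is a rough Python sketch of the chain I have in mind: a crude feedback-comb "room", a low-level noise bed, and a mild bit-crush (the sine stand-in for the voice and every constant are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(1)
fs = 16000
clean = np.sin(2 * np.pi * 220 * np.arange(fs) / fs)   # stand-in for a TTS voice

# 1) small-room ambience: one feedback comb filter (a crude Schroeder element)
delay = int(0.03 * fs)                 # ~30 ms reflection
wet = np.copy(clean)
for n in range(delay, wet.size):
    wet[n] += 0.4 * wet[n - delay]     # stable: feedback gain < 1

# 2) low-level room tone: band-limited noise bed
noise = rng.standard_normal(wet.size)
noise = np.convolve(noise, np.ones(8) / 8, mode="same")  # crude low-pass
dirty = wet + 0.005 * noise

# 3) mild bit-crush to cheapen the signal path
bits = 10
dirty = np.round(dirty * 2 ** (bits - 1)) / 2 ** (bits - 1)
```

A real Schroeder reverb uses several combs plus allpass sections, and you could also convolve with a measured small-room impulse response; a gentle low-pass and a little wow/flutter (slow pitch modulation) push it further toward "cheap mic in a bedroom".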
Recorded some audio and the resulting waveform looks like this: https://i.imgur.com/J4QGd8Z.png which is not something I've seen before. If it matters, it was recorded with a MEMS INMP441 microphone on an ESP32 using the Arduino Audio Tools library.
Howdy, as the title suggests I'm interested in applications of linear algebra(LA) as it relates to DSP. I'm studying data science for an undergrad, and in our intro LA class, we need to do a project that gives an overview of a real-world application of LA. I know LA is/can be used in DSP, but I'm looking for a high-quality article/paper/text that is focused on LA's role within DSP. I found some stuff already, but would greatly appreciate more examples.
Specifically, I would really love to find a source discussing how LA might be utilized within a DAW or other modes of music-oriented DSP.
EDIT: Shorter texts are what I am looking for in this project. A textbook would be useful, but would not be usable in my project's proposal.
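For context, the kind of connection I mean: the DFT itself is a matrix-vector product with a scaled-unitary matrix, which a short Python check makes obvious:

```python
import numpy as np

N = 8
n = np.arange(N)
# The DFT is just multiplication by an N x N complex matrix F
F = np.exp(-2j * np.pi * np.outer(n, n) / N)

x = np.cos(2 * np.pi * n / N)          # one cycle of a cosine
X_matrix = F @ x                       # DFT as linear algebra
X_fft = np.fft.fft(x)                  # identical result, computed in O(N log N)

# F is unitary up to scaling: F @ F.conj().T equals N * I
gram = F @ F.conj().T
```

That unitarity is why the FFT preserves energy (Parseval), and the same linear-algebra framing covers FIR filtering (Toeplitz matrices) and the SVD/PCA tricks used in audio source separation, all of which show up inside DAW-style processing.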
Hi, I've been working on a project that uses Wi-Fi HaLow (sub-GHz Wi-Fi), and I'd like to be able to use my SDR to estimate how much of the 1 MHz wide RF channel is being used.
The goal is to understand if more devices can be added to the WiFi network without overloading it.
Below I've created a GNU Radio flow graph that listens to the RF channel (Original signal in Red and Blue)
My question is,
Thanks for taking the time to read this post :)
Mechanical engineer here, with previous experience only with FFTs. I want to learn more about wavelet packet decomposition for signal processing for a particular use case, in particular its terminology and maths; any resources and help would be appreciated.
I am writing an essay, the last one of my degree, and a significant part of it is creating a QMF in Python to filter audio signals. I discovered a function in PyWavelets: https://pywavelets.readthedocs.io/en/latest/ref/other-functions.html
But I don't know how to create the input scaling filter; it's the one part I need to finish the essay. Does anyone know how to design this input filter?
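If I understand the docs right, the "input scaling filter" is just an orthogonal wavelet's low-pass decomposition filter, and pywt.qmf derives the matching high-pass from it via the standard quadrature-mirror relation. A NumPy sketch using the classic Daubechies 4-tap (db2) coefficients (PyWavelets stores the same numbers, possibly in reversed order, as Wavelet('db2').dec_lo):

```python
import numpy as np

# db2 (Daubechies 4-tap) scaling filter, normalized so it sums to sqrt(2)
s3 = np.sqrt(3.0)
h = np.array([1 + s3, 3 + s3, 3 - s3, 1 - s3]) / (4 * np.sqrt(2.0))

# Quadrature-mirror relation: g[n] = (-1)^n * h[L-1-n]
# (this is what pywt.qmf computes, up to a sign convention)
g = h[::-1] * (-1.0) ** np.arange(h.size)

# Sanity checks an orthogonal scaling/wavelet filter pair must satisfy
lowpass_dc = h.sum()       # sqrt(2): the low-pass branch passes DC
highpass_dc = g.sum()      # 0: the high-pass branch blocks DC
energy = np.dot(h, h)      # 1: unit-energy filter
```

So rather than designing the scaling filter from scratch, you can take any orthogonal family's coefficients (Haar, Daubechies, Symlets) as h and let the QMF relation give you the other branch; designing a brand-new h yourself means satisfying these orthogonality constraints.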
I just got back my old Raspberry Pi from middle school, and my only use for it now would be a digital guitar pedal using my audio interface for input/output.
First of all, is this possible? Would my interface (Focusrite Scarlett Solo) cause latency? And where could I get started with learning how to do this?
I have experience with Arduino and electrical circuits (but I'm hoping to avoid extra wiring for this), and I'm very new to Raspberry Pi. I want to make a multi-purpose digital "pedalboard" that can run multiple effects simultaneously. I'm also hoping to connect a keypad that can toggle different effects on/off. Once all the programming is done, I won't be using a display, except maybe an LED per effect to indicate on/off.
any insight would be greatly appreciated!
So I know that when working with continuous-time signals, we can pass the time-domain signal through an LTI filter to pass only certain frequencies of the signal. To do the same filtering on a discrete-time signal, would I just pass the discrete signal through a discretization of the same LTI filter? It seems like that's the case from looking at the DFT and the z-transform, but I've never worked with the z-transform, and it seems there's some weird stuff with it, mainly that it has an infinite summation rather than a finite one.
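That is essentially the standard recipe. A Python sketch (design an analog Butterworth, discretize it with the bilinear transform, then run it as a difference equation; the cutoff and test signal are arbitrary):

```python
import numpy as np
from scipy import signal

fs = 1000.0
# Continuous-time 4th-order Butterworth low-pass, 50 Hz cutoff (rad/s for analog)
b_s, a_s = signal.butter(4, 2 * np.pi * 50.0, btype="low", analog=True)
# Bilinear transform maps the s-plane design onto the z-plane
b_z, a_z = signal.bilinear(b_s, a_s, fs=fs)

t = np.arange(0, 1.0, 1 / fs)
x = np.sin(2 * np.pi * 10 * t) + np.sin(2 * np.pi * 200 * t)
# lfilter evaluates a *finite* difference equation, sample by sample
y = signal.lfilter(b_z, a_z, x)
```

The infinite summation in the z-transform's definition is only the analysis tool (like the Laplace integral in continuous time); the implemented filter is a finite recursion on a handful of past inputs and outputs, so nothing infinite is ever computed.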
I want some tips on my master thesis theme. I'm interested in digital signal processing and my final bachelor degree work was in IoT. I'd like to gather these two themes.
My background is in electrical engineering. Something related to telecommunications is welcome too.
That's it.
Ps: I'm not really into machine learning, i have to study it more.
Hey there, I've started learning about time series and would like to learn the basic DSP stuff before I move on to advanced stuff like ARIMA and SARIMA, etc.
I am confused about basic things, like what a power spectral density is used for and how we can use it to extract meaningful information.
Also, what is the difference between an FFT and a PSD, and how do I interpret their graphs? What does it mean when the graph peaks at, or is centered on, 0?
Can anyone suggest a good YouTube channel where I can learn, with some MATLAB/Python implementation too?
Thank you
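Here is the FFT-vs-PSD distinction in runnable form, with a made-up noisy tone (the raw FFT gives one complex amplitude per bin of a single record; Welch's PSD averages windowed segments into a power density, units² per Hz):

```python
import numpy as np
from scipy import signal

fs = 100.0
t = np.arange(0, 20, 1 / fs)
x = np.sin(2 * np.pi * 5 * t) + 0.5 * np.random.default_rng(0).standard_normal(t.size)

# Raw FFT: complex amplitudes, fine frequency grid, no averaging (noisy)
X = np.fft.rfft(x)
f_fft = np.fft.rfftfreq(x.size, 1 / fs)

# Welch PSD: averaged over segments -> smoother, coarser, power per Hz
f_psd, pxx = signal.welch(x, fs=fs, nperseg=256)

fft_peak = f_fft[np.argmax(np.abs(X))]
psd_peak = f_psd[np.argmax(pxx)]
```

Both peak at the 5 Hz tone. A spectrum that is maximal at 0 Hz just means the signal's mean/trend dominates (lots of DC and slow drift), which is very common for raw time series; detrending first usually makes the interesting structure visible.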
Hi Everyone,
I have done some bachelor-level courses in DSP at university, and my company is now willing to pay for a course for me to attend. I live in Edinburgh, so the course can either be very local or online.
Any ideas for an intermediate-to-expert course on telecommunications DSP?
Folks:
Do any of you know if I can use the MATLAB Home edition (which is $149.00) instead of the Standard (which is $980)?
If I use the Home edition, would I need any add-ons for learning digital signal processing and for it to provide C++ code examples?
Thank you
Mark Allyn
Recently, I realized that 4D imaging radar is used in the autonomous-vehicle industry. Are there any good resources on 4D imaging radar hardware architecture or the DSP side of it?
Hello. I have the following time series dataset. I added the constant value at the end to see how it would affect the spectrogram. The resulting spectrogram is shown at the bottom, but it confuses me. Shouldn't the spectrogram's only nonzero Fourier mode after t = 5000 be the DC mode? I was working from code from https://bea.stollnitz.com/blog/spectrograms-scaleograms/.
I apologize for the bad labeling of the frequency axis. Midway through the frequency axis is the DC frequency, while above/below it are positive/negative frequencies.
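Two effects are worth ruling out before concluding the spectrogram is wrong, and a toy Python version shows both. First, SciPy's spectrogram default detrend='constant' subtracts each segment's mean, which wipes out the DC mode entirely; second, even with detrending off, a Hann-windowed constant leaks into the ±1 bins, so a little apparently non-DC energy after the transition is an artifact of the analysis window, not of the data (the code I linked may behave similarly; the constants below are made up):

```python
import numpy as np
from scipy import signal

fs = 100.0
t = np.arange(0, 100, 1 / fs)
# Tone for the first half, then a constant (like the appended flat segment)
x = np.where(t < 50, np.sin(2 * np.pi * 10 * t), 0.7)

# detrend=False is essential: the default would remove each segment's mean
f, tt, sxx = signal.spectrogram(x, fs=fs, window="hann",
                                nperseg=256, noverlap=128, detrend=False)

late = sxx[:, tt > 60.0]             # frames fully inside the constant region
dc_fraction = late[0].sum() / late.sum()
# dc_fraction is ~2/3, not 1: a Hann-windowed constant puts known energy
# in bins +/-1, while bins 2 and above are essentially zero
```

Frames that straddle the transition itself will also show broadband content; a step is genuinely wideband, so only frames fully inside the constant region should be DC-plus-window-leakage.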
I'm working on a project that involves an arrangement of four microphones placed in a circular pattern, each spaced 46 mm apart, essentially forming a square when lines are drawn connecting them. The objective is to determine the sound source's direction of arrival within a 45-degree error margin over a 360-degree range.
Based on the findings from my research, it appears that employing the Time Difference of Arrival (TDOA) method across each microphone pair (for example, pairs (1, 2), (1, 3), (1, 4)) is the method to pursue. This involves calculating the time delay, possibly utilizing cross-correlation techniques. Further research revealed that the arrival angle between two microphones can be deduced from sin(angle) = (speed of sound * time delay) / microphone distance.
I have a few questions regarding this approach. First, when determining the angle between two microphones, how do we establish which direction is considered positive or negative, assuming the line connecting the two microphones acts as the x-axis? Second, after calculating the angles for the microphone pairs, how can we combine them into an overall direction of arrival, especially over a full circle? Lastly, would all this be possible to implement in Python, and are there any sources I can look to for actual code implementation?
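A minimal single-pair sketch in Python (far-field model, simulated broadband source; the 40-degree angle, sample rate, and everything else are made-up test values). The sign of the estimated delay is what distinguishes the two sides of the pair's axis, and delays from two perpendicular pairs can be combined with atan2 to cover the full 360 degrees:

```python
import numpy as np

c = 343.0            # speed of sound, m/s
d = 0.046            # mic spacing, m
fs = 48000

# Simulate a broadband source hitting two mics with a known delay
rng = np.random.default_rng(0)
src = rng.standard_normal(4096)
true_angle = np.deg2rad(40.0)                   # angle from the pair's broadside
delay_s = d * np.sin(true_angle) / c            # far-field plane-wave model
delay_n = int(round(delay_s * fs))              # ~4 samples at 48 kHz
mic1 = src
mic2 = np.roll(src, delay_n)

# Cross-correlation peak gives the (signed) integer-sample delay estimate
corr = np.correlate(mic2, mic1, mode="full")
lag = int(np.argmax(corr)) - (len(mic1) - 1)
est_angle = np.rad2deg(np.arcsin(np.clip(lag / fs * c / d, -1.0, 1.0)))
```

Note how coarse this is: at 48 kHz the maximum delay across 46 mm is only about 6 samples, so each integer lag is a big angular step; GCC-PHAT with interpolation (e.g. as in the pyroomacoustics library) is the usual refinement, but a 45-degree error budget is achievable even with plain cross-correlation.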
I'm using an ESP32 recording 2048 samples of audio at a 64 kHz sample rate with an INMP441 mic. I want to get the raw spectrogram data, NOT visualize it, since I want to detect in code when an audio event occurs. I've looked at other ESP32 spectrogram projects but can't figure out how to get that data instead of having it shown; they all visualize it (example: https://github.com/donnersm/FFT_ESP32_Analyzer).
If I have an array of 2048 points of data from a mic, what is the smallest sample size I can pass through an FFT to get an accurate representation of the frequency change in time? If viewing a spectrogram in python, I use this line of code
plt.specgram(data, NFFT = 4, noverlap = 2, cmap = "rainbow")
and from what I understand it's performing an FFT with only 4 data samples? However, when I try to implement this in the Arduino IDE it gives garbage data even with 16 samples. My audio is in an array, and I pass the first 16 samples to an FFT. Then I pass samples 8-24, then 16-32, etc. Is this the right methodology to get a spectrogram?
I'm using this FFT code https://webcache.googleusercontent.com/search?q=cache:https://medium.com/swlh/how-to-perform-fft-onboard-esp32-and-get-both-frequency-and-amplitude-45ec5712d7da since the esp32 spectrogram projects online use arduinoFFT and that seems to have changed so that none of the project codes will compile and there's way too many errors that I don't understand enough to fix.
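On the resolution question: an N-point FFT at sample rate fs gives bins fs/N wide, so a 4-point FFT at 64 kS/s has 16 kHz bins and a 16-point FFT has 4 kHz bins, which is why tiny NFFT values look like garbage. The sliding-window methodology itself is right; here it is as plain array math in Python (a spectrogram is nothing more than this; the 64-sample window and test tones are assumptions):

```python
import numpy as np

fs = 64000
n = 2048
t = np.arange(n) / fs
# Test tone that jumps from 5 kHz to 15 kHz halfway through the buffer
x = np.where(t < n / (2 * fs),
             np.sin(2 * np.pi * 5000 * t),
             np.sin(2 * np.pi * 15000 * t))

nfft, hop = 64, 32                   # 64 samples -> 1 kHz bins at 64 kS/s
window = np.hanning(nfft)
frames = []
for start in range(0, n - nfft + 1, hop):
    seg = x[start:start + nfft] * window
    frames.append(np.abs(np.fft.rfft(seg)))   # keep raw magnitudes, no plotting
spec = np.array(frames)              # shape: (num_frames, nfft//2 + 1)

# Per-frame peak bin * bin width = dominant frequency in that frame
peak_hz = spec.argmax(axis=1) * fs / nfft
```

The 2D `spec` array is the "raw spectrogram data" you'd threshold for event detection on the ESP32: run the same loop in C, and compare each frame's bin magnitudes (or their frame-to-frame change) against a threshold instead of drawing them.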
Hi all, I currently use MATLAB a bit at work, mostly the GUI, but I do script quite a bit: calling functions from our library, writing functions, and additional scripting for my projects. I won't be able to get toolboxes galore because of the expense. I have some signals that I'd like to remove certain frequencies from, and maybe apply some type of AGC filter to. Just looking for tips, or if anyone has any good open-source libs I can use, that'd be cool. For now I'm just looking for something easy to follow; this is mostly for my own learning. Thank you.
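For the remove-certain-frequencies part, the free SciPy stack covers it without any toolboxes. A hedged sketch with a synthetic 60 Hz hum (the frequencies, Q, and signal are all arbitrary stand-ins):

```python
import numpy as np
from scipy import signal

fs = 1000.0
t = np.arange(0, 2, 1 / fs)
x = np.sin(2 * np.pi * 7 * t) + 0.8 * np.sin(2 * np.pi * 60 * t)  # wanted + hum

# IIR notch centered on the unwanted frequency; Q sets how narrow the notch is
b, a = signal.iirnotch(60.0, Q=30.0, fs=fs)
y = signal.filtfilt(b, a, x)        # zero-phase: no time shift in the result
```

scipy.signal also has butter/cheby/ellip designs for wider bands; for an AGC-style stage you'd follow the filter with an envelope tracker (e.g. a one-pole smoother of abs(y)) dividing the signal, which is a few more lines of the same flavor.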
Hey there,
I am working with time series data and trying to detect an anomaly. I was wondering if a 1D Kalman filter can help in this situation.
If yes, how do I implement it? Any references, etc.?
If no, please suggest some anomaly detection methods for time series.
Thank you
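A 1D Kalman filter can work for this if you flag points where the innovation (measurement minus prediction) is improbably large relative to its predicted variance. A self-contained Python sketch under a constant-level model (the q, r values, the 4-sigma threshold, and the injected spike are all assumptions to adapt to your data):

```python
import numpy as np

def kalman_1d(z, q=1e-4, r=0.1):
    """Scalar constant-level Kalman filter; returns estimates, innovations, variances."""
    x, p = z[0], 1.0
    est, innov, innov_var = [], [], []
    for zk in z:
        p = p + q                 # predict: process noise inflates uncertainty
        s = p + r                 # innovation variance
        k = p / s                 # Kalman gain
        nu = zk - x               # innovation (residual)
        x = x + k * nu            # update the level estimate
        p = (1 - k) * p
        est.append(x); innov.append(nu); innov_var.append(s)
    return np.array(est), np.array(innov), np.array(innov_var)

# Flag samples whose normalized innovation exceeds 4 sigma
rng = np.random.default_rng(0)
z = rng.normal(10.0, 0.3, 500)
z[250] += 5.0                                     # injected anomaly
est, innov, s = kalman_1d(z, q=1e-4, r=0.3 ** 2)
anomalies = np.where(np.abs(innov) / np.sqrt(s) > 4.0)[0]
```

If your series trends or oscillates, swap the constant-level model for a level-plus-slope (2-state) model so genuine dynamics don't get flagged; if the Kalman route doesn't fit, rolling-median plus MAD thresholding is a robust non-parametric alternative.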
For a physics experiment I want to sample 2 signals at 1 GS/s for about 200ns, and then do some simple calculations on the data, and I want to do it in less than 2ms.
I was thinking of something like a Red Pitaya, but the fanciest version of that samples at 250 MS/s. Otherwise it looks like I'd need to buy a specialised ADC from, for example, Analog Devices and interface it with a separate FPGA, which looks a bit intimidating.
Can anyone recommend a platform similar to Red Pitaya but with a higher-speed ADC that might be tractable for a simple physicist?
Electrical engineering undergrad here! I'm taking signals and systems right now, and while the class is absolutely kicking my butt, it's so interesting and I'd love to explore a DSP project on MATLAB, but I'm having trouble coming up with one. Any suggestions? I recently coded a variable audio equalizer with a bunch of cascaded RC bandpass filters, if that gives you a sense of my skill level. I know Fourier and Laplace, but not the Z-transform (or at least not in-depth).
What are the jobs for DSP?
From what I've read, it seems there's only comms work, FPGA implementation with Verilog, or pure software engineering in areas like audio/image processing. Is this true?
I'm also going to grad school for DSP next term. Are there any must-take classes for signal processing?