/r/DSP
Anything related to digital signal processing (DSP), including image and video processing (papers, books, questions, hardware, algorithms, news, etc.)
Interesting technical papers or articles are particularly welcome!
I'm new to the concepts of DSP and I've just learned about phase accumulators, so naturally I'm trying to make my first digital oscillator in JUCE.
What I'm about to show may disturb some viewers, but right now I'm just trying to learn the basics, so I'm going about it in a not-so-clean fashion.
float lastTime = 0;
for (int channel = 0; channel < totalNumInputChannels; ++channel)
{
    auto* channelData = buffer.getWritePointer (channel);
    Accumulator = 0;
    for (int sample = 0; sample < buffer.getNumSamples(); sample++)
    {
        float time = sample / getSampleRate();
        float deltaTime = time - lastTime;
        lastTime = time;
        float phase = deltaTime * 440 * 360;
        Accumulator += phase * 2;
        Accumulator = std::fmodf(Accumulator, AccumulatorRange);
        channelData[sample] = (Accumulator / AccumulatorRange);
    }
    // ..do something to the data...
}
Basically this code does make a saw wave, but it measures as a very detuned G4 and I'm struggling to understand why.
Even if I remove the 360 and the * 2, even if I change the 440, even if I use time instead of delta time, even if I do Accumulator = phase or Accumulator *= phase, and whether AccumulatorRange is set to 2^8 or 2^16 (currently 2^16), it'll always be G4 and I don't know why.
This method also has aliasing.
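For what it's worth, the line Accumulator = 0; runs at the start of every block (and again for every channel), so the ramp restarts on each processBlock call and the output repeats at roughly sampleRate / blockSize regardless of the increment math (at 48 kHz with a 128-sample buffer that's 375 Hz, a flat G4), which would explain why nothing you change moves the pitch. Below is a minimal sketch of a phase accumulator whose state persists across blocks, written in Python for clarity; the names are illustrative, not JUCE API:

    import numpy as np

    def render_saw(freq_hz, sample_rate, num_samples, phase=0.0):
        """Naive (aliasing) sawtooth from a phase accumulator in [0, 1)."""
        inc = freq_hz / sample_rate          # phase increment per sample
        out = np.empty(num_samples)
        for n in range(num_samples):
            out[n] = 2.0 * phase - 1.0       # map [0, 1) to [-1, 1)
            phase = (phase + inc) % 1.0      # wrap once per cycle
        return out, phase                    # hand the phase to the next block

    # The key point: carry the phase across blocks instead of resetting it.
    phase = 0.0
    block1, phase = render_saw(440.0, 48000.0, 512, phase)
    block2, phase = render_saw(440.0, 48000.0, 512, phase)

A naive saw like this still aliases; band-limited variants (PolyBLEP, wavetables) address that separately.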
I understand that the PSD and the FFT are similar mathematically but feature different amplitude scaling. What is the significance of this? What is the intuition behind wanting the PSD versus just calculating a simple FFT/amplitude?
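One way to make the scaling concrete: with density scaling, the PSD of an N-point block is |X[k]|^2 / (fs·N), so its integral over frequency equals the signal's mean power (Parseval). That normalization is what lets you compare noise floors across different sample rates and FFT lengths, which a raw FFT magnitude does not. A small NumPy/SciPy check (the signal and parameters are my own):

    import numpy as np
    from scipy.signal import periodogram

    fs, N = 1000.0, 4096
    t = np.arange(N) / fs
    x = np.sin(2 * np.pi * 100 * t) + 0.1 * np.random.randn(N)

    X = np.fft.fft(x)
    psd_two_sided = np.abs(X)**2 / (fs * N)       # units^2 per Hz

    f, psd = periodogram(x, fs, detrend=False)    # one-sided, density scaling

    print(np.mean(x**2))                          # mean power in the time domain
    print(np.sum(psd_two_sided) * fs / N)         # integral of the manual PSD
    print(np.sum(psd) * fs / N)                   # same, from scipy's one-sided PSD

All three prints agree (up to one-sided endpoint handling), which is the intuition: the PSD is the version of the spectrum whose area is power.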
So I have thousands of audio clips recorded with my Zoom. They were recorded using the auto-trigger mode that records when a certain dB threshold is crossed.
I need some way to sort the files as a lot of them are just noise or something quiet and not what I want to keep.
So most importantly I want to remove the quiet clips; if possible I would also like to sort by dB and frequency, so I only keep clips above a certain dB level where that level occurs under 400 Hz or some other arbitrary frequency.
I have some coding experience but not with DSP, really just looking for a pre-existing program but would be willing to mess with something open source.
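A sketch of that kind of triage in Python, assuming WAV files and the soundfile package (the folder name and both thresholds are placeholders to tune):

    import numpy as np
    import soundfile as sf
    from pathlib import Path

    CUTOFF_HZ = 400.0     # "interesting" energy must sit below this
    MIN_DBFS = -40.0      # discard clips quieter than this overall

    for path in sorted(Path("clips").glob("*.wav")):
        audio, fs = sf.read(path, always_2d=True)
        mono = audio.mean(axis=1)

        # Overall level as RMS in dBFS.
        rms = np.sqrt(np.mean(mono**2))
        dbfs = 20 * np.log10(rms + 1e-12)

        # Fraction of spectral energy below the cutoff frequency.
        spectrum = np.abs(np.fft.rfft(mono))**2
        freqs = np.fft.rfftfreq(len(mono), d=1.0 / fs)
        low_frac = spectrum[freqs < CUTOFF_HZ].sum() / (spectrum.sum() + 1e-12)

        if dbfs > MIN_DBFS and low_frac > 0.5:
            print(f"keep {path.name}: {dbfs:.1f} dBFS, {100 * low_frac:.0f}% energy < {CUTOFF_HZ:.0f} Hz")

One caveat: this measures dBFS (digital full scale), not SPL, so it won't line up exactly with the recorder's trigger threshold.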
If anyone’s like me and has a hard time making sense of C++ but still wants to explore audio stretching algorithms, I put together AudioFlex – a (much slower) pure Python implementation for a few of these methods. It’s definitely not optimized for speed but might help with understanding concepts like WSOLA and Overlap-Add in a more Pythonic way.
Code and examples are up on GitHub if you're interested: link
Would love any feedback or suggestions from the DSP community!
P.S. I would recommend AudioSmith's talk on these stretching algorithms; it was a huge part of the project.
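For anyone curious what the plain Overlap-Add variant boils down to, here is a minimal sketch (mine, not taken from AudioFlex; it has no WSOLA similarity search, so expect phasiness on tonal material):

    import numpy as np

    def ola_stretch(x, stretch=1.5, frame=1024, hop=256):
        """Naive OLA time stretch: read frames at hop/stretch, write them at hop."""
        window = np.hanning(frame)
        n_frames = int((len(x) - frame) / (hop / stretch))
        out = np.zeros(int(len(x) * stretch) + frame)
        norm = np.zeros_like(out)
        for i in range(n_frames):
            read = int(i * hop / stretch)
            write = i * hop
            out[write:write + frame] += window * x[read:read + frame]
            norm[write:write + frame] += window
        return out / np.maximum(norm, 1e-8)

WSOLA improves on this by nudging each read position within a small tolerance to maximize waveform similarity with what has already been written.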
Hello everybody,
First of all, I'm from French Polynesia but studying in Europe, in the last year of a Master's degree in systems engineering for signal and image; before that I did a Bachelor's in mathematics.
I would like to work in the US after graduation, but I'm a bit undecided about which field I could go into. One thing is sure: I want to earn a good living (I mean, who doesn't?), so I would like to know which field offers the best opportunities nowadays. I'm open to any information/experiences you have to share, guys.
Plus, I'm also seeking an internship abroad. I saw that it could be a bit difficult to do one in the US as an international student; am I right? Once again, any info/experience would be very helpful.
Thank you for reading.
Hi.
I recently finished a digital signal processing course during my undergraduate studies. I studied DFTs, DTFTs, z-transforms, and filter design. To solidify my foundation, I am currently reading the book "The Scientist and Engineer's Guide to Digital Signal Processing" by Steven W. Smith.
I am interested in pursuing this field even further by doing a master's, but I haven't really had the opportunity to do research or projects related to DSP because I took that course in my final year of school. From what I have discovered while applying to grad school, you should have the questions you want to answer in mind. Here's the thing: I don't even know which subfield to go into.
I would really appreciate a mentor from industry or the research community.
Thank you in advance for all your insights.
I'm looking for a developer to work on a tiny, short-term project, which is to create a pitch-correction (aka Autotune) project in C to work on mobile devices. The available algorithms to accomplish this should serve as a basis.
Feel free to DM me if you are interested.
Hey guys, I've been interested in DSP for a pretty long time; since I was a kid I've wanted to work on stuff in this field. However, anything I am finding online requires knowledge of calculus. Should I self-study calculus or wait to do DSP until I learn it in school next year? (I am in high school.) I currently only know precalculus since I'm a sophomore, and I'm wondering if there's anything I can do on this front aside from just programming plugins until my mathematical knowledge is more advanced. Also, any book recs would be greatly appreciated.
Let's say I have a simple signal/filter H(z) = 1 / (1 - z^(-1)). This means that the ROC is |z| > 1. So the ROC is outside the unit circle, meaning that the z-transform does not exist inside or on the unit circle. What does this mean??
It just seems so backwards and weird to me. Pretty much everything we do in the z-domain takes place inside the unit circle. Say I have a pole at z=1, but also a few zeros inside the unit circle. How does that work when the z-transform isn't even defined for any of those points?
See, I get it from a purely mathematical standpoint. 1 / (1 - az^(-1)) comes from the power series of (az^(-1))^(n), which only converges if |az^(-1)| < 1. The -1 power kinda makes it the opposite of the typical power series radius of convergence that I'm used to.
Still, it's kinda weird to me intuitively how it's the inside that doesn't converge. Especially when the border is at the unit circle. I mean, the inside is where everything takes place! That's where we do our work!
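For the record, the convergence step written out for the accumulator (standard textbook material):

\[
H(z)=\sum_{n=0}^{\infty} u[n]\,z^{-n}=\sum_{n=0}^{\infty}\left(z^{-1}\right)^{n}=\frac{1}{1-z^{-1}},
\qquad \text{which converges iff } |z^{-1}|<1 \iff |z|>1 .
\]

A causal impulse response contributes only negative powers of z, so its sum always converges far from the origin: the ROC of a causal system is the exterior of the outermost pole. The unit circle being excluded here just says the accumulator has no convergent frequency response, which is consistent with a constant input producing an unbounded output.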
I have a series of allpass filters to create a chirp and/or wave dispersion sound effect.
Could something very similar (or even equivalent) be achieved using less computing power?
I've tried various nested structures in an exploratory way, but to no avail.
EDIT: the goal is to have a filter that can be placed / reused in other structures the same way it's possible with its current non-optimized form (i.e. it's not introducing resonant modes of its own, or any non-allpass characteristics).
Thanks!
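For reference, the usual building block being cascaded in this kind of structure is the first-order allpass H(z) = (a + z^{-1}) / (1 + a z^{-1}), whose group delay varies with frequency, so a long chain smears an impulse into a chirp. A direct Python sketch of that baseline (my own, not the poster's code, and certainly not optimized):

    import numpy as np

    def allpass_chain(x, a=0.5, stages=64):
        """Cascade of first-order allpasses: y[n] = a*x[n] + x[n-1] - a*y[n-1]."""
        y = np.asarray(x, dtype=float)
        for _ in range(stages):
            out = np.empty_like(y)
            x_prev = y_prev = 0.0
            for n in range(len(y)):
                out[n] = a * y[n] + x_prev - a * y_prev
                x_prev, y_prev = y[n], out[n]
            y = out
        return y

    impulse = np.zeros(2048)
    impulse[0] = 1.0
    chirp = allpass_chain(impulse)   # the impulse disperses into a chirp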
Hi, I am just learning about time-synchronous averaging and the math in my source goes as follows:
∑_{k=1}^{M} y_k(m) = ∑_{k=1}^{M} x_k(m) + ∑_{k=1}^{M} n_k(m), where y is the output, x is the input, and n is the noise; k indexes the signal realizations, there is an ensemble of M of them, and m indexes the time samples.
It says ∑_{k=1}^{M} x_k(m) = M·x(m), and
for ∑_{k=1}^{M} n_k(m) the noise mean is 0 and the noise variance is M·σ². I understand it up to this point. But then it says that SNR_y = √M · SNR_x; that time-synchronous averaging improves the output SNR by a factor of root M. Can someone please explain to me how this is?
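If it helps, here is the step the source skips, with SNR defined as an amplitude ratio (signal amplitude over noise standard deviation). The coherent signal sums to M times its amplitude, while independent noise adds in power, so its standard deviation grows only like the square root:

\[
\mathrm{SNR}_y=\frac{M\,|x(m)|}{\sqrt{M\sigma^{2}}}=\sqrt{M}\,\frac{|x(m)|}{\sigma}=\sqrt{M}\,\mathrm{SNR}_x .
\]

Dividing the sum by M to form the average changes nothing, since signal and noise are scaled alike.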
Hi all, not at all sure if this is the right subreddit to post in but I am looking for a plugin developer familiar with AAX for a paid collaboration in the development of a new plugin. If this isn’t the right place I’d appreciate it if you could point me in the right direction.
Hey all! I'm interviewing for a Software Engineer (DSP) role at Motorola and would love to hear about recent interview experiences. Specifically:
Any insights would be super helpful—thanks!
Hi!
I want to set up a development environment on my Ubuntu PC, but I'm not certain what I need.
I'm thinking of using Jupyter notebooks for Python, and I want to be able to use my GPU to speed up some of the calculations.
Does anyone know what I need to get this working? I have no problem figuring out how to install the different software, but I'm not certain what is required, so a simple list of the packages and dependencies would work.
EDIT:
Forgot to add what hardware I have:
OS: Ubuntu 24.04
CPU: AMD Ryzen 7 4700GE
GPU: Nvidia RTX 3060 Ti
MEM: 32GB DDR4
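A minimal sanity check once things are installed, assuming the proprietary NVIDIA driver plus pip packages such as jupyterlab, numpy, and a CUDA-matched CuPy wheel (e.g. cupy-cuda12x); the exact package picks are suggestions, not the only option:

    import numpy as np
    import cupy as cp

    # Make a test signal on the CPU, move it to the GPU, FFT it there.
    x_cpu = np.random.randn(1 << 20).astype(np.float32)
    x_gpu = cp.asarray(x_cpu)
    X_gpu = cp.fft.rfft(x_gpu)          # executes on the GPU via CUDA

    # Pull the result back and compare against NumPy on the CPU.
    X_cpu = np.fft.rfft(x_cpu)
    print(np.max(np.abs(cp.asnumpy(X_gpu) - X_cpu)))   # should be tiny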
My distro is rejecting it saying it’s trademarked
Hey r/DSP!
We’re excited to introduce Dubby: a powerful, standalone device built to push the boundaries of creative DSP work. With full programmability in C++ and Max Gen~ compatibility, Dubby is ideal for DSP developers looking to explore, test, and run custom audio algorithms in a hands-on, portable platform.
👉 Check out our Kickstarter campaign here: Dubby – The Flexible Music Multi-Tool for DJs and Producers
Why DSP Devs Will Love Dubby:
For DSP developers, Dubby offers a flexible and programmable standalone platform that’s ready to support a wide range of projects. We’d love your support on Kickstarter, and we’re here for any questions.
Let’s make DSP more accessible, creative, and hands-on with Dubby! 🎶
#DSP #C++Programming #MaxGen #Dubby #AudioProcessing
Photography: Ross Adams
I am a recreational SDR enthusiast with a solid background in math. I would like to learn about demodulation of basically everything. I have googled around and found how to do basic stuff like AM and FM, and I found a resource on BPSK. I want to go a bit further and work on things like BFSK and QPSK/QFSK, but I struggle to find good resources on how to do them. What could I read to get better at these digital modulation methods?
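As a taste of what the digital schemes involve, here is a toy QPSK demodulator at complex baseband, assuming perfect timing and carrier synchronization (which is where most of the real work in those resources goes):

    import numpy as np

    rng = np.random.default_rng(0)

    # QPSK constellation: four phases, 2 bits per symbol.
    constellation = np.exp(1j * (np.pi / 4 + np.pi / 2 * np.arange(4)))

    symbols = rng.integers(0, 4, 1000)
    tx = constellation[symbols]
    noise = 0.2 * (rng.standard_normal(1000) + 1j * rng.standard_normal(1000))
    rx = tx + noise

    # Demodulate by choosing the nearest constellation point per sample.
    decisions = np.argmin(np.abs(rx[:, None] - constellation[None, :]), axis=1)
    print("symbol error rate:", np.mean(decisions != symbols))

BFSK is similar in spirit, but the decision compares energy at the two tone frequencies rather than distances in the complex plane.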
I’m not sure if I fit in here as a hobbyist, but here goes… I’m wanting to put together a PCB with a DSP chip, microcontroller/microprocessor, and some peripherals. I know a little C# and some web languages.
The features of SigmaStudio seem appealing for many of the DSP cases I’d like, but there is some custom functionality I’d need to add, which is why I’m expecting to need a microprocessor.
Since the industry changes so fast, I'm wondering what chip recommendations you all have: is my current plan decent, is there something more modern that could do everything with one chip, or am I way off track? Also, are there any chips where I can stick with C#, or will I need to learn C/C++?
I'm thinking of the Diamond Vibrato and the BOSS Vibrato pedals. They're in the 250€ to 350€ price range, which, to me, makes them crazy overpriced. Why is that? Aren't they just chorus pedals with a different tweaking system?
Ever wondered how to measure a 4096-QAM waveform with over 100 dB SNR to sub-dB accuracy? 🤔 Join me at my workshop next week during the 2024 DSP Online Conference, where I'll share the essential tricks and techniques for effective waveform analysis, complete with demonstrations in Python.
There is a great line-up of fantastic speakers and talks!
Register here: https://www.dsponlineconference.com/
#DSPOC
I got my Bachelor's in Computer Engineering and am currently doing a Master's in Electrical Engineering. I plan to do coursework instead of a thesis, as I just want to take courses related to DSP and head straight into industry, but how do I go about acquiring experience or doing projects in this field?
Hi all,
I know this question is extremely subjective and boils down to "Well, do you want to do research or just go straight to industry?", but I still wanted to hear anyone's thoughts or what they'd do, since I am really on the fence right now. All the programs I am applying to are Master's in EE, and the schools I am looking at are Stanford, University of Rochester, EPFL, UBC, UCSB, Berkeley, UW, and Cal Poly SLO. All of them offer an M.S., but some offer a Pro M.S. or an M.Eng. I think doing research could be really cool, but I am pretty hesitant about the idea of a PhD, and I only recently got into signal processing. Is there a huge reason to avoid a Pro M.S. or M.Eng.? Is a regular M.S. a much safer pick just in case I REALLY want to do research post-graduation? Any/all opinions are appreciated!
When I generate an IIR filter from FDATOOL in MATLAB, I get the SOS (second-order sections) matrix and the G (scale) values.
For an order-6 filter, the SOS matrix is:
b01 b11 b21 a01 a11 a21
b02 b12 b22 a02 a12 a22
b03 b13 b23 a03 a13 a23
Scale values (an example):
[0.2 0.4 0.9 1]
Can I check whether I am supposed to multiply the b values by the scale values to get the coefficients?
b01x0.2 b11x0.2 b21x0.2 a01 a11 a21
b02x0.4 b12x0.4 b22x0.4 a02 a12 a22
b03x0.9 b13x0.9 b23x0.9 a03 a13 a23
Secondly, I would like to change the output to Q15 format, which ranges between -1 and 1. When I try to convert, for example, the 2.5 value located at b11, the output after converting to Q15 would be 0, as 2.5 is not within the range. I found online that it could be normalised by dividing by the next nearest value, which is 3. Why is that so?
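For what it's worth, folding each scale value into that section's b coefficients is a common convention, and the usual fixed-point trick for an out-of-range coefficient is to normalize by a power of two and compensate with a shift at runtime. A sketch under those assumptions, using MATLAB's [b0 b1 b2 a0 a1 a2] row layout and hypothetical coefficient values:

    import numpy as np

    # Hypothetical SOS rows [b0, b1, b2, a0, a1, a2]; substitute FDATOOL's output.
    sos = np.array([[2.5, 1.0, 0.30, 1.0, -0.5, 0.20],
                    [1.2, 0.4, 0.10, 1.0, -0.3, 0.10],
                    [0.8, 0.2, 0.05, 1.0, -0.2, 0.05]])
    g = np.array([0.2, 0.4, 0.9, 1.0])   # scale values; the last is an overall gain

    def apply_gains(sos, g):
        """Multiply each section's numerator by its scale value."""
        out = sos.copy()
        out[:, :3] *= g[:len(sos), None]
        return out

    def to_q15(coeffs):
        """Quantize to Q15, normalizing by 2^shift so everything fits in [-1, 1)."""
        shift = max(0, int(np.ceil(np.log2(np.max(np.abs(coeffs)) + 1e-12))))
        q = np.clip(np.round(coeffs / (1 << shift) * 32768), -32768, 32767)
        return q.astype(np.int16), shift   # undo 'shift' after the MACs at runtime

    q, shift = to_q15(apply_gains(sos, g))

So a raw coefficient of 2.5 would be stored as 2.5 / 4 = 0.625 (20480 in Q15) with shift = 2; dividing by a non-power-of-two like 3 also brings it into range, but then the compensation is a multiply rather than a cheap shift.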
Let's say I want to create a filter that attenuates at omega = pi/2. That means zeros at z = ±j. When I write down the transfer function of my filter, my natural instinct is to write
H(z) = (z - j)(z + j) = z^2 + 1
but my teacher says you're supposed to write
H(z) = (1 - jz^(-1))(1 + jz^(-1)) = 1 + z^(-2).
When you write it that way you also get two poles at the origin, right? Since
1 + z^(-2) = (z^2 + 1) / z^(2).
I get that the poles at the origin don't do anything, but why do we want them?
Is my way wrong? That way you literally just get two zeros and no poles.
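One standard way to see what the extra poles buy is to read the difference equation off each form (this is the usual textbook reasoning, not necessarily your teacher's):

\[
H(z)=1+z^{-2}\;\Rightarrow\; y[n]=x[n]+x[n-2] \quad\text{(causal)},
\]
\[
H(z)=z^{2}+1\;\Rightarrow\; y[n]=x[n+2]+x[n] \quad\text{(needs future input)}.
\]

Both have the same zeros at z = ±j; the two poles at the origin are exactly two samples of delay, which is what makes the filter realizable as it runs.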
Hi all,
I am kind of new to the domain, so I am still figuring out stuff that is probably basic for you guys.
I work in the spatial domain, but for the sake of simplicity, let's assume I work in the time domain.
I have a data series (let's call it x) that is periodic over a time span t. My data series is discretized using N data points. When I want to represent this data in the frequency domain, I generate N frequencies spanning from 0 to 2π:
[0, 2π/N, 2·2π/N, ..., (N-1)·2π/N]; let's call each of these frequencies ω.
Now if I want to compute the time derivative of my original data (which is periodic in time), I simply have to compute:
=> dx/dt = ifft( i w fft(x))
The thing is that I do not see where the actual time span enters this expression. What I mean is that whether my data spans 1 second or a year must have an impact on the computed dx/dt.
I am missing where this is accounted for when performing the operation using the FFT.
Thank you for your insights,
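For reference, the span enters through the frequency grid: the physical angular frequencies for N samples over a span T are ω_k = 2πk/T (with the upper half negative), so the sample spacing T/N has to appear when you build ω. A NumPy sketch with a signal that is periodic over T (my own example):

    import numpy as np

    T, N = 4.0, 256                           # time span in seconds, sample count
    t = np.arange(N) * (T / N)
    x = np.sin(2 * np.pi * 3 * t / T)         # 3 cycles over the span

    # fftfreq takes the sample spacing d = T/N: this is where the span enters,
    # and it also orders the negative frequencies correctly.
    w = 2 * np.pi * np.fft.fftfreq(N, d=T / N)

    dxdt = np.real(np.fft.ifft(1j * w * np.fft.fft(x)))
    exact = (2 * np.pi * 3 / T) * np.cos(2 * np.pi * 3 * t / T)
    print(np.max(np.abs(dxdt - exact)))       # ~1e-13

With ω built as 2πk/N instead, you would effectively be differentiating with respect to sample index, and the answer would be off by a factor of N/T.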
I'm playing around with a Daisy Seed. I have a pretty nice-sounding digital delay that is fairly transparent, apart from a high-pass/low-pass parameter and a lo-fi setting that limits the EQ to a relatively small band that can be swept with a parameter. But I'm a little unsure how to approach what pedals often refer to as their "analog" option: it's taking the raw signal and doing what to it before it reaches the delay line?
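For context, one common recipe for an "analog" (BBD-flavored) voicing, though not necessarily what any given pedal does, is to darken and gently saturate the signal inside the feedback loop rather than before the delay line, so each repeat gets progressively duller and more compressed. A Python sketch of the idea (all parameter values are placeholders):

    import numpy as np

    def analog_style_delay(x, fs, delay_s=0.35, feedback=0.5, cutoff_hz=3000.0):
        """Delay with a one-pole lowpass and tanh soft clip in the feedback path."""
        d = int(delay_s * fs)
        buf = np.zeros(d)
        y = np.zeros_like(x, dtype=float)
        a = np.exp(-2 * np.pi * cutoff_hz / fs)     # one-pole lowpass coefficient
        lp, idx = 0.0, 0
        for n in range(len(x)):
            wet = buf[idx]
            y[n] = x[n] + wet
            lp = (1 - a) * wet + a * lp                 # each repeat gets darker
            buf[idx] = np.tanh(x[n] + feedback * lp)    # saturate what re-enters
            idx = (idx + 1) % d
        return y

On a Daisy Seed you would do the same per sample in C++, but the structure (filter and nonlinearity inside the loop) is the part that reads as "analog".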