/r/numerical
For people doing numerical computing. Tools, algorithms and related math.
keywords: scientific computing, BLAS, gpgpu, fft, wavelets, linear equations, neural networks, statistics, physics, monte carlo, optimization, matlab, scilab, octave, numpy, R, mathematica, ...
Is there an equation or algorithm to calculate the maximum value from the sum of sinusoids of different frequencies? All I can find online is the beating equation, but that's just for two frequencies.
I have a problem where I have numerical solutions to a simulation comprised of multiple sinusoidal responses (6+) being summed up. The results are 2D heatmaps at a handful of frequencies, given in real and imaginary component heatmaps. What I need to do is find the maximum value obtained at any point in time, at any location in the 2D space, of the sum of the responses.
The only way I can see doing this right now is by brute-forcing the answer numerically, marching through time. However, that seems computationally prohibitive/inefficient, as the heatmaps are very dense, and I need to be able to churn through thousands of these heatmaps. (Hundreds of simulations, ~10 frequencies per simulation, two heatmaps per frequency (real and imaginary component).)
I would like an equation/algorithm to calculate that maximum response value and/or the time, t_max, at which the maximum response is achieved, as a function of the coefficients of the sum. I.e., if the response at a point is the sum
f(t) = sum_i^n A_i * sin(w_i * t + phi_i)
for n responses, then the maximum value, as I'd like to be able to calculate it, is
max( f(t) ) = fcn(A_i, w_i, phi_i) , i = 1, 2, ..., n
such that time, t, is nowhere in the equation. Alternatively, if t_max can be calculated by a similar function, that would obviously suffice.
It's worth noting that these frequencies will always be integer multiples of the first frequency; however, there will be many multiples for which A_i = 0. Effectively, the responses for a given simulation could be at {1 Hz, 2 Hz, 3 Hz, 17 Hz, 63 Hz, 80 Hz}, or any scalar multiple of that set, but each frequency after the first will be some integer multiple of the first.
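For what it's worth, the harmonic structure already helps even without a closed form: since every w_i is an integer multiple of w_1, f(t) is periodic with period T = 2*pi/w_1, so only one period ever needs to be searched, and the scan can be vectorized. A minimal sketch (function name and grid sizes are my own choices, not from the post):

```python
import numpy as np

def harmonic_sum_max(A, k, phi, w1=2*np.pi, coarse=4096):
    """Global max of f(t) = sum_i A_i*sin(k_i*w1*t + phi_i).
    Because every frequency is an integer multiple k_i of w1,
    f is periodic with period T = 2*pi/w1, so one period suffices."""
    A, k, phi = (np.asarray(v, float) for v in (A, k, phi))
    T = 2*np.pi/w1
    t = np.linspace(0.0, T, coarse, endpoint=False)
    f = (A[:, None]*np.sin(k[:, None]*w1*t + phi[:, None])).sum(axis=0)
    j = np.argmax(f)
    # refine around the best coarse sample on a fine local grid
    tf = np.linspace(t[j] - T/coarse, t[j] + T/coarse, 2001)
    ff = (A[:, None]*np.sin(k[:, None]*w1*tf + phi[:, None])).sum(axis=0)
    jf = np.argmax(ff)
    return ff[jf], tf[jf]

# example: f(t) = sin(wt) + 0.5*sin(2wt); analytic max is 3*sqrt(3)/4
fmax, tmax = harmonic_sum_max([1.0, 0.5], [1, 2], [0.0, 0.0])
```

Because the per-frequency coefficients can be stacked along an extra axis, the same scan can be applied to every pixel of a heatmap in one vectorized pass.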
Appreciate any help anyone can give.
I am tasked to utilize the Metropolis algorithm to 1) generate/sample values of x based on a distribution (in this case a non-normalized normal distribution, i.e. w(x) = exp(-x^2/2)); and 2) approximate the integral shown below, where f(x) = x^2 * exp(-x^2/2). I have managed to perform the sampling part, but my answer for the latter part seems to be wrong. From what I understand, the integral is merely the sum of f(x) divided by the number of points, but this gives me ≈ 0.35. I also tried dividing f(x) by w(x), but that gives ≈ 0.98. Am I missing something here?
Note: The sampling distribution being similar to the integrand in this case is quite arbitrary; I am also supposed to test it with w(x) = 1/(1+x^2), which is close to the normal distribution too.
import numpy as np

f = lambda x: (x**2)*np.exp(-(x**2)/2)  # integrand
w = lambda x: np.exp(-(x**2)/2)         # sampling distribution

n = 1000000
delta = 0.25

# Metropolis algorithm for non-uniform sampling
x = np.zeros(n)
for i in range(n-1):
    xnew = x[i] + np.random.uniform(-delta, delta)
    A = w(xnew)/w(x[i])
    x[i+1] = xnew if A >= np.random.uniform(0, 1) else x[i]

# first definition
I_1 = np.sum(f(x))/n        # = 0.35
print(I_1)

# second definition
I_2 = np.sum(f(x)/w(x))/n   # = 0.98
print(I_2)
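A sketch of how the two estimators relate, assuming the target is the integral of f(x) over the whole real line (the image of the integral did not survive, so this is an assumption): Metropolis draws samples from the normalized density w(x)/Z with Z = integral of w = sqrt(2*pi), so (1/n)*sum f(x_i)/w(x_i) estimates (1/Z)*integral f dx and must be multiplied by Z. If instead the target is the normalized expectation (integral f)/(integral w), then I_2 itself is the estimator and the exact value is 1; the 0.98 is then sampling error from the small step delta (slow mixing). A sketch with a larger step and a burn-in:

```python
import numpy as np

rng = np.random.default_rng(0)
f = lambda x: (x**2)*np.exp(-(x**2)/2)
w = lambda x: np.exp(-(x**2)/2)       # unnormalized; Z = sqrt(2*pi)

n, delta = 200_000, 1.0               # larger delta mixes faster
x = np.zeros(n)
for i in range(n - 1):
    xnew = x[i] + rng.uniform(-delta, delta)
    x[i+1] = xnew if rng.uniform() < w(xnew)/w(x[i]) else x[i]

x = x[n//10:]                         # discard burn-in
Z = np.sqrt(2*np.pi)                  # known normalization of w
ratio = np.mean(f(x)/w(x))            # estimates (1/Z) * integral of f
integral = Z*ratio                    # estimates integral of f dx
```

Here ratio should come out near 1 (it is just the Monte Carlo estimate of the expectation of x^2 under the standard normal), and Z*ratio near sqrt(2*pi) ≈ 2.507.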
I am trying to solve a hyperbolic equation using finite differences, as shown below.
My main confusion is: to calculate U_{i,2} (i.e. the first iteration), where do I get U_{i,1} from? The only given initial condition is U_{i,0}.
Note: I did try assuming that U_{i,1} = U_{i,0} and the solution does seem right, but I would like to see if there is a better approach.
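Assuming the standard wave-equation setup u_tt = c^2 u_xx with initial data u(x,0) and u_t(x,0) = v(x) (the scheme image did not survive, so this is an assumption): setting U_{i,1} = U_{i,0} is only first-order accurate and implicitly assumes v = 0. The usual second-order first step comes from a Taylor expansion, U_{i,1} = U_{i,0} + dt*v_i + (dt^2/2)*c^2*u_xx(x_i, 0). A sketch:

```python
import numpy as np

# assumed setup: u_tt = c^2 u_xx on [0,1], u(x,0) = eta(x), u_t(x,0) = v(x)
c, nx, nt = 1.0, 101, 200
x = np.linspace(0.0, 1.0, nx)
dx = x[1] - x[0]
dt = 0.5*dx/c                          # CFL-stable step
r2 = (c*dt/dx)**2

eta = np.sin(np.pi*x)                  # example initial displacement
v = np.zeros(nx)                       # example initial velocity

U0 = eta.copy()
# second-order accurate first step via Taylor expansion:
# U_{i,1} = U_{i,0} + dt*v_i + (dt^2/2)*c^2*u_xx(x_i, 0)
U1 = U0.copy()
U1[1:-1] = (U0[1:-1] + dt*v[1:-1]
            + 0.5*r2*(U0[2:] - 2*U0[1:-1] + U0[:-2]))

# subsequent steps use the usual three-level leapfrog update
for _ in range(nt):
    U2 = U1.copy()
    U2[1:-1] = 2*U1[1:-1] - U0[1:-1] + r2*(U1[2:] - 2*U1[1:-1] + U1[:-2])
    U0, U1 = U1, U2
```

With eta = sin(pi*x) and v = 0 the exact solution is sin(pi*x)*cos(pi*c*t), which makes it easy to check that the Taylor first step keeps the scheme second order.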
I'm struggling a little with numerical evaluation. I have a model that depends on two variables, f(x,y). I need to evaluate the quantities
as well as
each evaluated at the point (\tilde x,\tilde y).
So far so good; my model cannot be expressed analytically, but I can produce point estimates f(x*, y*) by running a simulation. In principle I would create a grid of x and y values, evaluate the model function at each grid point, and compute a numerical derivative from that. The problem is that each simulation takes some time, and I need to reduce the number of evaluations without losing too much information (e.g. I have to assume that f is non-linear...).
I'm asking here for some references towards strategies, since I have no idea where to even start. Specifically I want to know:
Thank you in advance for any interesting wikipedia-pages, book-recommendations and what not!
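One standard strategy for this situation (a sketch, not the only option; "surrogate modelling" and "design of experiments" are the search terms) is to run the simulation at a small design of points around the point of interest, fit a local quadratic surrogate by least squares, and read the derivatives off the fitted coefficients. This also smooths simulation noise. A minimal sketch with a cheap stand-in for the expensive model:

```python
import numpy as np

def quadratic_surrogate_derivs(sim, x0, y0, h):
    """Fit f ~ c0 + c1*dx + c2*dy + c3*dx^2 + c4*dx*dy + c5*dy^2
    from a 3x3 design around (x0, y0); derivatives at (x0, y0)
    then follow from the coefficients.  `sim` is the expensive model."""
    d = np.array([(i, j) for i in (-1, 0, 1) for j in (-1, 0, 1)], float)*h
    F = np.array([sim(x0 + dx, y0 + dy) for dx, dy in d])   # 9 evaluations
    X = np.column_stack([np.ones(len(d)), d[:, 0], d[:, 1],
                         d[:, 0]**2, d[:, 0]*d[:, 1], d[:, 1]**2])
    c, *_ = np.linalg.lstsq(X, F, rcond=None)
    return {"f": c[0], "fx": c[1], "fy": c[2],
            "fxx": 2*c[3], "fxy": c[4], "fyy": 2*c[5]}

# usage with a cheap analytic stand-in for the simulation
g = lambda x, y: np.sin(x)*y**2
out = quadratic_surrogate_derivs(g, 0.3, 1.2, h=0.05)
```

For more evaluations spread over a larger region, Gaussian-process (kriging) surrogates are the usual next step; they give derivative estimates with uncertainty and tell you where the next simulation run is most informative.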
Hi all,
I wanted to try solving something quite far from my field, so here we go.
Linear quantum harmonic oscillator (I took the equation from a general book on dynamical systems):
i u_t + 0.5 * u_{xx} - 0.5 * x^2 * u = 0
ic: u(x,0) = exp(-0.2*x^2)
bc: u_{x}(\partial\Omega) = 0
Spatial discretisation is performed with finite elements (Bubnov-Galerkin) and time discretisation first with Backward Euler. That solution was too strongly damped (numerical dissipation), so I moved to Crank-Nicolson. The problem is linear, hence no further stabilisations are needed. Enclosed are the solutions obtained with both time-integration schemes.
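A quick sanity check that Crank-Nicolson has removed the Backward Euler damping: for i u_t = H u with Hermitian H, Crank-Nicolson is the Cayley transform (I + i dt H/2) u^{n+1} = (I - i dt H/2) u^n, which is exactly unitary, so the discrete L2 norm of u is conserved to machine precision. A finite-difference sketch (not the post's FEM discretisation, and with Dirichlet ends for simplicity rather than the post's Neumann condition; the Gaussian has decayed to ~0 at the boundary anyway):

```python
import numpy as np

# i u_t + 0.5 u_xx - 0.5 x^2 u = 0  =>  i u_t = H u,
# H = -0.5 d^2/dx^2 + 0.5 x^2 (symmetric after discretisation)
L, nx, dt, nsteps = 10.0, 401, 0.01, 100
x = np.linspace(-L, L, nx)
dx = x[1] - x[0]

# second-order FD Laplacian with homogeneous Dirichlet ends
D2 = (np.diag(np.full(nx-1, 1.0), -1) - 2*np.eye(nx)
      + np.diag(np.full(nx-1, 1.0), 1))/dx**2
H = -0.5*D2 + 0.5*np.diag(x**2)

u = np.exp(-0.2*x**2).astype(complex)   # initial condition from the post
norm0 = np.sqrt(dx)*np.linalg.norm(u)

A = np.eye(nx) + 0.5j*dt*H              # Crank-Nicolson: A u^{n+1} = B u^n
B = np.eye(nx) - 0.5j*dt*H
for _ in range(nsteps):
    u = np.linalg.solve(A, B @ u)

norm1 = np.sqrt(dx)*np.linalg.norm(u)
```

Backward Euler's amplification factor 1/(1 + i dt lambda) has modulus < 1 for every eigenmode, which is exactly the dissipation observed; the Cayley factor has modulus 1.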
Hello everyone,
I am currently taking a numerical methods course at my university, and one of the topics is the finite volume method. However, our course book, Numerical Methods for Engineers, does not go into as much detail as the course requires, and the course notes on this topic are fairly difficult to understand.
I was wondering if anyone can recommend a book which goes into detail on this topic. I would really appreciate it, thanks!
Singular Value Decomposition takes too long as matrix size increases. Lanczos bidiagonalisation is sometimes unstable. What algorithm is fast and robust?
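If only the leading singular triplets are needed, randomized SVD (range finding with a Gaussian test matrix plus a few power iterations, in the style of Halko, Martinsson and Tropp) is a common answer to exactly this trade-off: near-BLAS-3 speed and much better robustness than plain Lanczos. A sketch:

```python
import numpy as np

def randomized_svd(A, k, p=10, q=2, rng=None):
    """Truncated SVD of A via randomized range finding:
    returns the leading k singular triplets."""
    rng = np.random.default_rng(rng)
    m, n = A.shape
    Omega = rng.standard_normal((n, k + p))   # random test matrix
    Y = A @ Omega
    for _ in range(q):                        # power iterations sharpen
        Y = A @ (A.T @ Y)                     # the captured spectrum
    Q, _ = np.linalg.qr(Y)                    # orthonormal range basis
    Bsmall = Q.T @ A                          # small (k+p) x n matrix
    Ub, s, Vt = np.linalg.svd(Bsmall, full_matrices=False)
    return (Q @ Ub)[:, :k], s[:k], Vt[:k]

# exactly rank-8 test matrix: the sketch recovers its SVD essentially exactly
rng = np.random.default_rng(1)
M = rng.standard_normal((500, 8)) @ rng.standard_normal((8, 400))
U, s, Vt = randomized_svd(M, k=8, rng=1)
```

The oversampling p and the power-iteration count q are the two knobs: p buys robustness, q buys accuracy when the spectrum decays slowly.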
Hey!
Any researcher in numerical analysis here?
I was curious about the sort of relevant/interesting problems nowadays in numerical PDEs, preferably (but not necessarily) ones with a considerable intersection with optimization theory. Any document with a description of those things and reading suggestions?
Another question...
I get the feeling that numerical PDEs are inherently computationally expensive. Is there hope for fast algorithms? I get this is a vague question. I'm sorry.
Thank you.
Have been working on a compartmental model with multiple levels and have been getting a lot of overshoot. The model is of a population that moves up through compartments representing how poisoned they are by a substance, with each higher compartment being more likely to die. They leave each compartment by interacting with the substance. So, for example, compartment B_2 loses members through mass action with S, so a term in its derivative is "-interaction_rate*S*B_2", while B_3 gains "+interaction_rate*S*B_2" in its derivative.
Have been simulating turning the substance on and off via the parameters for the amount of substance and the rate at which it comes in. So for a while, S is 0 until the max population is reached, and then it gets turned on by being given a different value.
When this value is small, the solution overshoots and actually makes the population increase past its previous value. It seems to be due to the large number of compartments adding up all those S*B_i terms incorrectly. Have been using stiff equation solvers. Is there any other way to get rid of this overshoot?
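A population increasing when only deaths remove mass is a positivity/conservation failure of the time stepper, so besides tightening rtol/atol, one robust option is a positivity-preserving implicit step: for fixed S the transfer terms are linear in the B_i, and backward Euler on that system keeps every compartment nonnegative at any step size because (I - h*A) is an M-matrix for this chain structure. A sketch with hypothetical parameter values (compartment count, rates and mu_i are mine, not the post's):

```python
import numpy as np

# hypothetical 6-compartment poisoning chain: B_i -> B_{i+1} at rate r*S,
# and compartment i dies at rate mu_i (increasing with i)
m = 6
r, S = 2.0, 1.0
mu = 0.1*np.arange(1, m + 1)        # higher compartments die faster

A = np.zeros((m, m))                # dB/dt = A @ B  (S held fixed)
for i in range(m):
    A[i, i] -= mu[i]
    if i < m - 1:
        A[i, i] -= r*S              # leaves compartment i via S
        A[i+1, i] += r*S            # and arrives in compartment i+1

B = np.zeros(m); B[0] = 100.0       # everyone starts unpoisoned
h, nsteps = 0.5, 40                 # deliberately large step
I = np.eye(m)
total = [B.sum()]
for _ in range(nsteps):
    B = np.linalg.solve(I - h*A, B) # backward Euler, positivity-preserving
    total.append(B.sum())
total = np.array(total)
```

With this step the total population is monotonically non-increasing (each column of A sums to -mu_i <= 0), so any overshoot in the real model would point at the solver settings rather than the equations.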
I am trying to perform an N-body particle simulation that has particles apply a linear attractive spring force to each other when they are within a range R. The equation can be given as so for the acceleration of an individual particle:
The particles have an uneven distribution in space. Attached is a visualization.
Right now I am using the naive method of summing droplet spring forces pairwise, which has time complexity O(N^2). I want to use some method to reduce the time complexity to O(N) or O(N log N). Any recommendations on what to use?
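Since the force has a hard cutoff R, a uniform cell list (spatial hashing with cell size R) is the standard answer: each particle only searches the 3x3 block of neighbouring cells, which is O(N) work at bounded density; a k-d tree gives O(N log N) and copes better with very uneven distributions. A sketch with a unit spring constant (my assumption) and a naive reference for comparison:

```python
import numpy as np
from collections import defaultdict

def spring_accel_celllist(pos, R, k=1.0):
    """Accelerations from linear springs with cutoff R via a uniform
    cell list (cell size R): only the 3x3 block of neighbouring cells
    needs to be searched, giving ~O(N) work at bounded density."""
    acc = np.zeros_like(pos)
    cells = defaultdict(list)
    for idx, p in enumerate(pos):
        cells[(int(p[0]//R), int(p[1]//R))].append(idx)
    for (cx, cy), members in cells.items():
        for i in members:
            for dx in (-1, 0, 1):
                for dy in (-1, 0, 1):
                    for j in cells.get((cx + dx, cy + dy), ()):
                        if j == i:
                            continue
                        d = pos[j] - pos[i]
                        r = np.hypot(d[0], d[1])
                        if 0 < r < R:
                            acc[i] += k*d   # linear spring toward j
    return acc

rng = np.random.default_rng(0)
pts = rng.uniform(0, 10, size=(300, 2))
a_fast = spring_accel_celllist(pts, R=1.0)

# naive O(N^2) reference for verification (k = 1)
diff = pts[None, :, :] - pts[:, None, :]
dist = np.linalg.norm(diff, axis=-1)
mask = (dist > 0) & (dist < 1.0)
a_naive = (mask[:, :, None]*diff).sum(axis=1)
```

Because the cutoff is exact (not an approximation like Barnes-Hut), the cell-list result matches the naive sum to floating-point roundoff.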
The problem is y*y'' + 0.0001 = 0 with y(0) = 10 and y(5) = 1000. I can't solve it following the method for a linear ODE BVP.
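Because the ODE is nonlinear in y, the linear-BVP recipe does not apply directly; the two standard routes are shooting (integrate the IVP and adjust y'(0) until y(5) matches) or finite differences plus Newton iteration. A shooting sketch with RK4 and a secant update on the unknown slope (step counts and brackets are my own choices):

```python
import numpy as np

def integrate(v0, nsteps=2000):
    """RK4 for y'' = -1e-4/y rewritten as a first-order system,
    with y(0) = 10, y'(0) = v0; returns y(5)."""
    h = 5.0/nsteps
    y, v = 10.0, v0
    def rhs(y, v):
        return v, -1e-4/y
    for _ in range(nsteps):
        k1y, k1v = rhs(y, v)
        k2y, k2v = rhs(y + 0.5*h*k1y, v + 0.5*h*k1v)
        k3y, k3v = rhs(y + 0.5*h*k2y, v + 0.5*h*k2v)
        k4y, k4v = rhs(y + h*k3y, v + h*k3v)
        y += h*(k1y + 2*k2y + 2*k3y + k4y)/6
        v += h*(k1v + 2*k2v + 2*k3v + k4v)/6
    return y

# secant iteration on the unknown initial slope y'(0)
a, b = 100.0, 300.0
fa, fb = integrate(a) - 1000.0, integrate(b) - 1000.0
for _ in range(50):
    if fb == fa:
        break
    c = b - fb*(b - a)/(fb - fa)
    a, fa = b, fb
    b, fb = c, integrate(c) - 1000.0
    if abs(fb) < 1e-9:
        break
yT = integrate(b)
```

For this particular problem the 0.0001 term is so small that y is nearly linear in t and the secant iteration converges in a couple of steps; stiffer nonlinear BVPs are better served by the FD-plus-Newton route.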
I'm following this example
https://m.youtube.com/watch?v=u8dVrzxTvSA
But that video covers only a 2nd-order equation, while my problem is a 4th-order ODE with BCs on y(0), y(1), y'(0), y''(1). How can I modify the method in the video so that it works for a 4th-order ODE?
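The finite-difference idea extends directly: discretise y'''' with the 5-point stencil (y_{i-2} - 4y_{i-1} + 6y_i - 4y_{i+1} + y_{i+2})/h^4, add one ghost node at each end so the derivative BCs y'(0) and y''(1) can be imposed with centered differences, and solve the resulting banded linear system. A sketch on a model linear problem with a known exact solution (if the actual 4th-order ODE is nonlinear, wrap this linear solve inside a Newton iteration):

```python
import numpy as np

# model linear problem: y'''' = 24 on [0,1] with
# y(0) = 0, y(1) = 1, y'(0) = 0, y''(1) = 12  (exact solution y = x^4)
N = 50
h = 1.0/N
x = np.linspace(0.0, 1.0, N + 1)

# unknown vector: ghost y_{-1}, then y_0..y_N, then ghost y_{N+1}
M = N + 3
A = np.zeros((M, M))
b = np.zeros(M)
g = lambda i: i + 1               # position of y_i in the unknown vector

for i in range(1, N):             # 5-point stencil for y'''' at interior nodes
    cols = [g(i-2), g(i-1), g(i), g(i+1), g(i+2)]
    A[i, cols] = np.array([1.0, -4.0, 6.0, -4.0, 1.0])/h**4
    b[i] = 24.0

A[0, g(0)] = 1.0;  b[0] = 0.0     # y(0) = 0
A[N, g(N)] = 1.0;  b[N] = 1.0     # y(1) = 1
A[N+1, g(1)] = 1.0                # y'(0) = 0, centered via the left ghost
A[N+1, g(-1)] = -1.0
A[N+2, [g(N-1), g(N), g(N+1)]] = np.array([1.0, -2.0, 1.0])/h**2
b[N+2] = 12.0                     # y''(1) = 12, centered via the right ghost

y = np.linalg.solve(A, b)[1:-1]   # discard the two ghost values
err = np.max(np.abs(y - x**4))
```

The counting works out: N+1 grid values plus 2 ghosts equals N-1 interior stencil equations plus the 4 boundary conditions, exactly matching the 4 BCs of a 4th-order problem.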
I'm finishing my masters in mathematics, focusing on modelling, numerics and simulation, and my dream is to get a job as a numerical programmer working on some big/complex piece of numerical or simulation software.
I have experience working with C, C++, Python and OpenMPI, but I learn fast and am willing to learn new technologies.
I'm interested in contributing to some piece of numerical or simulation software to get experience and foster connections in the industry, either on a voluntary basis or in a Werkstudent position. I am based in Germany, so research groups or other entities based in Germany are of particular interest to me.
Would love to get some tips on projects looking for help.
I wondered if someone can tell me an easy trick for figuring out what to put in which line of the Clenshaw scheme. For the Chebyshev polynomials I understand that the last row is always multiplied by 2x (twice the evaluation point x), and after additionally putting the values of the last row, shifted, into the second row, all of them are added together. For Chebyshev polynomials of the second kind we do the same with the last coefficient, while with the first kind we only multiply by x. But how would it work with general recurrence formulas?
With the tridiagonal matrix that arises for the zeros of orthogonal polynomials, I understand that the zeros of the polynomial are the same as the eigenvalues of the Jacobi matrix. However, how do I calculate those zeros/eigenvalues, for example for the Chebyshev or Legendre polynomials?
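For the zeros you don't use Clenshaw at all: build the symmetric Jacobi matrix from the three-term recurrence coefficients and hand it to a symmetric eigensolver (the Golub-Welsch approach). For Chebyshev of the first kind the zeros are even available in closed form, cos((2k-1)*pi/(2n)). A sketch:

```python
import numpy as np

def legendre_zeros(n):
    """Zeros of the degree-n Legendre polynomial as eigenvalues of the
    symmetric Jacobi matrix (Golub-Welsch).  For Legendre the recurrence
    gives diagonal a_k = 0 and off-diagonal b_k = k/sqrt(4k^2 - 1)."""
    k = np.arange(1, n)
    b = k/np.sqrt(4.0*k**2 - 1.0)
    J = np.diag(b, -1) + np.diag(b, 1)
    return np.sort(np.linalg.eigvalsh(J))

def chebyshev_zeros(n):
    """Zeros of T_n: closed form cos((2k-1)*pi/(2n))."""
    k = np.arange(1, n + 1)
    return np.sort(np.cos((2*k - 1)*np.pi/(2*n)))

z_leg = legendre_zeros(5)
z_cheb = chebyshev_zeros(5)
```

The same Jacobi-matrix construction works for any orthogonal polynomial family once you have its recurrence coefficients, and the eigenvector first components additionally give the Gauss quadrature weights.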
Thanks heaps for your help :)
Hello everyone
I have encountered the following problem related to reconstructing a positive-valued particle density function f: [0,1]^2 -> R_{>0}.
Basically I am given measurements m_i = integral_{[0,1]^2} f * g_i, where the g_i are weighting functions that are known in advance, so the measurements correspond to weighted integrals of f against the weights g_i.
My question is: given the m_i, is there a general numerical approach to reconstruct f?
If it helps, I attach a picture of a typical weighting function:
Typical weighting function g_i; red corresponds to 0, blue/green corresponds to 1.
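This is a linear inverse problem: discretise f on a grid so each measurement becomes m_i = sum_j G_ij f_j, then, since there are usually far fewer measurements than pixels, solve a regularised least-squares problem (Tikhonov below; positivity can be added on top, e.g. via projected gradient). A sketch where the g_i and the true f are synthetic stand-ins for the real data:

```python
import numpy as np

rng = np.random.default_rng(0)

# discretise [0,1]^2 on an n x n grid; f and the g_i become vectors
n = 16
xs = (np.arange(n) + 0.5)/n
X, Y = np.meshgrid(xs, xs, indexing="ij")

# hypothetical smooth weighting functions g_i and a known test density f
n_meas = 40
G = np.stack([np.exp(-((X - rng.uniform(0, 1))**2
                       + (Y - rng.uniform(0, 1))**2)/0.05).ravel()
              for _ in range(n_meas)])/n**2      # quadrature weight 1/n^2
f_true = (1.0 + np.sin(2*np.pi*X)*np.cos(np.pi*Y)).ravel()
m = G @ f_true                                   # simulated measurements

# Tikhonov-regularised least squares: min ||G f - m||^2 + lam*||f||^2
lam = 1e-12
f_rec = np.linalg.solve(G.T @ G + lam*np.eye(n*n), G.T @ m)
resid = np.linalg.norm(G @ f_rec - m)/np.linalg.norm(m)
```

With fewer measurements than pixels the reconstruction is only determined within the span of the g_i, so the regulariser (and any positivity constraint) decides what the unconstrained components look like; the choice of lam trades data fit against smoothness.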
Hey everyone. Can someone please tell me anything about solving a stiff ODE system using a Rosenbrock method? Any help is appreciated. Thank you.
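The simplest member of the family is the linearly implicit (Rosenbrock-Euler) method: one Jacobian evaluation and one linear solve per step, no Newton iteration, y_{n+1} = y_n + h*(I - h*J)^{-1} f(y_n), with stability function 1/(1-z) (L-stable). A sketch on a classic stiff test problem:

```python
import numpy as np

def rosenbrock_euler(f, jac, y0, t0, t1, nsteps):
    """Linearly implicit Euler, the simplest Rosenbrock scheme:
    y_{n+1} = y_n + h * (I - h*J)^{-1} f(t_n, y_n)."""
    y = np.atleast_1d(np.asarray(y0, float))
    h = (t1 - t0)/nsteps
    t = t0
    I = np.eye(y.size)
    for _ in range(nsteps):
        J = jac(t, y)                 # one Jacobian, one solve per step
        y = y + h*np.linalg.solve(I - h*J, f(t, y))
        t += h
    return y

# stiff test problem: y' = -1000*(y - cos(t)) - sin(t), y(0) = 1,
# with exact solution y(t) = cos(t)
f = lambda t, y: np.array([-1000.0*(y[0] - np.cos(t)) - np.sin(t)])
jac = lambda t, y: np.array([[-1000.0]])
y_end = rosenbrock_euler(f, jac, [1.0], 0.0, 1.0, 200)  # h = 0.005 >> 1/1000
```

An explicit method would need h below about 2/1000 just for stability; here the step is set by accuracy alone. Higher-order Rosenbrock schemes (e.g. ROS2/ROS3 or MATLAB's ode23s) follow the same pattern with a few staged solves against the same factored matrix.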
I need the answer to this question, please.