Why do I get negative values when using Periodogram after Fourier analysis?

Sorry to bother you with this question. I am trying to analyze wave data produced in a wave tank. The wave period is 1.7 s; the waves encounter an obstacle at some point, so reflections are expected.

I plotted the output of one of the sensors of the wave gauge system and obtained the following plot:

[Image of wave amplitude vs. time]

The x-axis is time in seconds and the y-axis is wave height in mm. After this I run:

ListLinePlot[Abs[Fourier[datcy]], PlotRange -> All] 

This gives me the Fourier transform of the data, which I plot:

[Plot of the Fourier transform of the data]

I know the spectrum is mirrored (the data are real, so the transform is conjugate-symmetric), which is why it appears doubled. But what I don't get, whether about Mathematica or about signal analysis in general, is how I end up with a power spectrum that is negative after this:

Periodogram[datcy[[All, 2]], Frame -> True, GridLinesStyle -> Directive[Red, Dashed], PlotRange -> All]

[Power spectrum plot]

Is it in dB? If so, then technically the powers themselves are not negative, and I should read a value like −150 dB as a very small positive power? I am a bit lost here. Also, I have some doubt about why the amplitudes in the Fourier plot do not relate to the amplitude of my wave.

On Defining the Fourier Transform and Performing Changes of Variable on Quotient Groups of $\mathbb{Q}$

Much to my dismay, in my work on the more number-theoretic side of harmonic analysis (e.g., the Fourier transform on the adeles, on the profinite integers, etc.), I have found myself struggling with technicalities that emerge from frustratingly simple issues, issues so simple (and yet so technical) that I haven't been able to find anything that might shed light on the matter.

Let $L_{\textrm{loc}}^{2}\left(\mathbb{Q}\right)$ be the space of functions $f:\mathbb{Q}\rightarrow\mathbb{C}$ which satisfy $\sum_{t\in T}\left|f\left(t\right)\right|^{2}<\infty$ for all bounded subsets $T\subseteq\mathbb{Q}$. Letting $\mu$ be any positive integer, it is easy to show that any function which is both $\mu$-periodic (an $f:\mathbb{Q}\rightarrow\mathbb{C}$ such that $f\left(t+\mu\right)=f\left(t\right)$ for all $t\in\mathbb{Q}$) and in $L_{\textrm{loc}}^{2}\left(\mathbb{Q}\right)$ is necessarily an element of $L^{2}\left(\mathbb{Q}/\mu\mathbb{Z}\right)$, the complex Hilbert space of functions $f:\mathbb{Q}/\mu\mathbb{Z}\rightarrow\mathbb{C}$ so that: $$\sum_{t\in\mathbb{Q}/\mu\mathbb{Z}}\left|f\left(t\right)\right|^{2}<\infty$$ Equipping $\mathbb{Q}/\mu\mathbb{Z}$ with the discrete topology, we can use Pontryagin duality to obtain a Fourier transform $\mathscr{F}_{\mathbb{Q}/\mu\mathbb{Z}}$. The ideal case is $\mu=1$. There, $\mathscr{F}_{\mathbb{Q}/\mathbb{Z}}$ is an isometric Hilbert space isomorphism from $L^{2}\left(\mathbb{Q}/\mathbb{Z}\right)$ to $L^{2}\left(\overline{\mathbb{Z}}\right)$, where: $$\overline{\mathbb{Z}}\overset{\textrm{def}}{=}\prod_{p\in\mathbb{P}}\mathbb{Z}_{p}$$ is the ring of profinite integers, $\mathbb{P}$ is the set of prime numbers, and $L^{2}\left(\overline{\mathbb{Z}}\right)$ is the space of functions $\check{f}:\overline{\mathbb{Z}}\rightarrow\mathbb{C}$ which are square-integrable with respect to the Haar probability measure $d\mathfrak{z}=\prod_{p\in\mathbb{P}}d\mathfrak{z}_{p}$ on $\overline{\mathbb{Z}}$.

The first sign of trouble was when I learned that, for any integers $ \mu,\nu$ , the (additive) quotient groups $ \mathbb{Q}/\mu\mathbb{Z}$ and $ \mathbb{Q}/\nu\mathbb{Z}$ are group-isomorphic to one another, and thus, that both have $ \overline{\mathbb{Z}}$ as their Pontryagin dual. From my point of view, however, this isomorphism seems to only cause trouble. In my work, I am identifying $ L^{2}\left(\mathbb{Q}/\mu\mathbb{Z}\right)$ with the set of $ \mu$ -periodic functions $ f:\mathbb{Q}\rightarrow\mathbb{C}$ which are square integrable with respect to the counting measure on $ \mathbb{Q}\cap\left[0,\mu\right)$ . As such, $ \mathbb{Q}/\mu\mathbb{Z}$ and $ \mathbb{Q}/\nu\mathbb{Z} $ cannot be “the same” from my point of view, because $ f\left(t\right)\in L^{2}\left(\mathbb{Q}/\mu\mathbb{Z}\right)$ need not imply that $ f\left(t\right)\in L^{2}\left(\mathbb{Q}/\nu\mathbb{Z}\right)$ .

In my current work, I am dealing with a functional equation of the form: $$\sum_{n=0}^{N-1}g_{n}\left(t\right)f\left(\frac{a_{n}t+b_{n}}{d_{n}}\right)=0$$ where $N$ is an integer $\geq2$, where the $g_{n}$s are known periodic functions, where $f$ is an unknown function in $L^{2}\left(\mathbb{Q}/\mathbb{Z}\right)$ (i.e., $f\left(t\right)\in L_{\textrm{loc}}^{2}\left(\mathbb{Q}\right)$ and $f\left(t+1\right)=f\left(t\right)$ for all $t\in\mathbb{Q}$), and where $a_{n},b_{n},d_{n}$ are integers with $\gcd\left(a_{n},d_{n}\right)=1$ for all $n$. For brevity, I'll write: $$\varphi_{n}\left(t\right)\overset{\textrm{def}}{=}\frac{a_{n}t+b_{n}}{d_{n}}$$ Because $\varphi_{n}\left(t+1\right)$ need not equal $\varphi_{n}\left(t\right)+1$, the individual functions $f\circ\varphi_{n}$, though periodic and in $L_{\textrm{loc}}^{2}\left(\mathbb{Q}\right)$, are not necessarily of period $1$. Letting $p$ denote the least common multiple of the periods of the $g_{n}$s and the $f\circ\varphi_{n}$s, I can view the functional equation as living in $L^{2}\left(\mathbb{Q}/p\mathbb{Z}\right)$, and as such, I hope to simplify it by applying the Fourier transform.

Letting $e^{2\pi i\left\langle t,\mathfrak{z}\right\rangle }$ denote the duality pairing between elements $t\in\mathbb{Q}$ (or $\mathbb{Q}/\mu\mathbb{Z}$) and $\mathfrak{z}\in\overline{\mathbb{Z}}$, the idea is to multiply the functional equation by $e^{2\pi i\left\langle t,\mathfrak{z}\right\rangle }$, sum over an appropriate domain of $t$ (ideally, $\mathbb{Q}/p\mathbb{Z}$), make a change of variables in $t$ to move one of the $\varphi_{n}\left(t\right)$s out of $f$ and into $e^{2\pi i\left\langle t,\mathfrak{z}\right\rangle }$, pull terms out of this character, and then invert the Fourier transform to return to $L^{2}\left(\mathbb{Q}/p\mathbb{Z}\right)$ with a vastly simpler equation. My main difficulty breaks into three parts:

(1) Is taking the least common multiple of the periods to reformulate the functional equation as one over $ L^{2}\left(\mathbb{Q}/p\mathbb{Z}\right)$ legal?

(2) I know that the Fourier transform $\mathscr{F}_{\mathbb{Q}/\mathbb{Z}}:L^{2}\left(\mathbb{Q}/\mathbb{Z}\right)\rightarrow L^{2}\left(\overline{\mathbb{Z}}\right)$ is given by: $$\mathscr{F}_{\mathbb{Q}/\mathbb{Z}}\left\{ f\right\} \left(\mathfrak{z}\right)\overset{\textrm{def}}{=}\sum_{t\in\mathbb{Q}/\mathbb{Z}}f\left(t\right)e^{2\pi i\left\langle t,\mathfrak{z}\right\rangle }$$ and that the inverse transform is: $$\mathscr{F}_{\mathbb{Q}/\mathbb{Z}}^{-1}\left\{ \check{f}\right\} \left(t\right)\overset{\textrm{def}}{=}\int_{\overline{\mathbb{Z}}}\check{f}\left(\mathfrak{z}\right)e^{-2\pi i\left\langle t,\mathfrak{z}\right\rangle }d\mathfrak{z}$$ However, I am at a loss as to what formula to use for $\mathscr{F}_{\mathbb{Q}/p\mathbb{Z}}$ and its inverse, for two reasons. On the one hand, because $\mathbb{Q}/\mathbb{Z}$ and $\mathbb{Q}/p\mathbb{Z}$ are group-isomorphic, what is to stop me from using the same formula for their Fourier transforms? On the other hand, if I use a modified formula, say: $$\mathscr{F}_{\mathbb{Q}/p\mathbb{Z}}\left\{ f\right\} \left(\mathfrak{z}\right)=\sum_{t\in\mathbb{Q}/p\mathbb{Z}}f\left(t\right)e^{2\pi i\left\langle \frac{t}{p},\mathfrak{z}\right\rangle }$$ does the fact that $\mathscr{F}_{\mathbb{Q}/p\mathbb{Z}}\left\{ f\right\} \left(\mathfrak{z}\right)\in L^{2}\left(\overline{\mathbb{Z}}\right)$ then mean that I can recover $f$ by applying $\mathscr{F}_{\mathbb{Q}/\mathbb{Z}}^{-1}$, or do I also have to modify it in order to make everything consistent? Knowing the correct formula for the Fourier transform on $L^{2}\left(\mathbb{Q}/p\mathbb{Z}\right)$ and its inverse is essential.
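To make the bookkeeping I am worried about concrete (and unless I have erred somewhere): with the modified formula above, the dilation $u=t/p$ is a bijection from $\mathbb{Q}/p\mathbb{Z}$ to $\mathbb{Q}/\mathbb{Z}$, so $$\mathscr{F}_{\mathbb{Q}/p\mathbb{Z}}\left\{ f\right\} \left(\mathfrak{z}\right)=\sum_{u\in\mathbb{Q}/\mathbb{Z}}f\left(pu\right)e^{2\pi i\left\langle u,\mathfrak{z}\right\rangle }=\mathscr{F}_{\mathbb{Q}/\mathbb{Z}}\left\{ f\left(p\cdot\right)\right\} \left(\mathfrak{z}\right)$$ which would mean that applying the unmodified $\mathscr{F}_{\mathbb{Q}/\mathbb{Z}}^{-1}$ recovers $f\left(pt\right)$ rather than $f\left(t\right)$; it is exactly this kind of consistency between transform and inverse that I am unsure how to pin down.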

(3) I would like to think that performing a change of variables for a sum of the form: $$\sum_{t\in\mathbb{Q}/r\mathbb{Z}}f\left(\alpha t+\beta\right)$$ (where $\alpha,\beta,r\in\mathbb{Q}$, with $\alpha\neq0$ and $r=\frac{p}{q}>0$) would be a relatively simple matter, but that doesn't appear to be the case. For example, if $t\in\mathbb{Q}\mapsto f\left(\alpha t+\beta\right)\in\mathbb{C}$ is not an $r$-periodic function, then this sum is not well-defined over the quotient group $\mathbb{Q}/r\mathbb{Z}$. Worse yet, supposing it is $r$-periodic, take a look at this: write elements of $\mathbb{Q}/r\mathbb{Z}$ in coset form, $t+r\mathbb{Z}$ with $t\in\mathbb{Q}$, and make the change of variable $\tau=\alpha t+\beta$. Consequently, the set of all $\tau$ is: $$\alpha\left(\mathbb{Q}/r\mathbb{Z}\right)+\beta=\left\{ \alpha\left(t+r\mathbb{Z}\right)+\beta:t\in\mathbb{Q}\right\} =\left\{ \tau+\alpha r\mathbb{Z}:t\in\mathbb{Q}\right\}$$ Here is where things get loopy.

(1) Since $\alpha,\beta\in\mathbb{Q}$ with $\alpha\neq0$, the map $\varphi\left(t\right)\overset{\textrm{def}}{=}\alpha t+\beta$ is a bijection of $\mathbb{Q}$. As such, I would think that: $$\left\{ \tau+\alpha r\mathbb{Z}:t\in\mathbb{Q}\right\} =\left\{ \tau+\alpha r\mathbb{Z}:\varphi^{-1}\left(\tau\right)\in\mathbb{Q}\right\} =\left\{ \tau+\alpha r\mathbb{Z}:\tau\in\varphi\left(\mathbb{Q}\right)\right\}$$ and hence: $$\alpha\left(\mathbb{Q}/r\mathbb{Z}\right)+\beta=\left\{ \tau+\alpha r\mathbb{Z}:\tau\in\mathbb{Q}\right\} =\mathbb{Q}/\alpha r\mathbb{Z}$$ Using this approach, I obtain: $$\sum_{t\in\mathbb{Q}/r\mathbb{Z}}f\left(\alpha t+\beta\right)=\sum_{\tau\in\mathbb{Q}/\alpha r\mathbb{Z}}f\left(\tau\right)$$

(2) Since $r=\frac{p}{q}$, decompose $\mathbb{Z}$ into its equivalence classes mod $q$: $$\left\{ \tau+\alpha r\mathbb{Z}:\tau\in\mathbb{Q}\right\} =\bigcup_{k=0}^{q-1}\left\{ \tau+\alpha r\left(q\mathbb{Z}+k\right):\tau\in\mathbb{Q}\right\}$$ and so: $$\sum_{t\in\mathbb{Q}/r\mathbb{Z}}f\left(\alpha t+\beta\right)=\sum_{k=0}^{q-1}\sum_{\tau\in\mathbb{Q}/\alpha rq\mathbb{Z}}f\left(\tau+\alpha rk\right)$$ On the other hand, for each $k$: $$\left\{ \tau+\alpha r\left(q\mathbb{Z}+k\right):\tau\in\mathbb{Q}\right\} =\left\{ \tau+\alpha rk+\alpha rq\mathbb{Z}:\tau\in\mathbb{Q}\right\}$$ and, since $\tau\mapsto\tau+\alpha rk$ is a bijection of $\mathbb{Q}$, the logic of (1) would suggest that: $$\left\{ \tau+\alpha r\left(q\mathbb{Z}+k\right):\tau\in\mathbb{Q}\right\} =\left\{ \tau+\alpha rq\mathbb{Z}:\tau\in\mathbb{Q}\right\}$$ for all $k$. But then, that gives: $$\sum_{k=0}^{q-1}\sum_{\tau\in\mathbb{Q}/\alpha rq\mathbb{Z}}f\left(\tau+\alpha rk\right)=\sum_{t\in\mathbb{Q}/r\mathbb{Z}}f\left(\alpha t+\beta\right)=q\sum_{\tau\in\mathbb{Q}/\alpha rq\mathbb{Z}}f\left(\tau\right)$$ which hardly seems right.
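To spell out the coset-splitting that (2) implicitly relies on (assuming I have not mis-stepped): each coset of $\alpha r\mathbb{Z}$ is a disjoint union of $q$ cosets of $\alpha rq\mathbb{Z}$, $$\tau+\alpha r\mathbb{Z}=\bigsqcup_{k=0}^{q-1}\left(\tau+\alpha rk+\alpha rq\mathbb{Z}\right)$$ so that, for $f$ which is $\alpha r$-periodic, summing over the finer quotient counts each coarser coset exactly $q$ times: $$\sum_{\tau\in\mathbb{Q}/\alpha rq\mathbb{Z}}f\left(\tau\right)=q\sum_{\tau\in\mathbb{Q}/\alpha r\mathbb{Z}}f\left(\tau\right)$$ It is squaring this identity with the set-theoretic manipulations above that I cannot manage.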

Continuous Fourier integrals by Ooura’s method

I have a PR implementing Ooura and Mori’s method for continuous Fourier integrals. I used it here to compute an oscillatory integral that Mathematica got completely wrong, and then I thought “well maybe this method is pretty good!” and figured I’d finish it after letting it languish for over a year.

Here are a few concerns:

  • I agonized over computing the nodes and weights accurately. But in the end, I had to precompute the nodes and weights in higher accuracy and cast them back down. Is there any way to avoid this that I’m missing? (If I don’t do this, then the error goes down after a few levels, then starts increasing. Use -DBOOST_MATH_INSTRUMENT_OOURA to see it if you’re interested.)

  • There is also some code duplication that I can't really figure out how to get rid of. For example, for the sine integral, f(0) = inf is allowed, but for the cosine integral, f(0) must be finite. So there is a very small difference that prevents extracting the generic code into a single location. But maybe I'm just not being creative enough here.

  • I’m also interested in whether the comments are well-written and informative. I couldn’t understand my comments from when I stopped working on this last year, and now I’m worried I won’t be able to understand my current comments next year!

So here’s the code:

// Copyright Nick Thompson, 2019
// Use, modification and distribution are subject to the
// Boost Software License, Version 1.0.
// (See accompanying file LICENSE_1_0.txt
// or copy at http://www.boost.org/LICENSE_1_0.txt)

/*
 * References:
 * Ooura, Takuya, and Masatake Mori. "A robust double exponential formula for Fourier-type integrals." Journal of computational and applied mathematics 112.1-2 (1999): 229-241.
 * http://www.kurims.kyoto-u.ac.jp/~ooura/intde.html
 */
#ifndef BOOST_MATH_QUADRATURE_OOURA_FOURIER_INTEGRALS_HPP
#define BOOST_MATH_QUADRATURE_OOURA_FOURIER_INTEGRALS_HPP
#include <memory>
#include <boost/math/quadrature/detail/ooura_fourier_integrals_detail.hpp>

namespace boost { namespace math { namespace quadrature {

template<class Real>
class ooura_fourier_sin {
public:
    ooura_fourier_sin(const Real relative_error_tolerance = tools::root_epsilon<Real>(), size_t levels = sizeof(Real))
        : impl_(std::make_shared<detail::ooura_fourier_sin_detail<Real>>(relative_error_tolerance, levels))
    {}

    template<class F>
    std::pair<Real, Real> integrate(F const & f, Real omega) {
        return impl_->integrate(f, omega);
    }

    std::vector<std::vector<Real>> const & big_nodes() const {
        return impl_->big_nodes();
    }

    std::vector<std::vector<Real>> const & weights_for_big_nodes() const {
        return impl_->weights_for_big_nodes();
    }

    std::vector<std::vector<Real>> const & little_nodes() const {
        return impl_->little_nodes();
    }

    std::vector<std::vector<Real>> const & weights_for_little_nodes() const {
        return impl_->weights_for_little_nodes();
    }

private:
    std::shared_ptr<detail::ooura_fourier_sin_detail<Real>> impl_;
};


template<class Real>
class ooura_fourier_cos {
public:
    ooura_fourier_cos(const Real relative_error_tolerance = tools::root_epsilon<Real>(), size_t levels = sizeof(Real))
        : impl_(std::make_shared<detail::ooura_fourier_cos_detail<Real>>(relative_error_tolerance, levels))
    {}

    template<class F>
    std::pair<Real, Real> integrate(F const & f, Real omega) {
        return impl_->integrate(f, omega);
    }

private:
    std::shared_ptr<detail::ooura_fourier_cos_detail<Real>> impl_;
};

}}}
#endif

And the detail (which contains the real meat):

// Copyright Nick Thompson, 2019 // Use, modification and distribution are subject to the // Boost Software License, Version 1.0. // (See accompanying file LICENSE_1_0.txt // or copy at http://www.boost.org/LICENSE_1_0.txt) #ifndef BOOST_MATH_QUADRATURE_DETAIL_OOURA_FOURIER_INTEGRALS_DETAIL_HPP #define BOOST_MATH_QUADRATURE_DETAIL_OOURA_FOURIER_INTEGRALS_DETAIL_HPP #include <utility> // for std::pair. #include <mutex> #include <atomic> #include <vector> #include <iostream> #include <boost/math/special_functions/expm1.hpp> #include <boost/math/special_functions/sin_pi.hpp> #include <boost/math/special_functions/cos_pi.hpp> #include <boost/math/constants/constants.hpp>  namespace boost { namespace math { namespace quadrature { namespace detail {  // Ooura and Mori, A robust double exponential formula for Fourier-type integrals, // eta is the argument to the exponential in equation 3.3: template<class Real> std::pair<Real, Real> ooura_eta(Real x, Real alpha) {     using std::expm1;     using std::exp;     using std::abs;     Real expx = exp(x);     Real eta_prime = 2 + alpha/expx + expx/4;     Real eta;     // This is the fast branch:     if (abs(x) > 0.125) {         eta = 2*x - alpha*(1/expx - 1) + (expx - 1)/4;     }     else {// this is the slow branch using expm1 for small x:         eta = 2*x - alpha*expm1(-x) + expm1(x)/4;     }     return {eta, eta_prime}; }  // Ooura and Mori, A robust double exponential formula for Fourier-type integrals, // equation 3.6: template<class Real> Real calculate_ooura_alpha(Real h) {     using boost::math::constants::pi;     using std::log1p;     using std::sqrt;     Real x = sqrt(16 + 4*log1p(pi<Real>()/h)/h);     return 1/x; }  template<class Real> std::pair<Real, Real> ooura_sin_node_and_weight(long n, Real h, Real alpha) {     using std::expm1;     using std::exp;     using std::abs;     using boost::math::constants::pi;     using std::isnan;      if (n == 0) {         // Equation 44 of https://arxiv.org/pdf/0911.4796.pdf     
    Real eta_prime_0 = Real(2) + alpha + Real(1)/Real(4);         Real node = pi<Real>()/(eta_prime_0*h);         Real weight = pi<Real>()*boost::math::sin_pi(1/(eta_prime_0*h));         Real eta_dbl_prime = -alpha + Real(1)/Real(4);         Real phi_prime_0 = (1 - eta_dbl_prime/(eta_prime_0*eta_prime_0))/2;         weight *= phi_prime_0;         return {node, weight};     }     Real x = n*h;     auto p = ooura_eta(x, alpha);     auto eta = p.first;     auto eta_prime = p.second;      Real expm1_meta = expm1(-eta);     Real exp_meta = exp(-eta);     Real node = -n*pi<Real>()/expm1_meta;       // I have verified that this is not a significant source of inaccuracy in the weight computation:     Real phi_prime = -(expm1_meta + x*exp_meta*eta_prime)/(expm1_meta*expm1_meta);      // The main source of inaccuracy is in computation of sin_pi.     // But I've agonized over this, and I think it's as good as it can get:     Real s = pi<Real>();     Real arg;     if(eta > 1) {         arg = n/( 1/exp_meta - 1 );         s *= boost::math::sin_pi(arg);         if (n&1) {             s *= -1;         }     }     else if (eta < -1) {         arg = n/(1-exp_meta);         s *= boost::math::sin_pi(arg);     }     else {         arg = -n*exp_meta/expm1_meta;         s *= boost::math::sin_pi(arg);         if (n&1) {             s *= -1;         }     }      Real weight = s*phi_prime;     return {node, weight}; }  #ifdef BOOST_MATH_INSTRUMENT_OOURA template<class Real> void print_ooura_estimate(size_t i, Real I0, Real I1, Real omega) {     using std::abs;     std::cout << std::defaultfloat               << std::setprecision(std::numeric_limits<Real>::digits10)               << std::fixed;     std::cout << "h = " << Real(1)/Real(1<<i) << ", I_h = " << I0/omega               << " = " << std::hexfloat << I0/omega << ", absolute error est = "               << std::defaultfloat << std::scientific << abs(I0-I1)  << "\n"; } #endif   template<class Real> std::pair<Real, Real> 
ooura_cos_node_and_weight(long n, Real h, Real alpha) {     using std::expm1;     using std::exp;     using std::abs;     using boost::math::constants::pi;      Real x = h*(n-Real(1)/Real(2));     auto p = ooura_eta(x, alpha);     auto eta = p.first;     auto eta_prime = p.second;     Real expm1_meta = expm1(-eta);     Real exp_meta = exp(-eta);     Real node = pi<Real>()*(Real(1)/Real(2)-n)/expm1_meta;      Real phi_prime = -(expm1_meta + x*exp_meta*eta_prime)/(expm1_meta*expm1_meta);      // Equation 4.6 of A robust double exponential formula for Fourier-type integrals     Real s = pi<Real>();     Real arg;     if (eta < -1) {         arg = -(n-Real(1)/Real(2))/expm1_meta;         s *= boost::math::cos_pi(arg);     }     else {         arg = -(n-Real(1)/Real(2))*exp_meta/expm1_meta;         s *= boost::math::sin_pi(arg);         if (n&1) {             s *= -1;         }     }      Real weight = s*phi_prime;     return {node, weight}; }   template<class Real> class ooura_fourier_sin_detail { public:     ooura_fourier_sin_detail(const Real relative_error_goal, size_t levels) {         if (relative_error_goal <= std::numeric_limits<Real>::epsilon()/2) {             throw std::domain_error("The relative error goal cannot be smaller than the unit roundoff.");         }         using std::abs;         requested_levels_ = levels;         starting_level_ = 0;         rel_err_goal_ = relative_error_goal;         big_nodes_.reserve(levels);         bweights_.reserve(levels);         little_nodes_.reserve(levels);         lweights_.reserve(levels);          for (size_t i = 0; i < levels; ++i) {             if (std::is_same<Real, float>::value) {                 add_level<double>(i);             }             else if (std::is_same<Real, double>::value) {                 add_level<long double>(i);             }             else {                 add_level<Real>(i);             }         }     }      std::vector<std::vector<Real>> const & big_nodes() const {         return 
big_nodes_;     }      std::vector<std::vector<Real>> const & weights_for_big_nodes() const {         return bweights_;     }      std::vector<std::vector<Real>> const & little_nodes() const {         return little_nodes_;     }      std::vector<std::vector<Real>> const & weights_for_little_nodes() const {         return lweights_;     }      template<class F>     std::pair<Real,Real> integrate(F const & f, Real omega) {         using std::abs;         using std::max;         using boost::math::constants::pi;          if (omega == 0) {             return {Real(0), Real(0)};         }         if (omega < 0) {             auto p = this->integrate(f, -omega);             return {-p.first, p.second};         }          Real I1 = std::numeric_limits<Real>::quiet_NaN();         Real relative_error_estimate = std::numeric_limits<Real>::quiet_NaN();         // As we compute integrals, we learn about their structure.         // Assuming we compute f(t)sin(wt) for many different omega, this gives some         // a posteriori ability to choose a refinement level that is roughly appropriate.         size_t i = starting_level_;         do {             Real I0 = estimate_integral(f, omega, i); #ifdef BOOST_MATH_INSTRUMENT_OOURA             print_ooura_estimate(i, I0, I1, omega); #endif             Real absolute_error_estimate = abs(I0-I1);             Real scale = max(abs(I0), abs(I1));             if (!isnan(I1) && absolute_error_estimate <= rel_err_goal_*scale) {                 starting_level_ = std::max(long(i) - 1, long(0));                 return {I0/omega, absolute_error_estimate/scale};             }             I1 = I0;         } while(++i < big_nodes_.size());          // We've used up all our precomputed levels.         // Now we need to add more.         // It might seems reasonable to just keep adding levels indefinitely, if that's what the user wants.         
// But in fact the nodes and weights just merge into each other and the error gets worse after a certain number.         // This value for max_additional_levels was chosen by observation of a slowly converging oscillatory integral:         // f(x) := cos(7cos(x))sin(x)/x         size_t max_additional_levels = 4;         while (big_nodes_.size() < requested_levels_ + max_additional_levels) {             size_t i = big_nodes_.size();             if (std::is_same<Real, float>::value) {                 add_level<double>(i);             }             else if (std::is_same<Real, double>::value) {                 add_level<long double>(i);             }             else {                 add_level<Real>(i);             }             Real I0 = estimate_integral(f, omega, i);             Real absolute_error_estimate = abs(I0-I1);             Real scale = max(abs(I0), abs(I1)); #ifdef BOOST_MATH_INSTRUMENT_OOURA             print_ooura_estimate(i, I0, I1, omega); #endif             if (absolute_error_estimate <= rel_err_goal_*scale) {                 starting_level_ = std::max(long(i) - 1, long(0));                 return {I0/omega, absolute_error_estimate/scale};             }             I1 = I0;             ++i;         }          starting_level_ = big_nodes_.size() - 2;         return {I1/omega, relative_error_estimate};     }  private:      template<class PreciseReal>     void add_level(size_t i) {         size_t current_num_levels = big_nodes_.size();         Real unit_roundoff = std::numeric_limits<Real>::epsilon()/2;         // h0 = 1. Then all further levels have h_i = 1/2^i.         // Since the nodes don't nest, we could conceivably divide h by (say) 1.5, or 3.         // It's not clear how much benefit (or loss) would be obtained from this.         
PreciseReal h = PreciseReal(1)/PreciseReal(1<<i);          std::vector<Real> bnode_row;         std::vector<Real> bweight_row;         // Definitely could use a more sophisticated heuristic for how many elements         // will be placed in the vector. This is a pretty huge overestimate:         bnode_row.reserve((1<<i)*sizeof(Real));         bweight_row.reserve((1<<i)*sizeof(Real));          std::vector<Real> lnode_row;         std::vector<Real> lweight_row;          lnode_row.reserve((1<<i)*sizeof(Real));         lweight_row.reserve((1<<i)*sizeof(Real));          Real max_weight = 1;         auto alpha = calculate_ooura_alpha(h);         long n = 0;         Real w;         do {             auto precise_nw = ooura_sin_node_and_weight(n, h, alpha);             Real node = static_cast<Real>(precise_nw.first);             Real weight = static_cast<Real>(precise_nw.second);             w = weight;             bnode_row.push_back(node);             bweight_row.push_back(weight);             if (abs(weight) > max_weight) {                 max_weight = abs(weight);             }             ++n;             // f(t)->0 as t->infty, which is why the weights are computed up to the unit roundoff.         } while(abs(w) > unit_roundoff*max_weight);          // This class tends to consume a lot of memory; shrink the vectors back down to size:         bnode_row.shrink_to_fit();         bweight_row.shrink_to_fit();         // Why we are splitting the nodes into regimes where t_n >> 1 and t_n << 1?         // It will create the opportunity to sensibly truncate the quadrature sum to significant terms.         
n = -1;         do {             auto precise_nw = ooura_sin_node_and_weight(n, h, alpha);             Real node = static_cast<Real>(precise_nw.first);             if (node <= 0) {                 break;             }             Real weight = static_cast<Real>(precise_nw.second);             w = weight;             using std::isnan;             if (isnan(node)) {                 // This occurs at n = -11 in quad precision:                 break;             }             if (lnode_row.size() > 0) {                 if (lnode_row[lnode_row.size()-1] == node) {                     // The nodes have fused into each other:                     break;                 }             }             lnode_row.push_back(node);             lweight_row.push_back(weight);             if (abs(weight) > max_weight) {                 max_weight = abs(weight);             }             --n;             // f(t)->infty is possible as t->0, hence compute up to the min.         } while(abs(w) > std::numeric_limits<Real>::min()*max_weight);          lnode_row.shrink_to_fit();         lweight_row.shrink_to_fit();          // std::scoped_lock once C++17 is more common?         std::lock_guard<std::mutex> lock(node_weight_mutex_);         // Another thread might have already finished this calculation and appended it to the nodes/weights:         if (current_num_levels == big_nodes_.size()) {             big_nodes_.push_back(bnode_row);             bweights_.push_back(bweight_row);              little_nodes_.push_back(lnode_row);             lweights_.push_back(lweight_row);         }     }      template<class F>     Real estimate_integral(F const & f, Real omega, size_t i) {         // Because so few function evaluations are required to get high accuracy on the integrals in the tests,         // Kahan summation doesn't really help.         
//auto cond = boost::math::tools::summation_condition_number<Real, true>(0);         Real I0 = 0;         auto const & b_nodes = big_nodes_[i];         auto const & b_weights = bweights_[i];         // Will benchmark if this is helpful:         Real inv_omega = 1/omega;         for(size_t j = 0 ; j < b_nodes.size(); ++j) {             I0 += f(b_nodes[j]*inv_omega)*b_weights[j];         }          auto const & l_nodes = little_nodes_[i];         auto const & l_weights = lweights_[i];         // If f decays rapidly as |t|->infty, not all of these calls are necessary.         for (size_t j = 0; j < l_nodes.size(); ++j) {             I0 += f(l_nodes[j]*inv_omega)*l_weights[j];         }         return I0;     }      std::mutex node_weight_mutex_;     // Nodes for n >= 0, giving t_n = pi*phi(nh)/h. Generally t_n >> 1.     std::vector<std::vector<Real>> big_nodes_;     // The term bweights_ will indicate that these are weights corresponding     // to the big nodes:     std::vector<std::vector<Real>> bweights_;      // Nodes for n < 0: Generally t_n << 1, and an invariant is that t_n > 0.     
std::vector<std::vector<Real>> little_nodes_;     std::vector<std::vector<Real>> lweights_;     Real rel_err_goal_;     std::atomic<long> starting_level_;     size_t requested_levels_; };  template<class Real> class ooura_fourier_cos_detail { public:     ooura_fourier_cos_detail(const Real relative_error_goal, size_t levels) {         if (relative_error_goal <= std::numeric_limits<Real>::epsilon()/2) {             throw std::domain_error("The relative error goal cannot be smaller than the unit roundoff.");         }         using std::abs;         requested_levels_ = levels;         starting_level_ = 0;         rel_err_goal_ = relative_error_goal;         big_nodes_.reserve(levels);         bweights_.reserve(levels);         little_nodes_.reserve(levels);         lweights_.reserve(levels);          for (size_t i = 0; i < levels; ++i) {             if (std::is_same<Real, float>::value) {                 add_level<double>(i);             }             else if (std::is_same<Real, double>::value) {                 add_level<long double>(i);             }             else {                 add_level<Real>(i);             }         }      }      template<class F>     std::pair<Real,Real> integrate(F const & f, Real omega) {         using std::abs;         using std::max;         using boost::math::constants::pi;          if (omega == 0) {             throw std::domain_error("At omega = 0, the integral is not oscillatory. 
The user must choose an appropriate method for this case.\n");
        }

        if (omega < 0) {
            return this->integrate(f, -omega);
        }

        Real I1 = std::numeric_limits<Real>::quiet_NaN();
        Real absolute_error_estimate = std::numeric_limits<Real>::quiet_NaN();
        Real scale = std::numeric_limits<Real>::quiet_NaN();
        size_t i = starting_level_;
        do {
            Real I0 = estimate_integral(f, omega, i);
#ifdef BOOST_MATH_INSTRUMENT_OOURA
            print_ooura_estimate(i, I0, I1, omega);
#endif
            absolute_error_estimate = abs(I0-I1);
            scale = max(abs(I0), abs(I1));
            if (!isnan(I1) && absolute_error_estimate <= rel_err_goal_*scale) {
                starting_level_ = std::max(long(i) - 1, long(0));
                return {I0/omega, absolute_error_estimate/scale};
            }
            I1 = I0;
        } while(++i < big_nodes_.size());

        size_t max_additional_levels = 4;
        while (big_nodes_.size() < requested_levels_ + max_additional_levels) {
            size_t i = big_nodes_.size();
            if (std::is_same<Real, float>::value) {
                add_level<double>(i);
            }
            else if (std::is_same<Real, double>::value) {
                add_level<long double>(i);
            }
            else {
                add_level<Real>(i);
            }
            Real I0 = estimate_integral(f, omega, i);
#ifdef BOOST_MATH_INSTRUMENT_OOURA
            print_ooura_estimate(i, I0, I1, omega);
#endif
            absolute_error_estimate = abs(I0-I1);
            scale = max(abs(I0), abs(I1));
            if (absolute_error_estimate <= rel_err_goal_*scale) {
                starting_level_ = std::max(long(i) - 1, long(0));
                return {I0/omega, absolute_error_estimate/scale};
            }
            I1 = I0;
            ++i;
        }

        starting_level_ = big_nodes_.size() - 2;
        return {I1/omega, absolute_error_estimate/scale};
    }

private:

    template<class PreciseReal>
    void add_level(size_t i) {
        size_t current_num_levels = big_nodes_.size();
        Real unit_roundoff = std::numeric_limits<Real>::epsilon()/2;
        PreciseReal h = PreciseReal(1)/PreciseReal(1<<i);

        std::vector<Real> bnode_row;
        std::vector<Real> bweight_row;
        bnode_row.reserve((1<<i)*sizeof(Real));
        bweight_row.reserve((1<<i)*sizeof(Real));

        std::vector<Real> lnode_row;
        std::vector<Real> lweight_row;
        lnode_row.reserve((1<<i)*sizeof(Real));
        lweight_row.reserve((1<<i)*sizeof(Real));

        Real max_weight = 1;
        auto alpha = calculate_ooura_alpha(h);
        long n = 0;
        Real w;
        do {
            auto precise_nw = ooura_cos_node_and_weight(n, h, alpha);
            Real node = static_cast<Real>(precise_nw.first);
            Real weight = static_cast<Real>(precise_nw.second);
            w = weight;
            bnode_row.push_back(node);
            bweight_row.push_back(weight);
            if (abs(weight) > max_weight) {
                max_weight = abs(weight);
            }
            ++n;
            // f(t)->0 as t->infty, which is why the weights are computed up to the unit roundoff.
        } while(abs(w) > unit_roundoff*max_weight);

        bnode_row.shrink_to_fit();
        bweight_row.shrink_to_fit();
        n = -1;
        do {
            auto precise_nw = ooura_cos_node_and_weight(n, h, alpha);
            Real node = static_cast<Real>(precise_nw.first);
            // The function cannot be singular at zero,
            // so zero is not an unreasonable node,
            // unlike in the case of the Fourier sine.
            // Hence only break if the node is negative.
            if (node < 0) {
                break;
            }
            Real weight = static_cast<Real>(precise_nw.second);
            w = weight;
            if (lnode_row.size() > 0) {
                if (lnode_row.back() == node) {
                    // The nodes have fused into each other:
                    break;
                }
            }
            lnode_row.push_back(node);
            lweight_row.push_back(weight);
            if (abs(weight) > max_weight) {
                max_weight = abs(weight);
            }
            --n;
        } while(abs(w) > std::numeric_limits<Real>::min()*max_weight);

        lnode_row.shrink_to_fit();
        lweight_row.shrink_to_fit();

        std::lock_guard<std::mutex> lock(node_weight_mutex_);
        // Another thread might have already finished this calculation and appended it to the nodes/weights:
        if (current_num_levels == big_nodes_.size()) {
            big_nodes_.push_back(bnode_row);
            bweights_.push_back(bweight_row);

            little_nodes_.push_back(lnode_row);
            lweights_.push_back(lweight_row);
        }
    }

    template<class F>
    Real estimate_integral(F const & f, Real omega, size_t i) {
        Real I0 = 0;
        auto const & b_nodes = big_nodes_[i];
        auto const & b_weights = bweights_[i];
        Real inv_omega = 1/omega;
        for (size_t j = 0; j < b_nodes.size(); ++j) {
            I0 += f(b_nodes[j]*inv_omega)*b_weights[j];
        }

        auto const & l_nodes = little_nodes_[i];
        auto const & l_weights = lweights_[i];
        for (size_t j = 0; j < l_nodes.size(); ++j) {
            I0 += f(l_nodes[j]*inv_omega)*l_weights[j];
        }
        return I0;
    }

    std::mutex node_weight_mutex_;
    std::vector<std::vector<Real>> big_nodes_;
    std::vector<std::vector<Real>> bweights_;

    std::vector<std::vector<Real>> little_nodes_;
    std::vector<std::vector<Real>> lweights_;
    Real rel_err_goal_;
    std::atomic<long> starting_level_;
    size_t requested_levels_;
};

}}}}
#endif
```

Multidimensional Fourier transform: inner product in exponential?

I have a basic and not very deep understanding of continuous and discrete Fourier transforms in one dimension.

In a multidimensional Fourier transform, the exponent of $ e$ includes the inner product of the arguments of the original function and the transformed function. I’m having trouble finding an explanation of why this is the right way to extend the idea of a Fourier transform to more than one dimension. What is the intuition, or perhaps a more formal reason for this idea? In one dimension, the original function’s and transformed function’s arguments are multiplied, and using an inner product in a multidimensional case is one natural extension of that idea, but that’s not enough to justify it.

I see that using the inner product in the exponential means the kernel factors into a product of one-dimensional terms $ \cos(k_i x_i) + i \sin(k_i x_i)$ , but I am not sure why that makes sense, or even whether that's a useful way to think about it. (I can picture waves in two real dimensions, and multiplying one-dimensional waves feels like a plausible way to represent them, but I still don't have a clear understanding.)

Pointers to texts as well as explanations here would be welcome.
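One concrete way to see why the inner product is the natural exponent: the multidimensional kernel $e^{-i\langle \mathbf{k},\mathbf{x}\rangle}$ factors into a product of one-dimensional kernels $e^{-ik_j x_j}$, so the multidimensional transform is just a one-dimensional transform applied along each axis in turn. A quick numerical sketch of this separability with NumPy (the array shape and seed below are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
f = rng.standard_normal((8, 8))  # arbitrary 2-D signal

# Full 2-D DFT: kernel exp(-i <k, x>) with <k, x> = k1*x1 + k2*x2.
F2 = np.fft.fft2(f)

# Because exp(-i(k1*x1 + k2*x2)) = exp(-i k1*x1) * exp(-i k2*x2),
# the same result is obtained by 1-D transforms along each axis in turn.
F_rows_then_cols = np.fft.fft(np.fft.fft(f, axis=0), axis=1)

assert np.allclose(F2, F_rows_then_cols)
```

So the inner product is exactly what makes the multidimensional transform a composition of one-dimensional ones, which is why the "product of one-dimensional waves" picture is a reasonable intuition.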

Pointwise convergence of Fourier series of function $\sqrt{|x|}$

I am trying to solve the following exercise:

Let $ f(x) = \sqrt{|x|}$ , $ x\in\mathbb{R}$ . Show that the Fourier series $ s_n(0)$ converges to $ f(0)$ .

The hint is that one should consider the convolution with the Dirichlet kernel and the Riemann–Lebesgue lemma. This approach yields $$ s_n(0) = \frac{1}{2\pi}\int_{-\pi}^\pi f(t)\,\frac{\sin[(n+1/2)t]}{\sin(t/2)}\,dt = \frac{1}{2\pi}\int_{-\pi}^\pi f(t)\sin(nt)\cot(t/2)\,dt + \frac{1}{2\pi}\int_{-\pi}^\pi f(t)\cos(nt)\,dt, $$ and while the integral on the right tends to zero by Riemann–Lebesgue, I could not estimate the integral on the left. The difficulty seems to come primarily from the fact that $ \cot(t/2)$ behaves quite poorly around $ t = 0$ , with $ \lim_{t\to 0} f(t)\cot(t/2) = \infty$ .

I have searched a few elementary texts on PDEs, including Folland's, Evans', and Strauss', and I could not find any example of pointwise convergence for a function with unbounded derivative. Moreover, the only related question that I found on MSE was this one, but in that case the function is odd and the integrals vanish trivially. Any help would be appreciated.
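Not a proof, but an empirical sanity check can be reassuring here: computing the cosine coefficients $a_n = \frac{2}{\pi}\int_0^\pi \sqrt{x}\cos(nx)\,dx$ numerically (using SciPy's oscillatory `weight='cos'` rule) shows the partial sums $s_N(0)$ creeping toward $f(0) = 0$, roughly like $N^{-1/2}$. The cutoffs 50 and 400 below are arbitrary:

```python
import numpy as np
from scipy.integrate import quad

# Cosine coefficients of the even function f(x) = sqrt(|x|) on [-pi, pi]:
# a_n = (2/pi) * int_0^pi sqrt(x) cos(nx) dx, via an oscillatory quadrature rule.
def a(n):
    if n == 0:
        return (2 / np.pi) * quad(np.sqrt, 0, np.pi)[0]
    return (2 / np.pi) * quad(np.sqrt, 0, np.pi, weight='cos', wvar=n)[0]

def s(N):
    """Partial Fourier sum at x = 0: s_N(0) = a_0/2 + sum_{n=1}^N a_n."""
    return a(0) / 2 + sum(a(n) for n in range(1, N + 1))

# f(0) = 0; the partial sums shrink toward it (slowly, since a_n ~ -c * n^{-3/2}).
assert abs(s(400)) < abs(s(50))
assert abs(s(400)) < 0.1
```

This is consistent with the expected convergence; the analytic route in the hint still goes through the singular factor $\cot(t/2)$, which the check above does not address.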

Fourier transform of a Gaussian wave packet in position representation to momentum representation

I am studying Gaussian wave packets in J. J. Sakurai's Quantum Mechanics. I am stuck here; I don't know how to evaluate this integral:

$$ \int_{-\infty}^\infty \exp\left(-\frac{ipx}{h} + ikx - \frac{x^2}{2d^2}\right)dx, $$

where $ k$ , $ h$ , $ p$ , $ d$ are constants and

$ i=\sqrt{-1}$

Please help me to move forward.
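For what it's worth, completing the square in the exponent reduces this to the standard Gaussian integral: with $q = k - p/h$, one gets $\int_{-\infty}^{\infty} e^{iqx - x^2/(2d^2)}\,dx = d\sqrt{2\pi}\,e^{-q^2 d^2/2}$. A quick numerical check of that identity (the constant values below are arbitrary, chosen only for illustration):

```python
import numpy as np
from scipy.integrate import quad

# Arbitrary illustrative constants; only the combination q = k - p/h matters.
k, p, h, d = 1.3, 0.7, 1.0, 2.0
q = k - p / h   # the integrand is exp(i*q*x - x^2/(2 d^2))

integrand = lambda x: np.exp(1j * q * x - x**2 / (2 * d**2))
re = quad(lambda x: integrand(x).real, -np.inf, np.inf)[0]
im = quad(lambda x: integrand(x).imag, -np.inf, np.inf)[0]

# Completing the square gives d*sqrt(2*pi)*exp(-q^2 d^2 / 2), which is purely real.
closed_form = d * np.sqrt(2 * np.pi) * np.exp(-q**2 * d**2 / 2)
assert np.isclose(re, closed_form)
assert abs(im) < 1e-8
```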

Applying the Fourier transform to Maxwell’s equations

I have the following Maxwell’s equations:

$ $ \nabla \times \mathbf{h} = \mathbf{j} + \epsilon_0 \dfrac{\partial{\mathbf{e}}}{\partial{t}} + \dfrac{\partial{\mathbf{p}}}{\partial{t}},$ $

$$ \nabla \times \mathbf{e} = -\mu_0 \dfrac{\partial{\mathbf{h}}}{\partial{t}} $$

My understanding is that the Fourier transform,

$ $ F(\omega) = \int_{-\infty}^\infty f(t) e^{-j \omega t} \ dt,$ $

can be applied to Maxwell’s equations to go from the time domain $ t$ to the angular frequency domain $ \omega$ .

I have experience with the Laplace transform but not the Fourier transform, and I cannot find anything online that goes through the steps of the transformation. Do we just apply the Fourier transform $ F(\omega)$ to every term in Maxwell’s equations? How do we deal with the presence of vector terms in the context of such an integration?

For instance, we have

$$ \nabla \times \mathbf{h} = \mathbf{\hat{i}}\,(\partial_y h_z - \partial_z h_y) - \mathbf{\hat{j}}\,(\partial_x h_z - \partial_z h_x) + \mathbf{\hat{k}}\,(\partial_x h_y - \partial_y h_x) $$

I would greatly appreciate it if people could please take the time to clarify this.
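Since the curl involves only spatial derivatives, it passes straight through the time-domain transform, and each Cartesian component of a vector field is transformed independently; the essential rule is that time differentiation becomes multiplication by $j\omega$ (integration by parts, with boundary terms vanishing for decaying fields). A small numerical sketch of that rule, using NumPy's FFT as a discrete stand-in for the continuous transform (the grid sizes and the Gaussian test signal are arbitrary choices):

```python
import numpy as np

# Sample a smooth, rapidly decaying signal (a Gaussian pulse) on a uniform grid.
N, T = 1024, 40.0
t = np.linspace(-T / 2, T / 2, N, endpoint=False)
dt = t[1] - t[0]
f = np.exp(-t**2)
dfdt = -2 * t * np.exp(-t**2)   # analytic derivative of the pulse

omega = 2 * np.pi * np.fft.fftfreq(N, dt)

# Discrete approximations of F(omega) = int f(t) exp(-j*omega*t) dt.
# The phase factor exp(-j*omega*t0) accounts for the grid starting at t0 = -T/2.
phase = np.exp(-1j * omega * t[0])
F = dt * phase * np.fft.fft(f)
F_deriv = dt * phase * np.fft.fft(dfdt)

# Differentiation in time becomes multiplication by j*omega in frequency.
assert np.allclose(F_deriv, 1j * omega * F, atol=1e-8)
```

With this rule, each time derivative in the two equations becomes a $j\omega$ factor on the corresponding transformed field, component by component.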

Fourier series representation of piecewise function

$$ \text{Expand}\quad f(x)= \begin{cases} \dfrac{2A}{L}\,x & 0\leq x\leq \dfrac{L}{2} \\ \dfrac{2A}{L}\left(L-x\right) & \dfrac{L}{2}\leq x\leq L \end{cases} $$

I have determined (computation omitted) that $ A_0 = A$ . For $ A_n$ and $ B_n$ I'm rather confused where to start. For $ A_n$ : $$ A_n = \frac{2}{L}\left[\frac{2A}{L}\int_{0}^{L/2} x\cos\left(\frac{2n\pi x}{L}\right)dx + \frac{2A}{L}\left(L\int_{L/2}^{L}\cos\left(\frac{2n\pi x}{L}\right)dx - \int_{L/2}^{L} x\cos\left(\frac{2n\pi x}{L}\right)dx\right)\right] $$ and similarly for $ B_n$ . I'm not quite sure if I'm on the right path.
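One way to build confidence in integrals of this shape is to evaluate the coefficients numerically and compare them with the closed form they should reduce to. Evaluating the split integrals by parts gives $A_n = \frac{2A}{n^2\pi^2}\left((-1)^n - 1\right)$, i.e. $-4A/(n^2\pi^2)$ for odd $n$ and $0$ for even $n$, while every $B_n$ vanishes because $f$ is symmetric about $x = L/2$. A sketch with illustrative values $A = 3$, $L = 2$ (not taken from the problem):

```python
import numpy as np
from scipy.integrate import quad

A, L = 3.0, 2.0   # illustrative amplitude and period, not from the problem

def f(x):
    # Triangle: rises to A at x = L/2, falls back to 0 at x = L.
    return 2 * A / L * x if x <= L / 2 else 2 * A / L * (L - x)

def An(n):
    val, _ = quad(lambda x: f(x) * np.cos(2 * n * np.pi * x / L), 0, L, points=[L / 2])
    return 2 / L * val

def Bn(n):
    val, _ = quad(lambda x: f(x) * np.sin(2 * n * np.pi * x / L), 0, L, points=[L / 2])
    return 2 / L * val

assert np.isclose(An(0), A)   # consistent with A_0 = A
for n in range(1, 8):
    expected = 2 * A / (n**2 * np.pi**2) * ((-1)**n - 1)
    assert np.isclose(An(n), expected, atol=1e-9)   # -4A/(n^2 pi^2) odd n, 0 even n
    assert abs(Bn(n)) < 1e-9                        # f symmetric about L/2
```

So the setup in the question looks right; carrying out the integration by parts should collapse it to the alternating closed form above.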