Right thread implementation in frequently invoked method

I am new to C++.

Here is what I am actually interested in:

I have the following method:

    void VideoRender::updateData(const std::string &i_obj_path, const std::string &i_pngPath, const uint i_subIndex) {
        std::shared_ptr<FrameManager> container = std::make_shared<FrameManager>();

        std::thread th1(&VideoRender::thread_task1, this, i_obj_path.c_str(), i_subIndex, container);
        std::thread th2(&VideoRender::thread_task2, this, i_pngPath, container);

        th1.join();
        th2.join();

        fifo.enqueue(container);
    }

As you can see, this method runs two tasks on two different threads and then joins them. I invoke this method 20 times per second.

What bothers me is that 20 times per second I create two threads…

My question is: does this implementation look OK in C++? I think I should use something like a thread pool, or some handler running on another thread that I can push the tasks to…

Am I right?
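To make the idea concrete, here is a minimal sketch of the kind of persistent worker I have in mind. This is only an illustration built from standard C++11 primitives; the Worker class is not part of my code:

    #include <condition_variable>
    #include <functional>
    #include <mutex>
    #include <queue>
    #include <thread>

    // One long-lived thread consuming tasks from a queue,
    // so no thread is constructed per call.
    class Worker {
    public:
        Worker() : done_(false), thread_([this] { run(); }) {}

        ~Worker() {
            {
                std::lock_guard<std::mutex> lock(mutex_);
                done_ = true;
            }
            cv_.notify_one();
            thread_.join();
        }

        void submit(std::function<void()> task) {
            {
                std::lock_guard<std::mutex> lock(mutex_);
                tasks_.push(std::move(task));
            }
            cv_.notify_one();
        }

    private:
        void run() {
            for (;;) {
                std::function<void()> task;
                {
                    std::unique_lock<std::mutex> lock(mutex_);
                    cv_.wait(lock, [this] { return done_ || !tasks_.empty(); });
                    if (done_ && tasks_.empty()) {
                        return;
                    }
                    task = std::move(tasks_.front());
                    tasks_.pop();
                }
                task(); // run the task outside the lock
            }
        }

        bool done_;
        std::mutex mutex_;
        std::condition_variable cv_;
        std::queue<std::function<void()>> tasks_;
        std::thread thread_;
    };

With two such workers (or a real pool), updateData could submit thread_task1 and thread_task2 and then wait on futures (for example via std::packaged_task) to keep the join-before-enqueue behaviour, instead of constructing two fresh std::thread objects on every call.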


Limitations on method of Lagrange multipliers

My general question is this:

What are the conditions (if any) under which the method of Lagrange multipliers will NOT find all the critical points of a differentiable function?

To give some context to this very general question, for

f(x, y, z) = 600xy + 900xz + 900yz subject to xyz = 486

I confirmed a minimum at (9, 9, 6) using a Lagrange multiplier. That method also indicated that this was the only critical point. However, Wolfram found an approximation to an additional minimum, which looks valid.
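For concreteness, writing the constraint as $g(x, y, z) = xyz - 486 = 0$, the system $\nabla f = \lambda \nabla g$ that I solved is

$$600y + 900z = \lambda yz, \qquad 600x + 900z = \lambda xz, \qquad 900x + 900y = \lambda xy, \qquad xyz = 486.$$

At $(9, 9, 6)$ the first two equations give $\lambda = 10800/54 = 200$ and the third gives $\lambda = 16200/81 = 200$, so that critical point checks out.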

So I am confused. My best guess at an explanation is that although the function is everywhere differentiable, the constraint is not continuous everywhere. But that is a pure guess.

To get the full context behind my question, please look at the following thread at a math homework site where I volunteer:

https://www.freemathhelp.com/forum/threads/maximum-minimum-in-multivariable-functions.116663/

Browser crashes while using the OffscreenCanvas.convertToBlob method on a large file in a web worker

I’m trying to display a TIFF file in the browser. I can successfully read the TIFF using UTIF.js, inside a web worker. Some files are very large, around 13,000 px wide and 10,000 px high, and I need to show them in the browser. The browser crashes while executing OffscreenCanvas.convertToBlob, which returns a Promise.

This is where I use the web worker and OffscreenCanvas. I have tried the convertToBlob method with different parameters, such as a quality of 0.6 and lower, but the browser still crashes.

    UTIF.decodeImage(ubuf, utif[k]);
    var ubuf1 = UTIF.toRGBA8(utif[k]);
    var a = new Uint8ClampedArray(ubuf1);
    var imgData = new ImageData(a, utif[k].width, utif[k].height);
    var canvas1 = new OffscreenCanvas(utif[k].width, utif[k].height);
    var ctx = canvas1.getContext('2d');
    ctx.putImageData(imgData, 0, 0);
    var that = self;
    if (utif[k].width > 2048) {
        canvas1.convertToBlob({ type: "image/jpeg", quality: 0.3 }).then(function(blob) {
            that.postMessage(blob);
        });
    } else {
        canvas1.convertToBlob({ type: "image/jpeg", quality: 1 }).then(function(blob) {
            that.postMessage(blob);
        });
    }

I expect the browser not to crash in the large-file scenario.

Thanks a lot in advance.

Continuous Fourier integrals by Ooura’s method

I have a PR implementing Ooura and Mori’s method for continuous Fourier integrals. I used it here to compute an oscillatory integral that Mathematica got completely wrong, and then I thought “well maybe this method is pretty good!” and figured I’d finish it after letting it languish for over a year.

Here are a few concerns:

  • I agonized over computing the nodes and weights accurately. But in the end, I had to precompute the nodes and weights in higher accuracy and cast them back down. Is there any way to avoid this that I’m missing? (If I don’t do this, then the error goes down after a few levels, then starts increasing. Use -DBOOST_MATH_INSTRUMENT_OOURA to see it if you’re interested.)

  • There is also some code duplication that I can’t really figure out how to get rid of. For example, for the sine integral, f(0) = inf is allowed, but for the cosine integral, f(0) must be finite. So you have this very small difference that disallows extracting the generic code into a single location. But maybe I’m just not being creative enough here; one possible shape for such a refactor is sketched after this list.

  • I’m also interested in whether the comments are well-written and informative. I couldn’t understand my comments from when I stopped working on this last year, and now I’m worried I won’t be able to understand my current comments next year!
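On that second point, the only difference the little-node loop (see add_level in the detail below) actually consults is whether a node at exactly zero is admissible. So one idea, purely a hypothetical sketch and not in the PR (build_little_nodes is a made-up name), is to hoist that choice into a parameter:

    #include <cmath>
    #include <limits>
    #include <utility>
    #include <vector>

    // Hypothetical: the sine and cosine little-node loops written once.
    // Sine integrals permit f(0) = inf, so a node at 0 must be rejected;
    // cosine integrals require f(0) finite, so node == 0 is usable.
    template <class Real, class NodeAndWeight>
    void build_little_nodes(NodeAndWeight node_and_weight, Real h, Real alpha,
                            bool zero_node_usable, Real max_weight,
                            std::vector<Real>& lnode_row, std::vector<Real>& lweight_row) {
        long n = -1;
        Real w = std::numeric_limits<Real>::max();
        do {
            std::pair<Real, Real> nw = node_and_weight(n, h, alpha);
            Real node = nw.first;
            if (std::isnan(node) || node < 0 || (!zero_node_usable && node == 0)) {
                break;
            }
            if (!lnode_row.empty() && lnode_row.back() == node) {
                break; // the nodes have fused into each other
            }
            w = nw.second;
            lnode_row.push_back(node);
            lweight_row.push_back(w);
            if (std::abs(w) > max_weight) {
                max_weight = std::abs(w);
            }
            --n;
        } while (std::abs(w) > std::numeric_limits<Real>::min()*max_weight);
    }

The omega == 0 handling in integrate (the sine class returns {0, 0}, the cosine class throws) would still have to differ, so this only reduces the duplication rather than eliminating it.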

So here’s the code:

    // Copyright Nick Thompson, 2019
    // Use, modification and distribution are subject to the
    // Boost Software License, Version 1.0.
    // (See accompanying file LICENSE_1_0.txt
    // or copy at http://www.boost.org/LICENSE_1_0.txt)

    /*
     * References:
     * Ooura, Takuya, and Masatake Mori. "A robust double exponential formula for Fourier-type integrals." Journal of computational and applied mathematics 112.1-2 (1999): 229-241.
     * http://www.kurims.kyoto-u.ac.jp/~ooura/intde.html
     */
    #ifndef BOOST_MATH_QUADRATURE_OOURA_FOURIER_INTEGRALS_HPP
    #define BOOST_MATH_QUADRATURE_OOURA_FOURIER_INTEGRALS_HPP
    #include <memory>
    #include <boost/math/quadrature/detail/ooura_fourier_integrals_detail.hpp>

    namespace boost { namespace math { namespace quadrature {

    template<class Real>
    class ooura_fourier_sin {
    public:
        ooura_fourier_sin(const Real relative_error_tolerance = tools::root_epsilon<Real>(), size_t levels = sizeof(Real))
            : impl_(std::make_shared<detail::ooura_fourier_sin_detail<Real>>(relative_error_tolerance, levels))
        {}

        template<class F>
        std::pair<Real, Real> integrate(F const & f, Real omega) {
            return impl_->integrate(f, omega);
        }

        std::vector<std::vector<Real>> const & big_nodes() const {
            return impl_->big_nodes();
        }

        std::vector<std::vector<Real>> const & weights_for_big_nodes() const {
            return impl_->weights_for_big_nodes();
        }

        std::vector<std::vector<Real>> const & little_nodes() const {
            return impl_->little_nodes();
        }

        std::vector<std::vector<Real>> const & weights_for_little_nodes() const {
            return impl_->weights_for_little_nodes();
        }

    private:
        std::shared_ptr<detail::ooura_fourier_sin_detail<Real>> impl_;
    };


    template<class Real>
    class ooura_fourier_cos {
    public:
        ooura_fourier_cos(const Real relative_error_tolerance = tools::root_epsilon<Real>(), size_t levels = sizeof(Real))
            : impl_(std::make_shared<detail::ooura_fourier_cos_detail<Real>>(relative_error_tolerance, levels))
        {}

        template<class F>
        std::pair<Real, Real> integrate(F const & f, Real omega) {
            return impl_->integrate(f, omega);
        }
    private:
        std::shared_ptr<detail::ooura_fourier_cos_detail<Real>> impl_;
    };


    }}}
    #endif
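Before the detail, here is the sort of minimal usage I have in mind for the sine class (the include path is assumed from the include guard above; the reference value, the integral of sin(t)/t over (0, inf) being pi/2, is standard):

    #include <iostream>
    #include <utility>
    #include <boost/math/quadrature/ooura_fourier_integrals.hpp>

    int main() {
        using boost::math::quadrature::ooura_fourier_sin;
        // f(t) = 1/t: the sine class tolerates the singularity at t = 0.
        auto f = [](double t) { return 1.0 / t; };
        ooura_fourier_sin<double> integrator; // default tolerance and levels
        // Integrates f(t)*sin(omega*t) over (0, inf), here with omega = 1.
        std::pair<double, double> result = integrator.integrate(f, 1.0);
        std::cout << "I = " << result.first // expect pi/2 ~ 1.5707963...
                  << ", relative error estimate = " << result.second << "\n";
    }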

And the detail (which contains the real meat):

    // Copyright Nick Thompson, 2019
    // Use, modification and distribution are subject to the
    // Boost Software License, Version 1.0.
    // (See accompanying file LICENSE_1_0.txt
    // or copy at http://www.boost.org/LICENSE_1_0.txt)
    #ifndef BOOST_MATH_QUADRATURE_DETAIL_OOURA_FOURIER_INTEGRALS_DETAIL_HPP
    #define BOOST_MATH_QUADRATURE_DETAIL_OOURA_FOURIER_INTEGRALS_DETAIL_HPP
    #include <utility> // for std::pair.
    #include <mutex>
    #include <atomic>
    #include <vector>
    #include <iostream>
    #include <iomanip> // for std::setprecision in the instrumented build.
    #include <boost/math/special_functions/expm1.hpp>
    #include <boost/math/special_functions/sin_pi.hpp>
    #include <boost/math/special_functions/cos_pi.hpp>
    #include <boost/math/constants/constants.hpp>

    namespace boost { namespace math { namespace quadrature { namespace detail {

    // Ooura and Mori, A robust double exponential formula for Fourier-type integrals,
    // eta is the argument to the exponential in equation 3.3:
    template<class Real>
    std::pair<Real, Real> ooura_eta(Real x, Real alpha) {
        using std::expm1;
        using std::exp;
        using std::abs;
        Real expx = exp(x);
        Real eta_prime = 2 + alpha/expx + expx/4;
        Real eta;
        // This is the fast branch:
        if (abs(x) > 0.125) {
            eta = 2*x - alpha*(1/expx - 1) + (expx - 1)/4;
        }
        else {// this is the slow branch using expm1 for small x:
            eta = 2*x - alpha*expm1(-x) + expm1(x)/4;
        }
        return {eta, eta_prime};
    }

    // Ooura and Mori, A robust double exponential formula for Fourier-type integrals,
    // equation 3.6:
    template<class Real>
    Real calculate_ooura_alpha(Real h) {
        using boost::math::constants::pi;
        using std::log1p;
        using std::sqrt;
        Real x = sqrt(16 + 4*log1p(pi<Real>()/h)/h);
        return 1/x;
    }

    template<class Real>
    std::pair<Real, Real> ooura_sin_node_and_weight(long n, Real h, Real alpha) {
        using std::expm1;
        using std::exp;
        using std::abs;
        using boost::math::constants::pi;
        using std::isnan;

        if (n == 0) {
            // Equation 44 of https://arxiv.org/pdf/0911.4796.pdf
            Real eta_prime_0 = Real(2) + alpha + Real(1)/Real(4);
            Real node = pi<Real>()/(eta_prime_0*h);
            Real weight = pi<Real>()*boost::math::sin_pi(1/(eta_prime_0*h));
            Real eta_dbl_prime = -alpha + Real(1)/Real(4);
            Real phi_prime_0 = (1 - eta_dbl_prime/(eta_prime_0*eta_prime_0))/2;
            weight *= phi_prime_0;
            return {node, weight};
        }
        Real x = n*h;
        auto p = ooura_eta(x, alpha);
        auto eta = p.first;
        auto eta_prime = p.second;

        Real expm1_meta = expm1(-eta);
        Real exp_meta = exp(-eta);
        Real node = -n*pi<Real>()/expm1_meta;

        // I have verified that this is not a significant source of inaccuracy in the weight computation:
        Real phi_prime = -(expm1_meta + x*exp_meta*eta_prime)/(expm1_meta*expm1_meta);

        // The main source of inaccuracy is in computation of sin_pi.
        // But I've agonized over this, and I think it's as good as it can get:
        Real s = pi<Real>();
        Real arg;
        if(eta > 1) {
            arg = n/( 1/exp_meta - 1 );
            s *= boost::math::sin_pi(arg);
            if (n&1) {
                s *= -1;
            }
        }
        else if (eta < -1) {
            arg = n/(1-exp_meta);
            s *= boost::math::sin_pi(arg);
        }
        else {
            arg = -n*exp_meta/expm1_meta;
            s *= boost::math::sin_pi(arg);
            if (n&1) {
                s *= -1;
            }
        }

        Real weight = s*phi_prime;
        return {node, weight};
    }

    #ifdef BOOST_MATH_INSTRUMENT_OOURA
    template<class Real>
    void print_ooura_estimate(size_t i, Real I0, Real I1, Real omega) {
        using std::abs;
        std::cout << std::defaultfloat
                  << std::setprecision(std::numeric_limits<Real>::digits10)
                  << std::fixed;
        std::cout << "h = " << Real(1)/Real(1<<i) << ", I_h = " << I0/omega
                  << " = " << std::hexfloat << I0/omega << ", absolute error est = "
                  << std::defaultfloat << std::scientific << abs(I0-I1) << "\n";
    }
    #endif


    template<class Real>
    std::pair<Real, Real> ooura_cos_node_and_weight(long n, Real h, Real alpha) {
        using std::expm1;
        using std::exp;
        using std::abs;
        using boost::math::constants::pi;

        Real x = h*(n-Real(1)/Real(2));
        auto p = ooura_eta(x, alpha);
        auto eta = p.first;
        auto eta_prime = p.second;
        Real expm1_meta = expm1(-eta);
        Real exp_meta = exp(-eta);
        Real node = pi<Real>()*(Real(1)/Real(2)-n)/expm1_meta;

        Real phi_prime = -(expm1_meta + x*exp_meta*eta_prime)/(expm1_meta*expm1_meta);

        // Equation 4.6 of A robust double exponential formula for Fourier-type integrals
        Real s = pi<Real>();
        Real arg;
        if (eta < -1) {
            arg = -(n-Real(1)/Real(2))/expm1_meta;
            s *= boost::math::cos_pi(arg);
        }
        else {
            arg = -(n-Real(1)/Real(2))*exp_meta/expm1_meta;
            s *= boost::math::sin_pi(arg);
            if (n&1) {
                s *= -1;
            }
        }

        Real weight = s*phi_prime;
        return {node, weight};
    }


    template<class Real>
    class ooura_fourier_sin_detail {
    public:
        ooura_fourier_sin_detail(const Real relative_error_goal, size_t levels) {
            if (relative_error_goal <= std::numeric_limits<Real>::epsilon()/2) {
                throw std::domain_error("The relative error goal cannot be smaller than the unit roundoff.");
            }
            using std::abs;
            requested_levels_ = levels;
            starting_level_ = 0;
            rel_err_goal_ = relative_error_goal;
            big_nodes_.reserve(levels);
            bweights_.reserve(levels);
            little_nodes_.reserve(levels);
            lweights_.reserve(levels);

            for (size_t i = 0; i < levels; ++i) {
                if (std::is_same<Real, float>::value) {
                    add_level<double>(i);
                }
                else if (std::is_same<Real, double>::value) {
                    add_level<long double>(i);
                }
                else {
                    add_level<Real>(i);
                }
            }
        }

        std::vector<std::vector<Real>> const & big_nodes() const {
            return big_nodes_;
        }

        std::vector<std::vector<Real>> const & weights_for_big_nodes() const {
            return bweights_;
        }

        std::vector<std::vector<Real>> const & little_nodes() const {
            return little_nodes_;
        }

        std::vector<std::vector<Real>> const & weights_for_little_nodes() const {
            return lweights_;
        }

        template<class F>
        std::pair<Real,Real> integrate(F const & f, Real omega) {
            using std::abs;
            using std::max;
            using boost::math::constants::pi;

            if (omega == 0) {
                return {Real(0), Real(0)};
            }
            if (omega < 0) {
                auto p = this->integrate(f, -omega);
                return {-p.first, p.second};
            }

            Real I1 = std::numeric_limits<Real>::quiet_NaN();
            Real relative_error_estimate = std::numeric_limits<Real>::quiet_NaN();
            // As we compute integrals, we learn about their structure.
            // Assuming we compute f(t)sin(wt) for many different omega, this gives some
            // a posteriori ability to choose a refinement level that is roughly appropriate.
            size_t i = starting_level_;
            do {
                Real I0 = estimate_integral(f, omega, i);
    #ifdef BOOST_MATH_INSTRUMENT_OOURA
                print_ooura_estimate(i, I0, I1, omega);
    #endif
                Real absolute_error_estimate = abs(I0-I1);
                Real scale = max(abs(I0), abs(I1));
                if (!isnan(I1) && absolute_error_estimate <= rel_err_goal_*scale) {
                    starting_level_ = std::max(long(i) - 1, long(0));
                    return {I0/omega, absolute_error_estimate/scale};
                }
                I1 = I0;
            } while(++i < big_nodes_.size());

            // We've used up all our precomputed levels.
            // Now we need to add more.
            // It might seem reasonable to just keep adding levels indefinitely, if that's what the user wants.
            // But in fact the nodes and weights just merge into each other and the error gets worse after a certain number.
            // This value for max_additional_levels was chosen by observation of a slowly converging oscillatory integral:
            // f(x) := cos(7cos(x))sin(x)/x
            size_t max_additional_levels = 4;
            while (big_nodes_.size() < requested_levels_ + max_additional_levels) {
                size_t i = big_nodes_.size();
                if (std::is_same<Real, float>::value) {
                    add_level<double>(i);
                }
                else if (std::is_same<Real, double>::value) {
                    add_level<long double>(i);
                }
                else {
                    add_level<Real>(i);
                }
                Real I0 = estimate_integral(f, omega, i);
                Real absolute_error_estimate = abs(I0-I1);
                Real scale = max(abs(I0), abs(I1));
    #ifdef BOOST_MATH_INSTRUMENT_OOURA
                print_ooura_estimate(i, I0, I1, omega);
    #endif
                if (absolute_error_estimate <= rel_err_goal_*scale) {
                    starting_level_ = std::max(long(i) - 1, long(0));
                    return {I0/omega, absolute_error_estimate/scale};
                }
                I1 = I0;
                ++i;
            }

            starting_level_ = big_nodes_.size() - 2;
            return {I1/omega, relative_error_estimate};
        }

    private:

        template<class PreciseReal>
        void add_level(size_t i) {
            size_t current_num_levels = big_nodes_.size();
            Real unit_roundoff = std::numeric_limits<Real>::epsilon()/2;
            // h0 = 1. Then all further levels have h_i = 1/2^i.
            // Since the nodes don't nest, we could conceivably divide h by (say) 1.5, or 3.
            // It's not clear how much benefit (or loss) would be obtained from this.
            PreciseReal h = PreciseReal(1)/PreciseReal(1<<i);

            std::vector<Real> bnode_row;
            std::vector<Real> bweight_row;
            // Definitely could use a more sophisticated heuristic for how many elements
            // will be placed in the vector. This is a pretty huge overestimate:
            bnode_row.reserve((1<<i)*sizeof(Real));
            bweight_row.reserve((1<<i)*sizeof(Real));

            std::vector<Real> lnode_row;
            std::vector<Real> lweight_row;

            lnode_row.reserve((1<<i)*sizeof(Real));
            lweight_row.reserve((1<<i)*sizeof(Real));

            Real max_weight = 1;
            auto alpha = calculate_ooura_alpha(h);
            long n = 0;
            Real w;
            do {
                auto precise_nw = ooura_sin_node_and_weight(n, h, alpha);
                Real node = static_cast<Real>(precise_nw.first);
                Real weight = static_cast<Real>(precise_nw.second);
                w = weight;
                bnode_row.push_back(node);
                bweight_row.push_back(weight);
                if (abs(weight) > max_weight) {
                    max_weight = abs(weight);
                }
                ++n;
                // f(t)->0 as t->infty, which is why the weights are computed up to the unit roundoff.
            } while(abs(w) > unit_roundoff*max_weight);

            // This class tends to consume a lot of memory; shrink the vectors back down to size:
            bnode_row.shrink_to_fit();
            bweight_row.shrink_to_fit();
            // Why are we splitting the nodes into regimes where t_n >> 1 and t_n << 1?
            // It will create the opportunity to sensibly truncate the quadrature sum to significant terms.
            n = -1;
            do {
                auto precise_nw = ooura_sin_node_and_weight(n, h, alpha);
                Real node = static_cast<Real>(precise_nw.first);
                if (node <= 0) {
                    break;
                }
                Real weight = static_cast<Real>(precise_nw.second);
                w = weight;
                using std::isnan;
                if (isnan(node)) {
                    // This occurs at n = -11 in quad precision:
                    break;
                }
                if (lnode_row.size() > 0) {
                    if (lnode_row[lnode_row.size()-1] == node) {
                        // The nodes have fused into each other:
                        break;
                    }
                }
                lnode_row.push_back(node);
                lweight_row.push_back(weight);
                if (abs(weight) > max_weight) {
                    max_weight = abs(weight);
                }
                --n;
                // f(t)->infty is possible as t->0, hence compute up to the min.
            } while(abs(w) > std::numeric_limits<Real>::min()*max_weight);

            lnode_row.shrink_to_fit();
            lweight_row.shrink_to_fit();

            // std::scoped_lock once C++17 is more common?
            std::lock_guard<std::mutex> lock(node_weight_mutex_);
            // Another thread might have already finished this calculation and appended it to the nodes/weights:
            if (current_num_levels == big_nodes_.size()) {
                big_nodes_.push_back(bnode_row);
                bweights_.push_back(bweight_row);

                little_nodes_.push_back(lnode_row);
                lweights_.push_back(lweight_row);
            }
        }

        template<class F>
        Real estimate_integral(F const & f, Real omega, size_t i) {
            // Because so few function evaluations are required to get high accuracy on the integrals in the tests,
            // Kahan summation doesn't really help.
            //auto cond = boost::math::tools::summation_condition_number<Real, true>(0);
            Real I0 = 0;
            auto const & b_nodes = big_nodes_[i];
            auto const & b_weights = bweights_[i];
            // Will benchmark if this is helpful:
            Real inv_omega = 1/omega;
            for(size_t j = 0 ; j < b_nodes.size(); ++j) {
                I0 += f(b_nodes[j]*inv_omega)*b_weights[j];
            }

            auto const & l_nodes = little_nodes_[i];
            auto const & l_weights = lweights_[i];
            // If f decays rapidly as |t|->infty, not all of these calls are necessary.
            for (size_t j = 0; j < l_nodes.size(); ++j) {
                I0 += f(l_nodes[j]*inv_omega)*l_weights[j];
            }
            return I0;
        }

        std::mutex node_weight_mutex_;
        // Nodes for n >= 0, giving t_n = pi*phi(nh)/h. Generally t_n >> 1.
        std::vector<std::vector<Real>> big_nodes_;
        // The term bweights_ will indicate that these are weights corresponding
        // to the big nodes:
        std::vector<std::vector<Real>> bweights_;

        // Nodes for n < 0: Generally t_n << 1, and an invariant is that t_n > 0.
        std::vector<std::vector<Real>> little_nodes_;
        std::vector<std::vector<Real>> lweights_;
        Real rel_err_goal_;
        std::atomic<long> starting_level_;
        size_t requested_levels_;
    };

    template<class Real>
    class ooura_fourier_cos_detail {
    public:
        ooura_fourier_cos_detail(const Real relative_error_goal, size_t levels) {
            if (relative_error_goal <= std::numeric_limits<Real>::epsilon()/2) {
                throw std::domain_error("The relative error goal cannot be smaller than the unit roundoff.");
            }
            using std::abs;
            requested_levels_ = levels;
            starting_level_ = 0;
            rel_err_goal_ = relative_error_goal;
            big_nodes_.reserve(levels);
            bweights_.reserve(levels);
            little_nodes_.reserve(levels);
            lweights_.reserve(levels);

            for (size_t i = 0; i < levels; ++i) {
                if (std::is_same<Real, float>::value) {
                    add_level<double>(i);
                }
                else if (std::is_same<Real, double>::value) {
                    add_level<long double>(i);
                }
                else {
                    add_level<Real>(i);
                }
            }

        }

        template<class F>
        std::pair<Real,Real> integrate(F const & f, Real omega) {
            using std::abs;
            using std::max;
            using boost::math::constants::pi;

            if (omega == 0) {
                throw std::domain_error("At omega = 0, the integral is not oscillatory. The user must choose an appropriate method for this case.\n");
            }

            if (omega < 0) {
                return this->integrate(f, -omega);
            }

            Real I1 = std::numeric_limits<Real>::quiet_NaN();
            Real absolute_error_estimate = std::numeric_limits<Real>::quiet_NaN();
            Real scale = std::numeric_limits<Real>::quiet_NaN();
            size_t i = starting_level_;
            do {
                Real I0 = estimate_integral(f, omega, i);
    #ifdef BOOST_MATH_INSTRUMENT_OOURA
                print_ooura_estimate(i, I0, I1, omega);
    #endif
                absolute_error_estimate = abs(I0-I1);
                scale = max(abs(I0), abs(I1));
                if (!isnan(I1) && absolute_error_estimate <= rel_err_goal_*scale) {
                    starting_level_ = std::max(long(i) - 1, long(0));
                    return {I0/omega, absolute_error_estimate/scale};
                }
                I1 = I0;
            } while(++i < big_nodes_.size());

            size_t max_additional_levels = 4;
            while (big_nodes_.size() < requested_levels_ + max_additional_levels) {
                size_t i = big_nodes_.size();
                if (std::is_same<Real, float>::value) {
                    add_level<double>(i);
                }
                else if (std::is_same<Real, double>::value) {
                    add_level<long double>(i);
                }
                else {
                    add_level<Real>(i);
                }
                Real I0 = estimate_integral(f, omega, i);
    #ifdef BOOST_MATH_INSTRUMENT_OOURA
                print_ooura_estimate(i, I0, I1, omega);
    #endif
                absolute_error_estimate = abs(I0-I1);
                scale = max(abs(I0), abs(I1));
                if (absolute_error_estimate <= rel_err_goal_*scale) {
                    starting_level_ = std::max(long(i) - 1, long(0));
                    return {I0/omega, absolute_error_estimate/scale};
                }
                I1 = I0;
                ++i;
            }

            starting_level_ = big_nodes_.size() - 2;
            return {I1/omega, absolute_error_estimate/scale};
        }

    private:

        template<class PreciseReal>
        void add_level(size_t i) {
            size_t current_num_levels = big_nodes_.size();
            Real unit_roundoff = std::numeric_limits<Real>::epsilon()/2;
            PreciseReal h = PreciseReal(1)/PreciseReal(1<<i);

            std::vector<Real> bnode_row;
            std::vector<Real> bweight_row;
            bnode_row.reserve((1<<i)*sizeof(Real));
            bweight_row.reserve((1<<i)*sizeof(Real));

            std::vector<Real> lnode_row;
            std::vector<Real> lweight_row;

            lnode_row.reserve((1<<i)*sizeof(Real));
            lweight_row.reserve((1<<i)*sizeof(Real));

            Real max_weight = 1;
            auto alpha = calculate_ooura_alpha(h);
            long n = 0;
            Real w;
            do {
                auto precise_nw = ooura_cos_node_and_weight(n, h, alpha);
                Real node = static_cast<Real>(precise_nw.first);
                Real weight = static_cast<Real>(precise_nw.second);
                w = weight;
                bnode_row.push_back(node);
                bweight_row.push_back(weight);
                if (abs(weight) > max_weight) {
                    max_weight = abs(weight);
                }
                ++n;
                // f(t)->0 as t->infty, which is why the weights are computed up to the unit roundoff.
            } while(abs(w) > unit_roundoff*max_weight);

            bnode_row.shrink_to_fit();
            bweight_row.shrink_to_fit();
            n = -1;
            do {
                auto precise_nw = ooura_cos_node_and_weight(n, h, alpha);
                Real node = static_cast<Real>(precise_nw.first);
                // The function cannot be singular at zero,
                // so zero is not an unreasonable node,
                // unlike in the case of the Fourier sine.
                // Hence only break if the node is negative.
                if (node < 0) {
                    break;
                }
                Real weight = static_cast<Real>(precise_nw.second);
                w = weight;
                if (lnode_row.size() > 0) {
                    if (lnode_row.back() == node) {
                        // The nodes have fused into each other:
                        break;
                    }
                }
                lnode_row.push_back(node);
                lweight_row.push_back(weight);
                if (abs(weight) > max_weight) {
                    max_weight = abs(weight);
                }
                --n;
            } while(abs(w) > std::numeric_limits<Real>::min()*max_weight);

            lnode_row.shrink_to_fit();
            lweight_row.shrink_to_fit();

            std::lock_guard<std::mutex> lock(node_weight_mutex_);
            // Another thread might have already finished this calculation and appended it to the nodes/weights:
            if (current_num_levels == big_nodes_.size()) {
                big_nodes_.push_back(bnode_row);
                bweights_.push_back(bweight_row);

                little_nodes_.push_back(lnode_row);
                lweights_.push_back(lweight_row);
            }
        }

        template<class F>
        Real estimate_integral(F const & f, Real omega, size_t i) {
            Real I0 = 0;
            auto const & b_nodes = big_nodes_[i];
            auto const & b_weights = bweights_[i];
            Real inv_omega = 1/omega;
            for(size_t j = 0 ; j < b_nodes.size(); ++j) {
                I0 += f(b_nodes[j]*inv_omega)*b_weights[j];
            }

            auto const & l_nodes = little_nodes_[i];
            auto const & l_weights = lweights_[i];
            for (size_t j = 0; j < l_nodes.size(); ++j) {
                I0 += f(l_nodes[j]*inv_omega)*l_weights[j];
            }
            return I0;
        }

        std::mutex node_weight_mutex_;
        std::vector<std::vector<Real>> big_nodes_;
        std::vector<std::vector<Real>> bweights_;

        std::vector<std::vector<Real>> little_nodes_;
        std::vector<std::vector<Real>> lweights_;
        Real rel_err_goal_;
        std::atomic<long> starting_level_;
        size_t requested_levels_;
    };


    }}}}
    #endif

Is this a good method of separation?

I am creating a web application that is tiered in the following way:

Controller > Service > Repository

I have a ProductsController which has the following action:

    [ValidateModel]
    [HttpPost]
    public async Task<IActionResult> CreateProduct([FromBody] CreateProductRequest request)
    {
        var result = _productService.CreateProduct(_mapper.Map<ProductSM>(request));

        switch (result.StatusCode)
        {
            case HttpStatusCode.BadRequest:
                return NotFound(result);
        }

        return Ok(_mapper.Map<GetAllProductsResponse>(result.ToItemResponse().Data));
    }

This then calls the ProductService to create a product like so:

    public IResponse<ProductSM> CreateProduct(ProductSM productSM)
    {
        if (DoesProductNameExists(productSM.Name))
        {
            return new ErrorResponse<ProductSM>
            {
                Message = $"Product {productSM.Name} already exists.",
                StatusCode = HttpStatusCode.BadRequest
            };
        }

        _productRepository.Insert(_mapper.Map<Product>(productSM));
        _productRepository.SaveChangesAsync();

        return new ItemResponse<ProductSM>
        {
            StatusCode = HttpStatusCode.OK
        };
    }

You will notice the following:

1.) The CreateProduct action in ProductsController accepts a CreateProductRequest, which is mapped to a ProductSM (service model); within the ProductService, the ProductSM is then mapped to a Product (entity).

Is this the correct way of doing this? If not, why not? I feel as if this decouples the layers and strictly follows the idea of separation of concerns.

2.) In the ProductService I return an IResponse, which can be an ItemResponse (for successful calls) or an ErrorResponse (for errors). This is then handled in the controller, whose action returns a different result depending on the status code.

Is this the best way of handling business-layer errors? For simple validations based on the request, I’d expect the controller to handle them, hence the [ValidateModel] attribute, which checks that the model is valid. Once that passes, every remaining validation is business-layer related, so I’m wondering what the best way is to handle these errors coming from the business layer.

If you see anything else wrong with how I’ve done things, then please do mention it!

Overriding an internal method with Decorator Design Pattern

I am writing object-oriented code in which I am trying to use the Decorator pattern to apply a variety of optimizations to a family of core classes at runtime. The main behaviour of the core classes is a complex behaviour that is fully implemented in those classes, and it calls other internal methods to fulfill pieces of the task. The decorators only customize the internal methods that the complex behaviour in the core class calls.

Here is pseudo-code of what I’m trying to achieve:

    interface I {
      complex();
      step1();
      step2();
    }

    class C implements I {
      complex() {
        ...
        this.step1();
        ...
        this.step2();
      }
      step1() {
        ...
      }
      step2() {
        ...
      }
    }

    abstract class Decorator implements I {
      I wrapped;
      constructor(I obj) {
        this.wrapped = obj;
      }
      complex() {
        this.wrapped.complex();
      }
      step1() {
        this.wrapped.step1();
      }
      step2() {
        this.wrapped.step2();
      }
    }

    class ConcreteDecorator extends Decorator {
      constructor(I obj) {
        super(obj);
      }
      step2() {
        ... // customizing step2()
      }
    }

There are a variety of possible customizations, and they can be combined, which is the main reason I’m using the Decorator pattern; otherwise I would have to create dozens to hundreds of subtypes, one for each possible combination of customizations.

Now I create an object of the decorated class:

    x = new C();
    y = new ConcreteDecorator(x);
    y.complex();

I expect the complex() method to be executed from the wrapped core object while using the overridden step2() method from the decorator. But it does not work this way: the complex() method in the abstract decorator directly calls the method on the core object, which skips the overridden step2() in the decorator.
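To make the failure concrete, here is a minimal, runnable C++ rendering of the pseudo-code above (the same structure, just concrete enough to observe the dispatch):

    #include <iostream>
    #include <memory>

    struct I {
        virtual ~I() = default;
        virtual void complex() = 0;
        virtual void step2() = 0;
    };

    struct C : I {
        void complex() override {
            step2(); // self-call: dispatches on *this*, the core object
        }
        void step2() override { std::cout << "core step2\n"; }
    };

    struct Decorator : I {
        explicit Decorator(std::shared_ptr<I> obj) : wrapped(std::move(obj)) {}
        void complex() override { wrapped->complex(); } // forwards to the core
        void step2() override { wrapped->step2(); }
        std::shared_ptr<I> wrapped;
    };

    struct ConcreteDecorator : Decorator {
        using Decorator::Decorator;
        void step2() override { std::cout << "decorated step2\n"; }
    };

    int main() {
        ConcreteDecorator y(std::make_shared<C>());
        y.complex(); // prints "core step2", not "decorated step2"
    }

Inside C::complex() the implicit this is the plain C, so its call to step2() can never reach the decorator's override; the decoration is lost as soon as control enters the wrapped object.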

My overall goal is to let the decorators override only one or a few of the stepX() methods, which would then be invoked by the complex() method that is already implemented in the core object and invokes all the steps.

Can this functionality be implemented with the Decorator design pattern at all? If yes, how? And if not, what is the appropriate design pattern for tackling this problem?

Thanks.

Sophos Antivirus or other Firewall/AV blocking Tomcat or AmazonS3Client listObjects() method?

I have a pair of Java/Tomcat web applications running on a third party (customer) server, and of late, those applications can no longer list or download objects from AmazonS3.

This is a “nothing changed” situation, where I got a bug report out of the blue on what were stable systems. Our other users hosting the software on their own Windows networks don’t have this issue, and the instances we host on Amazon EC2 likewise have no issue. I was able to identify the date it stopped working, but Customer IT likewise says “nothing changed”. I do see Sophos software running on the machine in question, but I am not sure if that’s the issue, and it appears to have been installed a while before this occurred.

To reiterate, I have two (2) applications running on this server that interact with S3, and they both started failing at the exact same time. Of note, they interact via SQS messages. One app posts to SQS (this works) and another polls SQS (this works).

To debug this, I have attempted the following:

  • Install AWS CLI on problem server and attempt to list-objects. This worked.
  • Point my own development environment (outside customer network) at the problem server’s DB (available via VPN) to verify properties/config setup. This worked.
  • Hardcode the references to S3 resources and redeploy, to verify the issue isn’t app initialization/failure to resolve config. This still fails, but logs are outputting the correct bucket and key, so config/setup does not appear to be the issue.
  • Put explicit log statements all around the failing methods to isolate the exact line that fails. This is a call to AmazonS3’s listObjects(string, string) method.
  • Checked Sophos McsAgent.log and McsClient.log to see if anything obviously related to my applications was popping up.
  • Tried to run a unit test within the application’s code base on the problem server that also invokes the listObjects() method. This worked.

In the live/running failure case, I do not get an exception thrown by the listObjects method. It simply appears to execute indefinitely, even after I set the browser timeouts used to reproduce this to be fairly long (9,000,000 ms).

At this point I am not sure what the next debugging step would be, but I believe the evidence strongly points to an issue related to Tomcat making this request from within their four walls.
