When do you have enough automated testing to be confident in your continuous integration pipeline?

Continuous integration with testing is useful for making sure that you have “shippable” code checked in all the time.

However, it is really difficult to maintain a comprehensive suite of tests, and often it feels like the build is going to be buggy anyway.

How many tests should you have to feel confident in your CI pipeline? Do you use some sort of metric to decide when there are enough tests?

Is the injectivity radius (semi-)continuous on a non-complete Riemannian manifold?

Let $\mathcal{M}$ be a Riemannian manifold, and let $\mathrm{inj} \colon \mathcal{M} \to (0, \infty]$ be its injectivity radius function.

It is known that if $\mathcal{M}$ is connected and complete, then $\mathrm{inj}$ is a continuous function: see, for example, [Lee, Introduction to Riemannian Manifolds, 2018, Prop. 10.37].

What is known in the case where $\mathcal{M}$ is not complete? Is $\mathrm{inj}$ also continuous? If not, is there a known counterexample? Would $\mathrm{inj}$ still be semi-continuous?

This question is similar to the question “The continuity of Injectivity radius”, but the discussion there focuses on compact or complete manifolds.
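As a sanity check (not an answer), the simplest incomplete example I can compute by hand has continuous injectivity radius: the punctured plane

$$\mathcal{M} = \mathbb{R}^2 \setminus \{0\}.$$

Geodesics are straight segments avoiding the origin, so the largest ball in $T_x\mathcal{M}$ on which $\exp_x$ is defined is the open ball of radius $|x|$ (the radial geodesic aimed at the origin leaves $\mathcal{M}$ at distance $|x|$), and $\exp_x$ is injective on that ball; hence $\mathrm{inj}(x) = |x|$, which is continuous. So a counterexample, if one exists, would have to be less homogeneous than this.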

Structure Preserving Continuous Hash Function

This question was originally posted on Super User, but was redirected here based on some suggestions.

I am completely new to computer science, and only recently ran into the notion of hashing. Currently I use md5sum for indexing purposes, but I am curious whether hash functions can do more. After educating myself superficially by reading the Wikipedia page on hash functions, I wondered whether continuous hash functions can be realized. Luckily, this exists and is called locality-sensitive hashing, and I can find open algorithms online.

What makes me more curious, and what I have not been able to find, is whether I can further ask a continuous hash function to preserve structure. More specifically, the problem I have in mind is indexing all of my files. Not only do I want the indexing to be continuous (so the hash won’t change too much when I make a minor edit to a file), I also want it to preserve structure (so that, for example, if I concatenate two files, the hash of the combined file is a reasonable function of the hashes of the first two).
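For what it’s worth, here is a minimal sketch of just the structure-preserving half of what I mean (it is deliberately *not* locality sensitive): a polynomial rolling hash, where the hash of a concatenation is a simple function of the two component hashes and one length. All names here are illustrative, not any standard library API.

```
#include <cstddef>
#include <cstdint>
#include <iostream>
#include <string>

// A polynomial rolling hash over a prime field. Not locality sensitive,
// but it preserves concatenation structure:
//   H(a || b) = H(a) * B^len(b) + H(b)  (mod p),
// so the hash of a concatenation is computable from the two component
// hashes and the length of the second piece.
struct RollingHash {
    static constexpr std::uint64_t p = 1'000'000'007ULL; // prime modulus
    static constexpr std::uint64_t B = 257;              // base > alphabet size

    static std::uint64_t powmod(std::uint64_t b, std::uint64_t e) {
        std::uint64_t r = 1;
        for (; e; e >>= 1, b = b * b % p)
            if (e & 1) r = r * b % p;
        return r;
    }

    static std::uint64_t hash(const std::string& s) {
        std::uint64_t h = 0;
        for (unsigned char c : s) h = (h * B + c) % p;
        return h;
    }

    // Combine hash(a) and hash(b) into hash(a || b) without seeing a or b:
    static std::uint64_t concat(std::uint64_t ha, std::uint64_t hb, std::size_t len_b) {
        return (ha * powmod(B, len_b) + hb) % p;
    }
};

int main() {
    std::string a = "hello ", b = "world";
    // Both lines print the same value:
    std::cout << RollingHash::hash(a + b) << "\n";
    std::cout << RollingHash::concat(RollingHash::hash(a), RollingHash::hash(b), b.size()) << "\n";
}
```

Whether continuity and this kind of concatenation homomorphism can be had *simultaneously* is exactly the part of the question I have not found an answer to.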

Please let me know if I should provide more specific information. Again, I am completely new to this field, so I might have missed something obvious.

Honor 4X mobile problem - continuous reboot

I have a Huawei Honor 4X Android mobile. When I switch it on, it continuously reboots without ever finishing booting: I can see the Honor logo, and then it just reboots again. The battery is fine, but I am unable to recover the phone from this state. I don’t need any data; I just want to reset it completely or bring it back to a normal state. Any ideas or suggestions would be really helpful.

Thanks

Characterization of continuous weakly closed 1-forms

Recall that a differential $k$-form $\alpha$ on a smooth manifold $M$ is called weakly closed if

$$\int_M \alpha \wedge d\beta = 0,$$

for all smooth forms $\beta$ of degree $n-k-1$, where $n = \dim M$. My question is:

If $\alpha$ is a weakly closed continuous 1-form on a closed manifold $M$, can we conclude that $\alpha$ is the sum of a smooth closed 1-form and $d\phi$, where $\phi$ is a $C^1$ function on $M$?

This appears to follow from the properties of the de Rham regularization operator and the associated homotopy operator on forms; see, for instance, Theorem 12.5 in V. Gol’dshtein and M. Troyanov, Sobolev inequalities for differential forms and $L_{q,p}$-cohomology, Journal of Geometric Analysis, vol. 16, no. 4, 2006.
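To spell out the argument I have in mind (a sketch; I have not verified the regularity claims in detail): the regularization operator $R$ and homotopy operator $A$ satisfy

$$R\alpha - \alpha = dA\alpha + A\,d\alpha,$$

with $R\alpha$ smooth and $R$ commuting with $d$. If $\alpha$ is weakly closed, then $d\alpha = 0$ in the weak sense, so $\alpha = R\alpha - d(A\alpha)$ with $\phi = -A\alpha$. Moreover $dR\alpha = R\,d\alpha = 0$ weakly, and a smooth weakly closed form is closed. What remains, and what I take Theorem 12.5 to provide, is that $A\alpha$ is $C^1$ when $\alpha$ is continuous.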

Any thoughts on this would be appreciated.

How to handle continuous update of ON/OFF button in the database

This might be a stupid question, but I don’t know how this works.

My question can be framed around the Facebook Like button. That button can be spammed as much as we want, liking and unliking. My question is: how do they handle such a continuous stream of update actions without overloading the server? Does every click update the database? Is there a queue implementation for it, and if so, how does it work in theory?

Those are the only two solutions I can think of for this “problem”.
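I don’t know what Facebook actually does, but here is a minimal sketch of the second idea: a buffer that coalesces clicks in memory and flushes only the net change on a timer, so N rapid toggles cost one database write rather than N. Everything here (the class name, the SQL in the comment) is illustrative.

```
#include <iostream>
#include <mutex>
#include <unordered_map>

// Coalesce rapid like/unlike toggles in memory; a background thread calls
// flush() every few seconds and writes only the *net* delta per post.
class LikeBuffer {
    std::mutex m_;
    std::unordered_map<long, int> pending_; // post id -> net delta
public:
    void like(long post)   { std::lock_guard<std::mutex> l(m_); ++pending_[post]; }
    void unlike(long post) { std::lock_guard<std::mutex> l(m_); --pending_[post]; }

    void flush() {
        std::unordered_map<long, int> batch;
        {
            // Swap under the lock so clicks keep accumulating while we write:
            std::lock_guard<std::mutex> l(m_);
            batch.swap(pending_);
        }
        for (auto const & kv : batch) {
            if (kv.second != 0) {
                // Stand-in for the real write, e.g.
                // UPDATE posts SET likes = likes + :delta WHERE id = :post
                std::cout << "post " << kv.first << ": " << kv.second << "\n";
            }
        }
    }
};

int main() {
    LikeBuffer buf;
    // Spam the button: 5 likes and 4 unlikes on post 42 -> one write of +1.
    for (int i = 0; i < 5; ++i) buf.like(42);
    for (int i = 0; i < 4; ++i) buf.unlike(42);
    buf.flush();
}
```

The trade-off is durability: anything still in the buffer is lost on a crash, which is usually acceptable for like counts but not for, say, payments.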

p-Laplacian is intrinsically uniformly continuous

Definition. The operator $G$ is intrinsically uniformly continuous (IUC) with respect to $x$ in $(TM - \{0\}) \times \operatorname{Sym} TM$ if there exists a modulus of continuity $w_G \colon [0,\infty) \to [0,\infty)$ with $w_G(0^+) = 0$ such that

$$G(\zeta, A) - G(L_{x,y}\zeta, L_{x,y}A) \leq w_G(d(x,y))$$

for any $(\zeta, A) \in (T_xM - \{0\}) \times \operatorname{Sym} T_xM$ and $x, y \in M$ with $d(x,y) < \min\{i_M(x), i_M(y)\}$, where $i_M(x)$ denotes the injectivity radius of $M$ at $x$.

Claim: the $p$-Laplacian for $p \geq 2$ is IUC. Here the $p$-Laplacian is

$$\Delta_p u = |\nabla u|^{p-2} \operatorname{tr}\Big[\Big(I + (p-2)\,\frac{\nabla u}{|\nabla u|} \otimes \frac{\nabla u}{|\nabla u|}\Big) D^2 u\Big].$$

The author asserts that, for any $(\zeta, A) \in (T_xM - \{0\}) \times \operatorname{Sym} T_xM$ and $x, y \in M$ with $d(x,y) < \min\{i_M(x), i_M(y)\}$,

$$\big\langle (L_{x,y}\zeta \otimes L_{x,y}\zeta)\, L_{x,y}A\, v,\, v \big\rangle_y = \big\langle (\zeta \otimes \zeta) A\, L_{y,x}v,\, L_{y,x}v \big\rangle_x \quad \text{for any } v \in T_yM.$$

Thus the $p$-Laplacian is IUC with $w_G = 0$.

I confess I have no idea how to prove this, i.e., what is the relation between the $p$-Laplacian being IUC and the identity above? My only idea is that one piece, the trace, is invariant under parallel transport.
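For what it’s worth, here is how I would try to connect the two (a sketch, assuming $L_{x,y} \colon T_xM \to T_yM$ is parallel transport along the minimizing geodesic, hence a linear isometry with inverse $L_{y,x}$). In this framework the $p$-Laplacian corresponds to

$$G(\zeta, A) = |\zeta|^{p-2}\operatorname{tr}(A) + (p-2)\,|\zeta|^{p-4}\langle A\zeta, \zeta\rangle = |\zeta|^{p-2}\operatorname{tr}\Big[\Big(I + (p-2)\,\frac{\zeta}{|\zeta|}\otimes\frac{\zeta}{|\zeta|}\Big)A\Big],$$

using $\operatorname{tr}[(\zeta\otimes\zeta)A] = \langle A\zeta, \zeta\rangle$. Summing the asserted identity over an orthonormal basis $\{v_j\}$ of $T_yM$ (whose image under $L_{y,x}$ is an orthonormal basis of $T_xM$) gives

$$\operatorname{tr}\big[(L_{x,y}\zeta \otimes L_{x,y}\zeta)\,L_{x,y}A\big] = \operatorname{tr}\big[(\zeta\otimes\zeta)A\big],$$

and since also $|L_{x,y}\zeta| = |\zeta|$ and $\operatorname{tr}(L_{x,y}A) = \operatorname{tr}(A)$ (conjugation by an isometry preserves the trace), every term of $G$ is unchanged: $G(\zeta, A) = G(L_{x,y}\zeta, L_{x,y}A)$. The definition of IUC then holds with $w_G \equiv 0$.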

Continuous Fourier integrals by Ooura’s method

I have a PR implementing Ooura and Mori’s method for continuous Fourier integrals. I used it here to compute an oscillatory integral that Mathematica got completely wrong, and then I thought “well maybe this method is pretty good!” and figured I’d finish it after letting it languish for over a year.

Here are a few concerns:

  • I agonized over computing the nodes and weights accurately. But in the end, I had to precompute the nodes and weights in higher accuracy and cast them back down. Is there any way to avoid this that I’m missing? (If I don’t do this, then the error goes down after a few levels, then starts increasing. Use -DBOOST_MATH_INSTRUMENT_OOURA to see it if you’re interested.)

  • There is also some code duplication that I can’t really figure out how to get rid of. For example, for the sine integral, f(0) = inf is allowed, but for the cosine integral, f(0) must be finite. So you have this very small difference that disallows extracting the generic code into a single location. But maybe I’m just not being creative enough here; a hypothetical sketch of one possible factoring follows this list.

  • I’m also interested in whether the comments are well-written and informative. I couldn’t understand my comments from when I stopped working on this last year, and now I’m worried I won’t be able to understand my current comments next year!
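On the duplication point, the little-node loops in the two detail classes seem to differ only in which node-and-weight function they call and in whether a node at zero is admissible. Here is a hypothetical sketch (not part of the PR) of factoring that out:

```
#include <cmath>
#include <limits>
#include <utility>
#include <vector>

// Hypothetical refactoring sketch: pass in the node/weight generator and
// the admissibility rule, the only per-flavor differences. Sine would pass
// [](double t) { return t > 0; } (f(0) = inf is allowed, so node == 0 must
// be rejected); cosine would pass [](double t) { return t >= 0; }.
template<class Real, class Generator, class NodeAdmissible>
std::pair<std::vector<Real>, std::vector<Real>>
build_little_nodes(Generator node_and_weight, NodeAdmissible admissible) {
    using std::abs;
    std::vector<Real> nodes, weights;
    Real max_weight = 1;
    for (long n = -1; ; --n) {
        std::pair<Real, Real> nw = node_and_weight(n);
        if (!admissible(nw.first)) {
            break; // also catches NaN nodes, since NaN fails t > 0
        }
        if (!nodes.empty() && nodes.back() == nw.first) {
            break; // the nodes have fused into each other
        }
        nodes.push_back(nw.first);
        weights.push_back(nw.second);
        if (abs(nw.second) > max_weight) {
            max_weight = abs(nw.second);
        }
        if (abs(nw.second) <= std::numeric_limits<Real>::min()*max_weight) {
            break;
        }
    }
    return {nodes, weights};
}

int main() {
    // Toy generator standing in for ooura_sin_node_and_weight:
    auto gen = [](long n) {
        double t = std::exp(double(n));           // positive, decreasing node
        return std::pair<double, double>(t, t*t); // rapidly decaying weight
    };
    auto r = build_little_nodes<double>(gen, [](double t) { return t > 0; });
    return r.first.size() == r.second.size() ? 0 : 1;
}
```

Whether this reads better than the duplication is debatable, which is part of why I’m asking.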

So here’s the code:

```
// Copyright Nick Thompson, 2019
// Use, modification and distribution are subject to the
// Boost Software License, Version 1.0.
// (See accompanying file LICENSE_1_0.txt
// or copy at http://www.boost.org/LICENSE_1_0.txt)

/*
 * References:
 * Ooura, Takuya, and Masatake Mori. "A robust double exponential formula
 * for Fourier-type integrals." Journal of computational and applied
 * mathematics 112.1-2 (1999): 229-241.
 * http://www.kurims.kyoto-u.ac.jp/~ooura/intde.html
 */
#ifndef BOOST_MATH_QUADRATURE_OOURA_FOURIER_INTEGRALS_HPP
#define BOOST_MATH_QUADRATURE_OOURA_FOURIER_INTEGRALS_HPP
#include <cstddef>
#include <memory>
#include <utility>
#include <vector>
#include <boost/math/tools/precision.hpp> // for tools::root_epsilon.
#include <boost/math/quadrature/detail/ooura_fourier_integrals_detail.hpp>

namespace boost { namespace math { namespace quadrature {

template<class Real>
class ooura_fourier_sin {
public:
    ooura_fourier_sin(const Real relative_error_tolerance = tools::root_epsilon<Real>(),
                      size_t levels = sizeof(Real))
        : impl_(std::make_shared<detail::ooura_fourier_sin_detail<Real>>(relative_error_tolerance, levels))
    {}

    template<class F>
    std::pair<Real, Real> integrate(F const & f, Real omega) {
        return impl_->integrate(f, omega);
    }

    std::vector<std::vector<Real>> const & big_nodes() const {
        return impl_->big_nodes();
    }

    std::vector<std::vector<Real>> const & weights_for_big_nodes() const {
        return impl_->weights_for_big_nodes();
    }

    std::vector<std::vector<Real>> const & little_nodes() const {
        return impl_->little_nodes();
    }

    std::vector<std::vector<Real>> const & weights_for_little_nodes() const {
        return impl_->weights_for_little_nodes();
    }

private:
    std::shared_ptr<detail::ooura_fourier_sin_detail<Real>> impl_;
};

template<class Real>
class ooura_fourier_cos {
public:
    ooura_fourier_cos(const Real relative_error_tolerance = tools::root_epsilon<Real>(),
                      size_t levels = sizeof(Real))
        : impl_(std::make_shared<detail::ooura_fourier_cos_detail<Real>>(relative_error_tolerance, levels))
    {}

    template<class F>
    std::pair<Real, Real> integrate(F const & f, Real omega) {
        return impl_->integrate(f, omega);
    }

private:
    std::shared_ptr<detail::ooura_fourier_cos_detail<Real>> impl_;
};

}}}
#endif
```
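For concreteness, here is how I expect the interface to be used — a hypothetical example computing $\int_0^\infty \frac{\sin x}{x}\,dx = \pi/2$ (the include path is assumed from the include guard; note that f(0) = inf is acceptable for the sine version):

```
#include <iostream>
#include <utility>
#include <boost/math/constants/constants.hpp>
#include <boost/math/quadrature/ooura_fourier_integrals.hpp> // assumed path

// Integrate f(t) = 1/t against sin(omega*t) with omega = 1; the first
// element of the returned pair is the integral, the second the relative
// error estimate.
int main() {
    using boost::math::quadrature::ooura_fourier_sin;
    ooura_fourier_sin<double> integrator;
    auto f = [](double t) { return 1/t; };
    std::pair<double, double> result = integrator.integrate(f, 1.0);
    std::cout << "I    = " << result.first << "\n";
    std::cout << "err  = " << result.second << "\n";
    std::cout << "pi/2 = " << boost::math::constants::half_pi<double>() << "\n";
}
```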

And the detail (which contains the real meat):

```
// Copyright Nick Thompson, 2019
// Use, modification and distribution are subject to the
// Boost Software License, Version 1.0.
// (See accompanying file LICENSE_1_0.txt
// or copy at http://www.boost.org/LICENSE_1_0.txt)
#ifndef BOOST_MATH_QUADRATURE_DETAIL_OOURA_FOURIER_INTEGRALS_DETAIL_HPP
#define BOOST_MATH_QUADRATURE_DETAIL_OOURA_FOURIER_INTEGRALS_DETAIL_HPP
#include <utility> // for std::pair.
#include <mutex>
#include <atomic>
#include <vector>
#include <iostream>
#include <cmath>
#include <cstddef>
#include <iomanip>     // for std::setprecision in instrumented builds.
#include <limits>      // for std::numeric_limits.
#include <stdexcept>   // for std::domain_error.
#include <type_traits> // for std::is_same.
#include <boost/math/special_functions/expm1.hpp>
#include <boost/math/special_functions/sin_pi.hpp>
#include <boost/math/special_functions/cos_pi.hpp>
#include <boost/math/constants/constants.hpp>

namespace boost { namespace math { namespace quadrature { namespace detail {

// Ooura and Mori, A robust double exponential formula for Fourier-type integrals,
// eta is the argument to the exponential in equation 3.3:
template<class Real>
std::pair<Real, Real> ooura_eta(Real x, Real alpha) {
    using std::expm1;
    using std::exp;
    using std::abs;
    Real expx = exp(x);
    Real eta_prime = 2 + alpha/expx + expx/4;
    Real eta;
    // This is the fast branch:
    if (abs(x) > 0.125) {
        eta = 2*x - alpha*(1/expx - 1) + (expx - 1)/4;
    }
    else { // this is the slow branch using expm1 for small x:
        eta = 2*x - alpha*expm1(-x) + expm1(x)/4;
    }
    return {eta, eta_prime};
}

// Ooura and Mori, A robust double exponential formula for Fourier-type integrals,
// equation 3.6:
template<class Real>
Real calculate_ooura_alpha(Real h) {
    using boost::math::constants::pi;
    using std::log1p;
    using std::sqrt;
    Real x = sqrt(16 + 4*log1p(pi<Real>()/h)/h);
    return 1/x;
}

template<class Real>
std::pair<Real, Real> ooura_sin_node_and_weight(long n, Real h, Real alpha) {
    using std::expm1;
    using std::exp;
    using std::abs;
    using boost::math::constants::pi;
    using std::isnan;

    if (n == 0) {
        // Equation 44 of https://arxiv.org/pdf/0911.4796.pdf
        Real eta_prime_0 = Real(2) + alpha + Real(1)/Real(4);
        Real node = pi<Real>()/(eta_prime_0*h);
        Real weight = pi<Real>()*boost::math::sin_pi(1/(eta_prime_0*h));
        Real eta_dbl_prime = -alpha + Real(1)/Real(4);
        Real phi_prime_0 = (1 - eta_dbl_prime/(eta_prime_0*eta_prime_0))/2;
        weight *= phi_prime_0;
        return {node, weight};
    }
    Real x = n*h;
    auto p = ooura_eta(x, alpha);
    auto eta = p.first;
    auto eta_prime = p.second;

    Real expm1_meta = expm1(-eta);
    Real exp_meta = exp(-eta);
    Real node = -n*pi<Real>()/expm1_meta;

    // I have verified that this is not a significant source of inaccuracy in the weight computation:
    Real phi_prime = -(expm1_meta + x*exp_meta*eta_prime)/(expm1_meta*expm1_meta);

    // The main source of inaccuracy is in the computation of sin_pi.
    // But I've agonized over this, and I think it's as good as it can get:
    Real s = pi<Real>();
    Real arg;
    if (eta > 1) {
        arg = n/( 1/exp_meta - 1 );
        s *= boost::math::sin_pi(arg);
        if (n&1) {
            s *= -1;
        }
    }
    else if (eta < -1) {
        arg = n/(1-exp_meta);
        s *= boost::math::sin_pi(arg);
    }
    else {
        arg = -n*exp_meta/expm1_meta;
        s *= boost::math::sin_pi(arg);
        if (n&1) {
            s *= -1;
        }
    }

    Real weight = s*phi_prime;
    return {node, weight};
}

#ifdef BOOST_MATH_INSTRUMENT_OOURA
template<class Real>
void print_ooura_estimate(size_t i, Real I0, Real I1, Real omega) {
    using std::abs;
    std::cout << std::defaultfloat
              << std::setprecision(std::numeric_limits<Real>::digits10)
              << std::fixed;
    std::cout << "h = " << Real(1)/Real(1<<i) << ", I_h = " << I0/omega
              << " = " << std::hexfloat << I0/omega << ", absolute error est = "
              << std::defaultfloat << std::scientific << abs(I0-I1) << "\n";
}
#endif

template<class Real>
std::pair<Real, Real> ooura_cos_node_and_weight(long n, Real h, Real alpha) {
    using std::expm1;
    using std::exp;
    using std::abs;
    using boost::math::constants::pi;

    Real x = h*(n-Real(1)/Real(2));
    auto p = ooura_eta(x, alpha);
    auto eta = p.first;
    auto eta_prime = p.second;
    Real expm1_meta = expm1(-eta);
    Real exp_meta = exp(-eta);
    Real node = pi<Real>()*(Real(1)/Real(2)-n)/expm1_meta;

    Real phi_prime = -(expm1_meta + x*exp_meta*eta_prime)/(expm1_meta*expm1_meta);

    // Equation 4.6 of A robust double exponential formula for Fourier-type integrals:
    Real s = pi<Real>();
    Real arg;
    if (eta < -1) {
        arg = -(n-Real(1)/Real(2))/expm1_meta;
        s *= boost::math::cos_pi(arg);
    }
    else {
        arg = -(n-Real(1)/Real(2))*exp_meta/expm1_meta;
        s *= boost::math::sin_pi(arg);
        if (n&1) {
            s *= -1;
        }
    }

    Real weight = s*phi_prime;
    return {node, weight};
}

template<class Real>
class ooura_fourier_sin_detail {
public:
    ooura_fourier_sin_detail(const Real relative_error_goal, size_t levels) {
        if (relative_error_goal <= std::numeric_limits<Real>::epsilon()/2) {
            throw std::domain_error("The relative error goal cannot be smaller than the unit roundoff.");
        }
        using std::abs;
        requested_levels_ = levels;
        starting_level_ = 0;
        rel_err_goal_ = relative_error_goal;
        big_nodes_.reserve(levels);
        bweights_.reserve(levels);
        little_nodes_.reserve(levels);
        lweights_.reserve(levels);

        for (size_t i = 0; i < levels; ++i) {
            if (std::is_same<Real, float>::value) {
                add_level<double>(i);
            }
            else if (std::is_same<Real, double>::value) {
                add_level<long double>(i);
            }
            else {
                add_level<Real>(i);
            }
        }
    }

    std::vector<std::vector<Real>> const & big_nodes() const {
        return big_nodes_;
    }

    std::vector<std::vector<Real>> const & weights_for_big_nodes() const {
        return bweights_;
    }

    std::vector<std::vector<Real>> const & little_nodes() const {
        return little_nodes_;
    }

    std::vector<std::vector<Real>> const & weights_for_little_nodes() const {
        return lweights_;
    }

    template<class F>
    std::pair<Real,Real> integrate(F const & f, Real omega) {
        using std::abs;
        using std::max;
        using std::isnan;
        using boost::math::constants::pi;

        if (omega == 0) {
            return {Real(0), Real(0)};
        }
        if (omega < 0) {
            auto p = this->integrate(f, -omega);
            return {-p.first, p.second};
        }

        Real I1 = std::numeric_limits<Real>::quiet_NaN();
        Real relative_error_estimate = std::numeric_limits<Real>::quiet_NaN();
        // As we compute integrals, we learn about their structure.
        // Assuming we compute f(t)sin(wt) for many different omega, this gives some
        // a posteriori ability to choose a refinement level that is roughly appropriate.
        size_t i = starting_level_;
        do {
            Real I0 = estimate_integral(f, omega, i);
#ifdef BOOST_MATH_INSTRUMENT_OOURA
            print_ooura_estimate(i, I0, I1, omega);
#endif
            Real absolute_error_estimate = abs(I0-I1);
            Real scale = max(abs(I0), abs(I1));
            if (!isnan(I1) && absolute_error_estimate <= rel_err_goal_*scale) {
                starting_level_ = std::max(long(i) - 1, long(0));
                return {I0/omega, absolute_error_estimate/scale};
            }
            I1 = I0;
        } while(++i < big_nodes_.size());

        // We've used up all our precomputed levels.
        // Now we need to add more.
        // It might seem reasonable to just keep adding levels indefinitely, if that's what the user wants.
        // But in fact the nodes and weights just merge into each other and the error gets worse after a certain number.
        // This value for max_additional_levels was chosen by observation of a slowly converging oscillatory integral:
        // f(x) := cos(7cos(x))sin(x)/x
        size_t max_additional_levels = 4;
        while (big_nodes_.size() < requested_levels_ + max_additional_levels) {
            size_t i = big_nodes_.size();
            if (std::is_same<Real, float>::value) {
                add_level<double>(i);
            }
            else if (std::is_same<Real, double>::value) {
                add_level<long double>(i);
            }
            else {
                add_level<Real>(i);
            }
            Real I0 = estimate_integral(f, omega, i);
            Real absolute_error_estimate = abs(I0-I1);
            Real scale = max(abs(I0), abs(I1));
#ifdef BOOST_MATH_INSTRUMENT_OOURA
            print_ooura_estimate(i, I0, I1, omega);
#endif
            if (absolute_error_estimate <= rel_err_goal_*scale) {
                starting_level_ = std::max(long(i) - 1, long(0));
                return {I0/omega, absolute_error_estimate/scale};
            }
            I1 = I0;
            ++i;
        }

        starting_level_ = big_nodes_.size() - 2;
        return {I1/omega, relative_error_estimate};
    }

private:

    template<class PreciseReal>
    void add_level(size_t i) {
        using std::abs;
        size_t current_num_levels = big_nodes_.size();
        Real unit_roundoff = std::numeric_limits<Real>::epsilon()/2;
        // h0 = 1. Then all further levels have h_i = 1/2^i.
        // Since the nodes don't nest, we could conceivably divide h by (say) 1.5, or 3.
        // It's not clear how much benefit (or loss) would be obtained from this.
        PreciseReal h = PreciseReal(1)/PreciseReal(1<<i);

        std::vector<Real> bnode_row;
        std::vector<Real> bweight_row;
        // Definitely could use a more sophisticated heuristic for how many elements
        // will be placed in the vector. This is a pretty huge overestimate:
        bnode_row.reserve((1<<i)*sizeof(Real));
        bweight_row.reserve((1<<i)*sizeof(Real));

        std::vector<Real> lnode_row;
        std::vector<Real> lweight_row;

        lnode_row.reserve((1<<i)*sizeof(Real));
        lweight_row.reserve((1<<i)*sizeof(Real));

        Real max_weight = 1;
        auto alpha = calculate_ooura_alpha(h);
        long n = 0;
        Real w;
        do {
            auto precise_nw = ooura_sin_node_and_weight(n, h, alpha);
            Real node = static_cast<Real>(precise_nw.first);
            Real weight = static_cast<Real>(precise_nw.second);
            w = weight;
            bnode_row.push_back(node);
            bweight_row.push_back(weight);
            if (abs(weight) > max_weight) {
                max_weight = abs(weight);
            }
            ++n;
            // f(t)->0 as t->infty, which is why the weights are computed up to the unit roundoff.
        } while(abs(w) > unit_roundoff*max_weight);

        // This class tends to consume a lot of memory; shrink the vectors back down to size:
        bnode_row.shrink_to_fit();
        bweight_row.shrink_to_fit();
        // Why are we splitting the nodes into regimes where t_n >> 1 and t_n << 1?
        // It will create the opportunity to sensibly truncate the quadrature sum to significant terms.
        n = -1;
        do {
            auto precise_nw = ooura_sin_node_and_weight(n, h, alpha);
            Real node = static_cast<Real>(precise_nw.first);
            if (node <= 0) {
                break;
            }
            Real weight = static_cast<Real>(precise_nw.second);
            w = weight;
            using std::isnan;
            if (isnan(node)) {
                // This occurs at n = -11 in quad precision:
                break;
            }
            if (lnode_row.size() > 0) {
                if (lnode_row[lnode_row.size()-1] == node) {
                    // The nodes have fused into each other:
                    break;
                }
            }
            lnode_row.push_back(node);
            lweight_row.push_back(weight);
            if (abs(weight) > max_weight) {
                max_weight = abs(weight);
            }
            --n;
            // f(t)->infty is possible as t->0, hence compute up to the min.
        } while(abs(w) > std::numeric_limits<Real>::min()*max_weight);

        lnode_row.shrink_to_fit();
        lweight_row.shrink_to_fit();

        // std::scoped_lock once C++17 is more common?
        std::lock_guard<std::mutex> lock(node_weight_mutex_);
        // Another thread might have already finished this calculation and appended it to the nodes/weights:
        if (current_num_levels == big_nodes_.size()) {
            big_nodes_.push_back(bnode_row);
            bweights_.push_back(bweight_row);

            little_nodes_.push_back(lnode_row);
            lweights_.push_back(lweight_row);
        }
    }

    template<class F>
    Real estimate_integral(F const & f, Real omega, size_t i) {
        // Because so few function evaluations are required to get high accuracy
        // on the integrals in the tests, Kahan summation doesn't really help.
        //auto cond = boost::math::tools::summation_condition_number<Real, true>(0);
        Real I0 = 0;
        auto const & b_nodes = big_nodes_[i];
        auto const & b_weights = bweights_[i];
        // Will benchmark if this is helpful:
        Real inv_omega = 1/omega;
        for(size_t j = 0 ; j < b_nodes.size(); ++j) {
            I0 += f(b_nodes[j]*inv_omega)*b_weights[j];
        }

        auto const & l_nodes = little_nodes_[i];
        auto const & l_weights = lweights_[i];
        // If f decays rapidly as |t|->infty, not all of these calls are necessary.
        for (size_t j = 0; j < l_nodes.size(); ++j) {
            I0 += f(l_nodes[j]*inv_omega)*l_weights[j];
        }
        return I0;
    }

    std::mutex node_weight_mutex_;
    // Nodes for n >= 0, giving t_n = pi*phi(nh)/h. Generally t_n >> 1.
    std::vector<std::vector<Real>> big_nodes_;
    // The term bweights_ will indicate that these are weights corresponding
    // to the big nodes:
    std::vector<std::vector<Real>> bweights_;

    // Nodes for n < 0: Generally t_n << 1, and an invariant is that t_n > 0.
    std::vector<std::vector<Real>> little_nodes_;
    std::vector<std::vector<Real>> lweights_;
    Real rel_err_goal_;
    std::atomic<long> starting_level_;
    size_t requested_levels_;
};

template<class Real>
class ooura_fourier_cos_detail {
public:
    ooura_fourier_cos_detail(const Real relative_error_goal, size_t levels) {
        if (relative_error_goal <= std::numeric_limits<Real>::epsilon()/2) {
            throw std::domain_error("The relative error goal cannot be smaller than the unit roundoff.");
        }
        using std::abs;
        requested_levels_ = levels;
        starting_level_ = 0;
        rel_err_goal_ = relative_error_goal;
        big_nodes_.reserve(levels);
        bweights_.reserve(levels);
        little_nodes_.reserve(levels);
        lweights_.reserve(levels);

        for (size_t i = 0; i < levels; ++i) {
            if (std::is_same<Real, float>::value) {
                add_level<double>(i);
            }
            else if (std::is_same<Real, double>::value) {
                add_level<long double>(i);
            }
            else {
                add_level<Real>(i);
            }
        }
    }

    template<class F>
    std::pair<Real,Real> integrate(F const & f, Real omega) {
        using std::abs;
        using std::max;
        using std::isnan;
        using boost::math::constants::pi;

        if (omega == 0) {
            throw std::domain_error("At omega = 0, the integral is not oscillatory. The user must choose an appropriate method for this case.\n");
        }

        if (omega < 0) {
            return this->integrate(f, -omega);
        }

        Real I1 = std::numeric_limits<Real>::quiet_NaN();
        Real absolute_error_estimate = std::numeric_limits<Real>::quiet_NaN();
        Real scale = std::numeric_limits<Real>::quiet_NaN();
        size_t i = starting_level_;
        do {
            Real I0 = estimate_integral(f, omega, i);
#ifdef BOOST_MATH_INSTRUMENT_OOURA
            print_ooura_estimate(i, I0, I1, omega);
#endif
            absolute_error_estimate = abs(I0-I1);
            scale = max(abs(I0), abs(I1));
            if (!isnan(I1) && absolute_error_estimate <= rel_err_goal_*scale) {
                starting_level_ = std::max(long(i) - 1, long(0));
                return {I0/omega, absolute_error_estimate/scale};
            }
            I1 = I0;
        } while(++i < big_nodes_.size());

        size_t max_additional_levels = 4;
        while (big_nodes_.size() < requested_levels_ + max_additional_levels) {
            size_t i = big_nodes_.size();
            if (std::is_same<Real, float>::value) {
                add_level<double>(i);
            }
            else if (std::is_same<Real, double>::value) {
                add_level<long double>(i);
            }
            else {
                add_level<Real>(i);
            }
            Real I0 = estimate_integral(f, omega, i);
#ifdef BOOST_MATH_INSTRUMENT_OOURA
            print_ooura_estimate(i, I0, I1, omega);
#endif
            absolute_error_estimate = abs(I0-I1);
            scale = max(abs(I0), abs(I1));
            if (absolute_error_estimate <= rel_err_goal_*scale) {
                starting_level_ = std::max(long(i) - 1, long(0));
                return {I0/omega, absolute_error_estimate/scale};
            }
            I1 = I0;
            ++i;
        }

        starting_level_ = big_nodes_.size() - 2;
        return {I1/omega, absolute_error_estimate/scale};
    }

private:

    template<class PreciseReal>
    void add_level(size_t i) {
        using std::abs;
        size_t current_num_levels = big_nodes_.size();
        Real unit_roundoff = std::numeric_limits<Real>::epsilon()/2;
        PreciseReal h = PreciseReal(1)/PreciseReal(1<<i);

        std::vector<Real> bnode_row;
        std::vector<Real> bweight_row;
        bnode_row.reserve((1<<i)*sizeof(Real));
        bweight_row.reserve((1<<i)*sizeof(Real));

        std::vector<Real> lnode_row;
        std::vector<Real> lweight_row;

        lnode_row.reserve((1<<i)*sizeof(Real));
        lweight_row.reserve((1<<i)*sizeof(Real));

        Real max_weight = 1;
        auto alpha = calculate_ooura_alpha(h);
        long n = 0;
        Real w;
        do {
            auto precise_nw = ooura_cos_node_and_weight(n, h, alpha);
            Real node = static_cast<Real>(precise_nw.first);
            Real weight = static_cast<Real>(precise_nw.second);
            w = weight;
            bnode_row.push_back(node);
            bweight_row.push_back(weight);
            if (abs(weight) > max_weight) {
                max_weight = abs(weight);
            }
            ++n;
            // f(t)->0 as t->infty, which is why the weights are computed up to the unit roundoff.
        } while(abs(w) > unit_roundoff*max_weight);

        bnode_row.shrink_to_fit();
        bweight_row.shrink_to_fit();
        n = -1;
        do {
            auto precise_nw = ooura_cos_node_and_weight(n, h, alpha);
            Real node = static_cast<Real>(precise_nw.first);
            // The function cannot be singular at zero,
            // so zero is not an unreasonable node,
            // unlike in the case of the Fourier sine.
            // Hence only break if the node is negative.
            if (node < 0) {
                break;
            }
            Real weight = static_cast<Real>(precise_nw.second);
            w = weight;
            if (lnode_row.size() > 0) {
                if (lnode_row.back() == node) {
                    // The nodes have fused into each other:
                    break;
                }
            }
            lnode_row.push_back(node);
            lweight_row.push_back(weight);
            if (abs(weight) > max_weight) {
                max_weight = abs(weight);
            }
            --n;
        } while(abs(w) > std::numeric_limits<Real>::min()*max_weight);

        lnode_row.shrink_to_fit();
        lweight_row.shrink_to_fit();

        std::lock_guard<std::mutex> lock(node_weight_mutex_);
        // Another thread might have already finished this calculation and appended it to the nodes/weights:
        if (current_num_levels == big_nodes_.size()) {
            big_nodes_.push_back(bnode_row);
            bweights_.push_back(bweight_row);

            little_nodes_.push_back(lnode_row);
            lweights_.push_back(lweight_row);
        }
    }

    template<class F>
    Real estimate_integral(F const & f, Real omega, size_t i) {
        Real I0 = 0;
        auto const & b_nodes = big_nodes_[i];
        auto const & b_weights = bweights_[i];
        Real inv_omega = 1/omega;
        for(size_t j = 0 ; j < b_nodes.size(); ++j) {
            I0 += f(b_nodes[j]*inv_omega)*b_weights[j];
        }

        auto const & l_nodes = little_nodes_[i];
        auto const & l_weights = lweights_[i];
        for (size_t j = 0; j < l_nodes.size(); ++j) {
            I0 += f(l_nodes[j]*inv_omega)*l_weights[j];
        }
        return I0;
    }

    std::mutex node_weight_mutex_;
    std::vector<std::vector<Real>> big_nodes_;
    std::vector<std::vector<Real>> bweights_;

    std::vector<std::vector<Real>> little_nodes_;
    std::vector<std::vector<Real>> lweights_;
    Real rel_err_goal_;
    std::atomic<long> starting_level_;
    size_t requested_levels_;
};

}}}}
#endif
```

Graph of function, continuous projection

$X$ and $Y$ are topological spaces, and $f \colon X \rightarrow Y$ is a map (we do not assume that $f$ is continuous). Consider $A = \{(x, f(x)) \in X \times Y \mid x \in X\}$. Is $\pi \colon A \rightarrow X$, $(x, f(x)) \mapsto x$, a homeomorphism? If not, is it enough to assume that $A$ is a closed subspace of $X \times Y$? If $X$ and $Y$ are metrizable spaces, how can one prove that $\pi$ is a homeomorphism using sequential continuity? Suppose that by some miracle $\pi$ is a homeomorphism but $f$ is not continuous; is such a phenomenon possible?
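One observation that may help with the last subquestion (a sketch, not a full answer): $\pi$ is always a continuous bijection, and its inverse is explicit,

$$\pi^{-1} \colon X \to A, \qquad \pi^{-1}(x) = (x, f(x)),$$

so $f = \operatorname{pr}_Y \circ \pi^{-1}$, where $\operatorname{pr}_Y \colon X \times Y \to Y$ is the (continuous) projection. Hence if $\pi$ is a homeomorphism, then $f$ is continuous; conversely, if $f$ is continuous, then $\pi^{-1} = (\operatorname{id}_X, f)$ is continuous. So the “miracle” cannot occur, and the interesting part is the closed-graph subquestion.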

How to show the following function is continuous?

Let $\mathcal{F} \colon \mathbb{R}^n \rightarrow \mathbb{R}^n$ be a smooth function, $I$ a closed interval in $\mathbb{R}$, $B_0(z)$ the open ball of radius $z$ around $0 \in \mathbb{R}^n$, and $D$ some closed subset of $\mathbb{R}^n$.
Let $$T(z) = \min_{x \in I} \operatorname{dist}\left(x\,\mathcal{F}(D),\; x\,\mathcal{F}(\partial B_0(z))\right).$$ I would like to show that this function is continuous. Any hints or comments would be appreciated. Thank you.