Defining the standard model of PA so that a space alien could understand

First, some context. In one of the comments to an answer to the recent question Why not adopt the constructibility axiom V=L? I was directed to some papers of Nik Weaver at this link, on conceptualism. Many of the ideas in those papers appeal to me, especially the idea (put in my own words, but hopefully accurate) that the power set of the natural numbers is a work in progress and not a completed infinity like $ \mathbb{N}$ .

In some of those papers the idea of a supertask is used to argue for the existence of the completed natural numbers. One could think of performing a supertask as building a machine that does an infinite computation in a finite amount of time and space, say by doing the $ n$ th step, and then building a machine of half the size that will work twice as fast to do the $ (n+1)$ th step and also recurse. (We will suppose that the concept of a supertask machine is not unreasonable, although I think this point can definitely be argued.)

The way I’m picturing such a machine is that it would be a $ \Sigma_1$ oracle, able to answer certain questions about the natural numbers. I suppose we would also have machines that do “super-supertasks”, and so forth, yielding higher order oracles.

To help motivate my question, suppose that beings from outer space came to earth and taught us how to build such machines. I suppose that some of us would start checking the validity of our work as it appears in the literature. Others would turn to the big questions: P vs. NP, RH, Goldbach, twin primes. With sufficient iterations of “super” we could even use the machines to start writing our proofs for us. Some would stop bothering.

Others would want to do quality control to check that the machines were working as intended. Suppose that the machine came back with: “Con(PA) is false.” We would go to our space alien friends and say, “Something is wrong. The machines say that PA is not consistent.” The aliens respond, “They are only saying that Con(PA) is false.”

We start experimenting and discover that the machines also tell us that the shortest proof that “Con(PA) is false” is larger than BB(1000). It is larger than BB(BB(BB(1000))), and so forth. Thus, there would be no hope that we could ever verify by hand (or even realize in our own universe with atoms) a proof that $ 0=1$ .

One possibility would be that the machines were not working as intended. Another possibility, that we could simply never rule out (but could perhaps verify to our satisfaction if we had access to many more atoms), is that these machines were giving evidence that PA is inconsistent. But a third, important possibility would be that they were doing supertasks on a nonstandard model of PA. We would then have the option of defining natural numbers as those things “counted” by these supertask machines. And indeed, suppose our alien friends did just that–their natural numbers were those expressed by the supertask machines. From our point of view, with the standard model in mind, we might say that there were these “extra” natural numbers that the machines had to pass through in order to finish their computations–something vaguely similar to those extra compact dimensions that many versions of string theory posit. But from the aliens’ perspective, these extra numbers were not extra–they were just as actual to reality as the (very) small numbers we encounter in everyday life.

So, here (finally!) come my questions.

Question 1: How would we communicate to these aliens what we mean, precisely, by “the standard model”?

The one way I know to define the standard model is via second order quantification over subsets. But we know that the axiom of the power set leads to all sorts of different models for set theory. Does this fact affect the claim that the standard model is “unique”? More to the point:

Question 2: To assert the existence of a “standard model” we have to go well beyond assuming PA (and Con(PA)). Is that extra part really expressible?
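For concreteness, the "extra part" I have in mind is the full second-order induction axiom, in which $X$ ranges over all subsets of the domain:

$$ \forall X\,\Bigl(\bigl(0 \in X \;\wedge\; \forall n\,(n \in X \rightarrow S(n) \in X)\bigr) \rightarrow \forall n\,(n \in X)\Bigr). $$

By Dedekind's categoricity theorem this pins the model down up to isomorphism, but only relative to a prior notion of "all subsets," which is exactly where the worry about the power set enters.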

Defining a function through an ODE containing unspecified operators

I want to do some algebra using a function only defined through a DE containing unspecified operators. The DE is $$ \partial_z u(z) = \left[\hat{D}+\hat{N}(z,u)\right] u(z). $$ Here $u$ lives in some function space (it is bounded, integrable, continuously differentiable, …) and $\hat{D},\hat{N}$ are bounded operators on said function space that don't necessarily commute; only $\hat{D}$ is linear. How do I define $u$ in Mathematica?

My reason for asking this question: I derived that, for $u_I(z):=e^{-(z-z')\hat{D}}\cdot u(z)$, \begin{align} \partial_z u_I(z) &= \hat{N}_I(z,u_I)\cdot u_I(z),\\ \hat{N}_I(z,u_I) &:= e^{-(z-z')\hat{D}}\cdot \hat{N}(z,u_I)\cdot e^{(z-z')\hat{D}}. \end{align} I was wondering how to do such derivations with Mathematica.
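For reference, the hand computation is just the usual interaction-picture step and uses only the fact that $\hat{D}$ commutes with its own exponential (arguments of $\hat{N}$ suppressed):

$$ \partial_z u_I = -\hat{D}\,u_I + e^{-(z-z')\hat{D}}\bigl[\hat{D}+\hat{N}\bigr]u(z) = e^{-(z-z')\hat{D}}\,\hat{N}\,e^{(z-z')\hat{D}}\,u_I, $$

since the $\hat{D}$ terms cancel after commuting $\hat{D}$ past $e^{-(z-z')\hat{D}}$. This is the kind of symbolic manipulation I would like Mathematica to handle.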

Need help with defining a subsequence

How would you state the definition of a subsequence of $x_n$ if $x_n$ is an integer? Theorem: Let $X=(x_n)=c$ be a constant sequence of real numbers; then its subsequences satisfy $x_{n_1} = \dotsb = x_{n_k}$.

Proof by contradiction. By definition, a subsequence of $x_n$ is determined by indices $n_1 < \dotsb < n_k$ (referring to Bartle, Intro to Analysis, 3rd ed., p. 75).

Since our sequence is constantly some integer $c$, we would get $c < c < \dotsb$, which is ridiculous. Thus all subsequences must be equal.
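For reference, the definition I am referring to is: a subsequence of $(x_n)$ is a sequence of the form

$$ (x_{n_k})_{k \in \mathbb{N}}, \qquad n_1 < n_2 < n_3 < \dotsb, $$

where the strict inequalities apply to the indices $n_k$, not to the values $x_{n_k}$.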

The correct determinant exponent of the weight $k$-operator for defining Hecke operators/adelizing modular forms

For $g \in \operatorname{SL}_2(\mathbb R)$, $\mathbb H$ the upper half plane, and $k \geq 1$ an integer, the weight-$k$ operator on functions $f: \mathbb H \rightarrow \mathbb C$ is defined by

$$ f[g](z) = f(g.z)\, j(g,z)^{-k} $$

where $j(g,z) = cz+d$ and $g = \begin{pmatrix} a & b \\ c & d\end{pmatrix}$.

In order to define Hecke operators, or to adelize modular forms, or to identify modular forms as functions on $\operatorname{GL}_2^+(\mathbb R)$, it is necessary to extend this definition to $g \in \operatorname{GL}_2^+(\mathbb R)$. In A First Course in Modular Forms, in Chapter 5.1, Diamond and Shurman set

$$ f[g](z) = f(g.z)\,j(g,z)^{-k} \det(g)^{k-1} $$

In Automorphic Forms and Representations, in Chapter 1.4, Bump sets

$$ f[g](z) = f(g.z)\,j(g,z)^{-k} \det(g)^{k/2} $$

Which exponent of the determinant is better to use, and why? If we adelize a Hecke eigenform for $ \operatorname{SL}_2(\mathbb Z)$ and look at the corresponding automorphic representation $ \pi = \otimes_p \pi_p$ , which normalization is better to define Hecke operators with, if we want the classical Hecke operator $ T_p$ to coincide naturally with an action of the spherical Hecke algebra $ \mathscr H(\operatorname{GL}_2(\mathbb Q), \operatorname{GL}_2(\mathbb Z_p))$ on the local component $ \pi_p$ ?
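One concrete difference between the two conventions, in case it helps frame an answer: with Bump's exponent the positive scalar matrices act trivially, while with Diamond and Shurman's they scale $f$ (take $g = aI$ with $a > 0$, so $g.z = z$, $j(g,z) = a$, $\det g = a^2$):

$$ f(z)\,a^{-k}(a^2)^{k/2} = f(z), \qquad\text{versus}\qquad f(z)\,a^{-k}(a^2)^{k-1} = a^{k-2}\,f(z). $$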

Recall that to adelize a modular form $ f$ of $ \operatorname{SL}_2(\mathbb Z)$ of some given weight, we would first identify $ f$ with a function $ \phi$ on $ \operatorname{GL}_2^+(\mathbb R)$ by setting

$$ \phi(g) = f[g](i) $$

and then we would define an automorphic form $ \varphi$ on $ \operatorname{GL}_2(\mathbb Q) \backslash \operatorname{GL}_2(\mathbb A)$ by using the decomposition $ \operatorname{GL}_2(\mathbb A) = \operatorname{GL}_2(\mathbb Q) \operatorname{GL}_2^+(\mathbb R)K$ for $ K$ a suitable compact subgroup, writing $ g = \alpha g_{\infty}k$ , and setting $ \varphi(g) = \phi(g_{\infty})$ .

Defining Measure 0 Sets with Open or Closed Rectangles

I have been reading through Calculus on Manifolds by Michael Spivak, and I am not understanding what he states after defining measure 0 sets. On page 50, he states

“A subset $ A$ of $ \mathbb{R}^n$ has (n-dimensional) measure 0 if for every $ \varepsilon > 0$ there is a cover $ \{U_1, U_2, U_3, \dotsc\}$ of $ A$ by closed rectangles such that $ \sum_{i = 1}^\infty v(U_i) < \varepsilon$ .”

He follows this by stating

“The reader may verify that open rectangles may be used instead of closed rectangles in the definition of measure 0.”

I have been trying to prove the equivalence between the definition of measure zero with open rectangles and closed rectangles. I have been able to prove that if we can do this with open rectangles, then we can do it with closed rectangles. However, I have not been able to prove the other direction. I have written the question as the following statement:

Suppose that for $A \subset \mathbb{R}^n$ and every $\varepsilon > 0$, there is a cover $\{F_1, F_2, F_3, \dotsc\}$ of $A$ by closed rectangles such that $\sum_{i = 1}^\infty v(F_i) < \varepsilon$. Then, for any $\varepsilon' > 0$, there is a cover $\{U_1, U_2, U_3, \dotsc\}$ of $A$ by open rectangles such that $\sum_{i = 1}^\infty v(U_i) < \varepsilon'$.
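For reference, the route I have been attempting is the standard dilation argument: first choose closed rectangles $F_i$ covering $A$ with $\sum_i v(F_i) < \varepsilon'/2$, then pick open rectangles $U_i \supset F_i$ with

$$ v(U_i) < v(F_i) + \frac{\varepsilon'}{2^{i+1}}, \qquad\text{so that}\qquad \sum_{i=1}^{\infty} v(U_i) < \frac{\varepsilon'}{2} + \sum_{i=1}^{\infty}\frac{\varepsilon'}{2^{i+1}} = \varepsilon'. $$

The step I am unsure how to justify is producing the slightly larger open rectangle $U_i$.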

Spivak Page 50

I included the Lebesgue-measure tag, since I understand Spivak's definition of the volume of a rectangle to be connected to it, but if this is incorrect, feel free to remove that tag.

How does explicitly defining proxy workers relate to MaxRequestWorkers in Apache?

I have the exact setup and question which was asked and answered here:

Apache mod_proxy_fcgi: One proxy worker per vhost?

However, I do not fully understand the answer, which suggests that each explicitly defined proxy worker is an “mpm worker”.

How do the two built-in forward/reverse proxy workers and the explicitly defined proxy workers relate to the event MPM's worker configuration options?

Should each vhost define its own proxy worker by using a unique name?
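For context, the sort of explicit per-vhost worker definition I have in mind looks roughly like this (the socket path and the fcgi://site-a worker name are made up for illustration, not my real config):

```
<VirtualHost *:80>
    ServerName site-a.example
    # Defining a uniquely named worker gives this vhost its own connection pool
    <Proxy "unix:/run/php/site-a.sock|fcgi://site-a" enablereuse=on max=10>
    </Proxy>
    # PHP requests are handed to that worker
    <FilesMatch "\.php$">
        SetHandler "proxy:unix:/run/php/site-a.sock|fcgi://site-a"
    </FilesMatch>
</VirtualHost>
```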

Defining a class that behaves exactly like an `int`

Am I missing anything?
Or did I go about this the wrong way?
Is there anything I could improve on?

Are there any tricks that I could learn from this?
How about style; does the style look OK?

```
#include <iostream> // std::cout
#include <utility>  // std::move

class jd_int {
public:
    jd_int() = default;

    jd_int(int i)             : _i{i}                   { }
    jd_int(const jd_int& jdi) : _i{jdi._i}              { }
    jd_int(jd_int&& jdi)      : _i{std::move(jdi._i)}   { }

    jd_int operator= (int i)             { _i = i; return *this;                 }
    jd_int operator= (double d)          { _i = d; return *this;                 }
    jd_int operator= (const jd_int& jdi) { _i = jdi._i; return *this;            }
    jd_int operator= (jd_int&& jdi)      { _i = std::move(jdi._i); return *this; }

    ~jd_int() = default;

    operator bool()   { return !!_i;                    }
    operator int()    { return static_cast<int>(_i);    }
    operator double() { return static_cast<double>(_i); }

    jd_int operator+=(jd_int jdi) { return _i += jdi._i; }
    jd_int operator+ (jd_int jdi) { return _i +  jdi._i; }

    jd_int operator-=(jd_int jdi) { return _i -= jdi._i; }
    jd_int operator- (jd_int jdi) { return _i -  jdi._i; }

    jd_int operator*=(jd_int jdi) { return _i *= jdi._i; }
    jd_int operator* (jd_int jdi) { return _i *  jdi._i; }

    jd_int operator/=(jd_int jdi) { return _i /= jdi._i; }
    jd_int operator/ (jd_int jdi) { return _i /  jdi._i; }

    jd_int operator%=(jd_int jdi) { return _i %= jdi._i; }
    jd_int operator% (jd_int jdi) { return _i %  jdi._i; }

    jd_int operator++()    { return ++_i;                          }
    jd_int operator++(int) { jd_int tmp = *this; ++_i; return tmp; }

    jd_int operator--()    { return --_i;                          }
    jd_int operator--(int) { jd_int tmp = *this; --_i; return tmp; }

    friend bool operator< (jd_int lhs, jd_int rhs);
    friend bool operator> (jd_int lhs, jd_int rhs);
    friend bool operator<=(jd_int lhs, jd_int rhs);
    friend bool operator>=(jd_int lhs, jd_int rhs);
    friend bool operator==(jd_int lhs, jd_int rhs);
    friend bool operator!=(jd_int lhs, jd_int rhs);

private:
    int _i;

    friend std::ostream& operator<<(std::ostream& os, const jd_int jdi);
    friend std::istream& operator>>(std::istream& is, jd_int jdi);
};

bool operator< (jd_int lhs, jd_int rhs) { return (lhs._i <  rhs._i); }
bool operator> (jd_int lhs, jd_int rhs) { return (lhs._i >  rhs._i); }
bool operator<=(jd_int lhs, jd_int rhs) { return (lhs._i <= rhs._i); }
bool operator>=(jd_int lhs, jd_int rhs) { return (lhs._i >= rhs._i); }
bool operator==(jd_int lhs, jd_int rhs) { return (lhs._i == rhs._i); }
bool operator!=(jd_int lhs, jd_int rhs) { return (lhs._i != rhs._i); }

std::ostream& operator<<(std::ostream& os, const jd_int jdi)
{
    os << jdi._i;
    return os;
}

std::istream& operator>>(std::istream& is, jd_int jdi)
{
    is >> jdi._i;
    return is;
}
```

Defining colour themes of gradient scales: should I use hex or RGBA?

I’m relatively new to CSS. I’m trying to figure out why my UX team sometimes defines colours with hex codes and sometimes uses RGBA.

Context: We build highly technical management web apps. All of our web apps have a white background and don't tend to layer elements (e.g., no marketing images as backdrops). Some of the designers feel RGBA helps control colour contrast ratios. Some designers just prefer using RGBA over hex. Some designers use hex. No one has given me a clear reason for their choice. I'd like to know the pros and cons of each technique and in which situations one method is better than the other, because I'm building a colour theming solution for our core framework. We want gradient scales for each of our primary and secondary colours. There's no current requirement for transparency, but I suppose one day there could be.

I came across a related UX SE post: Why isn't primary text full opacity? Answers talk about RGBA helping to enforce standard use of colour. That is, if you start with an RGB colour and use the alpha value to adjust light/dark, you could ensure a consistent colour gradient scale. (Note: That post has a good image showing a colour scale using hex and then the equivalent alpha value beside it: https://i.stack.imgur.com/MWust.png)

But then what happens when you have HTML elements overlapping and you don't want them to appear partially transparent and yet want to use the appropriate colour? Do you use an equivalent RGB with alpha 1 or a hex code?

As for the contrast ratio theory, here’s what one UX designer told me: RGBA color always maintains the same level of contrast from whatever it’s placed on. If you put an #AAA body text on an #FFF background, versus if you put it on a #EEE background, the #AAA text will look lighter on the #EEE background. But if you put rgba(0,0,0,0.33) on an #FFF vs #EEE background, the text will always have a 33% darker contrast on both. Is that true? Using a contrast ratio calculator (https://contrast-ratio.com/) rgba(0,0,0,0.33) on #FFF has a 2.3 ratio whereas rgba(0,0,0,0.33) on #EEE has a 2.26 ratio. Close, but not identical. #DDD goes down to 2.23.
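To make this concrete, here is a small sketch of the kind of check I am doing (the helper functions and names are my own, assuming the WCAG 2.x relative-luminance and contrast-ratio formulas together with simple source-over alpha compositing):

```typescript
// Compare the contrast of rgba(0,0,0,0.33) text composited over #FFF vs #EEE.
type RGB = [number, number, number]; // channels in 0..255

// Source-over compositing of a semi-transparent foreground onto an opaque background.
function composite(fg: RGB, alpha: number, bg: RGB): RGB {
  return fg.map((c, i) => Math.round(alpha * c + (1 - alpha) * bg[i])) as RGB;
}

// WCAG 2.x relative luminance of an sRGB colour.
function luminance([r, g, b]: RGB): number {
  const lin = [r, g, b].map((c) => {
    const s = c / 255;
    return s <= 0.03928 ? s / 12.92 : Math.pow((s + 0.055) / 1.055, 2.4);
  });
  return 0.2126 * lin[0] + 0.7152 * lin[1] + 0.0722 * lin[2];
}

// WCAG contrast ratio between two opaque colours.
function contrast(a: RGB, b: RGB): number {
  const [hi, lo] = [luminance(a), luminance(b)].sort((x, y) => y - x);
  return (hi + 0.05) / (lo + 0.05);
}

const white: RGB = [255, 255, 255]; // #FFF
const grey: RGB = [238, 238, 238];  // #EEE
const black: RGB = [0, 0, 0];

// rgba(0,0,0,0.33) composited onto each background, then contrast against that background.
console.log(contrast(composite(black, 0.33, white), white).toFixed(2)); // ≈ 2.3 on #FFF
console.log(contrast(composite(black, 0.33, grey), grey).toFixed(2));   // slightly lower on #EEE
```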

Material UI's colour palettes seem to use hex codes (see https://material.io/design/color/#color-theme-creation ), but I've seen other writing suggesting that Material UI sometimes uses RGBA. Not that Material UI is always right. 🙂

So again, I’m looking for the pros and cons of hex values vs. RGBA values and when it’s best to use which.

What's the difference between methods for defining a matrix function (Jordan canonical form, Hermite interpolation and Cauchy integral)?

There are many equivalent ways of defining $f(A)$. We focus on the Jordan canonical form, Hermite interpolation and the Cauchy integral.

What's the difference between these methods for defining a matrix function in applications? Is any of them superior to the others? Can you suggest a source? Thanks.
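For reference, the three definitions I mean are the following, for an $n \times n$ matrix $A$ with Jordan form $A = Z \operatorname{diag}(J_1,\dots,J_p) Z^{-1}$ and, for the integral, $f$ analytic on and inside a contour $\Gamma$ enclosing the spectrum of $A$:

$$ f(A) = Z \operatorname{diag}\bigl(f(J_1),\dots,f(J_p)\bigr) Z^{-1}, \qquad f(A) = p(A), \qquad f(A) = \frac{1}{2\pi i}\oint_\Gamma f(z)\,(zI - A)^{-1}\,dz, $$

where $f(J_k)$ is the upper triangular matrix built from $f$ and its derivatives at the eigenvalue of the block $J_k$, and $p$ is the Hermite interpolating polynomial matching $f$ and its derivatives at each eigenvalue up to the size of the largest corresponding Jordan block.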