## Why does the unbounded $\mu$ operator preserve effective computability?

Let $$f$$ be a partial function from $$\mathbb{N}^{p+1}$$ to $$\mathbb{N}$$. The partial function $$(x_1,\dots,x_p)\mapsto \mu y[f(x_1,\dots,x_p,y)=0]$$ is defined in the following way: if there exists at least one integer $$z$$ such that $$f(x_1,\dots, x_p,z)=0$$, and if for every $$z'<z$$, $$f(x_1,\dots, x_p,z')$$ is defined, then $$\mu y[f(x_1,\dots, x_p,y)=0]$$ is equal to the least such $$z$$. In the opposite case, $$\mu y[f(x_1,\dots, x_p,y)=0]$$ is not defined.
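Intuitively, $$\mu$$ is just unbounded search: test $$y = 0, 1, 2, \dots$$ in order. A minimal Python sketch of this idea (my own illustration, assuming $$f$$ is given as a computable function):

```python
def mu(f, *xs):
    """Unbounded minimization: return the least y with f(*xs, y) == 0.

    If f is computable, this loop is an effective procedure.  It halts
    exactly when such a y exists and f(*xs, z) is defined for all z < y
    (an undefined value shows up here as a call to f that never halts);
    otherwise the search runs forever, matching the partiality of mu.
    """
    y = 0
    while f(*xs, y) != 0:
        y += 1
    return y

# Least y with 9 - y*y == 0, i.e. the square root of a perfect square:
assert mu(lambda x, y: x - y * y, 9) == 3
```

The point is that the only operations used are evaluating $$f$$ and incrementing a counter, both effective; no bound on the search needs to be known in advance.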

I don’t understand why this unbounded $$\mu$$ operator preserves effective computability. In my textbook, and in a note that I found online, this is mentioned as if it were a trivial fact.

I appreciate any help!

## The relation between the number of square-free divisors of $n$ and the Möbius $\mu$-function

I was solving this problem:

But I do not understand why the first equality is correct (I have no problem with the second equality). Could anyone explain it to me?
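Whatever the exact problem was, the relation named in the title is standard: the number of square-free divisors of $$n$$ equals $$\sum_{d\mid n}\mu(d)^2$$, and both equal $$2^{\omega(n)}$$, where $$\omega(n)$$ is the number of distinct primes dividing $$n$$ (since $$\mu(d)^2 = 1$$ exactly when $$d$$ is square-free). A numerical sanity check of that relation (my own illustration, not the original problem):

```python
def mobius(n):
    """Moebius mu: 0 if n has a squared prime factor, else (-1)**(number of primes)."""
    result, p = 1, 2
    while p * p <= n:
        if n % p == 0:
            n //= p
            if n % p == 0:
                return 0  # squared prime factor
            result = -result
        p += 1
    if n > 1:
        result = -result  # one remaining prime factor
    return result

def omega(n):
    """Number of distinct prime factors of n."""
    count, p = 0, 2
    while p * p <= n:
        if n % p == 0:
            count += 1
            while n % p == 0:
                n //= p
        p += 1
    return count + (1 if n > 1 else 0)

# mu(d)**2 is 1 exactly when d is square-free, so the sum counts the
# square-free divisors of n, and that count is 2**omega(n).
for n in range(1, 500):
    divs = [d for d in range(1, n + 1) if n % d == 0]
    assert sum(mobius(d) ** 2 for d in divs) == 2 ** omega(n)
```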

## Can all regular tree types be expressed as $\mu$ types?

In “Types and Programming Languages”, Pierce gives a translation from recursive types ($$\mu$$ types) to types expressed as regular trees: possibly infinite trees, but with finitely many distinct subtrees.

I’m wondering, is the converse true? Can every regular tree type be expressed using the $$\mu$$ fixpoint notation? It seems obvious that this can be done if you have mutually recursive types: you have a type for each subtree of the regular tree type. But can it be done with singly recursive types?

## Topological entropy of logistic map $f(x) = \mu x (1-x)$, $f:[0,1] \to [0,1]$ for $\mu \in (1,3)$

As stated in the question, I want to find the topological entropy of the logistic map on the interval $$[0,1]$$ for a “nice” range of the parameter $$\mu$$, namely $$\mu \in (1,3)$$. I think the fact that $$f:[0,1] \to [0,1]$$ is a very important additional condition here which simplifies things.

I’ve tried something, which I’m not sure is the right way to approach the problem, but I’ll outline it here anyway.

I know a theorem that states $$h_{top}(f) = h_{top}(f|_{NW(f)})$$, where $$NW(f)$$ is the set of non-wandering points of $$f$$, so I wanted to find that set. By drawing a lot of little pictures, I concluded that for $$x \notin \{0,1\}$$ we should have $$\lim_{n\to \infty} f^{n}(x) = 1-\frac{1}{\mu}$$, which is the other fixed point of $$f$$. Also, the convergence seems fairly straightforward (the iterates get closer with every step), so I came to believe that $$NW(f) = \{0,\ 1- \frac{1}{\mu}\}$$.

To confirm this, Wikipedia says:

> By varying the parameter r, the following behavior is observed:
>
> - With r between 0 and 1, the population will eventually die, independent of the initial population.
> - With r between 1 and 2, the population will quickly approach the value (r − 1)/r, independent of the initial population.
> - With r between 2 and 3, the population will also eventually approach the same value (r − 1)/r, but first will fluctuate around that value for some time. The rate of convergence is linear, except for r = 3, when it is dramatically slow, less than linear (see Bifurcation memory).
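The convergence claimed above can at least be checked numerically (a sanity check, not a proof; the parameter values and iteration counts are my own choices). Note that $$f'(1-\frac{1}{\mu}) = 2-\mu$$, so $$|f'| < 1$$ at that fixed point for $$\mu \in (1,3)$$, which is the standard route to a rigorous local argument:

```python
def f(mu, x):
    """Logistic map on [0, 1]."""
    return mu * x * (1 - x)

for mu in (1.5, 2.0, 2.5, 2.9):
    fixed_point = 1 - 1 / mu
    x = 0.3  # some starting point in (0, 1)
    for _ in range(5000):
        x = f(mu, x)
    # The orbit should have settled onto the attracting fixed point.
    assert abs(x - fixed_point) < 1e-8, (mu, x)
```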

However, I haven’t been able to find a proof of these claims. Can anyone show me how to prove this, or give me a reference where the proof is clearly written out?

Also, if there is an easier way of finding the topological entropy (again, I emphasize that $$f:[0,1] \to [0,1]$$; I’ve lost a lot of time reading about Mandelbrot sets by conjugating $$f$$ to $$g(z) = z^2 + c$$ and looking at formulas for the entropy of $$g$$ which exist, but with domains $$\mathbb{C}$$ or some variant), I’d be very happy to hear it.

## Prove that $\mu$ is a measure, where $\mu(E) = 0$ if $E$ is countable and $\mu(E) = 1$ otherwise

Let $$\Omega$$ be a non-countable set and $$\Sigma = \{S \subset \Omega \mid S\text{ or }S^{c}\text{ is countable}\}$$. Define $$\mu: \Sigma \to \{0,1\}$$ by $$\mu(E) = 0$$ if $$E$$ is countable, otherwise $$\mu(E) = 1$$. Prove that $$\mu$$ is a measure.

I can prove that $$\mu$$ is countably additive when every $$E_{i}$$ in the disjoint sequence $$\{E_{i}\}_{i \in \mathbb{N}}$$ is countable. But if at least two of the $$E_{i}$$ are non-countable, I only get $$\mu\left(\bigcup_{i}E_{i}\right) \leq \sum_{i}\mu(E_{i}).$$
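One observation that may unblock this (my own sketch, using only the definition of $$\Sigma$$ above): in a disjoint sequence of members of $$\Sigma$$, at most one $$E_i$$ can be non-countable.

```latex
% Suppose E_1, E_2 \in \Sigma are disjoint and both non-countable.
% Since E_1 \in \Sigma is non-countable, E_1^c must be countable.
% But E_1 \cap E_2 = \emptyset gives E_2 \subseteq E_1^c,
% so E_2 is countable --- a contradiction.  Hence if some E_j is
% non-countable, every other E_i is countable, and
\mu\Big(\bigcup_{i} E_i\Big) = 1 = \mu(E_j) = \sum_{i} \mu(E_i),
% since the countable E_i contribute 0 to the sum.
```

(The union is non-countable because it contains $$E_j$$, so its measure is indeed $$1$$.)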

Can someone help me?

## Prove that $\mu \left(\left\{t\in X\,;\;\sum_{i=1}^d|\phi_i(t)|^2>r \right\}\right)=0$

Let $$(X,\mu)$$ be a measure space and $$\phi=(\phi_1,\cdots,\phi_d)\in L^{\infty}(X)$$.

Let $$r=\max\left\{\sum_{i=1}^d|z_i|^2;\ (z_1,\dots,z_d)\in \mathcal{C}(\phi)\right\},$$ where $$\mathcal{C}(\phi)$$ consists of all $$z = (z_1,\dots,z_d)\in \mathbb{C}^d$$ such that for every $$\varepsilon>0$$, $$\mu \left(\left\{t\in X\,;\;\sum_{i=1}^d|\phi_i(t)-z_i|<\varepsilon \right\}\right)>0 .$$

Why is $$\sum_{i=1}^d |\phi_i(t)|^2\le r$$ for $$\mu$$-almost every $$t\in X$$?

## Showing a random variable converges to $\mu$ in probability

Let $$X_{1}, \ldots, X_{n}$$ be a sequence of i.i.d. random variables with $$\mathbb{E}[X_{1}] = \mu$$ and $$\text{Var}(X_{1}) = \sigma^{2}$$. Let

$$Y_{n} = \frac{2}{n(n + 1)}\sum_{i = 1}^{n} iX_{i}.$$

Use Chebyshev’s Inequality to show that $$Y_{n}$$ converges to $$\mu$$ in probability as $$n \to \infty$$. That is, for every $$\epsilon > 0$$, show that $$\lim_{n\to\infty} P(|Y_{n} - \mu| > \epsilon) = 0$$.

So, I wasn’t too sure about how to approach this problem. First, I computed $$\mathbb{E}[Y_{n}]$$ as follows:

$$\mathbb{E}\left[\frac{2}{n(n + 1)}\sum_{i = 1}^{n} iX_{i}\right] = \frac{2}{n(n + 1)}\sum_{i = 1}^{n} \mathbb{E}[iX_{i}]$$

$$= \frac{2}{n(n + 1)} \left(1(\mu) + 2(\mu) + 3(\mu) + \cdots + n(\mu) \right)$$

$$= \frac{2}{n(n + 1)} \cdot \mu\left(\frac{n(n + 1)}{2}\right)$$

$$= \mu.$$

Then, I computed the variance. First compute the second moment:

$$\mathbb{E}\left[Y_{n}^{2}\right] = \frac{4}{n^{2}(n + 1)^{2}} \sum_{i = 1}^{n} \mathbb{E}[i^{2} X_{i}^{2}] = \frac{4}{n^{2}(n + 1)^{2}} \cdot \left(1^2 \mu^2 + 2^2 \mu^2 + \cdots + n^2 \mu^2 \right)$$

$$= \mu^{2},$$

which means that $$\text{Var}(Y_{n}) = 0.$$

I don’t know if I actually computed the variance right. I also don’t know what to do next. Any help would be appreciated. In particular, I think that Chebyshev’s Inequality states that

$$P(|Y_{n} - \mu| \geq \epsilon) \leq \frac{\text{Var}(Y_{n})}{\epsilon^{2}} = 0,$$

but since probability cannot be negative, we must have it equal to $$0$$? I don’t really know.
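Not a proof, but a quick empirical check (my own sketch; it assumes normally distributed $$X_i$$ purely so there is something to sample) that $$Y_n$$ concentrates around $$\mu$$:

```python
import random

random.seed(0)

def sample_y(n, mu=1.0, sigma=1.0):
    """One draw of Y_n = 2/(n(n+1)) * sum_{i=1}^n i*X_i, with X_i ~ N(mu, sigma^2)."""
    total = sum(i * random.gauss(mu, sigma) for i in range(1, n + 1))
    return 2 * total / (n * (n + 1))

mu, eps, trials = 1.0, 0.1, 500
fracs = {}
for n in (10, 100, 1000):
    draws = [sample_y(n, mu) for _ in range(trials)]
    # Empirical estimate of P(|Y_n - mu| > eps); it should shrink as n grows.
    fracs[n] = sum(abs(y - mu) > eps for y in draws) / trials

print(fracs)
```

For comparison, independence gives $$\text{Var}(Y_{n}) = \frac{4\sigma^{2}}{n^{2}(n+1)^{2}}\sum_{i=1}^{n} i^{2} = \frac{2(2n+1)\sigma^{2}}{3n(n+1)}$$, which tends to $$0$$; the empirical fractions above track Chebyshev’s bound $$\text{Var}(Y_{n})/\epsilon^{2}$$.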

## $\sigma$-finite measure $\mu$ so that $L^p(\mu) \subsetneq L^q(\mu)$ (proper subset)

I’m looking for a $$\sigma$$-finite measure $$\mu$$ and a measure space so that, for $$1 \le p < q < \infty$$,

$$L^p(\mu) \subsetneq L^q(\mu).$$

I tried the following:

Let $$1 \le p < q < \infty$$ and let $$\lambda$$ be the Lebesgue measure on $$(1,\infty)$$, which is $$\sigma$$-finite.

$$x^\alpha$$ is integrable on $$(1,\infty) \Leftrightarrow \alpha <-1$$.

Choose $$b$$ so that $$1/q < b < 1/p$$.

Then $$x^{-b}\chi_{(1,\infty)} \in L^q$$ but $$\notin L^p$$: $$x^{-bq}$$ is integrable because the exponent satisfies $$-bq<-1$$, while $$x^{-bp}$$ isn’t integrable because $$-bp>-1$$. So I have found a function that is in $$L^q$$ but not in $$L^p$$. But that doesn’t really show that $$L^p \subsetneq L^q$$, meaning $$L^p$$ is a proper subset of $$L^q$$, right? (I don’t know whether every element of $$L^p$$ is also an element of $$L^q$$.)
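To see the exponent bookkeeping concretely, here is my own sketch with the specific choice $$p=1$$, $$q=2$$, $$b=3/4$$ (so that $$1/q < b < 1/p$$): the truncated integral of $$x^{-bq}$$ stabilizes while that of $$x^{-bp}$$ blows up.

```python
def integral_power(exponent, upper):
    """Closed form of the integral of x**exponent over (1, upper), exponent != -1."""
    return (upper ** (exponent + 1) - 1) / (exponent + 1)

p, q, b = 1, 2, 0.75  # 1/q < b < 1/p
for upper in (1e2, 1e4, 1e6):
    in_Lq = integral_power(-b * q, upper)   # exponent -1.5 < -1: converges (to 2)
    not_Lp = integral_power(-b * p, upper)  # exponent -0.75 > -1: diverges
    print(upper, round(in_Lq, 4), round(not_Lp, 1))
```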