Topological entropy of logistic map $f(x) = \mu x (1-x)$, $f:[0,1] \to [0,1]$ for $\mu \in (1,3)$

As stated in the title, I want to find the topological entropy of the logistic map on the interval $[0,1]$ for a “nice” range of the parameter $\mu$, namely $\mu \in (1,3)$. I think the fact that $f:[0,1] \to [0,1]$ is an important additional condition here, one which simplifies things.

I’ve tried something, which I’m not sure is the right way to approach the problem, but I’ll outline it here anyway.

I know a theorem that states $h_{top}(f) = h_{top}(f|_{NW(f)})$, where $NW(f)$ is the set of non-wandering points of $f$, so I wanted to find that set. By drawing a lot of little pictures, I concluded that for $x \in (0,1)$ we should have $\lim_{n\to \infty} f^{n}(x) = 1-\frac{1}{\mu}$, which is the other fixed point of $f$. Also, the convergence seems fairly straightforward (i.e. the iterates get closer with every step), so I got it into my head that we should have $NW(f) = \{0, 1- \frac{1}{\mu}\}$.
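For what it's worth, the fixed-point computation at least supports this picture: solving $f(x) = x$ gives exactly $x = 0$ and $x = 1 - \frac{1}{\mu}$, and

$$f'(x) = \mu(1 - 2x), \qquad f'(0) = \mu > 1, \qquad f'\left(1 - \tfrac{1}{\mu}\right) = 2 - \mu,$$

so for $\mu \in (1,3)$ we have $|2 - \mu| < 1$, i.e. $0$ is a repelling fixed point and $1 - \frac{1}{\mu}$ is an attracting one. Of course, this only gives convergence locally near $1 - \frac{1}{\mu}$, not the global claim for all $x \in (0,1)$.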

To confirm this, Wikipedia says:

By varying the parameter r, the following behavior is observed:

With r between 0 and 1, the population will eventually die, independent of the initial population.

With r between 1 and 2, the population will quickly approach the value (r − 1)/r, independent of the initial population.

With r between 2 and 3, the population will also eventually approach the same value (r − 1)/r, but first will fluctuate around that value for some time. The rate of convergence is linear, except for r = 3, when it is dramatically slow, less than linear (see Bifurcation memory).

However, I haven’t been able to find a proof of these claims. Can anyone show me how to prove this, or give me a reference where the proof is clearly written out?

Also, if there is an easier way of finding the topological entropy, I'd be very happy to hear it. (Again, I emphasize that $f:[0,1] \to [0,1]$; I've lost a lot of time reading about Mandelbrot sets after conjugating $f$ to $g(z) = z^2 + c$ and looking at the known entropy formulas for $g$, which, however, are for the domain $\mathbb{C}$ or some variant of it.)
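(One small thing I did notice: if it is really true that $NW(f) = \{0, 1 - \frac{1}{\mu}\}$, then the theorem above would immediately give

$$h_{top}(f) = h_{top}\left(f|_{NW(f)}\right) = 0,$$

since any map of a finite set has zero topological entropy. So the whole question seems to reduce to justifying the claim about $NW(f)$.)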

Prove that $\mu$ is a measure, where $\mu(E) = 0$ if $E$ is countable and $\mu(E) = 1$ otherwise


Let $\Omega$ be an uncountable set and $\Sigma = \{S \subset \Omega \mid S\text{ or }S^{c}\text{ is countable}\}$. Define $\mu: \Sigma \to \{0,1\}$ by $\mu(E) = 0$ if $E$ is countable and $\mu(E) = 1$ otherwise. Prove that $\mu$ is a measure.
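(To make the setup concrete: with $\Omega = \mathbb{R}$, for example, $\mu(\mathbb{Q}) = 0$ because $\mathbb{Q}$ is countable, while $\mu(\mathbb{R} \setminus \mathbb{Q}) = 1$ because its complement $\mathbb{Q}$ is countable.)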

I can prove that $\mu$ is a measure if, in the disjoint sequence $\{E_{i}\}_{i \in \mathbb{N}}$, each $E_{i}$ is countable. But if at least two of the $E_{i}$ are uncountable, I only get $$\mu\left(\bigcup_{i}E_{i}\right) \leq \sum_{i}\mu(E_{i}).$$
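One observation that seems relevant, although I haven't managed to finish the argument with it: if $E_{j}$ and $E_{k}$ are disjoint and $E_{j}$ is uncountable, then $E_{j}^{c}$ is countable (since $E_{j} \in \Sigma$), and $E_{k} \subseteq E_{j}^{c}$, so $E_{k}$ is countable. So at most one set in a disjoint sequence can be uncountable, and the case that troubles me should perhaps be vacuous.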

Can someone help me?

Prove that $\mu \left(\left\{t\in X\,;\;\sum_{i=1}^d|\phi_i(t)|^2>r \right\}\right)=0$

Let $(X,\mu)$ be a measure space and $\phi=(\phi_1,\cdots,\phi_d)\in L^{\infty}(X)^d$.

Let $$r=\max\left\{\sum_{i=1}^d|z_i|^2 \;;\; (z_1,\cdots,z_d)\in \mathcal{C}(\phi)\right\},$$ where $\mathcal{C}(\phi)$ consists of all $z = (z_1,\cdots,z_d)\in \mathbb{C}^d$ such that for every $\varepsilon>0$ $$\mu \left(\left\{t\in X\,;\;\sum_{i=1}^d|\phi_i(t)-z_i|<\varepsilon \right\}\right)>0.$$

Why is $$\sum_{i=1}^d |\phi_i(t)|^2\le r$$ for $\mu$-almost every $t\in X$?
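(In case it helps to name things: I believe $\mathcal{C}(\phi)$ is what is sometimes called the joint essential range of $\phi$. Since $\phi$ is essentially bounded, $\mathcal{C}(\phi)$ should be a compact subset of $\mathbb{C}^d$, which is presumably why the maximum defining $r$ is attained.)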

Showing a random variable converges to $\mu$ in probability


Let $X_{1}, \ldots, X_{n}$ be a sequence of i.i.d. random variables with $\mathbb{E}[X_{1}] = \mu$ and $\text{Var}(X_{1}) = \sigma^{2}$. Let

$$Y_{n} = \frac{2}{n(n + 1)}\sum_{i = 1}^{n} iX_{i}.$$

Use Chebyshev's Inequality to show that $Y_{n}$ converges to $\mu$ in probability as $n \to \infty$. That is, for every $\epsilon > 0$, show that $\lim_{n\to\infty} P(|Y_{n} - \mu| > \epsilon) = 0$.

So, I wasn’t too sure about how to approach this problem. First, I computed $ \mathbb{E}[Y_{n}]$ as follows:

$$\mathbb{E}\left[\frac{2}{n(n + 1)}\sum_{i = 1}^{n} iX_{i}\right] = \frac{2}{n(n + 1)}\sum_{i = 1}^{n} \mathbb{E}[iX_{i}]$$

$$= \frac{2}{n(n + 1)} \left(1(\mu) + 2(\mu) + 3(\mu) + \cdots + n(\mu) \right)$$

$$= \frac{2}{n(n + 1)} \cdot \mu\left(\frac{n(n + 1)}{2}\right)$$

$$= \mu.$$

Then, I tried to compute the variance. First, the second moment:

$$\mathbb{E}\left[Y_{n}^{2}\right] = \frac{4}{n^{2}(n + 1)^{2}} \sum_{i = 1}^{n} \mathbb{E}[i^{2} X_{i}^{2}] = \frac{4}{n^{2}(n + 1)^{2}} \cdot \left(1^2 \mu^2 + 2^2 \mu^2 + \cdots + n^2 \mu^2 \right)$$

$$= \mu^{2},$$

which means that $\text{Var}(Y_{n}) = 0.$

I don’t know if I actually computed the variance right. I also don’t know what to do next. Any help would be appreciated. In particular, I think that Chebyshev’s Inequality states that

$$P(|Y_{n} - \mu| \geq \epsilon) \leq \frac{\text{Var}(Y_{n})}{\epsilon^{2}} = 0,$$

but since a probability cannot be negative, this would force $P(|Y_{n} - \mu| \geq \epsilon) = 0$? I don't really know.
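Rethinking it, I suspect my second-moment computation is wrong because $Y_{n}^{2}$ also has cross terms $ijX_{i}X_{j}$, so $\mathbb{E}[Y_{n}^{2}]$ is not just $\sum_{i} \mathbb{E}[i^{2} X_{i}^{2}]$ (and $\mathbb{E}[X_{i}^{2}] = \sigma^{2} + \mu^{2}$, not $\mu^{2}$). Using independence instead, I would get

$$\text{Var}(Y_{n}) = \frac{4}{n^{2}(n + 1)^{2}} \sum_{i = 1}^{n} i^{2}\, \text{Var}(X_{i}) = \frac{4\sigma^{2}}{n^{2}(n + 1)^{2}} \cdot \frac{n(n + 1)(2n + 1)}{6} = \frac{2(2n + 1)\sigma^{2}}{3n(n + 1)} \to 0,$$

and then Chebyshev's Inequality would give $P(|Y_{n} - \mu| \geq \epsilon) \leq \text{Var}(Y_{n})/\epsilon^{2} \to 0$, as required. But I'm not certain this is the intended route.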

$\sigma$-finite measure $\mu$ so that $L^p(\mu) \subsetneq L^q(\mu)$ (proper subset)

I'm looking for a measure space with a $\sigma$-finite measure $\mu$ such that, for $1 \le p < q \le \infty$,

$$L^p(\mu) \subsetneq L^q(\mu).$$

I tried the following:

Let $1 \le p < q \le \infty$ and let $\lambda$ be the Lebesgue measure on $(1,\infty)$, which is $\sigma$-finite.

$x^\alpha$ is integrable on $(1,\infty) \Leftrightarrow \alpha < -1$.

Choose $b$ so that $1/q < b < 1/p$, i.e. $-bq < -1$ and $-bp > -1$.

Then $x^{-b}\chi_{(1,\infty)} \in L^q$ but $\notin L^p$: $x^{-bq}$ is integrable because the exponent satisfies $-bq < -1$, while $x^{-bp}$ isn't integrable because $-bp > -1$. So I have found a function that is in $L^q$ but not in $L^p$. But that doesn't really show that $L^p \subsetneq L^q$, i.e. that $L^p$ is a proper subset of $L^q$, right? (Because I don't know whether every element of $L^p$ is also an element of $L^q$.)
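(One candidate that occurred to me afterwards: the counting measure on $\mathbb{N}$, which is $\sigma$-finite. There $L^p(\mu) = \ell^p$, and for $p < q$ the inclusion $\ell^p \subseteq \ell^q$ does hold: if $\sum_n |x_n|^p < \infty$, then $|x_n| \le 1$ for all large $n$, so $|x_n|^q \le |x_n|^p$ for those $n$ (and $x \in \ell^\infty$ is clear). The inclusion is proper since, e.g., $x_n = n^{-1/p}$ lies in $\ell^q$ but not in $\ell^p$. I would still like to know whether my attempt with the Lebesgue measure can be salvaged.)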

Thanks in advance!