Prove that $\mu \left(\left\{t\in X\,;\;\sum_{i=1}^d|\phi_i(t)|^2>r \right\}\right)=0$

Let $(X,\mu)$ be a measure space and $\phi=(\phi_1,\cdots,\phi_d)$ with each $\phi_i\in L^{\infty}(X)$.

Let $$r=\max\left\{\sum_{i=1}^d|z_i|^2\,;\ (z_1,\cdots,z_d)\in \mathcal{C}(\phi)\right\},$$ where $\mathcal{C}(\phi)$ consists of all $z = (z_1,\cdots,z_d)\in \mathbb{C}^d$ such that for every $\varepsilon>0$, $$\mu \left(\left\{t\in X\,;\;\sum_{i=1}^d|\phi_i(t)-z_i|<\varepsilon \right\}\right)>0.$$
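(To check that I understand the definition, here is my own toy example: if $X=[0,1]$ with Lebesgue measure, $d=1$ and $\phi_1=\chi_{[0,1/2]}$, then I believe $\mathcal{C}(\phi)=\{0,1\}$, since $\phi_1$ takes each of the values $0$ and $1$ on a set of positive measure and approaches no other value on a set of positive measure, so $r=1$.)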

Why does $$\sum_{i=1}^d |\phi_i(t)|^2\le r$$ hold for $\mu$-almost every $t\in X$?

Showing a sequence of random variables converges to $\mu$ in probability


Let $X_{1}, \ldots, X_{n}$ be a sequence of i.i.d. random variables with $\mathbb{E}[X_{1}] = \mu$ and $\text{Var}(X_{1}) = \sigma^{2}$. Let

$$Y_{n} = \frac{2}{n(n + 1)}\sum_{i = 1}^{n} iX_{i}.$$

Use Chebyshev’s Inequality to show that $Y_{n}$ converges to $\mu$ in probability as $n \to \infty$. That is, for every $\epsilon > 0$, show that $\lim_{n\to\infty} P(|Y_{n} - \mu| > \epsilon) = 0$.

So, I wasn’t sure how to approach this problem. First, I computed $\mathbb{E}[Y_{n}]$ as follows:

$$\begin{aligned}
\mathbb{E}\left[\frac{2}{n(n + 1)}\sum_{i = 1}^{n} iX_{i}\right] &= \frac{2}{n(n + 1)}\sum_{i = 1}^{n} \mathbb{E}[iX_{i}] \\
&= \frac{2}{n(n + 1)} \left(1(\mu) + 2(\mu) + 3(\mu) + \cdots + n(\mu) \right) \\
&= \frac{2}{n(n + 1)} \cdot \mu\left(\frac{n(n + 1)}{2}\right) \\
&= \mu.
\end{aligned}$$
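As a quick sanity check on the last step (my own addition, in plain Python), the identity $\sum_{i=1}^{n} i = \frac{n(n+1)}{2}$ means the coefficients $\frac{2i}{n(n+1)}$ sum to exactly $1$:

```python
n = 100
coef_sum = 2.0 / (n * (n + 1)) * sum(range(1, n + 1))
print(coef_sum)  # 1.0 -- the weights sum to 1, so E[Y_n] = mu by linearity
```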

Then, I computed the variance. First, I computed the second moment:

$$\mathbb{E}\left[Y_{n}^{2}\right] = \frac{4}{n^{2}(n + 1)^{2}} \sum_{i = 1}^{n} \mathbb{E}[i^{2} X_{i}^{2}] = \frac{4}{n^{2}(n + 1)^{2}} \cdot \left(1^2 \mu^2 + 2^2 \mu^2 + \cdots + n^2 \mu^2 \right) = \mu^{2},$$

which means that $\text{Var}(Y_{n}) = 0$.

I don’t know if I actually computed the variance correctly. I also don’t know what to do next. Any help would be appreciated. In particular, I think that Chebyshev’s Inequality states that

$$P(|Y_{n} - \mu| \geq \epsilon) \leq \frac{\text{Var}(Y_{n})}{\epsilon^{2}} = 0,$$

but since a probability cannot be negative, we must have it equal to $0$? I don’t really know.
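Since I’m unsure about the variance, here is a small Monte Carlo sketch one could run to check it (my own addition, assuming numpy; the normal distribution is just a stand-in for any distribution with mean $\mu$ and variance $\sigma^2$):

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_Y(n, mu=2.0, sigma=1.5, reps=20_000):
    """Draw `reps` independent copies of Y_n = 2/(n(n+1)) * sum_i i*X_i."""
    X = rng.normal(mu, sigma, size=(reps, n))  # rows of i.i.d. X_1, ..., X_n
    i = np.arange(1, n + 1)                    # the coefficients 1, ..., n
    return (2.0 / (n * (n + 1))) * (X * i).sum(axis=1)

eps = 0.1
for n in (10, 100, 1000):
    Y = sample_Y(n)
    # empirical Var(Y_n) and empirical P(|Y_n - mu| > eps)
    print(n, Y.var(), (np.abs(Y - 2.0) > eps).mean())
```

If the printed variances are clearly nonzero for small $n$ but shrink as $n$ grows, that would tell me the $\text{Var}(Y_{n}) = 0$ conclusion above is wrong, while shrinking tail probabilities would match the convergence in probability I’m asked to prove.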

$\sigma$-finite measure $\mu$ so that $L^p(\mu) \subsetneq L^q(\mu)$ (proper subset)

I’m looking for a measure space with a $\sigma$-finite measure $\mu$ such that, for $1 \le p < q \le \infty$,

$$L^p(\mu) \subsetneq L^q(\mu).$$

I tried the following:

Let $1 \le p < q \le \infty$ and let $\lambda$ be the Lebesgue measure on $(1,\infty)$, which is $\sigma$-finite.

$x^\alpha$ is integrable on $(1,\infty) \Leftrightarrow \alpha < -1$.

Choose $b$ so that $1/q < b < 1/p$, equivalently $-bq < -1$ and $-bp > -1$.

Then $x^{-b}\chi_{(1,\infty)} \in L^q$ but $\notin L^p$: $x^{-bq}$ is integrable because the exponent $-bq < -1$, while $x^{-bp}$ isn’t integrable because the exponent $-bp > -1$. So now I have found a function that is in $L^q$ but not in $L^p$. But that doesn’t really show that $L^p \subsetneq L^q$, meaning $L^p$ is a proper subset of $L^q$, right? (Because I don’t know if every element of $L^p$ is also an element of $L^q$.)
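To convince myself numerically that the integrability claims are right, here is a tiny Python check (my own addition, using the concrete choice $p=1$, $q=2$, $b=3/4$, which satisfies $1/q < b < 1/p$):

```python
p, q, b = 1.0, 2.0, 0.75   # a concrete choice with 1/q < b < 1/p

def integral(alpha, T):
    """Exact value of the integral of x^alpha over (1, T), for alpha != -1."""
    return (T ** (alpha + 1) - 1.0) / (alpha + 1)

for T in (1e2, 1e4, 1e6):
    # integrals of |x^{-b}|^q and |x^{-b}|^p over (1, T)
    print(T, integral(-b * q, T), integral(-b * p, T))
# The -bq = -1.5 integrals stabilize near 1/(bq - 1) = 2 as T grows, while the
# -bp = -0.75 integrals blow up: x^{-b} is in L^2 but not in L^1 on (1, oo).
```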

Thanks in advance!