Ratio of expectations involving random unit vectors

Let $u=(u_1,\dots,u_n)$, $v=(v_1,\dots,v_n)$ be two random vectors independently and uniformly distributed on the unit sphere in $\mathbb{R}^n$. Define two other random variables $X=\sqrt{\sum_{i=1}^n u_i^2v_i^2}$ and $Y=|u_1v_1|$. Consider the following ratio of expectations: $$r_n(\alpha)=\frac{\mathbb{E}\{\exp[-\frac{\alpha^2-\alpha^2X^2+\alpha X}{2}]\}}{\mathbb{E}\{\exp[-(\alpha^2-\alpha^2Y^2+\alpha Y)]\}}$$ Does there exist a finite upper bound for $r_n(\alpha)$, independent of $\alpha$, valid for all $\alpha\geq0$?
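For what it is worth, here is a small Monte Carlo sketch I use to eyeball $r_n(\alpha)$ (sampling $u,v$ by normalizing standard Gaussian vectors). It only estimates the ratio on a grid of $\alpha$, and the estimates become unreliable for large $\alpha$, where both expectations are driven by the rare event that $X$ or $Y$ is close to $1$:

```python
import numpy as np

rng = np.random.default_rng(0)

def ratio_estimate(n, alpha, trials=100_000):
    # Sample u, v independently and uniformly on the unit sphere in R^n
    # by normalizing standard Gaussian vectors.
    u = rng.standard_normal((trials, n))
    v = rng.standard_normal((trials, n))
    u /= np.linalg.norm(u, axis=1, keepdims=True)
    v /= np.linalg.norm(v, axis=1, keepdims=True)

    X = np.sqrt(np.sum(u**2 * v**2, axis=1))   # X = sqrt(sum_i u_i^2 v_i^2)
    Y = np.abs(u[:, 0] * v[:, 0])              # Y = |u_1 v_1|

    num = np.mean(np.exp(-(alpha**2 - alpha**2 * X**2 + alpha * X) / 2))
    den = np.mean(np.exp(-(alpha**2 - alpha**2 * Y**2 + alpha * Y)))
    return num / den

for n in (3, 10, 50):
    print(n, [round(ratio_estimate(n, a), 3) for a in (0.0, 0.5, 1.0, 2.0, 5.0)])
```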

Root of the expectation of a random rational function

I am trying to figure out a formula for the unique $\lambda>1$ such that $$\mathbb{E}\bigg[\frac{X}{\lambda -X}\bigg]=1$$ where $X$ is a discrete random variable taking values in $\{\frac{1}{n},\dots,\frac{n-1}{n},1\}$, distributed w.r.t. some distribution $\mu$.

We can rewrite the expression above, which yields $$\sum_{k=1}^n \frac{\mu(\frac{k}{n})\frac{k}{n}}{\lambda-\frac{k}{n}} = 1.$$

I know that there are no closed-form solutions for the roots of such a function, since finding them amounts to solving for the zeros of a high-degree polynomial. Still, I think I am missing some obvious results on how to analyse such a function.
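Numerically the root seems easy to pin down, since $g(\lambda)=\mathbb{E}\big[\frac{X}{\lambda-X}\big]-1$ is strictly decreasing on $(1,\infty)$ and changes sign there. A small sketch using bracketing plus Brent's method (the uniform $\mu$ at the end is just an example):

```python
import numpy as np
from scipy.optimize import brentq

def find_lambda(mu, n):
    """Solve E[X/(lambda - X)] = 1 for lambda > 1.

    mu[k-1] is the probability of the value k/n, for k = 1..n."""
    vals = np.arange(1, n + 1) / n
    g = lambda lam: np.sum(mu * vals / (lam - vals)) - 1.0

    # g decreases from +infinity (just above 1, if mu(1) > 0) to -1 as lambda grows,
    # so bracket the unique root and hand it to Brent's method.
    lo = 1.0 + 1e-12
    hi = 2.0
    while g(hi) > 0:
        hi *= 2
    return brentq(g, lo, hi)

n = 5
mu = np.full(n, 1.0 / n)   # example: uniform distribution on {1/5, ..., 1}
print(find_lambda(mu, n))
```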

I'd appreciate any kind of help. Thanks a lot!

Calculating expectation and variance for having rolled 1 and 6 twice out of rolling a die 12 times

First I calculated the probability of getting each possible number $\{1,2,3,4,5,6\}$ exactly twice in 12 rolls (call this event $A$).

$\Pr[A]=\frac{\binom{12}{2,2,2,2,2,2}}{6^{12}}.$

Then there are 2 random variables:

$X$ = the number of times that a 1 was rolled.

$Y$ = the number of times that a 6 was rolled.

Before calculating $E(X),\operatorname{Var}(X),E(Y),\operatorname{Var}(Y)$, I'm uncertain how I should calculate the probabilities of $X$ and $Y$.
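If $X$ and $Y$ are simply the unconditional counts over the 12 independent rolls (which is how I currently read the problem), then each is $\mathrm{Binomial}(12,\tfrac16)$. Here is a small sketch, under that reading, checking $E(X)$, $\operatorname{Var}(X)$ and $\Pr[A]$ against a simulation:

```python
from collections import Counter
from math import factorial
import random

n_rolls, p = 12, 1 / 6

# Unconditional count of 1s over 12 rolls: X ~ Binomial(12, 1/6).
E_X = n_rolls * p                  # = 2
Var_X = n_rolls * p * (1 - p)      # = 5/3 (and the same for Y by symmetry)

# Pr[A]: every face appears exactly twice; multinomial count over 6^12 outcomes.
pr_A = factorial(12) / factorial(2) ** 6 / 6 ** 12

# Simulation sanity check of Pr[A].
trials = 200_000
hits = 0
for _ in range(trials):
    rolls = [random.randint(1, 6) for _ in range(12)]
    if all(c == 2 for c in Counter(rolls).values()) and len(set(rolls)) == 6:
        hits += 1

print(E_X, Var_X, pr_A, hits / trials)
```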

Obtaining a lower bound on the expectation using the Sudakov-Fernique inequality

In my work I wish to obtain a lower bound for the term below, independent of the vector $x$. Here the expectation is taken over $h$, a standard Gaussian random vector of length $n$. The vector $x$ is fixed. The minimum is taken over all subsets $\{i_1,\dots,i_L\} \subset \{1,\dots,n\}$. Can this be done using the Sudakov–Fernique inequality? $$\mathbb{E}_{h} \min_{i_{1}, \ldots, i_{L}}\left[\sum_{j\neq i_1,\dots,i_L}h_j\,\mathrm{sign}(x_j^*)\right].$$
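As a sanity check on any bound, the expectation itself is easy to simulate: since the $h_j\,\mathrm{sign}(x_j^*)$ are again i.i.d. standard Gaussians (assuming no zero entries in $x$), the minimum over index sets equals the full sum minus the $L$ largest of these terms, so it should not depend on $x$. A quick Monte Carlo sketch (the values of $n$ and $L$ are placeholders):

```python
import numpy as np

rng = np.random.default_rng(1)

def expected_min(n, L, trials=100_000):
    # g_j = sign(x_j^*) h_j is again standard Gaussian, so the minimum over
    # index sets {i_1, ..., i_L} is the total sum minus the L largest g_j.
    g = rng.standard_normal((trials, n))
    top_L = np.sort(g, axis=1)[:, -L:].sum(axis=1)
    return np.mean(g.sum(axis=1) - top_L)

# Placeholder sizes; the estimate does not involve the fixed vector x at all.
print(expected_min(n=100, L=5))
```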

Conditional Expectation: Integrating an indicator function multiplied by the joint density

I am currently reading “Measure, Integral and Probability” by Marek Capiński (see p. 179). It includes some motivation for the definition of the conditional expectation. For example, given two random variables $X,Y$ with joint density $f_{(X,Y)}$ (and hence marginal and conditional densities), we want to show that for any set $A \subset \Omega$ with $A=X^{-1}(B)$, $B$ Borel, $$\int_A\mathbb{E}(Y|X)\,dP= \int_A Y\,dP.$$ This is one of the defining conditions of a conditional expectation. The book shows the following calculation, \begin{align} \int_A\mathbb{E}(Y|X)\,dP &= \int_\Omega 1_B(X)\mathbb{E}(Y|X)\,dP\\ &= \int_\Omega 1_B(X(\omega))\left(\int_\mathbb{R}yf_{Y|X}(y|X(\omega))\,dy\right)dP(\omega)\\ &=\int_\mathbb{R}\int_\mathbb{R}1_B(x)\,yf_{Y|X}(y|x)\,dy\, f_X(x)\,dx\\ &=\int_\mathbb{R}\int_\mathbb{R}1_B(x)\,yf_{X,Y}(x,y)\,dx\,dy\\ &= \int_\Omega 1_A(X)Y\,dP\\ &= \int_A Y\,dP. \end{align} What I don't understand is the second-to-last equality immediately above, i.e. $$\int_\mathbb{R} y \int_\mathbb{R}1_B(x)f_{X,Y}(x,y)\,dx\,dy = \int_\Omega 1_A(X)Y\,dP.$$ I think it is a typo, since $X\in \mathbb{R}$ and $A \subset \Omega$; however, I can't figure out the correction either!
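My best guess at the intended step (I may well be wrong): since $A=X^{-1}(B)$ we have $1_A(\omega)=1_B(X(\omega))$, so the line should presumably read $$\int_\mathbb{R}\int_\mathbb{R}1_B(x)\,y\,f_{X,Y}(x,y)\,dx\,dy=\int_\Omega 1_B(X)\,Y\,dP=\int_A Y\,dP,$$ i.e. $1_A(X)$ should be $1_B(X)$, but I would appreciate confirmation.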

Expectation of number of hubs in a random graph

Suppose $\Gamma(V, E)$ is a finite simple graph. Let's call a vertex $v \in V$ a hub if $\deg(v)^2 > \sum_{w \in O(v)} \deg(w)$. Here $\deg$ stands for the vertex degree, and $O(v)$ for the set of all vertices adjacent to $v$. Let's define $H(\Gamma)$ as the number of all hubs in $\Gamma$.

Now, suppose $ G(n, p)$ is an Erdos-Renyi random graph with $ n$ vertices and edge probability $ p$ . Does there exist some sort of explicit formula for $ E(H(G(n, p)))$ (as a function of $ n$ and $ p$ )?
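Even a numerical baseline would already help me. Here is the Monte Carlo sketch I am using to estimate $E(H(G(n,p)))$ (it relies on networkx and gives only an estimate, not the explicit formula I am after; the values of $n$, $p$ and the number of trials are arbitrary):

```python
import networkx as nx
import numpy as np

def count_hubs(G):
    # H(G): number of vertices v with deg(v)^2 > sum of the degrees of v's neighbours.
    hubs = 0
    for v in G.nodes:
        d = G.degree(v)
        if d * d > sum(G.degree(w) for w in G.neighbors(v)):
            hubs += 1
    return hubs

def expected_hubs(n, p, trials=200):
    # Monte Carlo estimate of E[H(G(n, p))].
    return np.mean([count_hubs(nx.gnp_random_graph(n, p)) for _ in range(trials)])

print(expected_hubs(n=100, p=0.1))
```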

How did this question arise:

I have recently heard of the so-called «friendship paradox», which states that the number of your friends usually does not exceed the average number of friends your friends have. When I first heard about it, I wondered whether that is just a peculiarity of human society, or whether there is a mathematical explanation behind it. First I tried to translate the statement of the «friendship paradox» into mathematical language as

Any graph with $ n$ vertices has $ o(n)$ hubs,

but then I quickly found that it is blatantly false this way:

Suppose $n > 2$: take the complete graph on $n$ vertices, $K_n$, and remove one edge from it. The resulting graph then has $n - 2$ hubs, which is clearly not $o(n)$: every vertex not incident to the removed edge has degree $n-1$, while its neighbours' degrees sum to $2(n-2)+(n-3)(n-1)=n^2-2n-1<(n-1)^2$.

So the «friendship paradox» cannot be translated into deterministic graph theory using the notion of «hubs», at least not in this way. So I thought that maybe something similar would work with random graphs.

Also:

Later, I found out that there actually is a consistent mathematical interpretation of the «friendship paradox» in terms of deterministic graph theory: Friendship paradox demonstration

However, it does not solve my problem, which is finding the expectation of the number of hubs in a random graph. So please do not mark my question as a duplicate of the aforementioned one.

Is my interpretation of the expectation expression correct?

Consider the following expression:

$$E_{x\sim p_{data}(x)}[f(x)]$$

I am understanding it as

1) If $X$ is a collection of discrete random variables $(X_1, X_2,\dots, X_n)$, and $x$ is generated from $X$ by a particular assignment of all the random variables in the tuple,

Then

$$E_{x\sim p_{data}(x)}[f(x)] = \sum\limits_{x} f(x)\, p_{data}(x)$$

2) If $X$ is a collection of continuous random variables $(X_1, X_2,\dots, X_n)$, and $x$ is generated from $X$ by a particular assignment of all the random variables in the tuple,

Then

$$E_{x\sim p_{data}(x)}[f(x)] = \int\limits_{x_1}\int\limits_{x_2} \cdots \int\limits_{x_n} f(x)\, p_{data}(x)\, dx_n \cdots dx_1$$

Is my interpretation correct? If not, where am I going wrong?
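To make case 1) concrete, here is a tiny numerical check on a made-up discrete distribution: the weighted sum should agree with the sample average of $f(x)$ over draws $x\sim p_{data}$ (all names and numbers below are invented for the example):

```python
import numpy as np

rng = np.random.default_rng(0)

# Invented discrete p_data over joint assignments x = (x1, x2) of two binary variables.
support = [(0, 0), (0, 1), (1, 0), (1, 1)]
p_data = np.array([0.1, 0.2, 0.3, 0.4])

f = lambda x: x[0] + 2 * x[1]   # arbitrary function of the full assignment

# Case 1): E_{x ~ p_data}[f(x)] as a sum over all joint assignments.
exact = sum(f(x) * p for x, p in zip(support, p_data))

# The same expectation estimated by sampling x ~ p_data.
idx = rng.choice(len(support), size=100_000, p=p_data)
estimate = np.mean([f(support[i]) for i in idx])

print(exact, estimate)   # should agree up to Monte Carlo error
```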

The expected number of partitions needed to separate two elements of a set

I came across a problem which can be formulated in terms of set partitions.

Given a set $S=\{s_1,s_2,\dots,s_n\}$ with $n$ elements, I want to separate two elements of $S$, say $s_1$ and $s_2$, by repeatedly applying set partition operations. Each partition operation randomly splits a set, say $A$, into two non-empty subsets $B$ and $C$ such that $A=B\cup C$ and $B\cap C=\emptyset$. I want to calculate or approximate the expected number of partitions, $E(n)$, needed to separate $s_1$ and $s_2$.

Let's look at two simple cases:

(1) $ n=2$ :

In this case, $ S=\{s_1,s_2\}$ . The only feasible partition will separate $ S=\{s_1,s_2\}$ as $ \{s_1\}$ and $ \{s_2\}$ . So, $ E(2) = 1$ .

(2) $ n=3$ :

In this case, $ S=\{s_1,s_2,s_3\}$ . There are two situations:

(a) if the first partition is $\{s_1\},\{s_2,s_3\}$ or $\{s_2\},\{s_1,s_3\}$, then one partition suffices.

(b) if the first partition is $\{s_1,s_2\},\{s_3\}$, then I need a second partition splitting $\{s_1,s_2\}$ into $\{s_1\}$ and $\{s_2\}$, so the number of partitions is 2.

The probability of situation (a) is $2/3$ and of situation (b) is $1/3$, so $E(3)=1\cdot(2/3)+2\cdot(1/3)=4/3$.

I tried using a recursive formula, but it does not seem to yield a closed form. I also wonder whether $E(n)$ can be approximated by some continuous function.
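In case it helps, this is the simulation I am using to estimate $E(n)$. It assumes that each operation chooses one of the $2^{|A|-1}-1$ unordered bipartitions of the current block uniformly at random, and that we only ever split the block that still contains both $s_1$ and $s_2$:

```python
import random

def separation_steps(n):
    """Number of partitions until s_1 and s_2 land in different blocks.

    Only the block containing both elements matters, so we track just its size m
    and whether s_1 and s_2 got split apart.  Each bipartition is drawn uniformly
    over ordered non-empty splits via rejection sampling, which induces the
    uniform distribution over unordered bipartitions."""
    m, steps = n, 0
    while True:
        steps += 1
        while True:
            sides = [random.randint(0, 1) for _ in range(m)]
            if 0 < sum(sides) < m:
                break
        # By symmetry, place s_1 at index 0 and s_2 at index 1 of the current block.
        if sides[0] != sides[1]:
            return steps
        m = sides.count(sides[0])   # size of the block still holding both elements

def E_estimate(n, trials=100_000):
    return sum(separation_steps(n) for _ in range(trials)) / trials

print(E_estimate(2), E_estimate(3))   # should be close to 1 and 4/3
```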

Bound for Expectation of Singular Value

In my case, $X_{\boldsymbol{\delta}}\in\mathbb{R}^{d\times M}$ is a function of Rademacher variables $\boldsymbol{\delta}\in\{1,-1\}^M$, with the $\delta_i$ independent uniform random variables taking values in $\{-1, +1\}$. Here $X_{\boldsymbol{\delta}}=[\sum_{i=1}^{I_1}\delta_{i}\mathbf{x}_{i},\sum_{i=I_1+1}^{I_2}\delta_{i}\mathbf{x}_{i},\dots,\sum_{i=I_{M-1}+1}^{I_M}\delta_i\mathbf{x}_{i}]$ is a group-wise sum with known $I_1,I_2,\dots,I_M$ and non-singular $X=(\mathbf{x}_1,\mathbf{x}_2,\dots,\mathbf{x}_N)\in\mathbb{R}^{d\times N}$, where $N>M\gg d$.

Given that $\sigma_i(X_{\boldsymbol{\delta}})$ denotes the $i$-th smallest singular value, how can I find a lower bound on the expectation $\mathbb{E}_{\boldsymbol{\delta}}\left[\sum_{i=1}^{k} \sigma_{i}^{2}\left(X_{\boldsymbol{\delta}}\right)\right]$, assuming $k<d$?

Note: I can find an upper bound using Jensen's inequality and the concavity of the sum of the $k$ smallest eigenvalues, but I am curious whether it is possible to get a lower bound.
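For concreteness, here is how I estimate the expectation numerically (random data stands in for my actual $X$, and the contiguous group boundaries $I_1,\dots,I_M$ are made up for the example):

```python
import numpy as np

rng = np.random.default_rng(0)

d, M, N, k = 5, 50, 200, 3
X = rng.standard_normal((d, N))                  # stand-in for the known matrix X
bounds = np.linspace(0, N, M + 1, dtype=int)     # made-up boundaries 0 = I_0 < I_1 < ... < I_M = N

def mean_sum_k_smallest_sq_singular_values(trials=2000):
    vals = []
    for _ in range(trials):
        delta = rng.choice([-1.0, 1.0], size=N)
        # Column m of X_delta is the signed sum of x_i over the m-th group.
        X_delta = np.stack(
            [(X[:, bounds[m]:bounds[m + 1]] * delta[bounds[m]:bounds[m + 1]]).sum(axis=1)
             for m in range(M)], axis=1)
        s = np.sort(np.linalg.svd(X_delta, compute_uv=False))   # ascending singular values
        vals.append(np.sum(s[:k] ** 2))
    return np.mean(vals)

print(mean_sum_k_smallest_sq_singular_values())
```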

I have also posted the question here.