Expected value of a random variable conditioned on a positively correlated event

I have a random variable $x \in [a, b]$ with PDF $f(x)$ and an event $E$ which satisfies the following property for every $x' < b$.

$$\Pr[E|x > x'] \geq \Pr[E]$$

My question is whether or not the following inequality holds.

$$\int_{a}^{b} u f(u)\Pr[E|x=u]\,du \geq \Pr[E]\int_{a}^{b} u f(u)\,du$$
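
For what it's worth, the left-hand side is $\mathbb E[x\,\mathbf 1_E]$ (by the tower property) and the right-hand side is $\mathbb E[x]\Pr[E]$, so the inequality is equivalent to $\operatorname{Cov}(x, \mathbf 1_E) \geq 0$. A quick Monte Carlo sketch can probe it empirically; the concrete event below ($E = \{x + \text{noise} > 1/2\}$) is my own hypothetical example, not from the question:

    import numpy as np

    rng = np.random.default_rng(0)
    n = 10**6

    # Hypothetical example: x uniform on [0, 1] and E = {x + noise > 1/2},
    # which is a positively correlated event in the sense of the question.
    x = rng.uniform(0.0, 1.0, n)
    E = (x + rng.normal(0.0, 0.3, n)) > 0.5

    lhs = np.mean(x * E)           # estimates E[x 1_E] = int u f(u) Pr[E|x=u] du
    rhs = np.mean(E) * np.mean(x)  # estimates Pr[E] E[x]
    print(lhs, rhs, lhs >= rhs)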

Probability that the maximal element has the same position in samples from correlated random variables

Let $x$ and $y$ be two correlated random variables (say, standard normal) with correlation coefficient $\rho > 0$. Let $X = \{x_1, x_2, \dots, x_L\}$ and $Y = \{y_1, y_2, \dots, y_L\}$ be samples of size $L$ from $x$ and $y$ respectively.

What is the probability that $\mbox{argmax}\ X = \mbox{argmax}\ Y$?

Alternatively, suppose that $x_1$ is the maximal element; what is the probability that $y_1$ is maximal too?

Any references pointing to the solution of either question would also be appreciated.
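
Lacking a closed form at hand, here is a minimal Monte Carlo sketch (my own illustration) estimating $\Pr[\mbox{argmax}\ X = \mbox{argmax}\ Y]$ when the $(x_i, y_i)$ are i.i.d. draws from a bivariate standard normal with correlation $\rho$:

    import numpy as np

    rng = np.random.default_rng(0)
    L, rho, trials = 10, 0.7, 100_000

    cov = [[1.0, rho], [rho, 1.0]]
    z = rng.multivariate_normal([0.0, 0.0], cov, size=(trials, L))  # (trials, L, 2)
    x, y = z[..., 0], z[..., 1]
    print(np.mean(np.argmax(x, axis=1) == np.argmax(y, axis=1)))

Note that when the pairs are exchangeable, the second question's quantity equals the first: $\Pr[\mbox{argmax}\ Y = 1 \mid \mbox{argmax}\ X = 1] = L \cdot \tfrac{1}{L} \cdot \Pr[\mbox{argmax}\ Y = 1 \mid \mbox{argmax}\ X = 1] = \Pr[\mbox{argmax}\ X = \mbox{argmax}\ Y]$.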

Multidimensional Correlated Geometric Brownian Motion, finding the exact form of the matrices

My goal is to understand the dimensions of the matrices involved, so I am initially writing things as column vectors, and defining all the dimensions.

I am working with the following setup: a probability space $(\Omega, \mathcal F, \mathbb Q)$, equipped with a $(d \times 1)$-dimensional correlated Brownian motion $W$, where $(\mathcal F_s)$ is the natural filtration of $W$.

The martingale $X$ (with respect to $\mathcal F_t$ and $\mathbb Q$) is $(d \times 1)$-dimensional and of the form: \begin{equation} dX_t^i = \sigma_t^i X_t^i dW_t^i, \: i \in [1,d], \qquad d\langle W^i, W^j \rangle_t = \rho^{i,j}_t dt \end{equation}

I have been trying to find the correct matrix form for this equation, but wherever I have looked online, the equation always seems to be written in the above form for each $i$, rather than in terms of the matrices themselves.

So far, I have defined the $(d \times d)$ covariance matrix $\Sigma$ and another $(d \times d)$ matrix $A$: \begin{equation} AA^T \equiv \Sigma, \qquad \Sigma_{i,j} = \rho^{i,j}\sigma^i\sigma^j \end{equation} as well as a $(d \times 1)$-dimensional standard Brownian motion $B$ and a $(d \times 1)$-dimensional vector $L$, so that: \begin{equation} \frac{dX_t^i}{X_t^i} \equiv L_i \end{equation}

So now I have: \begin{equation} L = A\,dB \end{equation} I am not sure if this is correct, but it seems to contain all the relevant information. The covariance between each pair $\frac{dX_t^i}{X_t^i}$, $\frac{dX_t^j}{X_t^j}$ is recovered from $\Sigma$ as $\rho^{i,j}\sigma^i\sigma^j = \text{Cov}(\frac{dX_t^i}{X_t^i}, \frac{dX_t^j}{X_t^j})$, so I think it should be correct.
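
As a consistency check, written out componentwise (suppressing possible time dependence of the coefficients), the proposed form reads

$$\frac{dX_t^i}{X_t^i} = \sum_{j=1}^d A_{ij}\, dB_t^j, \qquad \frac{d\langle X^i, X^j\rangle_t}{X_t^i X_t^j} = (AA^T)_{i,j}\, dt = \rho^{i,j}\sigma^i\sigma^j\, dt,$$

which reproduces exactly the covariances stated above.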

From there I tried to convert $L$ to the $(d \times 1)$-dimensional vector $dX$ by multiplying by the diagonal $(d \times d)$ matrix $D = \text{diag}(X_t^1, X_t^2, \dots, X_t^d)$, which leads to:

\begin{equation} DL = dX = DAdB \end{equation}

I assumed this would work, and tried to check by applying Ito's Lemma both to $dX_t^i = \sigma_t^i X_t^i dW_t^i, \: i \in [1,d]$, and to $dX_t = DA\,dB_t$; the results seem to match.

I am using this form of Ito's Lemma: \begin{align} df = \frac{\partial f}{\partial t}dt + \sum_i\frac{\partial f}{\partial x_i}dx_i + \frac{1}{2}\sum_{i,j}\frac{\partial^2 f}{\partial x_i \partial x_j}[dx_i,dx_j] \end{align} I was only calculating the $\frac{1}{2}\sum_{i,j}\frac{\partial^2 f}{\partial x_i \partial x_j}[dx_i,dx_j]$ term, so using $dX_t^i = \sigma_t^i X_t^i dW_t^i, \: i \in [1,d]$ results in $\frac{1}{2}\sum_{i,j=1}^d\frac{\partial^2 f}{\partial x_i \partial x_j}\rho^{i,j}\sigma^i\sigma^j X^i X^j dt$, as expected.

For the form $dX_t = DA\,dB_t$, I used that $\frac{1}{2}\sum_{i,j}\frac{\partial^2 f}{\partial x_i \partial x_j}[dx_i,dx_j] = \frac{1}{2}\sum_{i,j}(\beta\beta^T)_{i,j}\frac{\partial^2 f}{\partial x_i \partial x_j}\,dt$ for any Ito process of the form $dY_t = \beta\,dB_t$.

This gives \begin{equation} \frac{1}{2}\sum_{i,j=1}^d(DA(DA)^T)_{i,j}\frac{\partial^2 f}{\partial x_i \partial x_j}dt = \frac{1}{2}\sum_{i,j=1}^d(D\Sigma D)_{i,j}\frac{\partial^2 f}{\partial x_i \partial x_j}dt = \frac{1}{2}\sum_{i,j=1}^d D_{i,i}\Sigma_{i,j} D_{j,j}\frac{\partial^2 f}{\partial x_i \partial x_j}dt = \frac{1}{2}\sum_{i,j=1}^d\frac{\partial^2 f}{\partial x_i \partial x_j}\rho^{i,j}\sigma^i\sigma^j X^i X^j dt \end{equation}

I am wondering if this is correct, or if I did something wrong here. The dimensions seem to match everywhere. Is it possible to find a solution, as in this post: https://mathoverflow.net/questions/285251/solution-of-multivariate-geometric-brownian-motion? I can't seem to get to that point using the form $dX_t = DA\,dB_t$.
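
For reference, the linked post's solution carries over directly when the coefficients are constant: componentwise, $dX_t^i/X_t^i = \sum_j A_{ij}\,dB_t^j$ solves to $X_t^i = X_0^i \exp\big(\sum_j A_{ij} B_t^j - \tfrac{1}{2}\Sigma_{i,i} t\big)$, since $\sum_j A_{ij}^2 = (AA^T)_{i,i} = \Sigma_{i,i}$. A minimal simulation sketch of $dX_t = DA\,dB_t$ under this constant-coefficient simplification (all numbers below are made up for illustration):

    import numpy as np

    rng = np.random.default_rng(0)
    d, n_steps, dt = 3, 1000, 1e-3

    sigma = np.array([0.2, 0.3, 0.25])
    rho = np.array([[1.0, 0.5, 0.2],
                    [0.5, 1.0, 0.4],
                    [0.2, 0.4, 1.0]])
    Sigma = rho * np.outer(sigma, sigma)  # Sigma_ij = rho^{i,j} sigma^i sigma^j
    A = np.linalg.cholesky(Sigma)         # one valid choice with A A^T = Sigma

    X = np.ones(d)                        # X_0
    for _ in range(n_steps):
        dB = rng.normal(0.0, np.sqrt(dt), d)             # standard BM increments
        X *= np.exp(A @ dB - 0.5 * np.diag(Sigma) * dt)  # exact log-normal step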

Thanks a lot for the help!

Any algorithm to align two correlated random sequences whose corresponding members are shifted?

Two random sequences $$A = \{a_1, a_2, a_3, \dots, a_i, \dots\}, \qquad B = \{b_1, b_2, b_3, \dots, b_i, \dots\}$$ are correlated, but the indices of one are consistently shifted forward or backward. As a special case ignoring randomness, take $A = \{1, 2, 3, \dots, n, \dots\}$ and $B = \{2, 4, 6, \dots, 2n, \dots\}$; the dislocated version of $B$ is $\{6, \dots, 2j, \dots\}$. Is there any algorithm to align two correlated random sequences, and what is its computational complexity?
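
One standard approach (my suggestion, not from the question): if the shift is a constant integer lag, estimate it as the argmax of the cross-correlation of the standardized sequences, then align by slicing. A minimal sketch:

    import numpy as np

    def estimate_lag(a, b):
        # Return the lag k maximizing correlation, i.e. b[n] ~ a[n + k].
        a = (a - a.mean()) / a.std()
        b = (b - b.mean()) / b.std()
        c = np.correlate(a, b, mode="full")  # lags -(len(b)-1) .. len(a)-1
        return int(np.argmax(c)) - (len(b) - 1)

    rng = np.random.default_rng(0)
    x = rng.normal(size=500)
    y = np.roll(x, 7) + 0.1 * rng.normal(size=500)  # y is x delayed by 7 samples
    print(estimate_lag(x, y))  # prints -7: y[n] matches x[n - 7]

`np.correlate` here is the naive $O(n^2)$ version; FFT-based correlation (e.g. `scipy.signal.correlate` with `method="fft"`) brings the cost down to $O(n \log n)$. For shifts that vary along the sequence, dynamic-time-warping-style algorithms (typically $O(n^2)$) are the usual tool.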

Non-negative irreducible matrices with random (correlated or independent) non-zero entries

Let $M$ be a non-negative irreducible matrix. According to the Perron-Frobenius theorem, the maximum eigenvalue of $M$, $\lambda$, is positive and equal to its spectral radius $\rho(M)$.

Now assume the matrix $M$ is not deterministic and its nonzero elements are random variables $\tanh(x_i)$ with $x_i \sim N(m > 0, \sigma^2)$, while the zero elements remain deterministically zero. My question is: what happens to the expected value of the maximum eigenvalue if the $x_i$'s are correlated, compared to the case where they are independent?

My observation is that positive correlation among the non-zero entries increases the expected maximum eigenvalue compared to the case where the entries are independent, but I am not able to justify this experimental finding.
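
A quick simulation sketch of this observation (the zero pattern, the equicorrelated Gaussians built from a shared factor, and all parameter values below are my own choices for illustration):

    import numpy as np

    rng = np.random.default_rng(0)
    n, m, s, trials = 6, 1.0, 0.5, 5000

    mask = rng.random((n, n)) < 0.6  # fixed zero pattern; assumed irreducible
    np.fill_diagonal(mask, True)
    k = int(mask.sum())

    def mean_lambda_max(corr):
        total = 0.0
        for _ in range(trials):
            z0 = rng.normal()  # shared factor: Corr(x_i, x_j) = corr for i != j
            x = m + s * (np.sqrt(corr) * z0 + np.sqrt(1 - corr) * rng.normal(size=k))
            M = np.zeros((n, n))
            M[mask] = np.tanh(x)
            total += np.abs(np.linalg.eigvals(M)).max()  # spectral radius
        return total / trials

    print(mean_lambda_max(0.0), mean_lambda_max(0.8))

(Caveat: with $m = 1$, $\tanh(x_i)$ can occasionally be negative, so the sampled matrix may strictly fail non-negativity; the sketch uses the spectral radius regardless.)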

Is it possible to replace count distinct inside correlated sub-queries?

I'm trying to optimize the query below. The execution plan shows multiple sorts before the aggregations (I think this is due to the COUNT DISTINCTs in the correlated sub-queries).

Is it possible to reduce the number of sorts by sorting after the aggregations, or at least to replace the COUNT DISTINCT with another function?

**EXECUTION PLAN** (screenshot not reproduced)

Query:

    SELECT
        "R"."Lat" AS "Lat",
        "R"."Long R" AS "Long R",
        "R"."Dept" AS "Dept",
        "R"."Reg" AS "Reg",
        "B3"."M" AS "M",
        "B3"."St" AS "St",
        "B3"."Sp" AS "Sp",
        "B3"."Reg" AS "Reg",
        "B3"."year" AS "year",
        "B3"."id_program" AS "id_program",
        "B3"."program" AS "program",
        "B3"."Lib" AS "Lib",
        "B3"."Ef"/"R2"."NB Dept" AS "Agg_Ef",
        "B3"."Eex"/"R2"."NB Dept" AS "Agg_Eex",
        "B3"."Ein"/"R2"."NB Dept" AS "Agg_Ein",
        "B3"."Sehr"/"R2"."NB Dept" AS "Agg_Sehr",
        "B3"."Sin"/"R2"."NB Dept" AS "Agg_Sin",
        "B3"."Sr"/"R2"."NB Dept" AS "Agg_Sr",
        "B3"."Sc"/"R2"."NB Dept" AS "Agg_Sc",
        "C2"."RE"/("R2"."NB Dept"*"B2"."NB St") AS "Agg_RE",
        "C2"."RD"/("R2"."NB Dept"*"B2"."NB St") AS "Agg_RD",
        "C2"."RDP"/("R2"."NB Dept"*"B2"."NB St") AS "Agg_RDP",
        "C2"."RC"/("R2"."NB Dept"*"B2"."NB St") AS "Agg_RC",
        "C2"."RA"/("R2"."NB Dept"*"B2"."NB St") AS "Agg_RA",
        "C2"."EE"/("R2"."NB Dept"*"B2"."NB St") AS "Agg_EE",
        "C2"."BE"/("R2"."NB Dept"*"B2"."NB St") AS "Agg_BE",
        "C2"."BD"/("R2"."NB Dept"*"B2"."NB St") AS "Agg_BD",
        "C2"."BDP"/("R2"."NB Dept"*"B2"."NB St") AS "Agg_BDP",
        "C2"."BC"/("R2"."NB Dept"*"B2"."NB St") AS "Agg_BC",
        "C2"."BA"/("R2"."NB Dept"*"B2"."NB St") AS "Agg_BA"
    FROM "Database"."R_Table" "R"
    INNER JOIN (
        SELECT "R"."Reg" AS "Reg",
               COUNT(DISTINCT "R"."Dpt") AS "NB Dept"
        FROM "Database"."R_Table" "R"
        GROUP BY "R"."Reg"
    ) "R2" ON "R2"."Reg" = "R"."Reg"
    INNER JOIN (
        SELECT "B"."Reg" AS "Reg",
               COUNT(DISTINCT "B"."St") AS "NB St"
        FROM "Database"."B_Table" "B"
        GROUP BY "B"."Reg"
    ) "B2" ON "B2"."Reg" = "R"."Reg"
    INNER JOIN (
        SELECT "B"."Job" AS "Job", "B"."St" AS "St", "B"."Sp" AS "Sp",
               "B"."Reg" AS "Reg", "B"."year" AS "year",
               "B"."id_program" AS "id_program", "B"."program" AS "program",
               "B"."Lib" AS "Lib",
               SUM("B"."Ef") AS "Ef", SUM("B"."Eex") AS "Eex",
               SUM("B"."Ein") AS "Ein", SUM("B"."Sehr") AS "Sehr",
               SUM("B"."Sin") AS "Sin", SUM("B"."Sr") AS "Sr",
               SUM("B"."Sc") AS "Sc"
        FROM "Database"."B_Table" "B"
        GROUP BY "B"."id_program", "B"."program", "B"."year", "B"."Reg",
                 "B"."Job", "B"."Sp", "B"."Lib", "B"."St"
    ) "B3" ON "B3"."Reg" = "R"."Reg"
    LEFT JOIN (
        SELECT "C"."Job" AS "Job", "C"."Sp" AS "Sp", "C"."Reg" AS "Reg",
               "C"."year" AS "year", "C"."id_program" AS "id_program",
               "C"."program" AS "program", "C"."Lib" AS "Lib",
               SUM("C"."RE") AS "RE", SUM("C"."RD") AS "RD",
               SUM("C"."RDP") AS "RDP", SUM("C"."RC") AS "RC",
               SUM("C"."RA") AS "RA", SUM("C"."EE") AS "EE",
               SUM("C"."BE") AS "BE", SUM("C"."BD") AS "BD",
               SUM("C"."BDP") AS "BDP", SUM("C"."BC") AS "BC",
               SUM("C"."BA") AS "BA"
        FROM "Database"."Content" "C"
        GROUP BY "C"."id_program", "C"."program", "C"."year", "C"."Reg",
                 "C"."Job", "C"."Sp", "C"."Lib"
    ) "C2" ON concat("C2"."id_program","C2"."program","C2"."year","C2"."Reg","C2"."Job","C2"."Sp","C2"."Lib") =
              concat("B3"."id_program","B3"."program","B3"."year","B3"."Reg","B3"."Job","B3"."Sp","B3"."Lib")

Bound on the mutual information between a product of correlated random variables

Let $G$ be a finite group.

Suppose the random variables $X_1,\dots,X_N$ are sampled uniformly at random from $G$. Let $Y_1,\dots,Y_N$ be random variables where $Y_i$ is correlated with $X_i$ and sampled according to some unknown distribution.

Given a bound on the mutual information $I(X_k : Y_k) \leq \epsilon_k$ for all $k$, what is a good upper bound on $I(X_1\cdots X_N : Y_1\cdots Y_N)$, i.e., the mutual information between the group products of the two sets of random variables?

I believe a bound like $$I(X_1\cdots X_N : Y_1\cdots Y_N) \leq C\prod_k \epsilon_k$$ for some $C$ might exist, but I have had no luck proving it.
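
One remark that may help, under the extra assumption (not stated in the question) that the pairs $(X_k, Y_k)$ are mutually independent across $k$: the group products are deterministic functions of the tuples, so the data-processing inequality plus additivity of mutual information over independent pairs gives a sum bound rather than a product bound:

$$I(X_1\cdots X_N : Y_1\cdots Y_N) \leq I(X_1,\dots,X_N : Y_1,\dots,Y_N) = \sum_{k=1}^N I(X_k : Y_k) \leq \sum_{k=1}^N \epsilon_k.$$

Without independence across $k$, the middle equality can fail and the question seems genuinely harder.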

What analysis / algorithm helps stabilize the fit of correlated (but not collinear) parameters?

I have many curves that I want to fit using a convolution of some functions. These functions include Weibull distributions with two parameters, lambda and k, as well as a function B(t), so that each measured curve is fit to model = F(lambda1, k1, lambda2, k2) + B(t).

The main problem is that even though the lambda's, k's and B are not collinear, they can "kind of" substitute for one another, and the optimization can land in different local minima with nearly the same final error but parameter values that are not close at all.

This is a problem because I intend to interpret the value of these parameters as natural characteristics of the objects I observe.

Our current approach is to minimize the number of parameters, i.e. to fix some of the lambda's and k's, as we would do if there were a known function linking them. However, this is arbitrary, and it is a sacrifice since I can no longer interpret those parameters' values.

So my question: is there a method / analysis / related problem / scientific paper dealing with this problem of unstable optimization when the parameters are not exactly orthogonal degrees of freedom?
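
One standard diagnostic for this situation (often discussed under the keywords "parameter identifiability" and "sloppy models") is to inspect the approximate correlation matrix of the fitted parameters, computed from the Jacobian at the optimum; near-$\pm 1$ off-diagonal entries flag the substitutable pairs. A sketch, where the model below is a hypothetical stand-in, not the actual convolution from the question:

    import numpy as np
    from scipy.optimize import least_squares

    def residuals(p, t, y):
        lam1, k1, lam2, k2, b = p
        # hypothetical stand-in built from Weibull-type factors plus a baseline
        model = (1 - np.exp(-(t / lam1) ** k1)) * np.exp(-(t / lam2) ** k2) + b
        return model - y

    rng = np.random.default_rng(0)
    t = np.linspace(0.1, 10.0, 200)
    y = residuals([2.0, 1.5, 6.0, 2.0, 0.1], t, 0.0) + 0.01 * rng.normal(size=t.size)

    fit = least_squares(residuals, x0=[1.0, 1.0, 5.0, 1.0, 0.05],
                        bounds=(1e-6, 20.0), args=(t, y))
    J = fit.jac
    cov = np.linalg.pinv(J.T @ J)             # parameter covariance, up to sigma^2
    d = np.sqrt(np.diag(cov))
    print(np.round(cov / np.outer(d, d), 2))  # parameter correlation matrix

Strongly correlated pairs can then be handled by reparametrizing (fitting, say, a product or ratio of the two parameters) or by regularizing toward prior values, rather than by fixing parameters outright.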

A limit for two correlated variables

Suppose we have two correlated normal variables $X_A$ and $X_B$, with respective standard deviations $A$ and $B$, and correlation $\rho$. The variable $X_A + X_B$ has a standard deviation in excess (or deficiency) of that of $X_A$ given by: $$E = \sqrt{A^2 + 2 \rho A B + B^2} - A$$ My question is: as $A$ becomes large (with $A \gg B$), show rigorously that $E \rightarrow \rho B$.

This is straightforward to verify for $ \rho = \pm 1$ , but the intermediate case eludes me.
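
A first-order expansion sketch (making it rigorous amounts to bounding the Taylor remainder): factor out $A$ and use $\sqrt{1+u} = 1 + \tfrac{u}{2} + O(u^2)$ with $u = \tfrac{2\rho B}{A} + \tfrac{B^2}{A^2} \to 0$:

$$E = A\left(\sqrt{1 + \frac{2\rho B}{A} + \frac{B^2}{A^2}} - 1\right) = A\left(\frac{\rho B}{A} + O\!\left(\frac{1}{A^2}\right)\right) = \rho B + O\!\left(\frac{1}{A}\right) \xrightarrow{A \to \infty} \rho B.$$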

Bootloader/BIOS, flashing ROM and correlated risks. Why are Android devices more brickable than PCs?

I have solid experience installing different OSes (Linux, Windows, ...) on PCs. Just for fun, I would like to try to install Linux on an unbranded low-cost Android tablet acquired in 2015. I spent some time browsing the web, and as far as I understood there is a risk that the device could be damaged during the flashing procedure. So I read extensively about how to back up the ROM using TWRP and all related matters. I would just like some explanations on the points below:

Scenario #1:
I have a PC; if I want to try another OS I can simply format the hard disk and install it, with no risk of damaging the motherboard's BIOS. The motherboard and hard disk are separate components, so no problems can arise.

Scenario #2:
I have a tablet and want to wipe Android and install either an upgraded version of Android or a Linux distro suitable for mobile devices.

  • Why is there a risk of ending up with an unusable device in this scenario?
  • Is this because the motherboard and the storage are bundled together, so that wiping the memory also erases the board's configuration settings?
  • Is there an equivalent of BIOS settings here?

Thanks to everybody who takes the time to explain.

Ghera