## Multidimensional Correlated Geometric Brownian Motion: finding the exact form of the matrices

My goal is to understand the dimensions of the matrices involved, so I am initially writing things as column vectors, and defining all the dimensions.

I am working with the following setup: a probability space $$(\Omega, \mathcal F, \mathbb Q)$$, equipped with a $$(d \times 1)$$-dimensional correlated Brownian motion $$W$$, whose natural filtration is $$(\mathcal F_s)_{s \geq 0}$$.

The martingale $$X$$ (with respect to $$\mathcal F_t$$ and $$\mathbb Q$$) is $$(d \times 1)$$-dimensional and of the form: $$$$dX_t^i = \sigma_t^i X_t^idW_t^i, \: i \in [1,d], \qquad d\langle W^i, W^j \rangle_t = \rho^{i,j}_tdt$$$$

I have been trying to find the correct matrix form for this equation, but everywhere I have looked online, it is written componentwise, as above for each $$i$$, rather than in terms of the matrices themselves.

So far, I have defined the $$(d \times d)$$ covariance matrix $$\Sigma$$ and another $$(d \times d)$$ matrix $$A$$: $$$$AA^T \equiv \Sigma, \qquad \Sigma_{i,j} = \rho^{i,j}\sigma^i\sigma^j$$$$ as well as a $$(d \times 1)$$-dimensional standard Brownian motion $$B$$ and a $$(d \times 1)$$-dimensional vector $$L$$, so that: $$$$\frac{dX_t^i}{X_t^i} \equiv L_i$$$$

So now, I have: $$$$L = A\,dB$$$$ I am not sure if this is correct, but it seems to contain all the relevant information. The covariance between each pair $$\frac{dX_t^i}{X_t^i}$$, $$\frac{dX_t^j}{X_t^j}$$ is recovered from $$\Sigma$$ as $$\rho^{i,j}\sigma^i\sigma^j = \text{Cov}\left(\frac{dX_t^i}{X_t^i}, \frac{dX_t^j}{X_t^j}\right)$$, so I think it should be correct.

From there, I tried to convert $$L$$ into the $$(d \times 1)$$-dimensional vector $$dX$$ by multiplying by the diagonal $$(d \times d)$$ matrix $$D = \text{diag}(X_t^1, X_t^2, \dots, X_t^d)$$, which leads to:

$$$$DL = dX = DAdB$$$$

I assumed this would work, and checked by applying Itô's lemma to both $$dX_t^i = \sigma_t^i X_t^idW_t^i, \: i \in [1,d]$$, and $$dX_t = DA\,dB_t$$; the results seem to match.

I am using this form of Itô's lemma: \begin{align} df = \frac{\partial f}{\partial t}dt + \sum_i\frac{\partial f}{\partial x_i}dx_i + \frac{1}{2}\sum_{i,j}\frac{\partial^2 f}{\partial x_i \partial x_j}[dx_i,dx_j] \end{align} I was calculating just the $$\frac{1}{2}\sum_{i,j}\frac{\partial^2 f}{\partial x_i \partial x_j}[dx_i,dx_j]$$ term, so using $$dX_t^i = \sigma_t^i X_t^idW_t^i, \: i \in [1,d]$$ results in $$\frac{1}{2}\sum_{i,j}^d\frac{\partial^2 f}{\partial x_i \partial x_j}\rho^{i,j}\sigma^i\sigma^jX^iX^jdt$$, as expected.

For the form $$dX_t = DA\,dB_t$$, I used that $$\frac{1}{2}\sum_{i,j}\frac{\partial^2 f}{\partial x_i \partial x_j}[dx_i,dx_j] = \frac{1}{2}\sum_{i,j}(\beta\beta^T)_{i,j}\frac{\partial^2 f}{\partial x_i \partial x_j} dt$$, for any Itô process of the form $$dY_t = \beta\, dB_t$$.

This gives $$$$\frac{1}{2}\sum_{i,j}^d(DA(DA)^T)_{i,j}\frac{\partial^2 f}{\partial x_i \partial x_j}dt = \frac{1}{2}\sum_{i,j}^d(D\Sigma D)_{i,j}\frac{\partial^2 f}{\partial x_i \partial x_j}dt = \frac{1}{2}\sum_{i,j}^d D_{i,i}\Sigma_{i,j} D_{j,j}\frac{\partial^2 f}{\partial x_i \partial x_j}dt = \frac{1}{2}\sum_{i,j}^d\frac{\partial^2 f}{\partial x_i \partial x_j}\rho^{i,j}\sigma^i\sigma^jX^iX^jdt$$$$

I am wondering if this is correct, or if I did something wrong here; the dimensions seem to match everywhere. Also, is it possible to find an explicit solution, as in this post: https://mathoverflow.net/questions/285251/solution-of-multivariate-geometric-brownian-motion? I can't seem to get to that point using the form $$dX_t = DA\,dB_t$$.
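As a sanity check on the matrix form $$dX_t = DA\,dB_t$$, here is a minimal numpy sketch (with illustrative constant values for $$\sigma$$ and $$\rho$$, not taken from anywhere in particular) that builds $$A$$ as a Cholesky factor of $$\Sigma$$, simulates the system, and verifies that the empirical covariance of the relative increments $$dX^i/X^i$$ recovers $$\Sigma$$:

```python
import numpy as np

rng = np.random.default_rng(0)
d, n_steps, dt = 3, 20000, 1e-3

# Illustrative constant volatilities and correlation matrix (hypothetical values)
sigma = np.array([0.2, 0.3, 0.25])
rho = np.array([[1.0, 0.5, 0.2],
                [0.5, 1.0, 0.4],
                [0.2, 0.4, 1.0]])

Sigma = rho * np.outer(sigma, sigma)  # Sigma_ij = rho_ij * sigma_i * sigma_j
A = np.linalg.cholesky(Sigma)         # A A^T = Sigma

X = np.ones(d)
L = np.empty((n_steps, d))
for k in range(n_steps):
    dB = rng.standard_normal(d) * np.sqrt(dt)  # standard BM increments
    dX = np.diag(X) @ A @ dB                   # dX = D A dB
    L[k] = dX / X                              # L_i = dX^i / X^i
    X = X + dX

# Empirical covariance of the relative increments, rescaled by dt
print(np.round(np.cov(L.T) / dt, 3))  # close to Sigma
```

Since $$dX/X = A\,dB$$ exactly (the $$D$$ cancels componentwise), the rows of `L` are i.i.d. $$N(0, \Sigma\,dt)$$, which is what the final covariance estimate confirms.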

Thanks a lot for the help!

## Large time behavior of Girsanov type Geometric Brownian Motion with time-dependent drift and diffusion

Recall the geometric Brownian motion $$X_t={\rm e}^{\mu W_t+\left(\sigma-\frac{\mu^2}{2}\right)t}$$. If $$\sigma<\frac{\mu^2}{2}$$, then $$X_t$$ tends to $$0$$ almost surely. But if we consider the following time-dependent case, $$X_t=\exp\left\{\int_0^t\mu(t'){\rm d} W_{t'}+\int_0^t\left(\sigma(t')-\frac{\mu^2(t')}{2}\right){\rm d}t'\right\},$$ and also assume that $$\sigma(t)<\frac{\mu^2(t)}{2}$$ for all $$t>0$$ ($$\mu$$ and $$\sigma$$ are assumed to be regular enough), do we still have the almost sure decay property, i.e. does $$X_t$$ tend to $$0$$ almost surely? It looks right, but what would the proof look like? I'm not really sure how to approach it at the moment. Any help is appreciated. Many thanks!
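Not a proof, but a quick Monte Carlo sketch (with hypothetical coefficients $$\mu(t)=1+e^{-t}$$, $$\sigma(t)=0.3$$, which satisfy $$\sigma(t)<\mu^2(t)/2$$ for all $$t$$) suggesting that $$\log X_t$$ does drift to $$-\infty$$:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical coefficients with sigma(t) < mu(t)^2 / 2 for all t >= 0
mu = lambda t: 1.0 + np.exp(-t)
sigma = lambda t: 0.3 + 0.0 * t   # constant, written as a function of t

T, n, n_paths = 50.0, 20_000, 200
dt = T / n
t = np.arange(n) * dt

dW = rng.standard_normal((n_paths, n)) * np.sqrt(dt)
# log X_T ~ sum mu(t) dW + sum (sigma(t) - mu(t)^2/2) dt   (Euler sums)
logX = dW @ mu(t) + np.sum(sigma(t) - mu(t) ** 2 / 2) * dt

print(np.mean(logX), np.mean(logX < 0))
```

Here the deterministic drift integral grows like $$(\sigma - \mu^2/2)\,t \approx -0.2\,t$$ while the martingale part has standard deviation of order $$\sqrt{t}$$, which is exactly the mechanism one would try to exploit in a proof (e.g. via the Dambis-Dubins-Schwarz time change and the strong law for Brownian motion).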

## UMVUE of a Geometric distribution for $\tau \left ( p \right )=p^{-1}$

Suppose that $$X_1,\dots,X_n$$ is a random sample from the geometric$$(p)$$ distribution, where $$X_i$$ denotes the number of Bernoulli trials needed to get the first success. Find the UMVUE of $$\tau(p)=p^{-1}$$.

Since $$T=\sum_i X_i$$ is a complete sufficient statistic for the geometric distribution, by the Lehmann-Scheffé theorem, how do I find the UMVUE of $$\tau(p)=p^{-1}$$?
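One standard route: under the trials convention, $$E[X_1]=1/p$$, so $$\bar X = T/n$$ is unbiased for $$\tau(p)$$ and is a function of the complete sufficient statistic $$T$$; Lehmann-Scheffé then identifies it as the UMVUE. A quick Monte Carlo check of the unbiasedness claim (illustrative values $$p=0.3$$, $$n=10$$):

```python
import numpy as np

rng = np.random.default_rng(2)
p, n, reps = 0.3, 10, 200_000

# numpy's geometric uses the trials convention: support {1, 2, ...}, mean 1/p
samples = rng.geometric(p, size=(reps, n))
xbar = samples.mean(axis=1)

print(xbar.mean())  # compare with 1/p = 3.333...
```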

## Injectivity of the simple closed curves under geometric intersection number

Let $$\Sigma$$ be a closed surface of genus $$g\geq 2$$ and $$\mathcal{C}$$ be the set of all free homotopy classes of simple closed curves in $$\Sigma$$. Define $$i:\mathcal{C}\rightarrow \mathbb{R}^{\mathcal{C}}$$ by $$i(x)(y)=i(x,y)$$ for $$x,y\in \mathcal{C}$$ where $$i(x,y)$$ is the geometric intersection number.

Q) Does there exist a finite subset $$K\subset\mathcal{C}$$ such that the induced map $$\mathcal{C}\rightarrow \mathbb{R}^{K}$$ is injective?

## Geometric interpretation of the number of free variables in a solution to a linear system?

Is there a geometric consequence of the number of free variables in a solved, consistent linear system represented as a row-echelon matrix (I use the row-echelon terminology just as an easy way of getting at the number of free variables)? That is, if we have a system of equations with one solution (and therefore no free variables), our solution is just a point and exists in 0 dimensions. If, as another example, we have two equivalent lines (and thus one free variable), we have infinitely many solutions, and the solution set is of course a line.

What I'm wondering is whether this pattern generalizes: if we have 2 free variables in a consistent, row-echelon-form linear system, is our solution set a (2-dimensional) plane? Clearly this is the case when we have, say, 3 equivalent planes, but what about a system of 5 variables, 2 of which are free? Still a plane? Similarly, if we have 3 free variables in a consistent row-echelon matrix, is the solution set a 3-dimensional analogue of a plane? And so on?
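The answer is yes: for a consistent system $$Ax=b$$ in $$n$$ unknowns, the solution set is an affine subspace (a "flat") of dimension $$n - \operatorname{rank}(A)$$, which is exactly the number of free variables, regardless of $$n$$. A small sympy sketch with a hypothetical $$3\times 5$$ row-echelon system having two free variables:

```python
import sympy as sp

# Hypothetical consistent system: 5 unknowns, rank 3, so 2 free variables
A = sp.Matrix([[1, 0, 2, 0, 1],
               [0, 1, 1, 0, 2],
               [0, 0, 0, 1, 3]])
b = sp.Matrix([1, 2, 3])

# Dimension of the solution set = n - rank(A) for a consistent system
free_vars = A.cols - A.rank()
print(free_vars)  # 2: the solution set is a 2-dimensional flat in R^5

# Parametric solution: a particular point plus a span of nullspace directions
particular, params = A.gauss_jordan_solve(b)
print(len(params))  # number of free parameters equals free_vars
```

The `params` returned by `gauss_jordan_solve` are exactly the free variables, so the solution set is the particular solution translated by the nullspace of $$A$$, a plane through a point rather than through the origin.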

## Conditional expectation of geometric RV

Let $$X$$ be a geometric random variable whose probability of success is itself random and follows the standard uniform distribution. Compute the pmf of $$X$$.
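Assuming $$X$$ counts the number of trials up to and including the first success, conditioning on $$p$$ and integrating gives $$P(X=k)=\int_0^1 p(1-p)^{k-1}\,dp = \frac{1}{k(k+1)}$$. A quick sympy check for the first few values of $$k$$:

```python
import sympy as sp

p = sp.symbols('p', positive=True)

# P(X = k) = integral of P(X = k | p) over p ~ Uniform(0, 1)
for k in range(1, 6):
    pmf = sp.integrate(p * (1 - p) ** (k - 1), (p, 0, 1))
    print(k, pmf, sp.Rational(1, k * (k + 1)))  # the two values agree
```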

## A difficult geometric inequality

Let $$O$$ and $$I$$ be the circumcenter and incenter of $$\Delta ABC$$, respectively. The projections of a point $$P$$ onto $$BC, AC, AB$$ are $$D, E, F$$, respectively, and $$r, r'$$ are the inradii of $$\Delta ABC$$ and $$\Delta DEF$$, respectively. Suppose $$OP \geq OI$$; show that $$r' \leq \frac{r}{2}$$. Seeing the incenter, inradius and the incircles, as well as $$OI$$ and an inequality between inradii, I thought Euler's theorem would be best, but it does not help. Also, the arbitrary point $$P$$ and its projections made me think of pedal triangles, and one thing I know is that the oriented area of the pedal triangle of $$P$$ with respect to a triangle $$A_1A_2A_3$$ is $$\frac{R^2 - OP^2}{4R^2} \times S_{A_1A_2A_3}$$. But this problem is too difficult. Can you help?

## Geometric irreducibility of fiber product of geometric irreducible schemes

Given three geometrically irreducible normal $$k$$-curves $$X$$, $$Y$$, $$Z$$ and two morphisms $$f \colon\ X \to Z$$, $$g \colon\ Y \to Z$$, assume that $$X \times_Z Y$$ is irreducible. Is $$X \times_Z Y$$ geometrically irreducible? If not, which conditions should $$f$$ and $$g$$ satisfy? I am interested in the case where $$f$$ and $$g$$ are étale morphisms.

A more general form of the above question: assume $$X$$, $$Y$$, $$Z$$, $$f$$ and $$g$$ are as above, and let $$W$$ be an irreducible component of $$X \times_Z Y$$. Is $$W$$ geometrically irreducible?

Thank you!

## Interesting geometric flow of space curves with non-vanishing torsion

Recently, while thinking about CMC surfaces, I came up with an interesting geometric flow for curves in $$\mathbb{R}^3$$ given by $$$$\partial_t \gamma = \tau^{-\frac{1}{2}} n,$$$$ where $$\gamma$$ denotes the parametrization, $$\tau$$ is the torsion and $$n$$ is the principal normal vector. Here and in what follows, we consider only closed curves and use $$\Gamma_t$$ to denote the curve given by the parametrization $$\gamma(\cdot, t)$$.

This flow has many intriguing properties. First of all, curves evolving according to this motion law trace out a surface of zero mean curvature! Thus, it might be used for generating a minimal surface with a prescribed boundary given by the initial curve.

There are several problems with this flow. Most importantly, the term $$\tau^{-\frac{1}{2}}$$ is defined only when $$\tau$$ is strictly positive. However, when the torsion of the initial curve $$\Gamma_0$$ is bounded below by some positive constant $$C$$, we get $$$$\tau(u, t) = \left( \sqrt{\tau(u, 0)} + \int_{0}^{t} \kappa(u, \bar{t}) \mathrm{d} \bar{t} \right)^2 \geq \tau(u, 0) \geq C > 0,$$$$ because $$$$\partial_t \sqrt{\tau} = \tfrac{1}{2} \tau^{-\frac{1}{2}} \partial_t \tau = \tfrac{1}{2} \tau^{-\frac{1}{2}} \left( 2 \tau^{\frac{1}{2}} \kappa + \partial_s \left[ \tfrac{1}{\kappa} \left( \tau^{-\frac{1}{2}} \partial_s \tau + 2\tau \partial_s \left( \tau^{-\frac{1}{2}} \right) \right) \right] \right) = \kappa,$$$$ where $$\partial_s$$ is the arclength derivative and $$\kappa$$ is the curvature. Thus the curve cannot develop a vertex (a point of vanishing torsion) and $$\tau^{-\frac{1}{2}}$$ remains well-defined.

Questions:

1. Is there any simple way to prove or disprove the existence and uniqueness of this flow?

2. Is there any similar geometric flow involving torsion that has already been studied?

I will end this post with a list of interesting properties (I can provide proofs upon request):

• The integral of $$\sqrt{\tau}$$ is preserved, i.e. $$$$\frac{\mathrm{d}}{\mathrm{d} t} \int_{\Gamma_t} \tau^{\frac{1}{2}} \mathrm{d} s = 0.$$$$

• The length of the curve $$\Gamma_t$$ is non-increasing, i.e. $$$$\frac{\mathrm{d}}{\mathrm{d} t} \int_{\Gamma_t} \mathrm{d} s = - \int_{\Gamma_t} \kappa \tau^{-\frac{1}{2}} \mathrm{d} s \leq 0.$$$$ In fact, using Fenchel's theorem and the Gauss-Bonnet theorem, one can show that $$$$\int_{0}^{t} \left( \int_{\Gamma_{\bar{t}}} \mathrm{d} s \right) \mathrm{d} \bar{t} \leq \frac{1}{\inf_{\Gamma_0} \tau^{\frac{3}{2}}} \left( \int_{\Gamma_0} \kappa \mathrm{d} s - 2\pi \right).$$$$ If the right-hand side is finite and the flow exists for $$t \in [0,+\infty)$$, the length of $$\Gamma_t$$ must approach zero: the curve shrinks to a point as time approaches infinity.

• The area $$A_t$$ of the generated surface is bounded by a constant which depends only on the shape of the initial curve $$\Gamma_0$$ (it does not depend on the time $$t$$): $$$$A_t \leq \frac{1}{\inf_{\Gamma_0} \tau^2} \left( \int_{\Gamma_0} \kappa \mathrm{d} s - 2 \pi \right).$$$$

• A simple analytical solution of this motion is a shrinking helix, which generates the helicoid surface. I do not know of any analytical solution for a closed curve.
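To illustrate the helix claim, here is a small sympy sketch computing the curvature and torsion of the circular helix $$\gamma(u)=(a\cos u, a\sin u, bu)$$: both are constant, $$\kappa = \frac{a}{a^2+b^2}$$ and $$\tau = \frac{b}{a^2+b^2}$$, and the principal normal $$(-\cos u, -\sin u, 0)$$ points straight at the axis, so the flow $$\partial_t \gamma = \tau^{-1/2} n$$ shrinks the radius uniformly:

```python
import sympy as sp

u, a, b = sp.symbols('u a b', positive=True)
gamma = sp.Matrix([a * sp.cos(u), a * sp.sin(u), b * u])  # circular helix

g1, g2, g3 = gamma.diff(u), gamma.diff(u, 2), gamma.diff(u, 3)
cr = g1.cross(g2)

# Standard formulas: kappa = |g' x g''| / |g'|^3,  tau = (g' x g'') . g''' / |g' x g''|^2
kappa = sp.simplify(sp.sqrt(cr.dot(cr)) / sp.sqrt(g1.dot(g1)) ** 3)
tau = sp.simplify(cr.dot(g3) / cr.dot(cr))
print(kappa, tau)  # kappa = a/(a^2+b^2), tau = b/(a^2+b^2): both constant in u

# Principal normal: unit vector along gamma'' for this parametrization
norm2 = sp.simplify(sp.sqrt(g2.dot(g2)))  # = a
n = g2 / norm2
print(n.T)  # (-cos u, -sin u, 0): points toward the helix axis
```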

## Improper Use of Geometric Formula

Assume that every time you hear a song on the radio, the chance of it being your favorite song is $$2\%$$. How many songs must you listen to so that the probability of hearing your favorite song exceeds $$90\%$$?

My initial approach was:

This is a geometric distribution with probability of success $$p=0.02$$. Let the random variable $$X$$ be the number of songs heard BEFORE I hear my favorite song. For example, $$X=3$$ means I heard 3 mediocre songs before my favorite song. So we want,

\begin{align} P(X=k)=(1-p)^kp &> 0.9\\ \Rightarrow (1-p)^k &> 0.9/p\\ \Rightarrow k\log(1-p) &> \log(0.9/p)\\ \Rightarrow k &> \frac{\log(0.9/p)}{\log(1-p)} = \frac{\log(0.9/0.02)}{\log(0.98)} \end{align}

The correct approach was:

$$P$$(good song) $$=0.02$$

$$P$$(bad song) = $$0.98$$

$$P$$(n bad songs) = $$0.98^n$$

$$P$$(at least one good song among $$n$$) $$= 1-(0.98)^n$$

thus,

$$1-(0.98)^n > 0.9 \Rightarrow n > \frac{\log{(1-0.9)}}{\log{0.98}}$$
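A quick numeric check of the correct approach:

```python
import math

p = 0.02
# Need 1 - (1 - p)^n > 0.9, i.e. n > log(0.1) / log(0.98)
n = math.ceil(math.log(1 - 0.9) / math.log(1 - p))
print(n)                       # 114
print(1 - (1 - p) ** n)        # just above 0.9
print(1 - (1 - p) ** (n - 1))  # just below 0.9
```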

What did I do wrong in my initial approach?