## How to quickly solve a linear equation 7000 times?

I need to solve a linear system Ax=b 7000 times (A is a sparse, complex square matrix). Each time only four elements (A(i,k), A(i,m), A(j,k) and A(j,m)) change while all other elements stay the same (the indices i, j, k, m differ from one solve to the next). I used block matrix inversion to obtain the updated inverse of A. The total CPU time is more than 20 minutes. I am wondering if there is a faster way to solve this system that keeps the CPU time under 1 minute.
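Since only four entries change per solve, each new matrix is a rank-2 update of the original A, so one option worth trying is to factorize A once and apply the Sherman-Morrison-Woodbury formula for every subsequent solve instead of updating an explicit inverse. A rough sketch in Python/SciPy (the matrix size, sparsity, indices, and update values below are made-up placeholders, not your data):

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

# Toy stand-in for the real matrix: any invertible sparse complex A works.
n = 500
rng = np.random.default_rng(0)
M = sp.random(n, n, density=0.005, format="csc", random_state=0)
A = (sp.eye(n, format="csc") * (10.0 + 0j) + M.astype(complex) * (1 + 1j)).tocsc()
b = rng.standard_normal(n) + 1j * rng.standard_normal(n)

lu = spla.splu(A)      # sparse LU: factorize once, reuse for all 7000 solves
x0 = lu.solve(b)       # solution for the unmodified A

def solve_updated(i, j, k, m, d):
    """Solve (A + U V^T) x = b, where the rank-2 term U V^T adds
    d[0,0], d[0,1] to A[i,k], A[i,m] and d[1,0], d[1,1] to A[j,k], A[j,m]."""
    U = np.zeros((n, 2), dtype=complex)
    U[i, 0] = U[j, 1] = 1.0
    V = np.zeros((n, 2), dtype=complex)
    V[k, 0], V[m, 0] = d[0, 0], d[0, 1]     # changes in row i
    V[k, 1], V[m, 1] = d[1, 0], d[1, 1]     # changes in row j
    AinvU = lu.solve(U)                     # two solves with the cached factors
    cap = np.eye(2, dtype=complex) + V.T @ AinvU   # 2x2 "capacitance" matrix
    return x0 - AinvU @ np.linalg.solve(cap, V.T @ x0)
```

Each updated solve then costs two triangular solves with the cached factorization plus a dense 2x2 solve, which is typically far cheaper than refactorizing or maintaining a full inverse.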

Benson from Texas

## How do I prove this equation?

```python
def recursion(x, n):
    if n == 0:
        return 1
    else:
        return x * recursion(x, n - 1)
```

I already tried proving it: I replaced $$n$$ with $$k+1$$ and with $$2k+1$$, but I'm not getting the correct answer. What should I do?
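For reference, the usual shape of the induction (a sketch, assuming the claim to prove is that the function returns $$x^n$$ for $$n \ge 0$$) is to check $$n = 0$$ and then pass from $$k$$ to $$k+1$$:

```latex
\text{Base case: } \operatorname{recursion}(x, 0) = 1 = x^0.
\qquad
\text{Inductive step: assume } \operatorname{recursion}(x, k) = x^k; \text{ then }
\operatorname{recursion}(x, k+1)
  = x \cdot \operatorname{recursion}(x, k)
  = x \cdot x^k
  = x^{k+1}.
```

The substitution $$n = k+1$$ belongs in the inductive step only; there is no role for $$2k+1$$ here.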

## Covariant derivative of the Monge-Ampère equation on Kähler manifolds

I am reading D. Joyce's book “Compact Manifolds with Special Holonomy” and I have some problems understanding a computation on page 111, the first line of the proof of Proposition 5.4.6. More specifically, the following:

Let $$(M,\omega, J)$$ be a compact Kähler manifold with Kähler form $$\omega$$ and complex structure $$J$$. In holomorphic coordinates $$\omega$$ takes the form $$\omega = ig_{\alpha \overline{\beta}}dz^{\alpha} \wedge d\overline{z}^{\beta}$$. Associated to the above data we have the Riemannian metric $$g$$, which may be written in holomorphic coordinates as $$g=g_{\alpha \overline{\beta}}(dz^{\alpha}\otimes d\overline{z}^{\beta} + d\overline{z}^{\beta} \otimes dz^{\alpha})$$. Associated to $$g$$, let $$\nabla$$ be the Levi-Civita connection, which also defines a covariant derivative on tensors. For a function $$\phi$$ on $$M$$ one may compute $$\nabla^{k}\phi$$. For example, $$\nabla \phi = (\nabla_{\lambda}\phi)dz^{\lambda} + (\nabla_{\overline{\lambda}}\phi)d\overline{z}^{\lambda}=(\partial_{\lambda}\phi)dz^{\lambda} + (\partial_{\overline{\lambda}}\phi)d\overline{z}^{\lambda}$$ (applied to functions, $$\nabla$$ acts as the usual $$d$$), and $$\nabla_{\alpha \beta}\phi = \partial_{\alpha \beta} \phi - \partial_{\gamma}\phi \Gamma^{\gamma}_{\alpha \beta}$$, $$\nabla_{\alpha \overline{\beta}}\phi = \partial_{\alpha \overline{\beta}}\phi$$, etc.

In the first sentence of the proof of Proposition 5.4.6, Joyce considers the equation $$\det(g_{\alpha \overline{\beta}} + \partial_{\alpha \overline{\beta}}\phi) = e^{f}\det(g_{\alpha \overline{\beta}})$$, where $$f:M\rightarrow \mathbb{R}$$ is a smooth function on $$M$$. After taking the $$\log$$ of this equation he obtains $$\log[\det(g_{\alpha \overline{\beta}} + \partial_{\alpha \overline{\beta}}\phi)] - \log[\det(g_{\alpha \overline{\beta}} )] = f$$, which is obviously a globally defined equality of functions on $$M$$. Now he takes the covariant derivative $$\nabla$$ of this equation and obtains $$\nabla_{\overline{\lambda}}f = g’^{\mu \overline{\nu}}\nabla_{\overline{\lambda} \mu \overline{\nu}}\phi$$, where $$g’^{\mu \overline{\nu}}$$ is the inverse of the metric $$g’_{\alpha \overline{\beta}} = g_{\alpha \overline{\beta}} + \partial_{\alpha \overline{\beta}}\phi$$ (which he assumes to exist). It is this last step (taking the covariant derivative) that I do not understand.

In my computation I have the following: when taking the covariant derivative $$\nabla_{\overline{\lambda}}$$ of the equation $$\log[\det(g_{\alpha \overline{\beta}} + \partial_{\alpha \overline{\beta}}\phi)] - \log[\det(g_{\alpha \overline{\beta}} )] = f$$ and using the formula for the derivative of the determinant, I obtain $$g’^{\alpha \overline{\beta}}(\partial_{\overline{\lambda}}g_{\alpha \overline{\beta}} + \partial_{\overline{\lambda} \alpha \overline{\beta}}\phi) - g^{\alpha \overline{\beta}}(\partial_{\overline{\lambda}}g_{\alpha \overline{\beta}}) = \partial_{\overline{\lambda}}f = \nabla_{\overline{\lambda}}f$$. This is obviously different from his formula. Moreover, the term $$\nabla_{\overline{\lambda}\mu \overline{\nu}}\phi$$ contains not only third-order derivatives of $$\phi$$ but also a term with second derivatives of $$\phi$$.

My question is: Where is my mistake? Have I understood something wrong?
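For comparison, here is one way (a sketch based on metric compatibility, not necessarily Joyce's own argument) to see that the two computations are consistent: since the Levi-Civita connection satisfies $$\nabla g = 0$$, the Christoffel terms hidden in the covariant derivatives exactly account for the extra $$\partial_{\overline{\lambda}} g_{\alpha\overline{\beta}}$$ terms.

```latex
% Using \nabla_{\overline{\lambda}} g_{\mu\overline{\nu}} = 0,
% \nabla_{\mu\overline{\nu}}\phi = \partial_{\mu\overline{\nu}}\phi, and the
% Kaehler identity \Gamma^{\overline{\nu}}_{\overline{\lambda}\overline{\nu}}
%   = \partial_{\overline{\lambda}} \log\det(g_{\alpha\overline{\beta}}):
\nabla_{\overline{\lambda}} f
  = \partial_{\overline{\lambda}}
      \bigl(\log\det g' - \log\det g\bigr)
  = g'^{\mu\overline{\nu}} \nabla_{\overline{\lambda}} g'_{\mu\overline{\nu}}
  = g'^{\mu\overline{\nu}} \nabla_{\overline{\lambda}}
      \bigl(g_{\mu\overline{\nu}} + \partial_{\mu\overline{\nu}}\phi\bigr)
  = g'^{\mu\overline{\nu}} \nabla_{\overline{\lambda}\mu\overline{\nu}} \phi .
```

Expanding the Christoffel symbols in $$g'^{\mu\overline{\nu}} \nabla_{\overline{\lambda}\mu\overline{\nu}}\phi$$ (with $$\Gamma^{\overline{\sigma}}_{\overline{\lambda}\overline{\nu}} = g^{\alpha\overline{\sigma}}\partial_{\overline{\lambda}}g_{\alpha\overline{\nu}}$$ and $$\partial_{\mu\overline{\sigma}}\phi = g'_{\mu\overline{\sigma}} - g_{\mu\overline{\sigma}}$$) appears to reproduce exactly the partial-derivative formula above, so the second-derivative terms of $$\phi$$ one notices are precisely the Christoffel contributions.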

## How to understand an equation related to speaker recognition?

This question refers to the paper at the link: http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.445.6034&rep=rep1&type=pdf. I am trying to implement the algorithms in Table 1 and Table 2 on page 18. In step 6 of Table 1 they calculate $$b_z^i$$ as a mean (or sum) of $$b(z_i)$$, and the number of entries is $$N_z$$, which they claim to be the number of features.

The question is: what is $$N_z$$ here? As I understand it, each feature set, which is of dimension $$N_z$$, has been used to create $$b(z_i)$$, so what does this summation mean? One can only sum over the time dimension, which has nothing to do with $$N_z$$; $$N_z$$ is a kind of spatial dimension, as one time frame of data is converted to features.

Any insight would be highly appreciated.

## How to solve Poisson’s differential equation with a boundary condition at infinity?

Context: this question concerns the physical problem of calculating the potential of a set of p-n-p junctions. We have to solve Poisson’s differential equation for a p-n-p junction with the potential equal to zero outside it on the left and right sides. For simplicity, and by symmetry, we analyze only the right side from 0 to some delta (beyond which the potential is the same as at infinity) and do not analyze the left side. The boundary conditions are that at infinity the function and its first derivative equal 0, and the derivative at x=0 equals 0. Alpha is a very small number for the Fermi step. In the code, bcd denotes the boundary conditions.

```mathematica
α = 0.00001;
bcd1 = ϕ'[0] == 0;
bcd2 = ϕ'[Infinity] == 0;
bcd3 = ϕ[Infinity] == 0;
eqn = Div[Grad[ϕ[x], {x}], {x}] ==
  -((1/(Exp[(x - 1)/α] + 1)) - (1/(Exp[(-x - 1)/α] + 1)) +
    Exp[-ϕ[x]] - Exp[ϕ[x]]);
DSolve[{eqn, bcd1, bcd2, bcd3}, ϕ, {x, 0, Infinity}]
```

I have tried to use numbers (some delta beyond which ϕ is 0) instead of Infinity, or to set the boundary conditions like

```mathematica
ϕ'[x == 0] == 0
ϕ[x == -Infinity] == 0
ϕ'[x == -Infinity] == 0
```

and put them directly into eqn, but it does not seem to work. As a result I obtain

```mathematica
DSolve[{Div[Grad[ϕ[x], x], x] ==
    1/(exp (1 + 100000. (-1 - x))) - 1/(1 + 100000. exp (-1 + x)) + 2 exp ϕ[x],
  Derivative[1][ϕ][0] == 0, Derivative[1][ϕ][∞] == 0, ϕ[∞] == 0},
 ϕ, {x, 0, ∞}]
```

If I try to vary the boundary conditions or use a more complex version of the equation, I obtain this:

DSolve::dsvar: ∞ (-∞..) cannot be used as a variable. 
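As a fallback if DSolve keeps rejecting the infinite endpoint, a numerical route is to truncate the domain at some finite L, impose the far boundary condition there, and solve the resulting two-point BVP. A rough sketch of that idea in Python with SciPy's solve_bvp (the doping profile, α, and L below are toy placeholders chosen so that ϕ → 0 at large x is consistent; this is not your exact equation):

```python
import numpy as np
from scipy.integrate import solve_bvp

alpha = 0.05   # softened Fermi-step width (toy value; the question uses 1e-5)
L = 8.0        # finite stand-in for infinity

def source(x):
    # toy doping slab on (1, 2), built from two Fermi steps, decaying at large x
    return 1/(np.exp((x - 2)/alpha) + 1) - 1/(np.exp((x - 1)/alpha) + 1)

def rhs(x, y):
    # y[0] = phi, y[1] = phi'; Poisson: phi'' = -(source + exp(-phi) - exp(phi))
    phi, dphi = y
    return np.vstack([dphi, -(source(x) + np.exp(-phi) - np.exp(phi))])

def bc(ya, yb):
    # phi'(0) = 0 (symmetry), phi(L) = 0 (stand-in for phi(infinity) = 0)
    return np.array([ya[1], yb[0]])

x = np.linspace(0, L, 801)
y_guess = np.zeros((2, x.size))
sol = solve_bvp(rhs, bc, x, y_guess, max_nodes=200000)
```

One can then increase L and check that the solution near x = 0 stops changing, which justifies the truncation.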

## Difference quotient for solutions of ODE and Liouville equation

Suppose that $$\Phi$$ is the solution of $$\begin{cases} \frac{d}{dt}\Phi(x,t) = f(\Phi(x,t),t), & t >0 \\ \Phi(x,0) = x, & x \in \mathbb{R}^N \end{cases}$$

How does one prove that $$\tilde \Phi(x,y,t) = \left(\Phi(x,t), \frac{\Phi(x + r y,t) - \Phi(x,t)}{r} \right)$$ is the flow of the ODE with $$\tilde{f}_r(x,y,t) = \left(f(x,t), \frac{f(x+r y,t) - f(x,t)}{r} \right)$$ as a vector field?
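For the first part, a direct computation (the standard check, sketched here) seems to do it: differentiating the second component of $$\tilde\Phi$$ in $$t$$ gives

```latex
\frac{d}{dt}\,\frac{\Phi(x+ry,t) - \Phi(x,t)}{r}
  \;=\; \frac{f(\Phi(x+ry,t),t) - f(\Phi(x,t),t)}{r},
```

and writing $$X = \Phi(x,t)$$, $$Y = \frac{\Phi(x+ry,t)-\Phi(x,t)}{r}$$ gives $$X + rY = \Phi(x+ry,t)$$, so the right-hand side is exactly the second component of $$\tilde f_r(X,Y,t)$$. The first component is just the original ODE, and the initial condition is $$\tilde\Phi(x,y,0) = (x,y)$$.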

Also, in an answer to Prove that the flow of a divergence-free vector field is measure preserving, it was proved that if $$\mu_t = (\Phi(\cdot,t))_{\sharp} \mu$$ denotes the image of the measure $$\mu$$ by the flow of $$f$$, then the family of measures $$\{\mu_t\}_{t\in \mathbb R}$$ satisfies the Liouville equation $$\begin{cases} \partial_t \mu_t + \operatorname{div\,} (f \mu_t) = 0 \\ \mu_0 = \mu \end{cases}$$ in the sense of distributions.

What PDE does $$\tilde\mu_t = (\tilde\Phi_t)_{\sharp} \mu$$ solve?

## Why does this iterative way of solving an equation work?

I was solving a semiconductor physics problem, and in order to get the temperature I arrived at this nasty equation:

$$T = \dfrac{7020}{\dfrac{3}{2}\ln(T)+12}.$$

I thought that I could solve this kind of equation simply by guessing a solution for $$T$$, substituting that answer back into the equation, then substituting the new answer back in again, and so on until I am satisfied with the precision of the result. Somehow this method works.

Concretely, for my example, my first guess was $$T=1$$ and I got this sequence of numbers $$(585.0, 325.6419704169386, 339.4797907885183, 338.4580701961562, 338.53186591337385, 338.52652733834424, \ldots)$$, and they really do seem to solve the equation better and better.
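The procedure described above is a fixed-point iteration $$T_{n+1} = g(T_n)$$ with $$g(T) = \frac{7020}{\frac{3}{2}\ln T + 12}$$, and it can be reproduced in a few lines (a minimal sketch):

```python
import math

def g(T):
    # right-hand side of the equation T = 7020 / (1.5*ln(T) + 12)
    return 7020.0 / (1.5 * math.log(T) + 12.0)

T = 1.0                  # initial guess
for _ in range(25):
    T = g(T)             # substitute the answer back in, repeatedly
print(T)                 # settles near 338.527
```

The reason it converges: near the fixed point $$T^*$$ one finds $$|g'(T^*)| \approx 0.07 < 1$$, so each iteration multiplies the error by roughly $$0.07$$; any $$|g'| < 1$$ in a neighborhood of the fixed point makes $$g$$ a local contraction, and convergence follows from the contraction mapping principle, which is one route to answering questions 1) and 2).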

Questions.

1) What is an intuitive way to see why this method works?

2) How can I show rigorously that this method actually converges to a solution of the equation?

3) The obvious generalization for which the method seems to work is $$x = \dfrac{a}{b\ln(x)+c}.$$ For which $$a,b,c$$ will this method work? Is this equation a special case of some natural generalization? What are some similar equations that I can solve via the described method?

4) When will the sequence of numbers in the iteration process be finite and solve the equation exactly? Does that case exist? Is the solution to the equation $$x = \dfrac{a}{b\ln(x)+c}$$ always (for every $$a,b,c$$) irrational? Is it transcendental? If not, for which $$a,b,c$$ will that be the case?

Thank you for any help.

## How many solutions does the equation $x_1+x_2+\dots+x_n=m$ have?

How many solutions of the equation $$x_1+x_2+\dots+x_n=m$$ satisfy $$x_i\in \mathbb{N}\ (i=\overline{1,n}),\quad 1\le x_i\le 26,\quad n\le m\le 26n,\quad m\in \mathbb{N}\,?$$
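This is a bounded stars-and-bars count, and the standard inclusion-exclusion formula (substitute $$y_i = x_i - 1 \ge 0$$, then subtract the assignments where some $$y_i > 25$$) can be checked numerically; a small sketch, with the bounds 1 and 26 kept as parameters:

```python
from math import comb

def count_solutions(n, m, lo=1, hi=26):
    """Number of integer solutions of x_1 + ... + x_n = m with lo <= x_i <= hi,
    by inclusion-exclusion over the variables that exceed hi."""
    span = hi - lo + 1          # number of allowed values per variable
    s = m - n * lo              # shift to y_i = x_i - lo >= 0 with sum y_i = s
    total = 0
    for t in range(n + 1):      # t = number of variables forced above hi
        rem = s - t * span
        if rem < 0:
            break
        total += (-1) ** t * comb(n, t) * comb(rem + n - 1, n - 1)
    return total
```

For example, count_solutions(2, 4) counts the three solutions (1,3), (2,2), (3,1).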

## Understanding the book “The Schrödinger Equation” by Berezin and Schubin

I’m trying to understand the book “The Schrödinger Equation” by Berezin & Schubin, but I’m not a mathematician, so I would like to know which books I must read beforehand to reach an understanding of at least chapter 2 of Berezin’s book. Please, if someone can help, I would be very thankful. Greetings from México.

## Finding solutions to an equation

Is there any method to find the non-negative solutions of the equation

$$x_1^2 + x_2^2 + \cdots + x_{10}^2 = \frac{3}{4}\left(x_1 + x_2 + \cdots + x_{10}\right)$$

where $$x_1, x_2, \ldots, x_{10}$$ are non-negative real numbers. I could only find the two solutions $$3/4$$ and $$0$$.
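In fact, completing the square (a standard rewriting, sketched here) suggests there are infinitely many non-negative real solutions, not just those with coordinates $$3/4$$ or $$0$$:

```latex
\sum_{i=1}^{10} x_i^2 - \frac{3}{4}\sum_{i=1}^{10} x_i
  \;=\; \sum_{i=1}^{10}\left(x_i - \frac{3}{8}\right)^2 - \frac{90}{64},
```

so the solution set is the sphere of radius $$\frac{3\sqrt{10}}{8}$$ centered at $$\left(\frac{3}{8},\dots,\frac{3}{8}\right)$$, intersected with the non-negative orthant. The points whose coordinates are each $$3/4$$ or $$0$$ all lie on this sphere, but so does a continuum of other points.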