Nonlinear second-order PDE

I'm trying to solve the following problem with zero luck. Any suggestions?

d = 5*10^27; t0 = 0; t1 = 10^7; t2 = 2*10^7; Q = 1;

pde = -d D[n[r, t], {r, 2}] - (2 d)/r D[n[r, t], r] + D[n[r, t], t] - Q DiracDelta[t - t0] - Q DiracDelta[t - t1] == 0

ic = {n[r, 0] == 1/r, n[0, t] == t^-1.5}

sol[r_, t_] = DSolveValue[{pde, ic}, n[r, t], {r, t}]
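Since each source term Q DiracDelta[t - ti] simply adds an instantaneous jump of size Q to n at t = ti (integrate the equation across the pulse), one workaround when DSolve chokes on the deltas is to drop them and integrate the homogeneous equation piecewise, restarting with the jump added. A minimal numerical sketch of that idea; the cutoffs rmin, rmax, tstart and the far-field condition n[rmax, t] == 1/rmax are illustrative guesses (not from the question), needed to keep the 1/r and t^(-3/2) singularities off the grid:

op = D[n[r, t], t] - d D[n[r, t], {r, 2}] - (2 d)/r D[n[r, t], r];
rmin = 1; rmax = 10; tstart = 1;
(* segment 1: the t = t0 pulse is folded into the initial data as a jump of size Q *)
seg1 = NDSolveValue[{op == 0, n[r, tstart] == 1/r + Q,
    n[rmin, t] == t^-1.5, n[rmax, t] == 1/rmax},
   n, {r, rmin, rmax}, {t, tstart, t1}];
(* segment 2: restart at t1 with the second jump added *)
seg2 = NDSolveValue[{op == 0, n[r, t1] == seg1[r, t1] + Q,
    n[rmin, t] == t^-1.5, n[rmax, t] == 1/rmax},
   n, {r, rmin, rmax}, {t, t1, t2}];

At your parameter scales (d = 5*10^27) NDSolve will likely need rescaled variables or method options to cope with the stiffness; the point here is only the piecewise-restart structure.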

Solving Kronecker-structured linear equations

I need to approximately solve the following underdetermined system of $n$ linear equations

$$y_i = a_i^T X b_i$$

where $X$ is an unknown $d \times d$ matrix, $a_i$ and $b_i$ are given vectors, and $n = d$. Is there a way to do this much faster than vectorizing both sides of each equation and calling LinearSolve?

The LinearSolve approach destroys the structure: it reduces to solving a system with $d^3$ coefficients instead of the original one with $2d^2$ coefficients.

Below is an example of this approach; it's too slow for my application, where $d = 1000$. At that scale, backward is 2000x slower than forward, with the majority of the time spent in LinearSolve. I was hoping for 100x, since that seems to be the typical LinearSolve overhead for unstructured systems at that scale.

n = d = 50;
{a, b} = Table[RandomReal[{-1, 1}, {n, d}], {2}];
X = RandomReal[{-1, 1}, {d, d}];
forward[X_] := MapThread[#1.X.#2 &, {a, b}];  (* y_i = a_i.X.b_i *)
y = forward[X];
backward[Y_] := With[{mat = MapThread[Flatten@Outer[Times, #1, #2] &, {a, b}]},
   ArrayReshape[LinearSolve[mat, Y], {d, d}]];  (* rows of mat are the Kronecker products *)
Print["forward time is ", First@Timing[forward[X];]];
{timing, error} = Timing[Norm[forward[backward[y]] - y, Infinity]];
Print["backward time is ", timing];
Print["error is ", error]
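One way to exploit the structure rather than destroy it is to go matrix-free: the adjoint of forward is $Y \mapsto \sum_i Y_i\, a_i b_i^T$, so an iterative least-squares method such as CGNR (conjugate gradient on the normal equations) only ever touches a and b and never materializes the $n \times d^2$ matrix. A sketch, not a drop-in replacement; the iteration count 100 is a placeholder, and convergence speed depends on the conditioning of your a and b:

adjoint[Y_] := Transpose[a].(Y*b);  (* Sum_i Y_i a_i b_i^T, a d x d matrix *)

(* CGNR on A'A x = A'y, keeping the unknown in d x d matrix form throughout *)
cgnr[y_, iters_] := Module[{X = ConstantArray[0., {d, d}], r = y, z, p, w, zz, zzNew, alpha},
  z = adjoint[r]; p = z; zz = Total[z^2, 2];
  Do[
   w = forward[p];            (* length-n vector *)
   alpha = zz/Total[w^2];
   X += alpha p; r -= alpha w;
   z = adjoint[r]; zzNew = Total[z^2, 2];
   p = z + (zzNew/zz) p; zz = zzNew,
   {iters}];
  X];

Xhat = cgnr[y, 100];
Norm[forward[Xhat] - y, Infinity]

Each iteration costs one forward and one adjoint pass, i.e. $O(nd^2)$ flops and $O(d^2)$ memory, versus the $n d^2$-entry matrix the vectorized approach has to factor. Started from the zero matrix, CGNR converges to the minimum-norm solution of the consistent underdetermined system.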

Online Mathematica, pros and cons, linear algebra problem

I apologize in advance if this question is irrelevant to this website.

I would like to use Mathematica to solve a system of linear equations with many unknowns (729 of them); the unknowns are tensor components of curvature tensors arising from a differential geometry problem.

I would like to buy Mathematica for this purpose, and I have to decide between buying it online or installing the desktop version on a PC. I'm thinking of buying the online version. I have the following questions:

  1. What are the advantages and disadvantages of the desktop version over the online version? For example, are there mathematical or programming functionalities that are available only in the desktop version and not in the online version?

  2. I assume that if I buy the online version, I will get a username and a password to access an online version of Mathematica from any computer (just as one can type LaTeX from an online account using any PC). Is my assumption correct?

  3. Does Mathematica provide a user-friendly way of solving simultaneous linear equations with many unknowns? Let me elaborate with an example: say I want to solve the simultaneous equations $x = 2y + a$, $y - 3x = 7x + 2$ for $x, y$. I would like software where I can just type the equations as written and ask it to solve for $x, y$, giving me the solution symbolically in terms of the parameter $a$, instead of my having to rearrange terms into $x - 2y = a$, $y - 10x = 2$, write them in matrix form, and then ask for a matrix inversion. The difference might seem silly in this example, but it will not be silly in my original problem with more than 700 unknowns. If this feature exists in Mathematica, it will save me a lot of time (a minimal example is sketched below).
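For reference, point 3 is exactly what the built-in Solve does; here is a minimal check with the toy system above (output shown as a comment):

Solve[{x == 2 y + a, y - 3 x == 7 x + 2}, {x, y}]
(* {{x -> 1/19 (-4 - a), y -> 1/19 (-2 - 10 a)}} *)

No rearrangement into matrix form is needed: Solve accepts the equations as typed and returns the solution symbolically in terms of the parameter $a$, and symbolic linear systems with hundreds of unknowns are routinely within its reach.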

Thank you,

How to efficiently find all positive linear dependencies between some vectors

I’ve got these vectors:

vecs= {{0,1,0,0,0,0,0,-1,0},    {1,-1,1,0,0,0,-1,1,-1},  {1,0,-1,1,0,-1,1,0,-1},  {1,0,-1,1,0,0,-1,0,1},   {1,0,0,-1,0,1,0,0,-1},   {1,0,0,-1,1,-1,1,-1,1},  {1,0,0,0,-1,0,0,1,0},    {-1,0,1,0,0,-1,1,0,-1},  {-1,0,1,0,0,0,-1,0,1},  {-1,1,-1,1,-1,1,0,0,-1}, {-1,1,-1,1,0,-1,1,-1,1}, {-1,1,0,-1,0,1,0,-1,1},   {-1,1,0,-1,1,-1,0,1,0},  {0,-1,0,0,1,0,0,0,-1},   {0,-1,0,1,-1,1,0,-1,1},  {0,-1,0,1,0,-1,0,1,0},   {0,-1,1,-1,0,1,-1,1,0},  {0,0,-1,0,0,0,1,0,0}} 

And I would like to find all linear dependencies with positive coefficients between them. I started with

ns = NullSpace[Transpose[vecs]]  

which gave me

{{2,2,-1,0,-1,0,0,0,0,0,0,0,0,0,0,0,0,3},  {2,-1,2,0,-1,0,0,0,0,0,0,0,0,0,0,0,3,0},   {2,-1,-1,0,2,0,0,0,0,0,0,0,0,0,0,3,0,0},  {1,1,1,0,1,0,0,0,0,0,0,0,3,0,3,0,0,0},   {2,-1,-1,0,-1,0,3,0,0,0,0,0,0,3,0,0,0,0}, {-1,2,2,0,-1,0,0,0,0,0,0,3,0,0,0,0,0,0},   {-1,2,-1,0,2,0,0,0,0,0,3,0,0,0,0,0,0,0},  {-1,2,-1,0,-1,3,0,0,0,3,0,0,0,0,0,0,0,0},   {-1,-1,2,0,2,0,0,0,3,0,0,0,0,0,0,0,0,0},  {-1,-1,-1,3,2,0,0,3,0,0,0,0,0,0,0,0,0,0}} 

so there is one linear dependence with nonnegative coefficients (the fourth one). To check whether there are others, I made a system of inequalities with

ineqs = Simplify[Union[Map[# >= 0 &, Table[x[k], {k, Length[ns]}].ns]]] 

which returns

{x[1]>=0,x[2]>=0,x[3]>=0,x[4]>=0,x[5]>=0,x[6]>=0,x[7]>=0,x[8]>=0,x[9]>=0,x[10]>=0,  2 x[1]+2 x[2]+2 x[3]+x[4]+2 x[5] >= x[6]+x[7]+x[8]+x[9]+x[10],  2 x[1]+x[4]+2 (x[6]+x[7]+x[8])   >= x[2]+x[3]+x[5]+x[9]+x[10],  2 x[2]+x[4]+2 (x[6]+x[9])        >= x[1]+x[3]+x[5]+x[7]+x[8]+x[10],  2 x[3]+x[4]+2 (x[7]+x[9]+x[10])  >= x[1]+x[2]+x[5]+x[6]+x[8]} 

but my notebook runs out of memory on both Solve[ineqs] and Reduce[ineqs].

What is the proper way?
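A lighter-weight alternative to Reduce here is linear programming: a nonnegative dependence is a nonzero $c \ge 0$ with $\mathrm{vecs}^T c = 0$, and since such $c$ form a cone, one can normalize with $\sum_i c_i = 1$ and ask for feasibility. A sketch with LinearProgramming (the objective is identically zero, so any returned point is a certificate):

m = Transpose[vecs];  (* 9 x 18: columns are the vectors *)
k = Length[vecs];
(* minimize 0.c subject to m.c == 0, Total[c] == 1, and the default bound c >= 0 *)
c = LinearProgramming[ConstantArray[0, k],
  Join[m, {ConstantArray[1, k]}],
  Append[ConstantArray[{0, 0}, Length[m]], {1, 0}]]

This returns a single vertex of the polytope of normalized nonnegative dependencies (here it should recover a multiple of your fourth null-space vector), and it fails loudly if no such dependence exists. Enumerating all vertices of that polytope would give the complete set of extremal positive dependencies; that is a vertex-enumeration problem rather than something Reduce handles well at this size.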

How to solve this 2nd-order linear ODE analytically?

I want to analytically solve the eigenvalue problem $$y''(x) - 2\gamma\, y'(x) + \left[\lambda^2 + \gamma^2 - \left(\frac{x^2}{2}+\alpha\right)^2 + x\right] y(x) = 0$$ where $\lambda$ is the eigenvalue and $\alpha,\gamma$ are parameters. The boundary condition is $y(\pm\infty)=0$.

Or, instead of the eigenvalue problem, it would also be nice to just solve it with $\lambda$ as a free parameter. Then I could probably tackle the eigenproblem by imposing the boundary condition.

The following code doesn't work well. Is there another possible approach?

F = (D[#, {x, 2}] - 2 \[Gamma] D[#, x] +
     (\[Lambda]^2 + \[Gamma]^2 - (x^2/2 + \[Alpha])^2 + x) #) &;
DEigensystem[{F[y[x]] /. \[Lambda] -> 0,
  DirichletCondition[y[x] == 0, True]},
 y[x], {x, -\[Infinity], \[Infinity]}, 5]
DSolve[F[y[x]] == 0, y[x], x]
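A transformation worth trying first: the substitution $y(x) = e^{\gamma x} u(x)$ eliminates both the first-derivative term and $\gamma$ itself, so the $\lambda$-spectrum does not depend on $\gamma$. With F carrying the minus sign as in the equation above, this is a one-line check (output shown as a comment):

Simplify[F[E^(\[Gamma] x) u[x]]/E^(\[Gamma] x)]
(* u''[x] + (\[Lambda]^2 + x - (x^2/2 + \[Alpha])^2) u[x] *)

So it suffices to solve $u''(x) + [\lambda^2 + x - (x^2/2+\alpha)^2]\,u(x) = 0$ with $u(\pm\infty) = 0$. Incidentally, $(x^2/2+\alpha)^2 - x = W^2 - W'$ with $W = x^2/2 + \alpha$, the factorized form familiar from supersymmetric quantum mechanics, which may offer an analytic handle; failing that, a quartic-well problem of this kind generally has no closed-form solution, and a numerical NDEigensystem on a truncated interval is the realistic route.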

LogPlot and linear plot in the same plot

I have the following code:

q := 1.6*10^-19;   (* Elementary charge in C *)
me := 9.1*10^-31;  (* Free electron rest mass in kg *)
h := 6.63*10^-34;  (* Planck's constant in J.s *)
kb := 1.38*10^-23; (* Boltzmann constant in J/K *)
(* Jschottky[V, T] is assumed to be defined elsewhere *)
LogPlot[Abs[Jschottky[V, 77]], {V, -0.5, 0.5}, PlotRange -> All,
 Frame -> True,
 FrameLabel -> {"Voltage (V)", "\!\(\*FractionBox[\(J\), SuperscriptBox[\(T\), \(3/2\)]]\)"},
 BaseStyle -> {FontSize -> 15}, PlotStyle -> {Thick, Red},
 AspectRatio -> GoldenRatio, ImageSize -> 400, FrameStyle -> Black,
 FrameTicks -> {{{#, Superscript[10, Log10@#]} & /@ {10^-21, 10^-11, 10^-1, 10^9, 10^19}, None}, {Automatic, None}}]
Plot[Abs[Jschottky[V, 77]], {V, -0.5, 0.5}, PlotRange -> All,
 Frame -> True,
 FrameLabel -> {"Voltage (V)", "\!\(\*FractionBox[\(J\), SuperscriptBox[\(T\), \(3/2\)]]\)"},
 BaseStyle -> {FontSize -> 15}, PlotStyle -> {Thick, Blue},
 AspectRatio -> GoldenRatio, ImageSize -> 400, FrameStyle -> Black]

I get the following results:

[images: the LogPlot and the linear Plot of Abs[Jschottky[V, 77]]]

Now I want to combine them in a single plot, with the log scale on the left y-axis and the linear scale on the right y-axis. What should I do? Also, any recommendations for a good grayscale version of the same plot?
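Mathematica has no built-in dual-y-axis plot type, so the usual idiom is to overlay two framed plots with identical ImagePadding, each showing ticks on only one side. A minimal sketch with a stand-in function, since Jschottky is not defined in the post:

f[v_] := Exp[10 v] - 1;  (* stand-in for Jschottky[V, 77] *)
pad = 60;                (* identical padding so the two frames align *)
logP = LogPlot[Abs@f[v], {v, -0.5, 0.5}, Frame -> True,
   FrameTicks -> {{Automatic, None}, {Automatic, None}},
   PlotStyle -> {Thick, Red}, ImagePadding -> pad, ImageSize -> 400];
linP = Plot[Abs@f[v], {v, -0.5, 0.5}, Frame -> True,
   FrameTicks -> {{None, Automatic}, {None, None}},
   PlotStyle -> {Thick, Blue}, ImagePadding -> pad, ImageSize -> 400];
Overlay[{logP, linP}]

For a grayscale version, replacing the two PlotStyle settings with, say, Black and {GrayLevel[0.5], Dashed} keeps the curves distinguishable in print.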

In what cases is solving a Binary Linear Program easy (i.e. in **P**)? I'm looking at scheduling problems in particular


The reason I'm asking is to understand whether I can reformulate a scheduling problem I'm currently working on so as to guarantee finding the global optimum within reasonable time; any advice in that direction is most welcome.

I was under the impression that when solving the LP relaxation of a scheduling problem, where a variable value of 1 represents that a particular (timeslot x person) pair is part of the schedule, a result containing non-integers means that multiple valid schedules exist and the result is a convex combination of such schedules. To obtain a valid integer solution, one would simply re-run the algorithm from the current solution, with an additional constraint fixing one of the fractional variables to either 0 or 1.

Am I mistaken in this understanding? Is there a particular subset of (scheduling) problems where this would be a valid strategy? Any paper or textbook-chapter suggestions are also most welcome.
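As a concrete check on that intuition, here is the standard fractional-vertex example (the vertex-cover LP on a triangle):

$$\min\; x_1+x_2+x_3 \quad\text{s.t.}\quad x_1+x_2\ge 1,\quad x_2+x_3\ge 1,\quad x_1+x_3\ge 1,\quad x\in[0,1]^3.$$

The LP optimum is the vertex $x=(\tfrac12,\tfrac12,\tfrac12)$ with value $\tfrac32$, while every feasible $0$-$1$ point has at most one zero coordinate and hence value at least $2$; so this fractional vertex is not a convex combination of integral solutions, and fixing any one variable to 0 or 1 and re-solving jumps the objective from $\tfrac32$ to $2$. The fix-and-resolve strategy is guaranteed to work only when the relaxation has integral vertices, e.g. when the constraint matrix is totally unimodular, as in assignment-type and network-flow formulations.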

How to prove that the dual of the max-flow linear program is indeed a min-cut linear program?

The Wikipedia page gives the following linear programs for max-flow and its dual:

[image: the max-flow LP and its dual, as given on Wikipedia]
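Since the image did not survive, here is the standard formulation from that page, reproduced from memory (treat it as a paraphrase rather than a quote). The max-flow LP over $G=(V,E)$ with capacities $c_{uv}$, source $s$ and sink $t$ is

$$\max \sum_{v:(s,v)\in E} f_{sv} \quad\text{s.t.}\quad f_{uv}\le c_{uv}\ \ \forall (u,v)\in E,\qquad \sum_{u:(u,v)\in E} f_{uv}=\sum_{w:(v,w)\in E} f_{vw}\ \ \forall v\in V\setminus\{s,t\},\qquad f\ge 0,$$

and its dual, with one variable $d_{uv}$ per edge (from the capacity constraints) and one variable $z_u$ per vertex (from the conservation constraints), is

$$\min \sum_{(u,v)\in E} c_{uv}\,d_{uv} \quad\text{s.t.}\quad d_{uv}-z_u+z_v\ge 0\ \ \forall (u,v)\in E,\qquad z_s=1,\quad z_t=0,\qquad d\ge 0.$$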

While it is quite straightforward to see that the max-flow linear program indeed computes a maximum flow (every feasible solution is a flow, and every flow is a feasible solution), I couldn't find a convincing proof that the dual of the max-flow linear program is indeed the LP of the min-cut problem.

An 'intuitive' proof is given on Wikipedia, namely: $d_{uv}$ is $1$ if the edge $(u,v)$ is counted in the cut and $0$ otherwise, and $z_u$ is $1$ if $u$ is on the same side of the cut as $s$, and $0$ if $u$ is on the same side as $t$.

But that doesn't convince me much. Mainly, why should all the variables be integers when there are no integrality constraints?

And in general, do you have a convincing proof that the dual of the max-flow LP is indeed the LP formulation of min-cut?

Problem in the CLRS Linear Programming chapter

I'm currently reading the CLRS linear programming chapter, and there is something I don't understand.

The goal is to prove that, given a basic set of variables, the associated slack form is unique.

They first prove a lemma:

[image: statement of the lemma]

And then they prove the result:

[image: statement and proof of the uniqueness result]

My concern is that, to prove the uniqueness result, they apply the lemma. However, equations (29.79)-(29.82) hold only for feasible solutions, not for arbitrary $x$, so why can they apply the lemma?