Numerically solving 3D Maxwell equations with NDEigensystem

I am trying to compute the electric field $\vec{E}$ and magnetic field $\vec{B}$ in a cylindrical cavity containing a dielectric, as in the following figure.

[Figure: cavity (pink) with a dielectric inside (blue).]

Both the cavity (pink) and the dielectric (blue, with dielectric constant $\epsilon_r$) are cylindrical and share the same axis. The cavity walls are assumed to be perfectly conducting, so that the field at the surface must vanish.

By resorting to the vector potential $\vec{A}$, and using the generalized Coulomb gauge such that:

$$\vec{E}(t,\vec{r}) = -\partial_t \vec{A}(t,\vec{r}),$$

$$\vec{B}(t,\vec{r}) = \vec{\nabla} \times \vec{A}(t,\vec{r}),$$

$$\vec{\nabla} \cdot \left[ \epsilon_r(\vec{r})\, \vec{A}(t,\vec{r}) \right] = 0,$$

I got the system of differential equations that $\vec{A}$ has to satisfy:
\begin{equation}
\vec{\nabla}^2 \vec{A}(t,\vec{r}) - \frac{1 + \epsilon_r(\vec{r})}{c^2}\,\partial_t^2 \vec{A}(t,\vec{r}) = 0.
\end{equation}
Here, $\vec{r} = (x,y,z)$.

I would like to obtain the eigenfrequencies and (spatial) eigenfunctions of this operator. As far as I understand, NDEigensystem with DirichletCondition is what I have to use, with the first output being the frequencies of the modes and the second the spatial envelopes of the eigenfunctions.

I tried that, and failed (quite miserably). Not only are the eigenfrequencies imaginary, but the spatial envelopes of the eigenfunctions also have an imaginary part$^{*}$. Moreover, if I plug the output of NDEigensystem back into the differential equations, I find that the equations are not even satisfied. I am sure it is me failing somewhere, but after a long time spent trying I am getting really frustrated. My code is the following:

{vals, funs} = NDEigensystem[
   {EqSt[t, x, y, z, z1, z2, e, c] == {0, 0, 0}, BndCnd},
   {Ax[t, x, y, z], Ay[t, x, y, z], Az[t, x, y, z]},
   t, {x, y, z} \[Element] Cylinder[{{0, 0, 0}, {0, 0, d}}, r], 16,
   Method -> {"PDEDiscretization" -> {"FiniteElement",
       "MeshOptions" -> {"MaxCellMeasure" -> 0.01}}}];

where EqSt[t, x, y, z, z1, z2, e, c] is the system of differential equations

EqSt[t_, x_, y_, z_, z1_, z2_, er_, c_] :=
  Laplacian[{Ax[t, x, y, z], Ay[t, x, y, z], Az[t, x, y, z]}, {x, y, z}] -
   (1 + fer[z, z1, z2, er])/c^2 {D[Ax[t, x, y, z], {t, 2}],
     D[Ay[t, x, y, z], {t, 2}], D[Az[t, x, y, z], {t, 2}]};

and BndCnd the boundary conditions

BndCnd = {DirichletCondition[Ax[t, x, y, z] == 0, True],
   DirichletCondition[Ay[t, x, y, z] == 0, True],
   DirichletCondition[Az[t, x, y, z] == 0, True]};

Finally, fer[z, z1, z2, er], the function that I use to mimic the dielectric, is the following:

fer[z_, z1_, z2_, e_] := e (HeavisideTheta[z - z1] - HeavisideTheta[z - z2]); 

which is a step function in the z coordinate (the axis of both cylinders), having value e between z1 and z2 and zero elsewhere.

I have tried different methods and different values of "MaxCellMeasure" (there is little improvement, and the residual I obtain by plugging the solution back into the differential equations is above one!). Do you have any idea what is wrong here?


I have a few "bonus questions" here.

-First, I tried increasing the mesh resolution to get better results, but every time the eigenfunctions look quite bad, not smooth at all. I guess a 3D mesh is quite demanding, but I expect the most difficult region to be at the dielectric. How can I tell NDEigensystem to use a finer mesh there?
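For reference, a minimal sketch of how one might build a locally refined mesh with the low-level FEM tools and pass it to NDEigensystem in place of the raw Cylinder; the refinement window 0.1 around the interfaces z1, z2 and the two cell-measure thresholds are placeholder values:

    Needs["NDSolve`FEM`"];
    (* refine elements whose centroid lies near the dielectric interfaces z1, z2 *)
    mesh = ToElementMesh[Cylinder[{{0, 0, 0}, {0, 0, d}}, r],
       "MeshRefinementFunction" -> Function[{vertices, volume},
         Block[{z = Mean[vertices][[3]]},
          volume > If[Abs[z - z1] < 0.1 || Abs[z - z2] < 0.1, 0.001, 0.01]]]];
    (* then use {x, y, z} \[Element] mesh in the NDEigensystem call *)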

-Second, is there a way to change the normalization of the eigenfunctions? I have read that NDEigensystem returns eigenfunctions $\vec{\phi}_i$ such that $\int \vec{\phi}_i^* \cdot \vec{\phi}_j \, d\vec{r} = \delta_{ij}$. I would like to change that to $\int \epsilon_r(\vec{r})\, \vec{\phi}_i^* \cdot \vec{\phi}_j \, d\vec{r} = \delta_{ij}$; would this be possible?
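For what it's worth, rescaling each mode by its $\epsilon_r$-weighted norm is easy as a post-processing step. A sketch, assuming funs[[i]] comes back as a triple {fx, fy, fz} of interpolating functions and taking $1 + \mathrm{fer}$ as the $\epsilon_r$ profile appearing in the PDE; note this only fixes the diagonal normalization, not mutual orthogonality in the weighted inner product:

    weightedNorm[{fx_, fy_, fz_}] := Sqrt@NIntegrate[
       (1 + fer[z, z1, z2, er]) (Abs[fx[x, y, z]]^2 + Abs[fy[x, y, z]]^2 +
          Abs[fz[x, y, z]]^2),
       {x, y, z} \[Element] Cylinder[{{0, 0, 0}, {0, 0, d}}, r]];
    (* divide each component of funs[[i]] by weightedNorm[funs[[i]]] before use *)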

-Finally, I guess it might be easier for NDEigensystem if the time dependence were removed from the differential equations. This can easily be done by assuming $\vec{A}(t,\vec{r}) \rightarrow \vec{A}(\vec{r})\, e^{-i \omega t}$. In this case, one can rewrite the operator equation above as:
\begin{equation}
\vec{\nabla}^2 \vec{A}(\vec{r}) + \frac{\omega^{2}}{c^2}\left[1 + \epsilon_r(\vec{r})\right] \vec{A}(\vec{r}) = 0.
\end{equation}
However, I do not know how to tell NDEigensystem that $\omega$ is not a parameter, but should be found from the system constraints… Is there a way to remove the time dependence from the equation given to NDEigensystem?
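For concreteness, a minimal scalar sketch of the stationary formulation: dividing the operator by the coefficient turns $-\vec{\nabla}^2 u = (\omega^2/c^2)(1+\epsilon_r)\,u$ into a standard eigenproblem $\mathcal{L}u = \lambda u$ with $\lambda = \omega^2/c^2$. This is a one-component toy version, not the full vector problem:

    {lams, modes} = NDEigensystem[
       {-Laplacian[u[x, y, z], {x, y, z}]/(1 + fer[z, z1, z2, er]),
        DirichletCondition[u[x, y, z] == 0, True]},
       u[x, y, z],
       {x, y, z} \[Element] Cylinder[{{0, 0, 0}, {0, 0, d}}, r], 6];
    omegas = c Sqrt[lams];  (* eigenvalues are lambda = omega^2/c^2 *)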


$^{*}$ I guess this is a stupid problem: if I change the sign of the Laplacian I get real eigenfrequencies and eigenfunctions… But I am quite confident that the sign in the above equation is correct. Is there something I am missing here?

Solving trigonometric equations with two variables in fixed range?

I am trying to use the Solve function to solve two trigonometric equations in two variables a1 and a2 (each in the range $[0, 2\pi]$). I wrote the following code:

  Vets = {Cos[2 a1] (I + Cos[2 a2]) + Sin[2 a1] Sin[2 a2],
     (I - Cos[2 a2]) Sin[2 a1] + Cos[2 a1] Sin[2 a2]};
  Solve[Vets[[1]] == 1 && Vets[[2]] == 0 &&
    0 <= a1 <= 2 Pi && 0 <= a2 <= 2 Pi, {a1, a2}]

but it gives an error:

Solve::nsmet: This system cannot be solved with the methods available to Solve.

I looked up the documentation for Solve, and as far as I can tell there should be no problem with the above code. But it does not return anything useful and just gives this error; I don't understand why.

It would be super cool if anyone could answer this question; thank you very much in advance!
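One observation that may matter here: the expressions contain the imaginary unit I, so both equations are complex-valued while a1 and a2 are real. If the I is intentional, a possible route is to split each equation into real and imaginary parts (ComplexExpand assumes the variables are real) and hand the result to Reduce; a sketch:

    Vets = {Cos[2 a1] (I + Cos[2 a2]) + Sin[2 a1] Sin[2 a2],
       (I - Cos[2 a2]) Sin[2 a1] + Cos[2 a1] Sin[2 a2]};
    eqs = ComplexExpand[Re[Vets[[1]]]] == 1 && ComplexExpand[Im[Vets[[1]]]] == 0 &&
       ComplexExpand[Re[Vets[[2]]]] == 0 && ComplexExpand[Im[Vets[[2]]]] == 0;
    Reduce[eqs && 0 <= a1 <= 2 Pi && 0 <= a2 <= 2 Pi, {a1, a2}]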

Solving Kronecker-structured linear equations

I need to approximately solve the following underdetermined system of $ n$ linear equations

$$y_i = a_i^T X b_i$$

where $X$ is an unknown $d\times d$ matrix, $a_i$ and $b_i$ are given vectors, and $n=d$. Is there a way to do this much faster than vectorizing both sides of each equation and calling LinearSolve?

The LinearSolve approach destroys the structure: it reduces to solving a system with $d^3$ coefficients instead of the original one with $2d^2$ coefficients.

Below is an example of this approach; it's too slow for my application, where $d=1000$. On that scale, backward is 2000x slower than forward, with the majority of the time spent in LinearSolve. I was hoping for 100x slower, since that seems like the typical LinearSolve overhead for unstructured systems at that scale.

n = d = 50;
{a, b} = Table[RandomReal[{-1, 1}, {n, d}], {2}];
X = RandomReal[{-1, 1}, {d, d}];
forward[X_] := MapThread[#1.X.#2 &, {a, b}];
y = forward[X];
backward[Y_] := With[{mat = MapThread[Flatten@Outer[Times, #1, #2] &, {a, b}]},
   x = LinearSolve[mat, Y];
   ArrayReshape[x, {d, d}]];
Print["forward time is ", forward[X]; // Timing // First];
{timing, error} = Timing[Norm[forward[backward[y]] - y, Infinity]];
Print["backward time is ", timing];
Print["error is ", error]
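A structured alternative that avoids forming the $n \times d^2$ matrix at all: the rows of mat are the Kronecker products $a_i \otimes b_i$, so the Gram matrix $M M^T$ has entries $(a_i \cdot a_j)(b_i \cdot b_j)$, and the minimum-norm solution is $X = \sum_i c_i\, a_i b_i^T$ where $M M^T c = y$. A sketch under those assumptions, reusing a, b, y, forward from above:

    (* Gram matrix of the Kronecker rows, without ever forming them:
       (M.Transpose[M])[[i, j]] == (a[[i]].a[[j]]) (b[[i]].b[[j]]) *)
    gram = (a.Transpose[a]) (b.Transpose[b]);   (* elementwise product *)
    cvec = LinearSolve[gram, y];
    Xmin = Transpose[a].DiagonalMatrix[cvec].b; (* Sum_i cvec[[i]] a_i b_i^T *)
    Norm[forward[Xmin] - y, Infinity]           (* residual check *)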

Solving the heat equation using Laplace Transforms

I am trying to solve the 1-D heat equation using Laplace transform theory. The equation is as follows:

$$\frac{\partial u}{\partial t} = 2\,\frac{\partial^2 u}{\partial x^2} - x,$$

with boundary conditions $\frac{\partial u}{\partial x}(0,t) = 1$ and $\frac{\partial u}{\partial x}(2,t) = \beta$.

The problem asks the following: (a) For what value of $\beta$ does there exist a steady-state solution? (b) If the initial temperature is uniform, $u(x,0)=5$, and $\beta$ takes the value suggested by the answer to part (a), derive the equilibrium temperature distribution.

I was able to get an equation that looks like $U(x,s) = c\, e^{\sqrt{s/2}\,x} - \frac{1}{s}\left(\frac{x}{s} - u(x,0)\right)$. But I am not sure how to go from here to solve for $\beta$ using the boundary conditions. I need some assistance.
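For part (a), one route that may help, independent of the transform: integrate the PDE over $0 \le x \le 2$ and require $\partial_t u = 0$ at steady state, which forces a flux balance:

$$0 = \int_0^2 \left( 2\,\frac{\partial^2 u}{\partial x^2} - x \right) dx = 2\left[\frac{\partial u}{\partial x}\right]_0^2 - \int_0^2 x\,dx = 2(\beta - 1) - 2 \;\Longrightarrow\; \beta = 2.$$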

Solving a system of differential equations whose one of the coefficients is imported data

Suppose we have a coupled system of differential equations:
\begin{equation}
\frac{db}{dt} = (-\gamma_b - i\omega_b)\,b - i\frac{g}{2}\,p; \quad
\frac{dp}{dt} = i\frac{g}{2}\,\Delta N(t)\, b - (\gamma_a + \gamma_m + 2iJ)\,p.
\end{equation}
If $\Delta N$ were fixed, the solution of the system would look like
\begin{equation}
\begin{pmatrix} b(t) \\ p(t) \end{pmatrix} =
\begin{pmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{pmatrix}
\begin{pmatrix} b(0) \\ p(0) \end{pmatrix}.
\end{equation}
Using the following code, I have found a $2\times 2$ matrix (called sol) whose entries are the $a_{ij}$ in the above equation ($\gamma_a$, $\gamma_b$, $\gamma_m$ correspond to ra, rb, rm below):

rb = 630; wb = 75*10^6; g = 0.63; ra = 2.6*10^6; rm = 3.6*10^6; J = 6.3*10^7;
DeltaN = 0.164*10^5;
m = {{-rb - I wb, -I g/2}, {I g DeltaN/2, -(ra + rm + 2 I J)}};
eigvec = Eigenvectors[m] // Transpose // Simplify;
eigval = Eigenvalues[m] // Simplify;
inv = Inverse[eigvec] // Simplify;
v1 = eigval[[1]]; v2 = eigval[[2]];
sol = eigvec.{{E^(v1 t), 0}, {0, E^(v2 t)}}.inv;

If we suppose that $p(0)=0$, then one can easily plot $|b(t)/b(0)|^2$: simply plot $|a_{11}(t)|^2$. But the problem is that $\Delta N$ is not fixed. It is an $N\times 1$ matrix that I obtained from another code written in Fortran, stored in a file data.txt. The elements of this file are computed assuming the time interval between consecutive entries is $0.001$; that is, at $t=0.001$ we have $\Delta N_1$, at $t=0.002$ we have $\Delta N_2$, etc. The time values themselves are not included in the txt file.

One way that comes to my mind is this: assuming we know the analytical form of sol for a fixed $\Delta N$, we set the time equal to, e.g., $0.001$, substitute the first row of the txt file (I call it $\Delta N_1$) into sol, and find $a_{11}$. Then we raise the time to $0.002$, substitute $\Delta N_2$ into sol, find $a_{11}$ again, and repeat the procedure up to the last row of the txt file.

Now the question is this: how can I import the txt file into the code and carry out the procedure I explained above, to get data like $\{\{0.001, a_{11}(0.001)\}, \{0.002, a_{11}(0.002)\}, \ldots\}$, where the first element of each pair is the time and the second is the $a_{11}$ corresponding to that particular time?
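For concreteness, a minimal sketch of that procedure, assuming data.txt is the plain column of numbers shown below and reusing the constants from the code above; MatrixExp replaces the explicit eigendecomposition and should give the same $a_{11}$:

    rb = 630; wb = 75*10^6; g = 0.63; ra = 2.6*10^6; rm = 3.6*10^6; J = 6.3*10^7;
    m[dn_] := {{-rb - I wb, -I g/2}, {I g dn/2, -(ra + rm + 2 I J)}};
    data = Import["data.txt", "List"];  (* one Delta N value per row *)
    a11Table = Table[
       {0.001 k, MatrixExp[m[data[[k]]] (0.001 k)][[1, 1]]},
       {k, Length[data]}];
    (* e.g., |b(t)/b(0)|^2 for p(0) = 0: *)
    ListLinePlot[{#[[1]], Abs[#[[2]]]^2} & /@ a11Table]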

I had asked a similar question before, but in that problem I did not have an external file in txt format.

I could not upload my txt file, so here are its first 10 elements:

0.164E+05
0.655E+05
0.146E+06
0.258E+06
0.400E+06
0.572E+06
0.776E+06
0.101E+07
0.129E+07
0.159E+07

In what cases is solving a Binary Linear Program easy (i.e., in **P**)? I'm looking at scheduling problems in particular


The reason I'm asking is to understand whether I can reformulate a scheduling problem I'm currently working on in such a way as to guarantee finding the global optimum within reasonable time, so any advice in that direction is most welcome.

I was under the impression that, when solving the relaxation of a scheduling problem where a variable value of 1 means a particular (timeslot x person) pair is part of the schedule, a result containing non-integers means that multiple valid schedules exist, and the result is a linear combination of such schedules. To obtain a valid integer solution, one simply needs to re-run the algorithm from the current solution, with an additional constraint fixing one of the fractional variables to either 0 or 1.

Am I mistaken in this understanding? Is there a particular subset of (scheduling) problems for which this would be a valid strategy? Any paper or textbook-chapter suggestions are most welcome as well.

Solving recurrence relation $T(n) \leq \sqrt{n}T(\sqrt{n}) + n$

Given the conditions $T(O(1)) = O(1)$ and $T(n) \leq \sqrt{n}\,T(\sqrt{n}) + n$, I need to solve this recurrence relation. The hardest part for me is that the number of subproblems, $\sqrt{n}$, is not a constant, so it is really difficult to apply the tree method or the master theorem here. Any hint? My thought was to let $c = \sqrt{n}$, so that $c^2 = n$ and $T(c^2) \leq c\,T(c) + c^2$, but it does not look good.
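A nudge in the direction already started, but dividing through by $n$ instead of substituting $c = \sqrt{n}$; the normalized function absorbs the $\sqrt{n}$ branching factor:

$$S(n) := \frac{T(n)}{n} \;\Longrightarrow\; S(n) \le S(\sqrt{n}) + 1,$$

and since repeated square-rooting reaches a constant after $\Theta(\log \log n)$ steps, this gives $S(n) = O(\log \log n)$, i.e. $T(n) = O(n \log \log n)$.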

Solving shortest path problem with Dijkstra’s algorithm for n negative-weight edges and no negative-weight cycle

Although many texts state that Dijkstra's algorithm does not work with negative-weight edges, a modification of Dijkstra's algorithm can. Here is an algorithm that handles a single negative-weight edge (assuming no negative-weight cycles):

- Let $d_s(v)$ be the shortest distance from source vertex $s$ to vertex $v$, and suppose the negative edge $e$ is $(u, v)$.
- First, remove the negative edge $e$ and run Dijkstra from the source vertex $s$.
- Then, check whether $d_s(u) + w(u, v) \leq d_s(v)$. If not, we are done. If yes, run Dijkstra from $v$, with the negative edge still removed.
- Then, $\forall t \in V$: $d(t) = \min\big(d_s(t),\; d_s(u) + w(u, v) + d_v(t)\big)$.

Given the above algorithm, I want to modify it again to handle $n$ negative-weight edges and no negative-weight cycles. Any hint?

Solving Laplace PDE with DSolve

I'm trying to get an analytical solution of the Laplace PDE with Dirichlet boundary conditions (in polar coordinates). I managed to solve it numerically with NDSolveValue, and I know there is an analytical solution and what it is, but I would like DSolve to return it. Instead, DSolve returns the input unevaluated.

sol = DSolve[{
    Laplacian[u[\[Rho], \[CurlyPhi]], {\[Rho], \[CurlyPhi]}, "Polar"] == 0,
    DirichletCondition[u[\[Rho], \[CurlyPhi]] == 0, 1 <= \[Rho] <= 2 && \[CurlyPhi] == 0],
    DirichletCondition[u[\[Rho], \[CurlyPhi]] == 0, 1 <= \[Rho] <= 2 && \[CurlyPhi] == \[Pi]],
    DirichletCondition[u[\[Rho], \[CurlyPhi]] == Sin[\[CurlyPhi]], \[Rho] == 1 && 0 <= \[CurlyPhi] <= \[Pi]],
    DirichletCondition[u[\[Rho], \[CurlyPhi]] == 0, \[Rho] == 2 && 0 <= \[CurlyPhi] <= \[Pi]]},
   u, {\[Rho], 1, 2}, {\[CurlyPhi], 0, \[Pi]}];
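For reference, assuming the known analytical solution is the usual separation-of-variables result for these boundary conditions, it can at least be verified symbolically, even though DSolve will not produce it:

    uExact[\[Rho]_, \[CurlyPhi]_] := (4/(3 \[Rho]) - \[Rho]/3) Sin[\[CurlyPhi]];
    Simplify[Laplacian[uExact[\[Rho], \[CurlyPhi]], {\[Rho], \[CurlyPhi]}, "Polar"]]
    (* 0, and uExact[1, phi] == Sin[phi], uExact[2, phi] == 0,
       uExact[rho, 0] == uExact[rho, Pi] == 0, matching the DirichletConditions *)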