Full-line trajectory plot for the solution of second-order nonlinear coupled differential equations

I want to plot a phase plane containing the trajectories of the solutions found with NDSolve, using initial conditions for x[0], y[0], x'[0] and y'[0]. The equations are: x''[t] - 2 y'[t] == -x[t] + y[t]^2; y''[t] + 2 x'[t] == x[t] + y[t] + x[t]*y[t]

The equilibrium point of the system is (0,0). I have plotted the stream plot for the system but am unable to plot a phase portrait that would give me the full-line trajectories of the system for different initial conditions. I am also looking for any periodic solution, if one is present. The stream plot I got is given below, and I would take initial conditions from it.

[stream plot of the system]

I get this by using ParametricPlot on the NDSolve solution:

[ParametricPlot of the NDSolve solution]

Kindly help in this capacity. Thanks in advance.
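A minimal sketch of the usual approach (the initial conditions and time span below are illustrative, not taken from the question): solve once per initial condition with NDSolveValue and overlay the ParametricPlot trajectories. A trajectory that closes on itself suggests a periodic orbit.

```mathematica
(* sketch: one trajectory per initial condition {x0, y0, x'0, y'0} *)
trajectory[{x0_, y0_, vx0_, vy0_}, tmax_] :=
  NDSolveValue[{x''[t] - 2 y'[t] == -x[t] + y[t]^2,
     y''[t] + 2 x'[t] == x[t] + y[t] + x[t] y[t],
     x[0] == x0, y[0] == y0, x'[0] == vx0, y'[0] == vy0},
    {x, y}, {t, 0, tmax}];

(* a few illustrative initial conditions near the equilibrium (0,0) *)
ics = {{0.1, 0, 0, 0}, {0.2, 0.1, 0, 0}, {-0.1, 0.1, 0.05, 0}};

Show[Table[
  ParametricPlot[Evaluate[Through[trajectory[ic, 50][t]]], {t, 0, 50},
   PlotRange -> All], {ic, ics}]]
```

The same Show can be combined with the existing StreamPlot to overlay full-line trajectories on the vector field.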

Nonlinear Differential Equation with Parameters in Sqrt Function

For a university project, I am trying to see if my system will have choked flow and also plot the resulting pressure spike. I set up the system below to try to model the transient response.

I am able to solve the equations TEq and mEq simultaneously if I use the mDotOutChoked equation in place of the mDotOut equation. But I want to know if it ever even reaches the choked condition, and what the steady-state pressure and temperature will be so I want to start with just using the mDotOut equation.

When I run this as it is below, I get "This computation has exceeded the time limit for your plan" followed by "{sol1} is neither a list of replacement rules nor a valid dispatch table, and so…" for all the graphs.

If I replace the pressure term with just a constant, I can get some limited success depending on the value I use. This makes me think there might be an issue with having the T[t] and P[t] terms in the square root in the mDotOut equation.

Is there some issue in {sol1} or is that issue just a result of me needing to upgrade for more compute power?

ClearAll["Global`*"]
Q = 100; (*Heat into the shroud in Watts. Based on roughly 1350 W/m^2 from the solar simulator on one face of the shroud*)
QAmb = 0; (*Heat loss to ambient. Zero for now*)
A = 2*1.935*10^-5; (*Area of orifice in m^2. Based on 1/4 inch pipe with 0.028 inch wall thickness (two outlets)*)
h = 199000; (*Heat of vaporization of LN2, J/kg*)
Cp = 1039; (*Specific heat of nitrogen, J/(kg K)*)
R = 296.8; (*Gas constant for nitrogen, J/(kg K)*)
γ = 1.40; (*Specific heat ratio*)
V = 0.001; (*Enclosed volume, m^3*)
Pe = 101000; (*External pressure in Pa*)
ρo = 4; (*Approx density of nitrogen at 80 K in kg/m^3. This was the lowest-temperature data I could find*)
tf = 300; (*Final time in seconds*)

P = m[t]*R*T[t]/V; (*Pressure term*)
mDotEvap = Q/h; (*Rate of evaporation*)
mDotOut = (P*A/Sqrt[T[t]])*Sqrt[(2 γ/(R (γ - 1)))*((Pe/P)^(2/γ) - (Pe/P)^((γ + 1)/γ))]; (*Mass flow out of the orifice*)
mDotOutChoked = (P*A/Sqrt[T[t]])*Sqrt[γ/R]*(2/(γ + 1))^((γ + 1)/(2 (γ - 1))); (*Mass flow out of the orifice if choked*)

TEq = T'[t] == 1/(m[t]*Cp) (mDotEvap*h - mDotOut*Cp*T[t] - QAmb); (*Diff eq for temperature in the cavity*)
mEq = m'[t] == mDotEvap - mDotOut; (*Conservation of mass*)

icT = T[0] == 77; (*Initial temperature in the cavity in K*)
icm = m[0] == ρo*V; (*Initial mass of the vaporized gas. Assuming it just starts at 77 K at 1 atm and then adding heat*)

sol1 = NDSolve[{TEq, mEq, icT, icm}, {T[t], m[t]}, {t, 0, tf}];
P2[t_] = m[t]*R*T[t]/V /. sol1; (*Plugging back to get shroud pressure as a function of time*)

Plot[{T[t] /. sol1}, {t, 0, tf}, PlotRange -> Automatic, ImageSize -> Large, PlotLabels -> Automatic, AxesLabel -> {"Time (s)", "Temperature (K)"}]
Plot[{m[t] /. sol1}, {t, 0, tf}, PlotRange -> Automatic, ImageSize -> Large, PlotLabels -> Automatic, AxesLabel -> {"Time (s)", "Mass (kg)"}]
Plot[P2[t], {t, 0, tf}, PlotRange -> Automatic, ImageSize -> Large, PlotLabels -> Automatic, AxesLabel -> {"Time (s)", "Pressure (Pa)"}]
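One way to answer "does it ever reach the choked condition" directly is event detection. A sketch, assuming the definitions above (TEq, mEq, icT, icm, P, Pe, γ, tf) are in scope: flow chokes once the pressure ratio Pe/P drops below the critical ratio (2/(γ+1))^(γ/(γ-1)), and a WhenEvent can report the first time that happens.

```mathematica
(* sketch: detect the choking condition during integration *)
critRatio = (2/(γ + 1))^(γ/(γ - 1)); (*critical pressure ratio for choked flow*)
sol1 = NDSolve[{TEq, mEq, icT, icm,
    WhenEvent[Pe/P < critRatio, Print["choked at t = ", t]]},
   {T[t], m[t]}, {t, 0, tf}];
```

As long as Pe/P stays above the critical ratio the unchoked mDotOut expression is valid; if the event fires, the integration from that point on should switch to mDotOutChoked (e.g. via a second WhenEvent that changes a discrete flag).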

non-linear model fit function not working

I have attempted to run a non-linear model fit on my data, dadtAbs, and got the following puzzle:

nlm = NonlinearModelFit[
   Transpose[{Table[t, {t, 1, tmax}], dadtAbs}],
   (1 - aaa)*bbb*NSPI[bbb, ddd, population, t]*
     (1 - ddd)*(population - NSPI[bbb, ddd, population, t]),
   {aaa, bbb, ddd},
   t]

where dadtAbs is a list, population is a known constant, aaa, bbb, ddd are the desired answers, and t is the variable.

When I queried nlm, I got

[0.] 

Here is the function NSPI:

NSPI[alpha_, delta_, population_, tt_] :=
  population/(1 + (population - 1)*Exp[-alpha*population*(1 - delta)*tt])

population is a large constant, for example 1000000.
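A likely culprit (an assumption, not something stated in the question): NonlinearModelFit starts every parameter at 1 by default, and with population ≈ 10^6 the exponent -bbb*population*(1 - ddd)*t underflows, so NSPI saturates at population and the model collapses to 0 everywhere. A sketch of the same fit with explicit, illustratively scaled starting values:

```mathematica
(* sketch: illustrative starting values; bbb is scaled by 1/population so the
   exponent -bbb*population*(1-ddd)*t stays O(1) instead of underflowing *)
nlm = NonlinearModelFit[
   Transpose[{Range[tmax], dadtAbs}],
   (1 - aaa)*bbb*NSPI[bbb, ddd, population, t]*
     (1 - ddd)*(population - NSPI[bbb, ddd, population, t]),
   {{aaa, 0.1}, {bbb, 1./population}, {ddd, 0.1}},
   t]
```

The starting values 0.1 are guesses; the essential point is starting bbb near 1/population rather than at the default 1.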

Steady state solution (1D) of nonlinear dispersal equation

Now I’m interested in the equation $$\frac{\partial}{\partial x}\Bigl(\operatorname{sgn}(x)\, u\Bigr) + \frac{\partial}{\partial x}\Bigl[u^2 \frac{\partial u}{\partial x}\Bigr] = 0$$ with boundary conditions $u(-5) = u(5) = 0$.

Since $\operatorname{sgn}(x)$ is not differentiable at $x = 0$, I expected NDSolve to have some problems. I tried

sol = NDSolveValue[{
   0 == D[Sign[x]*u[x], x] + D[u[x]^2 D[u[x], x], x],
   u[-6] == 0, u[6] == 0},
  u, {x, -7, 7}]

but I can’t even plot the result, and I think I’m writing it the wrong way. Could someone confirm that I wrote the right snippet and show the plot I should obtain?

  • I asked a related question three days ago, where the equation was the PDE $\partial_t u = \partial_x (\operatorname{sign}(x) u) + \partial_x (u^2 \partial_x u)$. The equation above is its steady state, and I want to compute it directly instead of integrating in time.
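One common workaround (a sketch under the assumption that a smoothed coefficient is acceptable) is to replace Sign[x] with Tanh[x/ε] for a small ε, so NDSolve sees a differentiable coefficient:

```mathematica
(* sketch: smooth regularization of Sign; ε is an illustrative smoothing width *)
ε = 0.05;
sol = NDSolveValue[{
    0 == D[Tanh[x/ε] u[x], x] + D[u[x]^2 D[u[x], x], x],
    u[-5] == 0, u[5] == 0},
   u, {x, -5, 5}];
Plot[sol[x], {x, -5, 5}]
```

Note a caveat: u ≡ 0 satisfies the boundary value problem trivially, so NDSolve may simply return the zero solution; picking out a nontrivial steady state generally needs an extra normalization (e.g. fixing the total mass ∫u dx, as in the time-dependent problem) or a shooting approach.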

Why don’t they use all kinds of non-linear functions in Neural Network Activation Functions? [duplicate]

Pardon my ignorance, but after just learning about the sigmoid and tanh activation functions (and a few others), I am wondering why the chosen functions always go up and to the right. Why not use all kinds of other input functions, ones that fluctuate up and down, or ones that are directed down instead of up? What would be the problem with using functions like those in your neurons? Why isn’t it done? Why do they stick to very simple, very primitive functions?


How to solve a system of nonlinear equations in 14 unknowns?

This is my code. I want to solve a system of 14 equations in 14 unknowns (the rr list), but the code doesn’t run…

Clear["Global`*"]
T[0, t_] = 1;
T[1, t_] = t;
T[n_, t_] := 2*t*T[n - 1, t] - T[n - 2, t];
For[n = 0, n <= 7, n++, Print["T[", n, ",t]= ", T[n, t], "\n"]];
tableoft = tt /. NSolve[T[7, tt] == 0, tt];
Subscript[z, 1][t_] = Sum[Simplify[Subscript[a, j]*T[j, t]], {j, 0, 6}];
Subscript[z, 2][t_] = Sum[Simplify[Subscript[b, l]*T[l, t]], {l, 0, 6}];
f[t_] = (t^4/6 - t^3/3 + t) /. t -> 1/2*(\[Tau] + 1);
Subscript[k, 1][t_, s_] = s^3 /. s -> 1/4*(\[Tau] + 1)*(r + 1);
p[\[Tau]_] = Integrate[Subscript[k, 1][t, s]*Subscript[z, 1][r], {r, -1, 1}];
Subscript[k, 2][t_, s_] = -2 (t - s) /. t -> 1/2*(\[Tau] + 1) /. s -> 1/4*(\[Tau] + 1)*(r + 1);
pp[\[Tau]_] = Integrate[Subscript[k, 2][t, s]*Subscript[z, 2][r], {r, -1, 1}];
Subscript[g, 2][\[Tau]_] = Expand[(f[t] + p[\[Tau]] + pp[\[Tau]])^2] // N;
Subscript[\[Delta], 1][\[Tau]_] = ((Subscript[z, 1][\[Tau]])*(f[t] + p[\[Tau]] + pp[\[Tau]]) - 1) // N;
Subscript[\[Delta], 2][\[Tau]_] = (Subscript[z, 2][\[Tau]] - Subscript[g, 2][\[Tau]]) // N;
(*note: the residual list must not be named r, which is still used as the symbolic integration variable above*)
res = Flatten[Table[N[{Subscript[\[Delta], 1][tableoft[[i]]], Subscript[\[Delta], 2][tableoft[[i]]]}], {i, 1, 7}]];
rr = Table[Simplify[res[[i]]] == 0, {i, 1, 14}];
list = Flatten[Table[{Subscript[a, i], Subscript[b, i]}, {i, 0, 6}]];
Solve[rr, list]

How to get solution fast?
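Since the coefficients in rr are already numeric, one plausible speedup (a sketch, assuming rr and list as defined above) is to hand the polynomial system to NSolve instead of exact Solve, which avoids expensive exact arithmetic:

```mathematica
(* sketch: numeric root finding on the 14 polynomial residual equations *)
nsol = NSolve[rr, list, Reals];
```

If even NSolve is slow, FindRoot with a starting point (e.g. all coefficients 0) is the usual fallback for collocation systems like this one.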

Solving a symbolic system of non-linear equations takes too long

I am trying to solve a system of symbolic non-linear equations:

g1 = ptz + pz + 2 pty q0 q1 - 2 ptz q1^2 + 2 px q0 q2 - 2 pz q2^2 - 2 px q1 q3 - 2 pty q2 q3 - 2 ptz q3^2 - 2 pz q3^2;
g2 = 2 (ptx q0 q1 + px q0 q1 + ptz q1 q2 - pz q1 q2 + ptz q0 q3 + pz q0 q3 - ptx q2 q3 + px q2 q3);
g3 = ptx + px - 2 ptx q1^2 - 2 px q1^2 - 2 pz q0 q2 - 2 pty q1 q2 - 2 px q2^2 - 2 pty q0 q3 - 2 pz q1 q3 - 2 ptx q3^2;
g4 = -2 pty q0 q2 - 2 py q0 q2 + 2 ptz q1 q2 - 2 pz q1 q2 - 2 ptz q0 q3 - 2 pz q0 q3 - 2 pty q1 q3 + 2 py q1 q3;
g5 = ptz + pz - 2 py q0 q1 - 2 pz q1^2 - 2 ptx q0 q2 - 2 ptz q2^2 - 2 ptx q1 q3 - 2 py q2 q3 - 2 ptz q3^2 - 2 pz q3^2;
g6 = -pty - py - 2 pz q0 q1 + 2 py q1^2 + 2 ptx q1 q2 + 2 pty q2^2 + 2 py q2^2 - 2 ptx q0 q3 + 2 pz q2 q3 + 2 pty q3^2;
g7 = q0^2 + q1^2 + q2^2 + q3^2;

NSolve[{g1 == 0, g2 == 0, g3 == 0, g4 == 0, g5 == 0, g6 == 0, g7 == 1}, {q0, q1, q2, q3}, Reals]

Here all variables except q0, q1, q2 and q3 are considered fixed; the q's represent a unit quaternion. Testing corner cases (by setting a single element of the quaternion to 0) suggests that this set of equations has no solution, which is what I intend to prove. But the code takes too long to run. Any suggestions would be appreciated.

I could treat the squares and pairwise products of the quaternion elements as separate variables and solve the system as linear equations, which I did for the corner cases. But here I don’t have enough constraints (10 unknowns with 7 constraints) and hence can’t employ that method.
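For proving non-existence rather than finding solutions, a Gröbner basis is often faster than NSolve: if the basis of the polynomials (with the non-listed symbols treated as parameters) reduces to {1}, the system has no complex solution for generic parameter values, and hence no real one. A sketch under that assumption:

```mathematica
(* sketch: gb === {1} would certify no solution for generic parameters *)
gb = GroebnerBasis[{g1, g2, g3, g4, g5, g6, g7 - 1}, {q0, q1, q2, q3}];
gb === {1}
```

Two caveats: GroebnerBasis can itself be slow with many symbolic parameters, and a basis other than {1} does not prove a real solution exists, since the result only rules out complex solutions when it is {1}.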

Approximate solution of a nonlinear ODE in the form of a Fourier series containing the coefficients of the initial ODE

In this topic we consider the nonlinear ODE:

$$\frac{dx}{dt} = x^4\, a_1 \sin(\omega_1 t) - a_1 \sin\!\Bigl(\omega_1 t + \frac{\pi}{2}\Bigr) \quad \text{(Chini ODE)}$$

And a system of nonlinear ODEs:

$$\frac{dx}{dt} = (x^4 + y^4)\, a_1 \sin(\omega_1 t) - a_1 \sin\!\Bigl(\omega_1 t + \frac{\pi}{2}\Bigr)$$

$$\frac{dy}{dt} = (x^4 + y^4)\, a_2 \sin(\omega_2 t) - a_2 \sin\!\Bigl(\omega_2 t + \frac{\pi}{2}\Bigr)$$

NDSolve for the Chini ODE in Mathematica:

pars = {a1 = 0.25, \[Omega]1 = 1}
sol1 = NDSolve[{x'[t] == (x[t]^4) a1 Sin[\[Omega]1 t] - a1 Cos[\[Omega]1 t], x[0] == 1}, {x}, {t, 0, 200}]
Plot[Evaluate[x[t] /. sol1], {t, 0, 200}, PlotRange -> Full]

System of Chini ODE’s NDSolve in Mathematica:

pars = {a1 = 0.25, \[Omega]1 = 3, a2 = 0.2, \[Omega]2 = 4}
sol2 = NDSolve[{x'[t] == (x[t]^4 + y[t]^4) a1 Sin[\[Omega]1 t] - a1 Cos[\[Omega]1 t],
   y'[t] == (x[t]^4 + y[t]^4) a2 Sin[\[Omega]2 t] - a2 Cos[\[Omega]2 t],
   x[0] == 1, y[0] == -1}, {x, y}, {t, 0, 250}]
Plot[Evaluate[{x[t], y[t]} /. sol2], {t, 0, 250}, PlotRange -> Full]

There is no exact solution to these equations, therefore, the task is to obtain an approximate solution.

Using AsymptoticDSolveValue was ineffective, because the resulting series expansion is valid only near the point t = 0.

The numerical solution contains a strong periodic component, and it is necessary to estimate the oscillation parameters. Earlier, we solved this problem numerically with some users: Estimation of parameters of limit cycles for systems of high-order differential equations (n >= 3)

How can the solution be approximated by a Fourier series that contains the parameters of the original differential equation in symbolic form, namely $a_1$, $\omega_1$, $a_2$ and $\omega_2$?
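One route that keeps $a_1$ and $\omega_1$ symbolic is harmonic balance: assume a truncated Fourier ansatz, substitute it into the ODE, and project the residual onto the retained harmonics. A first-order sketch for the single Chini ODE (the truncation to one harmonic is an assumption; higher harmonics generated by the quartic term are simply dropped):

```mathematica
(* sketch: one-harmonic ansatz with symbolic a1, \[Omega]1 *)
ClearAll[a1, \[Omega]1, c0, c1, d1, t];
ansatz = c0 + c1 Sin[\[Omega]1 t] + d1 Cos[\[Omega]1 t];
residual = D[ansatz, t] - (ansatz^4 a1 Sin[\[Omega]1 t] - a1 Cos[\[Omega]1 t]);

(* project the residual on {1, Sin, Cos} over one period *)
eqs = Table[
   Integrate[residual b, {t, 0, 2 Pi/\[Omega]1},
     Assumptions -> \[Omega]1 > 0] == 0,
   {b, {1, Sin[\[Omega]1 t], Cos[\[Omega]1 t]}}];

csol = Solve[eqs, {c0, c1, d1}];
```

The projections are polynomial (quartic) in the coefficients, so Solve may return several branches and can be slow; the same projection extends to the coupled system by using a two-frequency ansatz in $\omega_1$ and $\omega_2$.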

Is this the correct “standard form” of nonlinear programming (optimization) problem and if it is why it’s in this form?

Rather a simple question, I guess, though it makes me wonder. The standard form I’ve found in the book (and on the wiki) is something like this:

$$\min f(x)$$

$$\text{s.t.} \quad h_i(x) = 0, \qquad g_i(x) \le 0$$

Is this considered a “standard form” for nonlinear optimization problems? And if so, why is it defined like this? Why does it have to be the min of the function, and why do the constraints have to be either less than or equal to 0, or equal to 0? I couldn’t find any answer to why it is as it is. Is there some important reason why it couldn’t be max, for example?
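Worth noting that the form is a convention, not a restriction: any maximization or $\ge$-constraint can be rewritten into it mechanically, e.g.

```latex
\max_x f(x) \;\Longleftrightarrow\; \min_x \bigl(-f(x)\bigr), \qquad
g_i(x) \ge 0 \;\Longleftrightarrow\; -g_i(x) \le 0, \qquad
g_i(x) \le b_i \;\Longleftrightarrow\; g_i(x) - b_i \le 0.
```

Fixing one canonical orientation lets theorems (KKT conditions, duality) and solver interfaces be stated once instead of in every sign combination.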

Complexity of numerical derivation for general nonlinear functions

In the classical optimization literature, numerical differentiation of functions is often said to be a computationally expensive step. For example, quasi-Newton methods are presented as a way to avoid computing first and/or second derivatives when these are “too expensive” to compute.

What are the state-of-the-art approaches to computing derivatives, and what is their time complexity? If this is heavily problem-dependent, I am particularly interested in the computation of first- and second-order derivatives for nonlinear least squares problems, specifically the part concerning first-order derivatives (Jacobians).
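For context on the cost claim: the simplest scheme, a forward-difference Jacobian of a residual vector $r:\mathbb{R}^n \to \mathbb{R}^m$, needs $n+1$ evaluations of $r$, which is what makes derivatives expensive when $n$ is large and $r$ is costly. A sketch (names and step size are illustrative):

```mathematica
(* sketch: forward-difference Jacobian, n + 1 evaluations of r *)
fdJacobian[r_, x_List, h_ : 1.*^-7] := Module[{r0 = r[x]},
  Transpose[
   Table[(r[x + h UnitVector[Length[x], j]] - r0)/h, {j, Length[x]}]]]

(* toy least-squares residual in two unknowns *)
r[{a_, b_}] := {a + b - 3., a b - 2.};
fdJacobian[r, {1., 1.}]
```

Automatic differentiation, by contrast, gets the full Jacobian at a small constant multiple of one evaluation of $r$ in the forward/reverse mode appropriate to the shape of the problem, which is why it is the state of the art for large nonlinear least squares.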