how to get rid of curly brackets writing minimization output to file

I have the following code

SetDirectory["C:\\test"];
fname = FileNameJoin[{Directory[], "results.dat"}];
str = OpenWrite[fname, FormatType -> StandardForm];
D1 = 0.4; D2start = 0.26; D2fin = 0.5; Ntot = 12;
D2step = (D2fin - D2start)/Ntot;
For[i = 0, i <= Ntot, i++,
  D2 = D2start + i*D2step;
  With[{minsol = NMinimize[fnew[D1, D2, x], x]},
    fmin = First@minsol;
    xn = Values@Last@minsol;
  ];
  Write[str, D2, " ", xn];
]
Close[str];

i.e., I minimize the function fnew with respect to x and write the value of x to the file results.dat. The problem is that the output is

0.26 {0.711259}
0.28 {0.744881}
0.3 {0.776204}
0.32 {0.805418}
etc.

How do I get rid of these annoying curly brackets?

Minimization of amount in the coin change problem using the dynamic programming approach

I’m learning the dynamic programming approach to solving the coin change problem, but I don’t understand the substitution step.

Given: amount=9, coins = [6,5,1],

the instructor simplified it with this function:

minCoins = min {(9-6)+1 , (9-5)+1, (9-1) +1} = min{4, 5, 9} = 4

I don’t understand the logic of this min method: why can we say that, to make change for an amount of 9, we can simply take the minimum of 9 − {each coin} + 1?

Here’s a GIF that visualizes the instructor’s approach:

(Taken from the Algorithmic Toolbox course; instructor: Prof. Pavel A. Pevzner.)
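The formula is shorthand for the standard recurrence minCoins(a) = 1 + min over coins c ≤ a of minCoins(a − c): spend one coin c, then optimally change the remainder a − c. A minimal bottom-up sketch (the function name and layout are my own):

```python
def min_coins(amount, coins):
    """Fewest coins summing to `amount` (bottom-up dynamic programming)."""
    INF = float("inf")
    best = [0] + [INF] * amount  # best[a] = fewest coins making amount a
    for a in range(1, amount + 1):
        # Take one coin c, plus the best way to make the remainder a - c.
        best[a] = min((best[a - c] + 1 for c in coins if c <= a), default=INF)
    return best[amount]

print(min_coins(9, [6, 5, 1]))  # 4, e.g. 6 + 1 + 1 + 1
```

So the instructor's min{…} = 4 comes from comparing "one coin of 6 plus the best change for 3", "one coin of 5 plus the best change for 4", and "one coin of 1 plus the best change for 8", not from the raw differences themselves.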

Function minimization as genetic algorithm stop condition

I have implemented a genetic algorithm for a problem with an objective function that I need to minimize:

Cost = ax + by + cz -> min 

The genetic algorithm should be considered to have found the optimal solution once the objective function is satisfied.

The function will never reach 0 or below, so I just need to minimize it. Since there is no target value to reach, how do I know whether the objective function is satisfied or not?
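When no target value exists, a common practical stop condition is stagnation: terminate once the best cost has not improved by more than some tolerance for a number of consecutive generations (usually combined with a hard generation cap). A minimal sketch; the names `patience` and `tol` are illustrative, not from any specific library:

```python
def should_stop(best_costs, patience=20, tol=1e-6):
    """best_costs: best objective value per generation, lower is better.

    Stop when the improvement over the last `patience` generations
    is smaller than `tol`.
    """
    if len(best_costs) <= patience:
        return False
    return best_costs[-patience - 1] - best_costs[-1] < tol

history = [10.0, 5.0, 3.0, 2.5, 2.5, 2.5, 2.5]
print(should_stop(history, patience=3, tol=1e-6))  # True: stalled for 3 gens
```

Other common variants stop when the population's cost spread collapses (loss of diversity) or simply after a fixed evaluation budget; none requires knowing the optimal value.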

Fitting Experimental Data To SRK-Type Equation of State. Minimization

I’m trying to find parameters that fit an EoS to saturation pressures at different temperatures.

My experimental data (a table of temperature, saturation pressure, and uncertainty) look like this:


Then I defined a function to calculate saturation pressures

psat[T_, p0_, Tc_, a0_, b_, c1_, E11r_, v11_] := (
  Do[
    p[0] = p0;
    f11 = Exp[-E11r/T] - 1;
    m11 = v11*f11;
    a = a0*(1 + c1*(1 - Sqrt[T/Tc]))^2;
    \[Alpha] = p[i]*a/(R*T)^2; (* dimensionless groups *)
    \[Beta] = p[i]*b/R/T;
    \[Gamma] = p[i]*m11/R/T;
    d0 = -\[Gamma]*\[Beta]*(\[Beta] + \[Alpha]); (* coefficients of the equation *)
    d1 = \[Alpha]*(\[Gamma] - \[Beta]) - \[Beta]*\[Gamma]*(1 + \[Beta]);
    d2 = \[Alpha] - \[Beta]*(1 + \[Beta]);
    d3 = \[Gamma] - 1;
    d4 = 1;
    polin = d4*z^4 + d3*z^3 + d2*z^2 + d1*z + d0;
    Raices = NSolve[polin == 0, z, PositiveReals]; (* solve the quartic for the compressibility factor *)
    zv = Max[z /. Raices];
    zl = Min[z /. Raices];
    vv = zv*R*T/p[i];
    vl = zl*R*T/p[i];
    ln\[CapitalPhi]v = zv - 1 - Log[zv] + Log[vv/(vv - b)] +
      a/b/R/T*Log[vv/(vv + b)] + Log[vv/(vv + m11)]; (* fugacity coefficients *)
    ln\[CapitalPhi]l = zl - 1 - Log[zl] + Log[vl/(vl - b)] +
      a/b/R/T*Log[vl/(vl + b)] + Log[vl/(vl + m11)];
    \[CapitalPhi]v = Exp[ln\[CapitalPhi]v];
    \[CapitalPhi]l = Exp[ln\[CapitalPhi]l];
    p[i + 1] = p[i]*\[CapitalPhi]l/\[CapitalPhi]v,
    {i, 0, 9}];
  p[9])

Where T is the temperature, p0 is the initial guess for pressure, Tc is the critical temperature and a0, b, c1, E11r, v11 are the equation’s parameters.

Up to this point we have a saturation-pressure calculator that works just fine given the parameters. The part I can’t seem to solve is fitting it to my experimental data by minimizing an objective function, which is:

$ F(a0, b, c1, E11r, v11) = \sum_{k=1}^{200} \frac{\left(p_{\mathrm{sat,calc}}(T_k) - p_{\mathrm{sat,exp},k}\right)^2}{\sigma_k^2}$

I declared it like this:

F[a0_, b_, c1_, E11r_, v11_] :=
  Sum[(psat[psatx[[k, 1]], psatx[[k, 2]], Tcaceto, a0, b, c1, E11r, v11] -
       psatx[[k, 2]])^2/psatx[[k, 3]]^2, {k, 200}];
(* Tc is declared as the constant "Tcaceto"; the experimental pressure is
   used as the initial guess for each psat calculation *)

And then I just used NMinimize, like this:

NMinimize[F[a0, b, c1, E11r, v11], {a0, b, c1, E11r, v11}] 

I run it, and it just never finishes. I don’t know what could be going wrong; I’ve tried setting the method and the starting points, but the result is the same. I would really appreciate any help with this. Thanks.
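The objective F above is an ordinary weighted least-squares (chi-square) sum, which may be easier to see stripped of the EoS machinery. Below is a hypothetical, language-neutral Python sketch with a toy exponential model standing in for psat; all names, data, and the crude grid search are made up for illustration:

```python
import math

# Toy (T, p_exp, sigma) triples standing in for the rows of psatx.
data = [(300.0, 1.0, 0.05), (320.0, 2.0, 0.05), (340.0, 4.0, 0.1)]

def model(T, a, b):
    # Toy stand-in for psat[...]; the real model is the EoS iteration above.
    return a * math.exp(b * (T - 300.0))

def chi2(a, b):
    # Same shape as F: squared residuals weighted by 1/sigma^2.
    return sum((model(T, a, b) - p) ** 2 / s ** 2 for T, p, s in data)

# Crude grid search, just to show the objective is an ordinary numeric
# function of the parameters; a real fit would use a proper optimizer.
best = min((chi2(a, b), a, b)
           for a in [0.8, 0.9, 1.0, 1.1]
           for b in [0.02, 0.03, 0.0346, 0.04])
print(best)
```

In Mathematica itself, a frequent cause of NMinimize hanging on objectives like F (which call NSolve internally) is the objective being evaluated with symbolic arguments before numeric ones are substituted; restricting the definition to numeric input, e.g. `F[a0_?NumericQ, b_?NumericQ, ...] := ...`, is worth trying.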

Logic minimization via 2-input NOR gates: is it monotone w.r.t. adding a minterm?

  • notation: $x+y := \mbox{OR}(x,y)$, $\bar x := \mbox{NOT}(x)$, $xy := \mbox{AND}(x,y)$, $1 := \mbox{TRUE}$, $0 := \mbox{FALSE}$.

  • Let $f$ be a Boolean function of $n$ variables, i.e. $f: \{0,1\}^n \to \{0,1\}$.

  • minterm := any product (AND) of $n$ literals (complemented or uncomplemented); e.g., $x_1 \bar x_2 x_3$ is a minterm in 3 variables.

  • $\mbox{NOR2}(f)$ is the minimum number of 2-input NOR gates required to implement a given function $f$. For instance, $\mbox{NOR2}(x_1 x_2) = 3$.

Let $f_1 = m_1$, $f_2 = m_2$, where $m_1, m_2$ are minterms that are co-prime (i.e., $f_1 + f_2$ can’t be minimized further; in other words, $m_1, m_2$ are prime implicants of $f_1 + f_2$). For instance, $x_1 \bar x_2 x_3$ and $x_1 x_2 \bar x_3$ are co-prime.

Then, is the following true? $$\mbox{NOR2}(f_1+f_2) \ge \max\{\mbox{NOR2}(f_1), \mbox{NOR2}(f_2)\}$$

[i.e., adding two co-prime minterms can’t yield a 2-input NOR circuit with fewer gates.]

I think it is true but I can’t think of a proof. Any ideas on how to start proving it?

Multi-Path Length Minimization

I’ve been thinking about path planning and am trying to design good heuristics for cases with multiple agents.

Suppose there are sets $S_i$ of coordinates in $\mathbb R^2$ or $\mathbb R^3$, each of the same size $n$, for each $i \in \{0, \dots, k\}$.

A path is defined as $k$ line segments connecting a sequence of $k+1$ coordinates, made up of one coordinate from each set $S_i$ in consecutive order. I want to find $n$ paths such that (a) no two paths use the same coordinate at a given index $i$ in the sequence, and (b) the combined length of all the paths is minimized. In other words, assign coordinates from each set without replacement to construct paths, with the goal of making the total path length as small as possible.

Right now I can do the minimization from some $i$ to $i+1$, but I am not sure whether locally minimizing each step will yield a global minimum. I know I could brute-force it, but that explodes really quickly.
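One observation worth checking: any choice of per-step bijections $S_i \to S_{i+1}$ chains into $n$ valid disjoint paths, and the total length is the sum of the independent per-step matching costs, so per-step minimization should in fact reach the global minimum for this formulation. A small brute-force check in Python (toy coordinates of my own) supports this:

```python
import itertools
import math

# Toy instance: three levels S_0..S_2 with n = 2 points each.
S = [
    [(0.0, 0.0), (4.0, 0.0)],  # S_0
    [(1.0, 0.0), (3.0, 0.0)],  # S_1
    [(0.0, 0.0), (4.0, 0.0)],  # S_2
]
n = len(S[0])

def total_length(perms):
    """perms[i][j] = index into S[i] used by path j; combined length."""
    return sum(
        math.dist(S[i][perms[i][j]], S[i + 1][perms[i + 1][j]])
        for i in range(len(S) - 1)
        for j in range(n)
    )

# Brute-force global optimum (path labels are symmetric, so fix level 0).
identity = tuple(range(n))
global_best = min(
    total_length((identity,) + rest)
    for rest in itertools.product(itertools.permutations(range(n)),
                                  repeat=len(S) - 1)
)

# Greedy: choose each consecutive matching to minimize just that step.
greedy = [identity]
for i in range(len(S) - 1):
    nxt = min(
        itertools.permutations(range(n)),
        key=lambda p: sum(math.dist(S[i][greedy[i][j]], S[i + 1][p[j]])
                          for j in range(n)),
    )
    greedy.append(nxt)
greedy_total = total_length(greedy)

print(global_best, greedy_total)
```

For larger $n$, each step is a bipartite minimum-cost matching (solvable with the Hungarian algorithm) rather than a brute-force scan over permutations; the decomposition argument is what makes the per-step solutions compose into a global optimum here.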

Minimization of DFA, Table Filling Method or Myhill-Nerode Theorem

I want to know what to do when the DFA has merged states. For example, if my DFA starts with q0 and then has a (q0,q1) state that it goes to on an ‘a’, how do I do the table-filling method?

I tried renaming the merged state, for example turning (q0,q1) into q4. But then the issue comes when I do the table filling: with some inputs to the automaton I get only one state, sometimes the final state. So do I mark it?

For example, I renamed (q0,q1) to q4. Now in the table, if I want to find the marking for the pair (q0, q4): on ‘a’ I get q4 from both states, and on ‘b’ I get nothing from q0 and q2 (a final state) from q4. I know that since the successor pair (q4, q4) is a single state I do not mark. But I only get one state, which happens to be the final state. Do I mark the (q0, q4) entry of the table, or do I leave it blank as well?
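For table filling, merged names like (q0,q1) coming out of the subset construction are just single, opaque states of the DFA; rename them and forget their history. A pair is marked in the base case when exactly one of the two states is accepting, and later when some input symbol sends it to an already-marked pair; a successor pair like (q4, q4) collapses to one state and never causes a mark. A sketch on a made-up four-state DFA (not the asker’s automaton):

```python
from itertools import combinations

# Made-up example DFA over the alphabet {a, b}.
states = ["q0", "q1", "q2", "q3"]
accepting = {"q2"}
delta = {
    ("q0", "a"): "q1", ("q0", "b"): "q3",
    ("q1", "a"): "q1", ("q1", "b"): "q2",
    ("q2", "a"): "q1", ("q2", "b"): "q3",
    ("q3", "a"): "q1", ("q3", "b"): "q3",
}

marked = set()
# Base case: mark every pair where exactly one state is accepting.
for p, q in combinations(states, 2):
    if (p in accepting) != (q in accepting):
        marked.add(frozenset((p, q)))

# Repeat: mark (p, q) if some symbol leads to an already-marked pair.
changed = True
while changed:
    changed = False
    for p, q in combinations(states, 2):
        pair = frozenset((p, q))
        if pair in marked:
            continue
        for sym in ("a", "b"):
            succ = frozenset((delta[p, sym], delta[q, sym]))
            # A collapsed successor like (q4, q4) has len 1: never a mark.
            if len(succ) == 2 and succ in marked:
                marked.add(pair)
                changed = True
                break

equivalent = [set(fs) for fs in
              (frozenset((p, q)) for p, q in combinations(states, 2))
              if fs not in marked]
print(equivalent)  # pairs left unmarked are mergeable
```

Here the pair (q0, q3) survives unmarked (both go to q1 on ‘a’ and q3 on ‘b’, neither accepts), so those two states merge in the minimized DFA.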

Uniqueness of l1 minimization

Let $ A \in \mathbb{R}^{m \times n}$ .

Is it true that $$\min \limits_{Q} |I - QA|_{\infty} < \frac{1}{2}$$ is a criterion for the uniqueness of the solution to

$\min \limits_{x \text{ s.t. } Ax=y} |x|_1$ for any $y$? If yes, where can I read about this result? I am not sure that I have stated the criterion correctly.

Update: $|M|_{\infty} = \max \limits_{i,j} |M_{ij}|$.