Behaviour of Coefficient[] for negative exponents in rational expressions

I’ve observed a puzzling behaviour of Coefficient[] for negative exponents and I’m wondering if I was always just relying on undefined behaviour or if there is a bug in recent versions of Mathematica.

So far, if I had an expression of the form

expr = a/x + b/(1+x) + c 

and ran

Coefficient[expr,x,-1] 

I’ve always gotten

a 

as the answer, which made a lot of sense to me. I’ve tried this with a number of different versions of Mathematica (all on Linux if that matters) that I had access to and the behaviour described above is true for 8.0.0, 8.0.1, 8.0.4, 9.0.0, 9.0.1, 10.0.0, 10.0.1, 10.4.0, 10.4.1, 11.0.0 and 11.0.1.

With 11.1.0, 11.1.1, 11.3.0 and 12.0.0 I got the answer

a + b/(1+x) 

which I find a bit weird, but maybe I can come up with a rationale behind this.

Finally, with 12.1.1 I get

a/(1+x) + b/(1+x) + c/(1+x) 

which makes absolutely no sense at all to me.

My question is: Is this a bug in the newer versions of Mathematica, or was the answer from 8.0.0 to 11.0.1 always just undefined behaviour? And is there a workaround that would allow me to extract the coefficient a from expressions like the one above (i.e. after the partial-fraction decomposition of a rational function, take only the term multiplied by x^k with k a negative integer)? If this is indeed undefined behaviour, shouldn’t Mathematica issue a warning or something like that in this case?
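One possible workaround sketch that avoids Coefficient[] altogether (negCoeff is just a name I made up, not a built-in): for a rational function, the coefficient of x^k with negative integer k in the partial-fraction decomposition equals the Laurent-series coefficient at x = 0, since all the other partial-fraction terms are analytic there, and SeriesCoefficient should compute exactly that.

negCoeff[expr_, x_, k_Integer?Negative] := SeriesCoefficient[expr, {x, 0, k}]

expr = a/x + b/(1 + x) + c;
negCoeff[expr, x, -1]
(* expected: a *)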

Substitution of expressions in a symbolic expression

I define tables of symbolic variables in the following form (for convenience)

X = Table[Symbol["x" <> ToString[i]], {i, 1, num}];
Y = Table[Symbol["y" <> ToString[j]], {j, 1, num}];

After that, in loops, I create some expressions. For example, here is one of them:

Expon := Exp[-((X[[1]] * Y[[1]])/4)];
For[i = 2, i <= num, i++,
 Expon = Expon * Exp[-((X[[i]] * Y[[i]])/4)]
]

After that, I want to apply some differential operator to my symbolic expression (let’s call it $\Psi$) and then substitute into the final expression some tables of numbers X1 and Y1 (these are no longer symbolic, but filled with real numbers). I tried to use the ReplaceAll (/.) command, but it didn’t work. Could you please tell me how I can substitute two or more tables of real numbers into a symbolic expression? Long story short: how do I calculate something like $\Psi(X1, Y1)$?
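For concreteness, here is a minimal, self-contained version of what I am trying, with num = 3 just for illustration; the Thread-based rules are my attempt at building one substitution rule per variable (e.g. x1 -> 1.0):

num = 3;
X = Table[Symbol["x" <> ToString[i]], {i, 1, num}];
Y = Table[Symbol["y" <> ToString[j]], {j, 1, num}];
Psi = Product[Exp[-((X[[i]] * Y[[i]])/4)], {i, 1, num}];

X1 = {1.0, 2.0, 3.0};
Y1 = {0.5, 0.5, 0.5};

(* ReplaceAll with one rule per symbol, pairing each variable with its value *)
Psi /. Join[Thread[X -> X1], Thread[Y -> Y1]]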

What is the difference between available expressions analysis and very busy expressions analysis?

I am having trouble understanding the conceptual meaning of the two kinds of analysis. I know the equations and how to solve the problems, and I know that one is a forward data-flow analysis while the other is a backwards data-flow analysis, but there is still something missing from the explanations I have seen so far, at a higher level.
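A toy fragment for concreteness (my own example, written as straight-line assignments): b + c is available at a point if it has been computed on every path reaching that point and b, c are unchanged since; it is very busy at a point if every path leaving that point computes it before b or c change.

t = b + c;   (* below this line, b + c is available *)
If[p,
 u = b + c,  (* computed on the True branch ... *)
 v = b + c   (* ... and on the False branch, so b + c is very busy just before the If *)
]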

What is the computational complexity of “real-life” regular expressions?

Regular expressions in the sense equivalent to regular (Chomsky type 3) languages know concatenation xy, alternation (x|y), and the Kleene star x*.

"Real-life" regular expressions as used in programming usually have a lot more operations available; amongst others, quantification x{n}, negation [^x], positive and negative lookahead x(?=y), or back-reference \n.

There is a famous post on SO stating that regular expressions cannot be used to parse HTML, for the reason that HTML is not a regular language.

My question is: Is this accurate? Do "real-life" regular expressions, say the set of operations defined in the Java docs, really have the same expressive power as regular expressions as understood in formal language theory; or do the additional constructs, although possibly not strong enough to capture HTML and the like, put common regular expressions further up the Chomsky hierarchy than Type 3 languages?

I would imagine the proof of the computational equality of the two would amount to showing that each operation available in common regexps is just syntactic sugar and can be expressed by means of the three basic operations (concatenation, alternation, Kleene star) alone; but I am finding it hard to see how one would, e.g., simulate back-references with classic regexes alone.
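As a concrete instance of the back-reference issue (a sketch; I am using Mathematica's RegularExpression here only because it exposes the usual PCRE-style syntax): the pattern (a*)b\1 matches exactly the strings a^n b a^n, which the pumping lemma shows is not a regular language, so back-references alone already go beyond Type 3.

StringMatchQ["aaabaaa", RegularExpression["(a*)b\\1"]]  (* True *)
StringMatchQ["aaabaa", RegularExpression["(a*)b\\1"]]   (* False *)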

Simplifying expressions with exponentials inside a square root

I have an expression $$\exp(i k x) \sqrt{y^2 \exp(-2 i k x)}$$ When I put this into Mathematica and apply FullSimplify, like this

FullSimplify[Exp[I k x] Sqrt[Exp[-2 I k x] y^2]] 

The output is $$e^{i k x} \sqrt{y^2 e^{-2 i k x}}$$ Even if I give all the proper assumptions, $\{x, y, k\} \in \mathbb{R}$ and $-\pi < k \leq \pi$, like this

FullSimplify[Exp[I k x] Sqrt[Exp[-2 I k x] y^2], {x, y, k} \[Element] Reals && -\[Pi] < k <= \[Pi]] 

The output comes as $$\left| y \right| e^{i k x} \sqrt{e^{-2 i k x}}$$ But the exponentials should not be there anymore; the result should be only $\left| y \right|$.

What simplifications or assumptions should I make to get the desired result?
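One thing I noticed: the identity $\sqrt{e^{-2 i k x}} = e^{-i k x}$ is not valid for all real $k$ and $x$ (take $k x = \pi$: the left side is $1$, the right side $-1$), and the assumptions above do not bound $x$, so FullSimplify may be right to be cautious. If ignoring branch cuts is acceptable, PowerExpand seems to do exactly that:

PowerExpand[Exp[I k x] Sqrt[Exp[-2 I k x] y^2]]
(* expected: y — note PowerExpand also effectively assumes y > 0, so the Abs is lost *)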

Decidability of equality, and soundness of expressions involving elementary arithmetic and exponentials

Let’s have expressions that are composed of elements of $\mathbb N$ and a limited set of binary operations $\{+, \times, -, /\}$ and functions $\{\exp, \ln\}$. The expressions are always well-formed and form finite trees, with numbers as leaf nodes and operators as internal nodes, binary operations having two child sub-expressions and the functions one. A value of such an expression is interpreted to mean some number in $\mathbb R$.

There are two limitations on the structure of the expressions: the divisor (the right-hand sub-expression) of $/$ can’t be 0, and the argument of $\ln$ must be positive.

I have two questions about these kinds of expressions:

  • Is it possible to ensure “soundness” of such an expression, in the sense that the two limitations can be checked in finite time?

  • Is an equality check between two such expressions decidable?

These questions seem to be connected in the sense that if you’re able to check equality of the relevant sub-expression to zero, you can decide whether a division parent expression is sound; and it doesn’t seem hard to check whether a sub-expression of $\ln$ is positive or negative if it’s known not to be zero.

I know that equality in $\mathbb R$ is generally not decidable, whereas equality of algebraic numbers is. However, I wonder how the inclusion of $\{\exp, \ln\}$ changes the result.
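For concreteness, this is the flavour of identity (my own toy examples) that a decision procedure would have to recognize:

$$\exp(\ln 2 + \ln 3) = 6, \qquad \frac{\exp(2)}{\exp(1)\,\exp(1)} = 1.$$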

(A side note: I posted an earlier version of a question with similar intent here, but it turned out to have too little thought behind it, and had unnecessary (= not related to the meat of the question) complications with negative logarithms.)

Decidability of equality of expressions involving exponentiation

Let’s have expressions that are finite-sized trees, with elements of $\mathbb N$ as leaf nodes and the operations {$+, \times, -, /$, ^} with their usual semantics as internal nodes, with the special note that we allow arbitrary expressions as the right-hand side of the exponentiation operation. Is equality between such expressions (or, equivalently, comparison to zero) decidable? Is the closure under these operations a subset of the algebraic numbers or not?

This question is similar to Decidability of Equality of Radical Expressions, but with the difference that here the exponentiation operator is symmetric in the type of the base and the exponent. That means that we could have numbers such as $3^{\sqrt 2}$. It isn’t clear to me whether allowing exponentiation with irrationals retains the algebraic closure.

This question is also similar to Computability of equality to zero for a simple language, but the answers to that question focused on the transcendental properties of combinations of $\pi$ and $e$, which I consider out of scope here.
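One fact that seems relevant to the closure part of the question: by the Gelfond–Schneider theorem, $a^b$ is transcendental whenever $a$ is algebraic, $a \notin \{0, 1\}$, and $b$ is algebraic and irrational. Since $\sqrt 2 = 2^{1/2}$ is expressible in this language, so is

$$3^{\sqrt 2} = 3^{\left(2^{1/2}\right)},$$

which is then transcendental; so the closure under these operations cannot be a subset of the algebraic numbers, though that alone does not settle decidability.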

Factoring long trigonometric expressions separating specific variables

I have a very long expression with trigonometric functions:

n (2 a - h - 2 z) (-Sin[p (β - δ + t ω)] + Sin[p (β + δ + t ω)]) 

I would like to simplify the term (-Sin[p (β - δ + t ω)] + Sin[p (β + δ + t ω)]) to 2 Sin[p δ] Cos[p (β + t ω)].

Any suggestions? Just using Simplify does not work in this case. Thank you for your help.
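The specific sum-to-product step looks like a job for TrigFactor rather than Simplify (Simplify only applies such rewrites when it judges the result simpler):

term = -Sin[p (β - δ + t ω)] + Sin[p (β + δ + t ω)];
TrigFactor[term]
(* expected: 2 Cos[p (β + t ω)] Sin[p δ] *)

Applying TrigFactor to the full expression should factor the trigonometric sum while leaving the prefactor n (2 a - h - 2 z) untouched.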

Is there an abstract architecture equivalent to Von Neumann’s for Lambda expressions?

In other words, was a physical implementation modelling lambda calculus (so not built on top of a Von Neumann machine) ever devised? Even if just on paper?
If there was, what was it? Did we make use of its concepts somewhere practical (where it can be looked into and studied further)?

I’m aware of specialised LISP machines: they were equipped with certain hardware components that made them better suited to the task, but ultimately they were still the same kind of machine at their core.

If there is no such thing, what stops it from being relevant or worth the effort? Is it just a silly thought to diverge so greatly from current hardware and still manage to create a general-purpose computer?