What kind of advantages does Lambda calculus have over Turing machine, and vice versa?
Tag: Calculus
Reference request: book on stochastic calculus (not finance)
I am looking at fractional Gaussian/Brownian noise from a signal-theoretic and engineering point of view. In particular, I am looking at the math behind what defines these noise processes and what consequences this has on the physics, whether generating or consuming these noise signals.
As an engineer by training I am familiar with (real/multivariate/complex) calculus, basic probability theory, and stochastic signals. But most of what I am doing now is where fractional calculus and stochastic calculus meet (hic sunt dracones… literally). I think I can find my way around most of the fractional calculus part, but for the stochastic calculus I need a better understanding of how it works.
What I am looking for is a book (or lecture notes) that not only gives me an understanding of and intuition for how stochastic calculus works (i.e., how to apply it), but also the proofs, so that I can tell what the theorems allow me to do and what they don't. Measure theory shouldn't be much of a problem, as I have two mathematicians at hand who can explain things if I get stuck.
Why is it important for functions to be anonymous in lambda calculus?
I was watching the lecture by Jim Weirich titled ‘Adventures in Functional Programming’. In this lecture, he introduces the concept of Y combinators, which essentially find the fixed points of higher-order functions.
One of the motivations, as he mentions it, is to be able to express recursive functions using lambda calculus so that Church's thesis (anything that is effectively computable can be computed using lambda calculus) holds.
The problem is that a function cannot simply call itself, because lambda calculus does not allow named functions, i.e.,
$$n(x, y) = x + y$$
cannot bear the name ‘$n$’; it must be defined anonymously:
$$(x, y) \rightarrow x + y$$
Why is it important for lambda calculus to have functions that are not named? What principle is violated if there are named functions? Or have I just misunderstood Jim’s video?
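For illustration, here is a minimal sketch in Python (used as a stand-in for untyped lambda terms; since Python is strict, this uses the Z combinator, the call-by-value variant of the Y combinator) of defining a recursive function without the function ever referring to its own name:

```python
# A sketch of anonymous recursion via the Z combinator (the call-by-value
# variant of the Y combinator), using Python lambdas as a stand-in for
# untyped lambda terms.
Z = lambda f: (lambda x: f(lambda v: x(x)(v)))(lambda x: f(lambda v: x(x)(v)))

# factorial, defined without the function ever referring to its own name:
# `rec` is the fixed point handed back by Z, not a named global.
fact = Z(lambda rec: lambda n: 1 if n == 0 else n * rec(n - 1))

print(fact(5))  # 120
```

The binding `fact = ...` is only a convenience for calling the result; the recursion itself happens entirely inside the anonymous terms.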
Is Lambda Calculus purely syntactic?
I’ve been reading for a few weeks about the Lambda Calculus, but I have not yet seen anything that is materially distinct from existing mathematical functions, and I want to know whether it is just a matter of notation, or whether there are any new properties or rules created by the lambda calculus axioms that don’t apply to every mathematical function. So, for example, I’ve read that:
“There can be anonymous functions”: Lambda functions aren’t anonymous; they’re just all called lambda. It is permissible in mathematical notation to use the same variable for different functions when the name is not important. For example, the two functions in a Galois connection are often both called $*$.
“Functions can accept functions as inputs”: Not new; you can do this with ordinary functions.
“Functions are black boxes”: “Just inputs and outputs” is also a valid description of mathematical functions…
This may seem like a discussion or opinion question, but I believe there should be a “correct” answer to it. I want to know whether lambda calculus is just a notational or syntactic convention for working with mathematical functions, or whether there are any substantial or semantic differences between lambdas and ordinary functions.
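One concrete illustration of the syntactic reading (a sketch with hypothetical helper names): beta-reduction is defined as pure symbol rewriting on terms, and a syntactically well-formed term like λx.x x need not correspond to any ordinary set-theoretic function, since a set function cannot be applied to itself:

```python
# A sketch (hypothetical helper names) of beta-reduction as pure symbol
# rewriting. Terms are plain tuples:
#   ('var', name) | ('lam', name, body) | ('app', fun, arg)
# Substitution here is naive: it assumes no variable capture, which is
# enough for this illustration.

def subst(term, name, value):
    """Replace free occurrences of `name` in `term` by `value`."""
    kind = term[0]
    if kind == 'var':
        return value if term[1] == name else term
    if kind == 'lam':
        if term[1] == name:        # the binder shadows `name`
            return term
        return ('lam', term[1], subst(term[2], name, value))
    return ('app', subst(term[1], name, value), subst(term[2], name, value))

def beta_step(term):
    """One leftmost beta step: (λx.body) arg → body[x := arg]."""
    if term[0] == 'app' and term[1][0] == 'lam':
        return subst(term[1][2], term[1][1], term[2])
    return term

# (λx. x x) (λy. y): note that λx. x x is a perfectly good term even though
# no ordinary set-theoretic function can be applied to itself.
self_app = ('app',
            ('lam', 'x', ('app', ('var', 'x'), ('var', 'x'))),
            ('lam', 'y', ('var', 'y')))

step1 = beta_step(self_app)   # (λy.y)(λy.y)
step2 = beta_step(step1)      # λy.y
print(step2)  # ('lam', 'y', ('var', 'y'))
```

Nothing in the reduction appeals to what the terms “mean”; it is rewriting all the way down.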
Thanks!
Reference request: Gauge natural bundles, and calculus of variations via the equivariant bundle approach
Let $ P\rightarrow M$ be a principal fibre bundle with structure group $ G$ , $ F$ a manifold and $ \alpha: G\times F\rightarrow F$ a smooth left action.
There is an associated fibre bundle $ E\rightarrow M$ with $ E=P\times_\alpha F=(P\times F)/G$ .
As is well known, one may either treat sections of the associated fibre bundle “directly”, or consider maps $\psi: P\rightarrow F$ which satisfy the equivariance property $\psi(pg)=g^{-1}\cdot\psi(p)$, where $\cdot$ denotes the left action. Let us refer to this latter method as the “equivariant bundle approach”.
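For reference, the dictionary between the two descriptions is the standard one: an equivariant map $\psi$ determines a section $\bar\psi$ of $E$ (with $\pi: P\rightarrow M$ the projection) via

```latex
\bar\psi(\pi(p)) = [\,p,\ \psi(p)\,],
\qquad\text{well defined since}\qquad
[\,pg,\ \psi(pg)\,] = [\,pg,\ g^{-1}\cdot\psi(p)\,] = [\,p,\ \psi(p)\,].
```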
I am interested in describing the gauge field theories of physics using global language with appropriate rigour. However, most references I know treat this topic using the “direct approach”, and not with the equivariant approach, the chief exception being Gauge Theory and Variational Principles by David Bleecker.
Bleecker’s book however doesn’t go far enough for my present needs.
Bleecker only uses linear matter fields, i.e., the case where $F$ is a vector space and $\alpha$ is a linear representation. Some things are easy to generalize; others appear highly nontrivial to me.

Bleecker treats only first-order Lagrangians. The connection between a higher-order variational calculus based on the equivariant bundle approach and the more “standard” one built on the jet manifolds $J^k(E)$ of the associated bundle is highly unclear to me. Example: if $\bar\psi: M\rightarrow E$ is a section of an associated bundle, its $k$-th order behaviour is represented by the jet prolongation $j^k\bar\psi: M\rightarrow J^k(E)$, but if I instead use the equivariant map $\psi: P\rightarrow F$, what represents its $k$-th order behaviour? I assume it is related to something like $J^k(P\times F)/G$, but the specifics are unclear to me.

In Bleecker’s approach, connections are $\mathfrak{g}$-valued, $\mathrm{Ad}$-equivariant 1-forms on $P$; however, I am interested in treating them on the same footing as matter fields. Connections, however, are higher-order associated objects in the sense that they are associated to $J^1P$. Bleecker doesn’t treat higher-order principal bundles at all.
In short, I am interested in references that consider gauge theories, gauge natural bundles (including nonlinear and higher-order associated bundles), and calculus of variations/Lagrangian field theory from the point of view where fields are objects valued in a fixed space and defined on the principal bundle (the equivariant bundle approach), rather than using associated bundles directly.
Is λx. a valid Lambda Calculus abstraction?
For demonstration purposes I was wondering about some very easy-to-grasp LC abstractions, and I came upon the idea of a function that simply eats its argument, and nothing more.
If you apply λx. (yes, no lambda term after the dot) to an argument, the abstraction reduces to nothing.
Although this may not be very useful for computation, do you think the idea itself is valid or not?
Writing a grammar for lambda calculus
I’m trying to write a context-free grammar (to be fed to lark) for parsing lambda calculus expressions. The basic version, as presented by most sources, looks like:
expr: variable | "(" expr ")" | application | abstraction
abstraction: "λ" variable "." expr
application: expr expr
I’d like the grammar to unambiguously parse expressions taking advantage of the notational conventions mentioned here on Wikipedia. While I’m able to modify the grammar to follow most of them, I got stuck with implementing this one: “The body of an abstraction extends as far right as possible”.
For example, there are two parse trees for λx.x λa.a: it can be either an application of two abstractions, (λx.x)(λa.a), or an abstraction with an application in its body, λx.(x (λa.a)). If the abstraction were greedy, as it should be, only the second one would be correct.
Is it possible to write a grammar that would force (i.e. make it the only choice) greedy interpretation of abstractions? If so, how to do it?
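For what it’s worth, one standard disambiguation is to restrict abstractions to trailing position, so that an abstraction’s body automatically extends as far right as possible. Below is a sketch of that grammar, implemented as a plain recursive-descent parser in Python rather than in lark syntax (all names are illustrative):

```python
import re

# One standard disambiguation: abstractions may only appear in trailing
# position, so an abstraction's body automatically extends as far right
# as possible.
#
#   term    : appterm | appterm abs | abs
#   abs     : "λ" VAR "." term
#   appterm : appterm atom | atom
#   atom    : VAR | "(" term ")"

TOKEN = re.compile(r'λ|\.|\(|\)|[a-z][a-z0-9_]*')

def parse(source):
    toks = TOKEN.findall(source)
    pos = 0

    def peek():
        return toks[pos] if pos < len(toks) else None

    def eat(expected=None):
        nonlocal pos
        tok = toks[pos]
        if expected is not None and tok != expected:
            raise SyntaxError(f'expected {expected!r}, got {tok!r}')
        pos += 1
        return tok

    def term():
        if peek() == 'λ':
            return abstraction()
        node = atom()
        while peek() not in (None, ')'):
            if peek() == 'λ':                 # trailing abstraction: greedy body
                return ('app', node, abstraction())
            node = ('app', node, atom())
        return node

    def abstraction():
        eat('λ')
        var = eat()
        eat('.')
        return ('lam', var, term())

    def atom():
        if peek() == '(':
            eat('(')
            node = term()
            eat(')')
            return node
        return ('var', eat())

    node = term()
    if pos != len(toks):
        raise SyntaxError('trailing input')
    return node

print(parse('λx.x λa.a'))
# ('lam', 'x', ('app', ('var', 'x'), ('lam', 'a', ('var', 'a'))))
```

With this shape the ambiguous parse is simply not derivable: λx.x λa.a can only be read with the greedy body, while the application of two abstractions must be written with parentheses.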
Fractional and Exterior Calculus
I am interested in fractional calculus and was wondering whether there is an equivalent/analogous fractional exterior calculus, or a fractional theory of differential forms. If so, can someone point me to some good books or papers on it?
Vector calculus problem involving planes
I’m working on a vector calculus problem (provided below) and the issue is I’m getting two different answers, and I’m not sure which is right.
The question is as follows: Calculate the surface integral $\int_{S} \vec{F} \cdot d\vec{S}$ over a triangular surface bounded by the points $(2,0,0), (0,2,0), (0,0,2)$, where $\vec{F} = (x, y, z)$ and $d\vec{S} = \vec{n}\, dS$, with $\vec{n}$ being the outward unit normal. (Hint: the plane equation is $ax+by+cz=1$; use it to find the normal, using the points on the surface.)
I started with the hint, and found that: $$\frac{1}{2}(x+y+z)=1$$ yielding a normal of $\frac{1}{2}(1,1,1)$. Using the surface integral: $$\int_{S} \vec{F} \cdot d\vec{S} = \int_{S} (\vec{F} \cdot \vec{n})\,dx\,dy$$
Which I computed to be $$\frac{1}{2} \int_{S} (x+y+z)\, dx\,dy = \frac{1}{2} \int_{S} 2\, dx\,dy = 2$$ using the area of the projected triangle on the $xy$-plane.
What I don’t understand is, if I rewrite the plane as $$x+y+z=2$$ and compute the integral, I get an answer of $4$. Given the same plane equation (just rearranged), I don’t understand where I’m losing the factor of two, or which answer is actually correct.
All help appreciated!
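A likely culprit (offered as a hint, not a verdict): the vector $\frac{1}{2}(1,1,1)$ read off from $\frac{1}{2}(x+y+z)=1$ has length $\sqrt{3}/2$, so it is not a unit normal, and pairing it with $dx\,dy$ instead of $dS$ drops a factor. A brute-force numerical cross-check (a sketch; the grid size is arbitrary) using the parametrization $(x, y, 2-x-y)$:

```python
# A brute-force midpoint-rule check (grid size N is arbitrary). Parametrize
# the triangle as (x, y, 2 - x - y) over its projection onto the xy-plane;
# then r_x × r_y = (1, 1, 1) points outward (upward), and
#   F · dS = (x + y + z) dx dy = 2 dx dy    on the plane x + y + z = 2.
N = 1000
h = 2.0 / N
flux = 0.0
for i in range(N):
    for j in range(N):
        x = (i + 0.5) * h
        y = (j + 0.5) * h
        if x + y <= 2.0:                  # inside the projected triangle
            z = 2.0 - x - y
            flux += (x + y + z) * h * h   # F · (1, 1, 1) dx dy

print(flux)  # ≈ 4
```

The projected triangle has area $2$ and the integrand is the constant $2$, so the flux comes out to $4$, agreeing with the second computation.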
Calculus II Series Help
I am trying to solve this problem and I am having a lot of difficulty with it. The question is as follows:
Determine if the series converges or diverges. If convergent, find the exact sum.
$$\sum_{n=1}^{\infty} \frac{(-2)^{n-2}+\frac{1}{5^{2n}}}{3^{n+1}}$$
Any help would be much appreciated. Thank you.
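Assuming the summand reads $\bigl((-2)^{n-2} + 5^{-2n}\bigr)/3^{n+1}$ (the exponent in the original is garbled), a quick numerical sanity check against the value obtained by splitting the series into two geometric series:

```python
# Assuming the summand is ((-2)**(n-2) + 5**(-2n)) / 3**(n+1), split it into
# two geometric series:
#   sum_{n>=1} (1/12) * (-2/3)**n = (1/12) * (-2/3) / (1 + 2/3) = -1/30
#   sum_{n>=1} (1/3)  * (1/75)**n = (1/3)  * (1/75) / (1 - 1/75) =  1/222
# so the total should be -1/30 + 1/222 = -16/555.
closed_form = -16 / 555

partial = sum(((-2) ** (n - 2) + 5 ** (-2 * n)) / 3 ** (n + 1)
              for n in range(1, 101))

print(partial, closed_form)  # both ≈ -0.0288288...
```

Both geometric ratios ($-2/3$ and $1/75$) have absolute value below $1$, so the series converges absolutely.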