What’s the proof complexity of E-KRHyper (E-hyper tableau calculus)?

Before the question, let me explain what E-KRHyper is:

E-KRHyper is a theorem proving and model generation system for first-order logic with equality. It is an implementation of the E-hyper tableau calculus, which integrates a superposition-based handling of equality into the hyper tableau calculus (source: System Description: E-KRHyper).

I am interested in the complexity of the E-KRHyper system because it is used in the question answering system LogAnswer (source: LogAnswer – A Deduction-Based Question Answering System (System Description)).

I have found a partial answer:

our calculus is a non-trivial decision procedure for this fragment (with equality), which captures the complexity class NEXPTIME (source: Hyper Tableaux with Equality).

I don’t understand much complexity theory, so my question is:

What is the complexity of proving a theorem, in terms of the number of axioms in the database and in terms of some parameter of the question to be answered?
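For reference, this is my (layman’s) understanding of the class mentioned in the quote: NEXPTIME is the set of problems decidable by a nondeterministic Turing machine in exponential time,

$$\mathrm{NEXPTIME} = \bigcup_{k \in \mathbb{N}} \mathrm{NTIME}\!\left(2^{n^k}\right),$$

where $n$ is the input size. What I cannot see is how such a worst-case bound relates to the two parameters I care about (number of axioms, size of the question).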

Lambda Calculus as a branch of set theory

This answer to a question about whether C is the mother of all languages contained an interesting tidbit that I am curious about:

The functional paradigm, for example, was developed mathematically (by Alonzo Church) as a branch of set theory long before any programming language ever existed.

Is this true? What is the link between these topics that is so fundamental as to make lambda calculus an outgrowth of set theory? The best I can come up with is that standard mathematical functions possess domains and codomains.

How/when is calculus used in Computer Science?

Many computer science programs require two or three calculus classes.

I’m wondering: how and when is calculus used in computer science? The CS content of a degree in computer science tends to focus on algorithms, operating systems, data structures, artificial intelligence, software engineering, etc. Are there times when calculus is useful in these or other areas of computer science?

Lambda calculus as set-theoretic operations

It is possible to interpret typed lambda calculus à la Church as logical operations (because of the Curry–Howard correspondence). Also, there is an isomorphism between logical and set-theoretic operations. So, is it possible to directly interpret lambda application as a union of sets, and lambda abstraction as a subset relation, or something like this? Which set-theoretic operations correspond to lambda application and abstraction?
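To make the question concrete, here is a small Python sketch of the Curry–Howard reading I have in mind (the names `modus_ponens` and `k_combinator` are just mine, for illustration); what I am asking is which set-theoretic operations play the roles of these two constructions:

```python
from typing import Callable, TypeVar

A = TypeVar("A")
B = TypeVar("B")

# Curry-Howard reading: a value of type Callable[[A], B] is a proof of the
# implication A -> B, and lambda application is modus ponens
# (from A -> B and A, conclude B).
def modus_ponens(f: Callable[[A], B], a: A) -> B:
    return f(a)

# Lambda abstraction builds a proof of an implication; here the K combinator
# proves A -> (B -> A).
def k_combinator(a: A) -> Callable[[B], A]:
    return lambda b: a
```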

Reference request: book on stochastic calculus (not finance)

I am looking at fractional Gaussian/Brownian noise from a signal-theoretic and engineering point of view. In particular, I am interested in the math behind what defines these noise processes and what consequences this has for the physics, whether generating or consuming these noise signals.

As an engineer by training, I am familiar with (real/multivariate/complex) calculus, basic probability theory, and stochastic signals. But most of what I am doing now is where fractional calculus and stochastic calculus meet (Hic sunt dracones… literally). I think I can find my way around most of the fractional calculus part, but for the stochastic calculus I need a better understanding of how it works.
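For concreteness, the definition I am working from: fractional Brownian motion $B_H$ with Hurst parameter $H \in (0, 1)$ is the centred Gaussian process with covariance

$$\mathbb{E}\!\left[B_H(t)\, B_H(s)\right] = \tfrac{1}{2}\left(t^{2H} + s^{2H} - |t - s|^{2H}\right), \qquad t, s \geq 0,$$

and fractional Gaussian noise is its increment process. It is the stochastic calculus built around such processes that I need to learn properly.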

What I am looking for is a book (or lecture notes) that not only gives me an understanding of and intuition for how stochastic calculus works (i.e., how to apply it), but also contains the proofs, so that I can tell what I am allowed to do with the theorems and what not. Measure theory shouldn’t be much of a problem, as I have two mathematicians at hand who can explain things if I get stuck.

Why is it important for functions to be anonymous in lambda calculus?

I was watching the lecture by Jim Weirich, titled ‘Adventures in Functional Programming’. In this lecture, he introduces the concept of the Y combinator, which essentially finds fixed points of higher-order functions.

One of the motivations, as he presents it, is to be able to express recursive functions in lambda calculus, so that Church’s thesis (anything that is effectively computable can be computed using lambda calculus) still holds.

The problem is that a function cannot simply call itself, because lambda calculus does not allow named functions; i.e.,

$$n(x, y) = x + y$$

cannot bear the name ‘$n$’; it must be defined anonymously:

$$(x, y) \rightarrow x + y$$

Why is it important for lambda calculus to have functions that are not named? What principle is violated if there are named functions? Or is it that I just misunderstood Jim’s video?
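To make this concrete, here is the idea transcribed into Python (my own untyped sketch, not from the lecture; since Python evaluates eagerly, I use the call-by-value variant, usually called the Z combinator):

```python
# Z combinator: the call-by-value variant of the Y combinator.
# Z = λf.(λx.f(λv.x x v))(λx.f(λv.x x v))
Z = lambda f: (lambda x: f(lambda v: x(x)(v)))(lambda x: f(lambda v: x(x)(v)))

# A non-recursive "step" function: it never calls itself by name,
# it only uses the function handed to it as an argument.
fact_step = lambda rec: lambda n: 1 if n == 0 else n * rec(n - 1)

factorial = Z(fact_step)
print(factorial(5))  # 120 -- recursion without any named function
```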

Is Lambda Calculus purely syntactic?

I’ve been reading for a few weeks about the Lambda Calculus, but I have not yet seen anything that is materially distinct from existing mathematical functions, and I want to know whether it is just a matter of notation, or whether there are any new properties or rules created by the lambda calculus axioms that don’t apply to every mathematical function. So, for example, I’ve read that:

“There can be anonymous functions” -> Lambda functions aren’t anonymous; they’re just all called lambda. It is permissible in mathematical notation to use the same variable for different functions if the name is not important. For example, the two functions in a Galois connection are often both called *.

“Functions can accept functions as inputs” -> Not new; you can do this with ordinary functions.

“Functions are black boxes” -> “Just inputs and outputs” is also a valid description of mathematical functions…

This may seem like a discussion or opinion question, but I believe that there should be a “correct” answer to it. I want to know whether lambda calculus is just a notational or syntactic convention for working with mathematical functions, or whether there are any substantial or semantic differences between lambdas and ordinary functions.
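To make the kind of difference I am asking about concrete, here is a sketch in Python (the name `omega` is mine) of a term that is legal in the untyped lambda calculus but seems to have no set-theoretic counterpart, since by the foundation axiom a function cannot belong to its own domain:

```python
# omega = λx. x x  -- self-application is a perfectly well-formed lambda term
omega = lambda x: x(x)

# (λx. x x)(λx. x x) beta-reduces to itself forever; under Python's eager
# evaluation this shows up as unbounded recursion:
# omega(omega)  # uncommenting this raises RecursionError
```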

Thanks!

Reference request: Gauge natural bundles, and calculus of variation via the equivariant bundle approach

Let $P \rightarrow M$ be a principal fibre bundle with structure group $G$, $F$ a manifold, and $\alpha: G \times F \rightarrow F$ a smooth left action.

There is an associated fibre bundle $E \rightarrow M$ with $E = P \times_\alpha F = (P \times F)/G$.

As is well known, one may either treat sections of the associated fibre bundle “directly”, or consider maps $\psi: P \rightarrow F$ which satisfy the equivariance property $\psi(pg) = g^{-1} \cdot \psi(p)$, where $\cdot$ denotes the left action. Let us refer to this latter method as the “equivariant bundle approach”.
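For the record, the correspondence between the two pictures as I understand it: an equivariant map $\psi: P \rightarrow F$ determines a section $\bar\psi: M \rightarrow E$ of the associated bundle via

$$\bar\psi(\pi(p)) = [p, \psi(p)],$$

where $\pi: P \rightarrow M$ is the projection and $[p, f]$ denotes the $G$-orbit of $(p, f)$ in $E = (P \times F)/G$; the equivariance property is exactly what makes the right-hand side independent of the choice of $p$ in the fibre over a given point of $M$.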

I am interested in describing the gauge field theories of physics using global language with appropriate rigour. However, most references I know treat this topic using the “direct approach”, and not with the equivariant approach, the chief exception being Gauge Theory and Variational Principles by David Bleecker.

Bleecker’s book however doesn’t go far enough for my present needs.

  • Bleecker only uses linear matter fields, i.e., the case where $F$ is a vector space and $\alpha$ is a linear representation. Some things are easy to generalize, but others appear highly nontrivial to me.
  • Bleecker treats only first-order Lagrangians. The connection between a higher-order variational calculus based on the equivariant bundle approach and the more “standard” one built on the jet manifolds $J^k(E)$ of the associated bundle is highly unclear to me. Example: if $\bar\psi: M \rightarrow E$ is a section of an associated bundle, its $k$-th order behaviour is represented by the jet prolongation $j^k\bar\psi: M \rightarrow J^k(E)$, but if instead I use the equivariant map $\psi: P \rightarrow F$, what represents its $k$-th order behaviour? I assume it is related to something like $J^k(P \times F)/G$, but the specifics are unclear to me.

  • In Bleecker’s approach, connections are $\mathfrak{g}$-valued, $\mathrm{Ad}$-equivariant 1-forms on $P$; however, I am interested in treating them on the same footing as matter fields. Connections are higher-order associated objects in the sense that they are associated to $J^1P$, and Bleecker does not treat higher-order principal bundles at all.

In short, I am interested in references that treat gauge theories, gauge natural bundles (including nonlinear and higher-order associated bundles), and the calculus of variations/Lagrangian field theory from the point of view where fields are fixed-space-valued objects defined on the principal bundle (the equivariant bundle approach), rather than using associated bundles directly.

Is λx. a valid Lambda Calculus abstraction?

For demonstration purposes I was wondering about some very easy-to-grasp LC abstractions, and I came to the idea of a function that simply eats its argument, and nothing more.

If you apply λx. (yes, no lambda term after the dot) to an argument, the abstraction reduces to nothing.

Although this maybe isn’t very useful for computation, do you think the idea itself is valid or not?
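For reference, the grammar of lambda terms as I have usually seen it defined is

$$e ::= x \;\mid\; (\lambda x.\, e) \;\mid\; (e_1\ e_2),$$

which seems to require a term after the dot, so strictly speaking my λx. would not be well-formed. What I want to know is whether the underlying idea is nevertheless sensible.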