## Lambda Calculus Conversion

How can I take a data type or function (e.g. fold, list, String, zip) and convert it to a lambda calculus expression (or: how can it be expressed as a lambda expression)?

If sum computes the sum of all elements in a list, and :t sum gives Num a => [a] -> a, how do I use this information to translate sum into a lambda calculus expression? I have tried to find guides online, but they just give me the answers. I want to know how to actually make the conversion/translation from a function to a lambda calculus expression.
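The usual recipe is: first Church-encode the data type (a list becomes its own right fold), and then the function falls out as plain application. A minimal sketch in Haskell (the type synonym and names are mine; RankNTypes is only there to give the encoding a name):

```haskell
{-# LANGUAGE RankNTypes #-}

-- A list [x1, x2, x3] is encoded as the lambda term
--   \cons nil -> cons x1 (cons x2 (cons x3 nil))
-- i.e. the list *is* its own right fold.
type ChurchList a = forall r. (a -> r -> r) -> r -> r

nil :: ChurchList a
nil = \_cons z -> z

cons :: a -> ChurchList a -> ChurchList a
cons x xs = \c z -> c x (xs c z)

-- sum is then the lambda term  \xs. xs (+) 0 :
-- summing means applying the encoded list to (+) and 0
sumC :: Num a => ChurchList a -> a
sumC xs = xs (+) 0

main :: IO ()
main = print (sumC (cons 1 (cons 2 (cons 3 nil))))  -- prints 6
```

The same recipe covers the other examples: once the data type is encoded as its eliminator, fold, zip, etc. become ordinary lambda terms over that encoding.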


## Understanding $\lambda \mu$-calculus in more programming way

I am learning the $$\lambda \mu$$-calculus (self-study).

I am learning it because it seems very useful for understanding the Curry-Howard correspondence (e.g. the connection between classical logic and intuitionistic logic).

I searched the internet; Wikipedia has some information on the $$\lambda \mu$$-calculus, but (at the time of writing) it does not go very deep: https://en.wikipedia.org/wiki/Lambda-mu_calculus

Is there a more programming-oriented way to understand the intuition behind the $$\lambda \mu$$-calculus?

For example:

In $$\lambda \mu$$-calculus, there are two additional terms called $$\mu$$-abstraction $$\mu \delta .T$$ and named term $$[\delta]T$$.

Can I think of a $$\mu$$-abstraction as a $$\lambda$$-abstraction that is waiting for some continuation $$k$$ (here, $$\delta$$)?

What’s the meaning of the named term?

How does it connect to call/cc?

Can I find the corresponding roles in some programming language (e.g. Scheme)?

PS: I understand the $$\lambda$$-calculus, call/cc in Scheme, and CPS translation, but I still cannot clearly grasp the intuition behind the $$\lambda \mu$$-calculus.

Many thanks.
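On the call/cc question above: one concrete reading is to write call/cc directly in continuation-passing style, with no libraries, and read $$\mu \delta.T$$ as "capture the current continuation and name it $$\delta$$", and $$[\delta]u$$ as "throw $$u$$ to $$\delta$$, discarding the local continuation". The Haskell below is my own illustrative encoding under that reading, not a formal translation:

```haskell
-- call/cc in raw CPS:  callCC f k = f (\a _ -> k a) k
-- The captured "throw" ignores whatever continuation it is given
-- and jumps straight back to k, like a named term [delta] u.
callCC :: ((a -> (b -> r) -> r) -> (a -> r) -> r) -> (a -> r) -> r
callCC f k = f (\a _ -> k a) k

-- mu delta. [delta] 42 : grab delta, immediately throw 42 to it.
-- The local continuation (\x -> x + 1) is discarded, which is
-- exactly the "abort the surrounding computation" behaviour.
escape :: Int
escape = callCC (\throw _ -> throw 42 (\x -> x + 1)) id  -- 42
```

In Scheme the same example is `(call/cc (lambda (k) (+ 1 (k 42))))`, which also evaluates to 42: the `(+ 1 _)` context is the discarded continuation.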


## Lambda calculus without free variables is as strong as lambda calculus?

First question: how would one prove that removing free (unbound) variables from the lambda calculus, allowing only closed terms, does not reduce its power, i.e. that it is still Turing-complete?

Second question: is the proposition above actually true? Is the lambda calculus without free variables really Turing-complete?
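The standard proof route is bracket abstraction: the closed combinators S and K suffice to express every closed lambda term, and the S/K fragment is Turing-complete, so restricting to closed terms loses nothing. A small sketch of the two combinators (Haskell used as a typed metalanguage):

```haskell
-- S and K are themselves closed lambda terms:
--   S = \x.\y.\z. x z (y z)      K = \x.\y. x
s :: (a -> b -> c) -> (a -> b) -> a -> c
s x y z = x z (y z)

k :: a -> b -> a
k x _ = x

-- The identity recovered without ever naming a variable: I = S K K
-- (S K K z  ->  K z (K z)  ->  z)
i :: a -> a
i = s k k

main :: IO ()
main = print (i 7)  -- prints 7
```

Bracket abstraction then translates any term \x.M, step by step, into an S/K expression with no binders at all, which gives the first question's proof outline.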


## What does Lambda Calculus teach us about data?

1. Can we generalize that data is just a suspended computation?
2. Is this true for other models of computation?
3. What books or papers should one read to better understand the nature of data and its relation to computation?

Some context: as a software developer, I got used to the concept of data so much that I never considered its true nature. I’d very much appreciate any references that could help me better understand the general connection between data and computation.
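On question 1, the Church encoding of pairs makes the "data is a suspended computation" reading literal: a pair is the computation that waits for a consumer and then hands it both components. A minimal sketch (the type names are mine):

```haskell
{-# LANGUAGE RankNTypes #-}

-- In the lambda calculus:  pair = \x.\y.\use. use x y
-- The datum is nothing but this suspended application.
type Pair a b = forall r. (a -> b -> r) -> r

pair :: a -> b -> Pair a b
pair x y use = use x y

first :: Pair a b -> a
first p = p (\x _ -> x)

second :: Pair a b -> b
second p = p (\_ y -> y)

main :: IO ()
main = print (first (pair 3 'x'), second (pair 3 'x'))
```

Booleans, numerals, and lists all follow the same pattern: the "data" is exactly the computation that dispatches on itself when handed its consumers.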

## Where is typed lambda calculus on the Chomsky hierarchy?

The functions definable in the untyped lambda calculus are exactly the computable functions, for which it is in turn possible to define equivalences to the concepts of Turing machines, recursive enumerability, and Type-0 grammars.

But what about typed lambda calculus — where on the Chomskian computability hierarchy are the functions definable by expressions of simply-typed lambda calculus?

Assuming that there is a natural way of transferring the idea of lambda-definability of a recursive function from untyped to simply-typed lambda calculus, along the lines of:

A $$k$$-ary number-theoretic function $$f$$ is simply-typed-lambda-definable iff there is a simply typable $$\lambda$$-term $$P$$ such that for all $$x_1, \ldots, x_k$$, where $$\underline{x}$$ is the encoding of $$x$$: $$P\, \underline{\vec{x}} =_\beta \underline{y}$$ iff $$f(\vec{x}) = y$$, if $$f(\vec{x})$$ is defined, and $$P\, \underline{\vec{x}}$$ has no $$\beta$$-normal form otherwise.
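For intuition on why the simply typed case is strictly weaker, here is a small sketch (Haskell types as a stand-in for simple types; the result that only the "extended polynomials" are definable at the uniform numeral type is a classical theorem of Schwichtenberg, cited from the literature, not from the post):

```haskell
-- Simply typed Church numerals at a fixed base type a:
type Church a = (a -> a) -> a -> a

church2, church3 :: Church a
church2 f = f . f
church3 f = f . f . f

toInt :: Church Int -> Int
toInt n = n (+ 1) 0

-- Self-application, the engine of general recursion, is untypable:
--   selfApply x = x x   -- rejected: x :: a and x :: a -> b at once
-- Hence every simply typed term normalizes and the calculus is not
-- Turing-complete.

-- Even exponentiation \M.\N. N M only typechecks by using N at a
-- *shifted* type, Church (a -> a) rather than Church a:
cexp :: Church a -> Church (a -> a) -> Church a
cexp m n = n m

main :: IO ()
main = print (toInt (cexp church2 church3))  -- 2^3 = 8
</imports>
```

The shifted type in `cexp` is the visible symptom of the stratification that separates the typed calculus from the untyped one.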

To make the bridge from functions to formal languages and the Chomsky hierarchy, I guess my question is:

Between which levels of the Chomsky hierarchy is the class of those languages $$L$$ located for which there is a simply-typed-lambda-definable function $$f$$ such that $$f(w)$$ is defined if and only if $$w \in L$$?

Alternatively, are there other ways of building a correspondence between typed lambda calculus and formal languages or automata that make it possible to locate it on the known computability scale in a meaningful way?

All I could find so far was about modifications of the lambda calculus corresponding to certain types of grammars, or automata that recognize strings representing certain kinds of lambda expressions, but, surprisingly, nothing specifically about (Curry-style) typed lambda calculus.


## Lambda calculus reduction: (((lambda f (lambda x (f x))) (lambda y (* y y))) 12)

Given the input

(((lambda f (lambda x (f x))) (lambda y (* y y))) 12)

what does this step evaluate to: lambda x (f x)?

I am trying to evaluate this and I have the following tree so far. How do I evaluate this? I am looking for guidance on what I might be doing wrong, or on how to proceed.
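Transcribing the term directly into Haskell makes the two reduction steps checkable by running them (a sketch of the reduction, not the asker's tree):

```haskell
-- The input term, with lambda written as \ and application kept:
--   ((\f -> \x -> f x) (\f -> ...)) ...
-- Step 1: substitute f := \y -> y * y in (\x -> f x),
--         giving  \x -> (\y -> y * y) x
-- Step 2: apply that to 12:  (\y -> y * y) 12  =  12 * 12  =  144
result :: Int
result = ((\f -> \x -> f x) (\y -> y * y)) 12

main :: IO ()
main = print result  -- prints 144
```

So the subterm (lambda x (f x)) on its own is just a function waiting for x; it only produces a value once f has been substituted and an argument (here 12) is supplied.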

## Is there an abstract architecture equivalent to Von Neumann’s for Lambda expressions?

In other words, was a physical implementation modelling the lambda calculus (i.e. not built on top of a Von Neumann machine) ever devised, even if just on paper?
If there was, what was it? Did we make use of its concepts somewhere practical (where it can be looked into and studied further)?

I am aware of the specialised LISP machines. They were equipped with certain hardware components that made them faster, but ultimately they were still the same kind of machine at their core.

If there is no such thing, what stops it from being relevant or worth the effort? Is it just a silly thought to diverge so greatly from current hardware and still try to create a general-purpose computer?
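Worth noting: combinator graph-reduction machines were in fact designed and built, e.g. Cambridge's SKIM in the early 1980s and, more recently, the FPGA-based Reduceron. What such a machine executes is essentially the two rewrite rules below, applied repeatedly to a term graph. This is a minimal software sketch of that execution model, not any specific machine's instruction set:

```haskell
-- Terms of the S/K combinator calculus plus integer literals.
data Term = S | K | Lit Int | App Term Term
  deriving (Eq, Show)

-- One leftmost-outermost reduction step, if a redex exists.
step :: Term -> Maybe Term
step (App (App K x) _)         = Just x                         -- K x y  -> x
step (App (App (App S f) g) x) = Just (App (App f x) (App g x)) -- S f g x -> f x (g x)
step (App f x) = case step f of
  Just f' -> Just (App f' x)
  Nothing -> App f <$> step x
step _ = Nothing

-- Run to normal form (terminates for the examples here).
eval :: Term -> Term
eval t = maybe t eval (step t)

main :: IO ()
main = print (eval (App (App (App S K) K) (Lit 7)))  -- S K K = I, prints: Lit 7
```

A hardware graph reducer keeps the term graph in memory and performs these rewrites in its datapath, which is exactly the sense in which it is "not a Von Neumann machine running an interpreter".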


## Proof of lambda reductions

I am not sure how to approach this question or what exactly it is asking. I need to prove the given reductions, knowing that a numeral is defined as

$$N = \lambda f.\lambda c.\,(f\,(f \ldots (f\, c))\ldots)$$

and that:

addition: $$+ = \lambda M.\lambda N.\lambda a.\lambda b.\,((M\, a)\,((N\, a)\, b))$$
multiplication: $$\times = \lambda M.\lambda N.\lambda a.\,(M\,(N\, a))$$
exponentiation: $$\wedge = \lambda M.\lambda N.\,(N\, M)$$
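Before proving the reductions on paper, the three combinators can be sanity-checked by running them, with Haskell as the metalanguage (note that the typed version of $$\wedge$$ needs its second argument at a shifted type; in the untyped calculus no shift is needed):

```haskell
-- Church numerals:  N = \f.\c. f (f ... (f c)...)  (f applied N times)
type Church a = (a -> a) -> a -> a

church :: Int -> Church a
church 0 _ z = z
church n f z = f (church (n - 1) f z)

plus :: Church a -> Church a -> Church a
plus m n a b = m a (n a b)        -- + = \M.\N.\a.\b. ((M a)((N a) b))

times :: Church a -> Church a -> Church a
times m n a = m (n a)             -- x = \M.\N.\a. (M (N a))

pow :: Church a -> Church (a -> a) -> Church a
pow m n = n m                     -- ^ = \M.\N. (N M)

toInt :: Church Int -> Int
toInt n = n (+ 1) 0

main :: IO ()
main = mapM_ print
  [ toInt (plus  (church 2) (church 3))  -- 5
  , toInt (times (church 2) (church 3))  -- 6
  , toInt (pow   (church 2) (church 3))  -- 8
  ]
```

The proofs themselves then follow the same shape: expand each combinator's definition, substitute the numerals, and count the applications of $$f$$ in the resulting normal form.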

## Lambda Expression Reduction

I am unable to reduce the following lambda expression using both normal-order (call-by-name) and applicative-order (call-by-value) reduction; I keep getting different answers for the two. This is the lambda expression that has to be reduced using both techniques:

(λfx.f (f x)) (λfx.f (f x)) f x
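One useful check: the term is strongly normalizing, so by confluence normal order and applicative order must reach the same normal form, namely f (f (f (f x))); getting different answers means a substitution slipped in one of the derivations. A quick numeric sanity check of that normal form (a sketch with concrete f and x):

```haskell
-- (\f.\x. f (f x)) is the Church numeral 2, so the expression is
-- 2 2 f x = (2^2) f x = f (f (f (f x))).
two :: (a -> a) -> a -> a
two f x = f (f x)

main :: IO ()
main = print (two two (+ 1) (0 :: Int))  -- four applications of (+1): prints 4
```

Under either strategy the count of f-applications in the result must be four; comparing each of your derivations against that target usually pinpoints the faulty step.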


## How to find a lambda term to complete a function?

I tried to complete this exercise but I got stuck. The task is to define a $$\lambda$$-term $$M$$ such that: $$() \: \simeq_{\beta} \: $$

I chose $$M = \lambda m\, \lambda a\, \lambda b\, \lambda p\, ((p)\, m)\, b$$. Then I have to find a representation $$T$$ of a function, using $$M$$, that yields true if the sequence is empty and false if it is not. A sequence is defined as:

$$[\,] = \lambda x_0\, \lambda x_1\, \lambda z\, z$$
$$[b] = \lambda x_0\, \lambda x_1\, \lambda z\, (z)\, x_b$$
$$[b_1 b_2] = \lambda x_0\, \lambda x_1\, \lambda z\, ((z)\, x_{b_1})\, x_{b_2}$$
$$\vdots$$
$$[b_1 \ldots b_n] = \lambda x_0\, \lambda x_1\, \lambda z\, (\ldots((z)\, x_{b_1})\, x_{b_2} \ldots)\, x_{b_n}$$

So the sequence of the exercise is:

$$= \lambda x_0\, \lambda x_1\, \lambda z\, (((((z)\, x_0)\, x_1)\, x_1)\, x_0)\, x_1$$

For example, $$T$$ needs to satisfy $$(T) \simeq_{\beta}$$ false, while $$(T)\, [\,] \simeq_{\beta}$$ true. I really find this difficult. How can I do that?