Applied Pi calculus: Evaluation context that distinguishes replication with different restrictions

For an exercise, I need to find an evaluation context $C[\_]$ such that the transition systems of $C[X]$ and $C[Y]$ are different (i.e., they are not bisimulation equivalent), where $X$ and $Y$ are the following processes:

$$X = (\nu z)(!\,\overline{c}\langle z \rangle.0) \qquad \text{and} \qquad Y = !\,((\nu z)\,\overline{c}\langle z \rangle.0)$$

Intuitively, the difference seems to be that in process $X$, every copy of the replicated process outputs the same restricted name $z$ on channel $c$, while in process $Y$, each copy outputs its own fresh, distinct $z$. Is this correct? And how could this be used to construct an evaluation context such that $C[X]$ and $C[Y]$ are not bisimilar?
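One shape of context that seems to exploit exactly this difference is the following sketch (not necessarily the intended answer), assuming the calculus provides an equality test on received names, as the applied pi calculus does with $\mathsf{if}\; M = N \;\mathsf{then}\; P$, and a fresh channel $a$ that occurs in neither $X$ nor $Y$:

$$C[\_] \;=\; \_ \mid c(x).c(y).\,\mathsf{if}\; x = y \;\mathsf{then}\; \overline{a}\langle x \rangle.0$$

In $C[X]$ both inputs necessarily receive the same restricted name $z$, so the test succeeds and an output on $a$ becomes observable. In $C[Y]$ each replica emits its own fresh name, so the two received names are distinct, the test fails, and $a$ is never observable; hence the two transition systems cannot be bisimilar.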

Situation calculus: how to find preconditions in the 15-puzzle game?

I have been working for some time on finding the preconditions for a situation calculus example: the game "15-puzzle", which is described here: https://en.wikipedia.org/wiki/15_puzzle.

The following fluents are given for this game:

at(x,y,z,s)  // there is a tile z at position (x,y) in situation s
free(x,y,s)  // position (x,y) is free in situation s

And now I have to find the preconditions for the following actions (movements):

move_up()
move_down()
move_left()
move_right()

I have tried some really cumbersome solutions that I am sure are not correct. I would be very grateful if someone could show me the right approach!
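For what it is worth, the shape I would expect such an axiom to take is sketched below, assuming that move_up() means the tile directly below the free square slides up and that the y-coordinate grows upwards (both are assumptions, since the exercise fixes neither):

$$\mathit{Poss}(\mathit{move\_up}, s) \;\equiv\; \exists x, y, z.\; \mathit{at}(x, y, z, s) \land \mathit{free}(x, y{+}1, s)$$

That is, the action is possible exactly when some tile has the free square directly above it; the other three movements would follow the same pattern with the offset applied to $x$ or $y$ in the opposite direction. Is this the kind of precondition axiom that is expected here?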

What does Lambda Calculus teach us about data?

  1. Can we generalize that data is just a suspended computation?
  2. Is this true for other models of computation?
  3. What books or papers should one read to better understand the nature of data and its relation to computation?

Some context: as a software developer, I have become so used to the concept of data that I never considered its true nature. I would very much appreciate any references that could help me better understand the general connection between data and computation.
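One example that makes question 1 concrete for me is the classic Church encoding of pairs, where the “data” is literally a suspended computation waiting for a consumer:

$$\mathsf{pair} = \lambda x.\,\lambda y.\,\lambda f.\; f\,x\,y \qquad\quad \mathsf{fst} = \lambda p.\; p\,(\lambda x.\,\lambda y.\, x)$$

Here $\mathsf{fst}\,(\mathsf{pair}\,a\,b) \to_\beta a$: the pair stores nothing; it merely postpones applying a consumer $f$ to the two components. My questions above ask whether this view generalizes.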

Where is typed lambda calculus on the Chomsky hierarchy?

The functions definable in the untyped lambda calculus are exactly the computable functions, for which it is in turn possible to establish equivalences with Turing machines, recursive enumerability, and Type-0 grammars.

But what about typed lambda calculus: where on the Chomsky hierarchy are the functions definable by expressions of the simply-typed lambda calculus?

Assume that there is a natural way of transferring the idea of lambda-definability of a recursive function from the untyped to the simply-typed lambda calculus, along the lines of:

A $k$-ary number-theoretic function $f$ is simply-typed-lambda definable iff there is a simply typable $\lambda$-term $P$ such that for all $x_1, \ldots, x_k$, where $\underline{x}$ denotes the encoding of $x$: if $f(\vec{x})$ is defined, then $P\,\underline{\vec{x}} =_\beta \underline{y}$ iff $f(\vec{x}) = y$; otherwise $P\,\underline{\vec{x}}$ has no $\beta$-normal form.
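To illustrate the intended definition on the smallest example, using the usual Church numerals $\underline{n} = \lambda s.\,\lambda z.\,s^n\,z$, which are simply typable at $(\sigma \to \sigma) \to \sigma \to \sigma$: the successor function is simply-typed-lambda definable via

$$P = \lambda n.\,\lambda s.\,\lambda z.\; s\,(n\,s\,z), \qquad P\,\underline{x} =_\beta \underline{x+1}.$$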

To make the bridge from functions to formal languages and the Chomsky hierarchy, I guess my question is:

Between which levels of the Chomsky hierarchy does the following class of languages lie: $L$ is in the class iff there is a simply-typed-lambda-definable function $f$ such that $f(w)$ is defined if and only if $w \in L$?

Alternatively, are there other ways of building a correspondence between typed lambda calculus and formal languages or automata that make it possible to place it on the known computability scale in a meaningful way?

All I could find so far was about modifications of the lambda calculus corresponding to certain types of grammars, or automata that recognize certain kinds of lambda expressions as strings, but, surprisingly, nothing specifically about (Curry-style) typed lambda calculus.

Does the underlying computational calculus in type theories affect decidability?

I’m looking for a high-level explanation; if that isn’t possible or is too difficult, I’d appreciate references to books/papers.

I understand that modern type theory is inspired by Curry-Howard correspondence. From the Wikipedia article on Curry-Howard correspondence:

The correspondence has been the starting point of a large spectrum of new research after its discovery, leading in particular to a new class of formal systems designed to act both as a proof system and as a typed functional programming language. … This field of research is usually referred to as modern type theory.

Looking at the various type theories proposed and under development, I have a few basic questions:

1. Most modern type theories marry a type system with lambda calculus. Are there examples where a type theory uses a computational calculus other than lambda calculus?

2. At a very high level: if every modern type theory is a bundle of a type system and a computational calculus, and the computational calculus is Turing-complete (like the lambda calculus), does the computational calculus in any way affect the decidability of decision problems like type checking, type inference, etc.? (AFAIK, modern type theories tweak the type system while keeping the associated Turing-complete computational calculus intact, and tweaking the type system alone already affects the decidability of type checking, type inference, etc.)

In the lambda calculus with products and sums, is $f : [n] \to [n]$ $\beta\eta$-equivalent to $f^{\,n!+1}$?

$\eta$-reduction is often described as arising from the desire for functions that are point-wise equal to be syntactically equal. In a simply typed calculus with products, $\beta\eta$-conversion is sufficient, but when sums are involved I fail to see how to reduce point-wise equal functions to a common term.

For example, it is easy to verify that any function $f : (1+1) \to (1+1)$ is point-wise equal to $\lambda x.\,f(f(f\,x))$, or more generally that $f$ is point-wise equal to $f^{\,n!+1}$ when $f : A \to A$ is a bijection and $A$ has exactly $n$ inhabitants. Is it possible to reduce $f^{\,n!+1}$ to $f$? If not, is there an extension of the simply typed calculus which allows this reduction?
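For the Boolean case the verification is a finite check: the four functions $(1+1) \to (1+1)$ are the identity, the swap, and the two constant functions, and each satisfies $f^3 = f$ point-wise:

$$\mathrm{id}^3 = \mathrm{id}, \qquad (\mathsf{const}\,c)^3 = \mathsf{const}\,c, \qquad \mathsf{swap}^3 = \mathsf{swap} \;\; (\text{since } \mathsf{swap}^2 = \mathrm{id}).$$

But this reasoning is semantic, a case split over the inhabitants, and nothing in plain $\beta\eta$-reduction seems to let one perform that case split syntactically.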

What qualifies the structural relation in the pi calculus as a congruence?

In the pi calculus, there is an equivalence relation between terms that are structurally equivalent and should “act the same”; it is usually presented through the structural congruence rules. For example, the reduction rules are usually defined modulo structural congruence.

My question is: in the context of a calculus, what exactly turns such a relation into a congruence? Say I am working with some process calculus: what would I need to prove in order to say that a similar equivalence relation is, in fact, a congruence?
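For concreteness, the definition I have pieced together so far (please correct me if proving exactly this is the answer) is that an equivalence relation $\mathcal{R}$ is a congruence when it is preserved by every context of the calculus:

$$P \mathrel{\mathcal{R}} Q \;\implies\; C[P] \mathrel{\mathcal{R}} C[Q] \quad \text{for every context } C[\_],$$

which for the pi calculus unfolds into one closure condition per constructor, e.g. $P \mathrel{\mathcal{R}} Q$ implies $P \mid R \mathrel{\mathcal{R}} Q \mid R$, $(\nu x)P \mathrel{\mathcal{R}} (\nu x)Q$, $!P \mathrel{\mathcal{R}}\; !Q$, $a(x).P \mathrel{\mathcal{R}} a(x).Q$, and $\overline{a}\langle b \rangle.P \mathrel{\mathcal{R}} \overline{a}\langle b \rangle.Q$.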