## What are the advantages and the limitations of using Formal Specifications?

I have searched the internet for the advantages and disadvantages of using formal specifications, and specifically Code Contracts in C#. I found various answers, but I did not manage to identify the key pros and cons. Could I have some assistance on this topic, and any relevant sources that would help me?
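For context, the core idea behind contracts is attaching machine-checked pre- and postconditions to each operation. A minimal sketch of that idea in Python, with runtime assertions standing in for what C# Code Contracts express declaratively (the function name and messages here are illustrative, not from any library):

```python
def checked_pop(stack):
    """Pop the top element of a list-backed stack, with contract-style checks."""
    assert len(stack) > 0, "precondition: stack must be non-empty"
    old_len = len(stack)               # snapshot of the pre-state, analogous
                                       # to Code Contracts' Contract.OldValue
    top = stack.pop()
    assert len(stack) == old_len - 1, "postcondition: length shrinks by one"
    return top
```

One commonly cited advantage is that violations surface precisely at the faulty call site (or, with static checking, at compile time); one commonly cited limitation is the cost of writing and checking the specifications.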

## Formal semantics of a mutable/imperative stack

When introducing formal semantics for data structures, immutable stacks are a nice, simple example:

• $$\mathit{is\_empty}(\mathit{create()})=\mathrm{True}$$
• $$\mathit{is\_empty}(\mathit{push}(e, s))=\mathrm{False}$$
• $$\mathit{top}(\mathit{push}(e, s)) = e$$
• $$\mathit{pop}(\mathit{push}(e, s)) = s$$

I am trying to do the same for a mutable stack structure, where the $$\mathit{push}$$ and $$\mathit{pop}$$ operations will modify a stack instead of returning one.

The way I am trying to do it is with Hoare triples. I can define the simplest ones (omitting that $$s$$ is a stack and $$e$$ an element):

• $$[]\ s\gets create() \ [\mathit{is\_empty}(s) \text{ yields True}]$$
• $$[]\ \mathit{push}(e, s) \ [\mathit{is\_empty}(s) \text{ yields False}]$$

However, I am not finding a satisfactory axiom for $$\mathit{pop}$$. I could write $$[\mathit{is\_empty}(s) \text{ yields some value } b]\ \mathit{push}(e, s)\mathord{;}\ \mathit{pop}(s)\ [\mathit{is\_empty}(s) \text{ yields the same value } b]$$.

But this axiom formally applies only when one pushes and immediately pops, which is too restrictive.

With the immutable version, the corresponding axiom (mutatis mutandis: sequencing vs. function composition) is acceptable, because in any actual $$\mathit{pop}(s)$$ where $$s$$ is an expression that correctly denotes a non-empty stack, the expression $$s$$ can be reduced (in the rewriting sense) to a normal form $$\mathit{push}(\cdot, \cdot)$$, and the axiom then applies, allowing further reduction.

This does not seem to work in the mutable/imperative case. Or does it?

A solution would be to use properties that express the length of a stack: pushing adds one, popping subtracts one, and expressing emptiness becomes easy. But this would not suffice to track the values of the elements; applying the same idea would amount to keeping a whole immutable stack in the Hoare assertions to track the mutable stack in the imperative program.
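One way to make that "track with an immutable stack" idea concrete is ghost (model) state: the specification carries an immutable sequence alongside the mutable stack, and each triple relates the two. A minimal Python sketch, with runtime assertions standing in for the Hoare pre/postconditions (the class and field names are mine, not from any formalism):

```python
class Stack:
    """Mutable stack carrying a 'ghost' immutable model (a tuple).

    The model is used only inside assertions; it plays the role of the
    immutable stack appearing in the Hoare-style specification.
    """
    def __init__(self):
        self._items = []        # the actual mutable state
        self._model = ()        # ghost state: immutable snapshot

    def is_empty(self):
        assert (len(self._items) == 0) == (self._model == ())  # invariant
        return len(self._items) == 0

    def push(self, e):
        old = self._model                  # 'old' value for the postcondition
        self._items.append(e)
        self._model = old + (e,)
        assert not self.is_empty()         # { } push(e, s) { not is_empty(s) }

    def pop(self):
        assert self._model != ()           # precondition: non-empty
        old = self._model
        e = self._items.pop()
        self._model = old[:-1]
        assert e == old[-1]                # pop yields the last pushed element
        return e
```

The ghost tuple never influences the computation; it only lets the specification of `pop` refer to the stack's full abstract value rather than to an immediately preceding `push`.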

Is there a neater approach to this?

## Formal sums on groups with symmetric finite presentations

Suppose we have a group $$G$$ with a “symmetric” finite presentation: $$G = \langle g_i,\, i=1\ldots m \mid R_\alpha (\{g_i\}),\, \alpha=1\ldots n \rangle$$ where $$R_\alpha (\{g_i\}) \iff R_\alpha(\{g_{\sigma(i)}\}) \,\,\, \forall \sigma \in S_m\,.$$ That is, the set of relations is symmetric under every permutation of the generators.

Now I want to make a formal sum over elements of $$G$$. Can I do so by taking a formal power series in $$\sum_i g_i$$? And is there a well-defined way of determining said power series from the set of relations $$R_\alpha$$?

## Sets, logic, and formal languages. Which precedes the other?

Recently I was reading about relations, and one passage stated “The notion of logical equivalence is, as its name suggests, an equivalence relation on the set of propositional terms”.

Now, to me this application of sets to logic seemed a bit circular. I always saw a set as a mathematical idea, and mathematics as living ‘inside’ a logical system. So how can we use sets to describe logic if logic is used to build up mathematics?

I am also encountering this confusion in Enderton’s logic book, for example: “An expression is a well-formed formula iff it is a member of every inductive set”.

## Is there some way of proving that this simple pattern tiles the plane? Is a formal proof even necessary?

I’m thinking about the well-known pattern generated by constructing a series of squares whose side lengths follow the Fibonacci sequence. Each time we add a new square, we choose a side of the current rectangle for it to ‘branch off from’, for lack of a better term. If we cycle through the sides, first choosing the right side, then the top, the left, and the bottom, repeatedly in that order, we obtain something like the image below.

It is often stated that this iterative procedure will ‘tile the plane’. I take this to mean that given any point on the infinite two-dimensional Euclidean plane, there exists at least one (or possibly exactly one, depending on our definition) square that contains this point. More loosely: in the limit (as the number of squares approaches infinity), the whole plane is covered by this pattern of squares.

Here it seems fairly obvious that this will be the case, and yet, as always, I wonder whether there is a more formal way to prove it, or whether such a proof is even a sensible notion.

Could we perhaps show that the function mapping each square to the set of points it contains is ‘surjective’ in the sense that every point of the plane lies in at least one of these sets? Or, I suppose, show that there exists a function in the opposite direction, from points to squares, that is defined on the whole plane and assigns each point a square containing it?

Maybe we could start by observing that the region occupied by the first square is covered, and then use an induction step to show that, given any covered point of the plane, the surrounding points (in some sense) are also covered at some stage of the iteration.
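That induction can be made concrete by tracking the union of the squares after each step: in this construction the union is always a rectangle, and attaching a square to the right or left widens it by the current height, while attaching to the top or bottom heightens it by the current width, so all four extents grow like Fibonacci numbers. A small Python sketch of the bookkeeping (my own coordinates, with the first unit square at $$[0,1]\times[0,1]$$):

```python
def covered_rectangle(n):
    """Bounding box (xmin, xmax, ymin, ymax) of the union after attaching
    n squares to the initial unit square, cycling right, top, left, bottom."""
    xmin, xmax, ymin, ymax = 0, 1, 0, 1
    for k in range(n):
        w, h = xmax - xmin, ymax - ymin
        side = h if k % 2 == 0 else w   # right/left squares match the height,
                                        # top/bottom squares match the width
        if k % 4 == 0:
            xmax += side                # attach on the right
        elif k % 4 == 1:
            ymax += side                # attach on top
        elif k % 4 == 2:
            xmin -= side                # attach on the left
        else:
            ymin -= side                # attach on the bottom
    return xmin, xmax, ymin, ymax
```

Since the union at every stage is exactly this rectangle, ‘covers the plane in the limit’ reduces to the four extents diverging, which the Fibonacci growth in every direction provides: any given point is inside the rectangle from some stage onward.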

In order to even construct such a proof, we would need a more rigorous definition of ‘tiles the plane’ than ‘it covers the whole thing after long enough’. Does such a definition exist? Surely it must, and yet I can’t find it anywhere.

## Formal way to list use cases by groups or sets

Just looking for some advice. I have to write a Software Analysis Document for a new application, which basically amounts to listing and describing all of its use cases.

First, I have to list all use cases of that application, like this:

However, I’m wondering if it would be better to present those use cases in groups.

I don’t know whether grouping use cases is appropriate; I can’t find any sample like that on Google.

Is there a formal way to do that?

## Could it make sense to construe formal proofs of theorem consequences as sound deductive inference?

“A theory T is a conceptual class consisting of certain of these elementary statements. The elementary statements which belong to T are called the elementary theorems of T and said to be true.” (Haskell Curry 2010)

“The chain of symbolic manipulations in the calculus corresponds to and represents the chain of deductions in the deductive system.” (Braithwaite 1962)

$$\forall F \in \mathit{Formal\_Systems}\ \forall x \in \mathrm{WFF}(F)\ \big(\mathit{Sound\_Deduction}(F, x) \leftrightarrow (F \vdash x)\big)$$

## Can deductive inference and formal proof be unified this way?

Philosophy of Logic – Reexamining the Formalized Notion of Truth https://philpapers.org/archive/OLCPOL.pdf

## Converting a formal language to a context-free grammar

I am trying to convert the formal language $$L=\{ a^m b^n \mid m \ge 0,\ 2m \ge n \ge m\}$$ (written above in regular-expression-like notation, though the language itself is not regular) to a context-free grammar. How can I derive the grammar?
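One candidate grammar is $$S \to aSb \mid aSbb \mid \varepsilon$$: every $$a$$ that is produced brings along either one or two $$b$$’s, which is exactly the constraint $$m \le n \le 2m$$. A small Python sketch that cross-checks this grammar against the language definition (the function names are mine):

```python
def in_language(w):
    """Membership test for L = { a^m b^n | m >= 0, m <= n <= 2m }."""
    m = len(w) - len(w.lstrip('a'))   # count of leading a's
    n = len(w) - m                    # remaining characters must all be b's
    return w == 'a' * m + 'b' * n and m <= n <= 2 * m

def derivable(max_steps):
    """All strings derivable from S -> a S b | a S b b | eps
    using at most max_steps applications of the recursive rules."""
    level, result = {''}, {''}
    for _ in range(max_steps):
        level = ({'a' + w + 'b' for w in level}
                 | {'a' + w + 'bb' for w in level})
        result |= level
    return result
```

After $$k$$ recursive steps the derived strings are $$a^k b^n$$ with $$n$$ a sum of $$k$$ choices from $$\{1,2\}$$, so $$n$$ ranges over the whole interval $$[k, 2k]$$, matching the language exactly.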

## Exercise on formal group laws over an algebraically closed field

There is an exercise in Weinstein’s notes on Lubin–Tate theory: show that there is a unique (up to isomorphism) one-dimensional formal group law of given finite height $$h$$ over an algebraically closed field of characteristic $$p$$. A hint is to show that for the Dieudonné module $$M$$ of such a formal group law $$\mathcal{F}$$ we have $$F^h(M)=pM$$.

I think that to prove the latter statement we need the following facts:

• by definition, there is a power series $$g(X)\in W(\bar{F_p})[[X]]$$ such that $$[p]_{\mathcal{F}}X=g(F^h(X))$$ and such that $$g'(0)\not\equiv 0 \pmod p$$. I believe this means that we can find another power series $$f\in W(\bar{F_p})[[X]]$$ such that $$g\circ f\equiv 1 \pmod p$$.
• power series congruent $$\bmod\ p$$ induce the same map on $$H^1_{dR}$$.
• reparametrization by a power series from $$W(\bar{F_p})[[X]]$$ preserves the class of closed/exact forms.

I am not sure how to proceed from here. One problem is that Frobenius does not really induce a linear map between Dieudonné modules but a semi-linear one (maybe here we should use that the ground field is algebraically closed, so $$g(F^h(X))=F^h(g_1(X))$$, where $$g_1$$ is the power series whose coefficients are the preimages of the coefficients of $$g$$ under the lift of the $$\bar{F_p}$$-Frobenius to $$W(\bar{F_p})$$). Could somebody give a proof detailed enough for a novice to follow?