## Finding bound function of loop invariant

In *Programming in the 1990s*, by Edward Cohen, the author gives an example of a bound function.

For example, if we have $$B \equiv 10 - n$$ where $$n = 0$$, say, then $$B$$ will eventually be falsified if $$10 - n$$ is decreased. Thus we would choose $$t = 10 - n$$ as our bound function. Decreasing $$t$$ will eventually falsify $$B$$.

How did he find such a bound function? Is it dependent on some variable, like this: $$t(n) = 10 - n$$?
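For intuition, a bound function is just an integer expression that stays non-negative while the guard holds and strictly decreases on every iteration. A minimal sketch in Python (the function name and concrete loop are my own illustration, not from Cohen's book):

```python
def loop_with_bound_function():
    """Run a loop whose termination is witnessed by the bound function t = 10 - n."""
    n = 0
    while n != 10:          # guard B: n != 10
        t_before = 10 - n   # bound function t = 10 - n
        n += 1              # loop body makes progress toward falsifying B
        t_after = 10 - n
        # t strictly decreases and stays non-negative while B holds,
        # so the loop must terminate after finitely many steps.
        assert t_after < t_before and t_after >= 0
    return n

print(loop_with_bound_function())  # 10
```

So yes: the bound function depends on the loop variables, and it is found by asking which expression measures "how far the guard is from being false."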

## How to find the loop invariant in Hoare triples

Hey, I am new to Hoare triples and I can't understand how to find the loop invariant. For example, for this while loop

```
[x > 1 & y > 1]
WHILE x > 0 DO
    x := x - 1;
    y := y + 2
END
[x + y > 5]
```

the invariant is [2x + y > 5]. But I don't understand how to find it. A step-by-step explanation of how to find it would be greatly appreciated.
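One route to intuition: the body leaves the quantity 2x + y exactly unchanged, since $$2(x-1) + (y+2) = 2x + y$$, so any lower bound on it established by the precondition is preserved. A quick Python check (illustrative only, assuming integer inputs so the precondition forces $$2x + y \geq 6$$) that the invariant holds on entry, through every iteration, and implies the postcondition:

```python
def run(x, y):
    """Execute the loop, asserting the invariant 2x + y > 5 at every step."""
    assert x > 1 and y > 1          # precondition (integers, so x >= 2, y >= 2)
    assert 2 * x + y > 5            # invariant established on entry
    while x > 0:
        x, y = x - 1, y + 2         # body preserves 2x + y exactly
        assert 2 * x + y > 5        # invariant maintained
    # on exit x == 0, so x + y == 2x + y > 5
    assert x + y > 5                # postcondition
    return x, y

print(run(2, 2))  # (0, 6)
```

The general heuristic is to weaken the postcondition x + y > 5 into something the body preserves; noticing the conserved combination 2x + y does exactly that.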

## How to effectively represent and generate 2D cellular automaton rules that are invariant under rotation and reflection of the input matrix?

Consider cellular automaton rules for a two-dimensional universe with two states, where a cell’s new state can depend on its previous state and the states of the cells in its Moore neighborhood. Such a rule can be modeled as a function that takes as input a 3 by 3 matrix of bits, and outputs a bit.

Such a rule can be represented easily using a string of 512 bits representing the output for each of the $$2^{3 \times 3}$$ input states. Since this representation is bijective, a rule can also be randomly generated by sampling 512 bits.

However, it is sometimes preferable for reasons of aesthetics and comprehensibility that the output of the rule be the same when the input matrix is rotated or flipped horizontally or vertically. The rule for Conway’s Game of Life is an example of a function satisfying this restriction.

Given a function $$r : 2^9 \to 2$$ which does not necessarily obey this restriction, we can obtain a function that does by mapping the input to the lexicographically smallest matrix obtainable by reflecting or rotating it before passing it to $$r$$. However, this provides little insight into the questions I am interested in:

- How many functions obey this restriction?
- How can they be compactly represented using a bit string?
- How can they be efficiently randomly generated?
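All three questions reduce to counting the orbits of the 3×3 neighborhoods under the symmetry group (assuming, as the question suggests, the dihedral group of the square: 4 rotations and their reflections). An invariant rule is exactly a choice of one output bit per orbit, so the count is 2 raised to the number of orbits, the bit string has one bit per orbit, and random generation samples each orbit's bit independently. A brute-force sketch in Python:

```python
from itertools import product

def symmetries(m):
    """All 8 images of a 3x3 matrix (tuple of row-tuples) under rotation/reflection."""
    def rot90(m):
        # rotate 90 degrees clockwise: transpose the row-reversed matrix
        return tuple(zip(*m[::-1]))
    out = []
    for _ in range(4):
        out.append(m)
        out.append(m[::-1])  # reflection (flip top-to-bottom)
        m = rot90(m)
    return out

def canonical(m):
    """Lexicographically smallest matrix in the symmetry orbit of m."""
    return min(symmetries(m))

# Enumerate all 2^9 = 512 neighborhoods and group them into symmetry orbits.
orbits = {canonical((bits[0:3], bits[3:6], bits[6:9]))
          for bits in product((0, 1), repeat=9)}
print(len(orbits))  # 102, so there are 2**102 invariant rules
```

Burnside's lemma gives the same count without enumeration: (512 + 8 + 32 + 8 + 64 + 64 + 64 + 64)/8 = 102. Sampling 102 random bits, one per canonical neighborhood, therefore generates a uniformly random invariant rule.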

## Loop invariant initialisation confusion

Consider the algorithm LastMatch below, which returns the offset (shift) of the last occurrence of the pattern P in text T, or -1 if P does not occur in T:

```
LastMatch(T, P)
    for (s = T.length - P.length downto 0)
        j = 1
        while (j <= P.length and P[j] == T[s + j])
            j++
        if (j == P.length + 1)
            return s
    return -1
```

I’ve been given a loop invariant for the while loop:

$$\forall k\,(1 \leq k < j \implies P[k] == T[s+k])$$

The initialisation of this invariant confuses me. Before we enter the while loop, $$j=1$$. So we are asking: is there a $$k$$ with $$1\leq k<1$$ such that $$P[k] == T[s+k]$$?

I cannot find a $$k$$ which satisfies this inequality, so I do not understand what this is saying. So why is the invariant satisfied before we enter the loop? Is it because when I cannot find a $$k$$ it implies that $$P[k]$$ and $$T[s+k]$$ are both equal to the empty set?
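One way to see what is going on is to transcribe the algorithm into Python (0-indexed rather than the pseudocode's 1-indexing, an adaptation on my part) and express the invariant with `all()`, which returns `True` over an empty range, exactly the "vacuously true" reading of a universal statement with no witnesses:

```python
def last_match(T, P):
    """Return the largest shift s at which P occurs in T, or -1 if it does not."""
    n, m = len(T), len(P)
    for s in range(n - m, -1, -1):
        j = 0
        # Invariant: every character of P examined so far matches T at shift s.
        # With j == 0 the range is empty, so all(...) is True vacuously.
        assert all(P[k] == T[s + k] for k in range(j))
        while j < m and P[j] == T[s + j]:
            j += 1
            assert all(P[k] == T[s + k] for k in range(j))
        if j == m:
            return s
    return -1

print(last_match("abcabc", "abc"))  # 3
```

The invariant is a universal statement ("for all k in this range..."), not an existential one; when the range is empty it holds with nothing to check, which is all "initialisation" requires.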

## Lie algebras with unique invariant scalar product

Every 1-dimensional or simple complex Lie algebra admits an invariant, symmetric and non-degenerate bilinear form. This form is unique up to multiplication by a nonzero constant (which in Yang-Mills theories plays the role of the coupling constant).
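For reference (this is standard background, not specific to the question): on a simple Lie algebra the essentially unique such form is the Killing form,

$$\kappa(X, Y) = \operatorname{tr}(\operatorname{ad} X \circ \operatorname{ad} Y),$$

and invariance is the statement

$$\kappa([Z, X], Y) + \kappa(X, [Z, Y]) = 0 \quad \text{for all } Z.$$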

Are there other Lie algebras with such a unique scalar product?

## Does WRT invariant detect hyperelliptic involution on the genus 2 surface?

The Witten-Reshetikhin-Turaev invariant cannot detect the hyperelliptic involution on the genus 1 surface, in the sense that if $$M_U$$ is the mapping torus of a mapping class group element $$U\in \mathrm{Mod}(\Sigma_1)$$, then $$Z(M_U) = Z(M_{-U})$$, where $$-I\in \mathrm{Mod}(\Sigma_1)$$ is the hyperelliptic involution.

What about the genus 2 case? If $$-I\in \mathrm{Mod}(\Sigma_2)$$ is the hyperelliptic involution on the genus 2 surface, then is there any $$U\in \mathrm{Mod}(\Sigma_2)$$, for which the following does not hold? $$Z(M_U) \stackrel{?}{=} Z(M_{-U})$$

## Projection of an invariant almost complex structure to a non-integrable one

My apologies in advance if my question is obvious or elementary.

We identify elements of $$S^3$$ with their quaternion representation $$x_1+x_2 i +x_3 j +x_4 k$$. We consider two independent vector fields $$S_1(a)=ja$$ and $$S_2(a)=ka$$ on $$S^3$$. On the other hand, $$P: S^3\to S^2$$ is an $$S^1$$-principal bundle with the obvious action of $$S^1$$ on $$S^3$$. Then the span of $$S_1, S_2$$ is the standard horizontal space associated to the standard connection of the principal bundle $$S^3 \to S^2$$. Then each horizontal space has an almost complex structure $$J$$, the standard structure associated to the $$S_1, S_2$$ coordinates.

Is this structure invariant under the action of $$S^1$$? If yes, we can define a unique almost complex structure on $$S^2$$ which is $$P$$ related to the structure on total space. Now is this structure on $$S^2$$ integrable?

As a similar question, is there an example of a principal bundle $$P\to X$$ such that $$P$$ is a real manifold, $$X$$ is a complex manifold, and a connection admits an invariant almost complex structure which projects to a non-integrable structure?

## A question about a step in Browder’s paper about the Kervaire invariant

This is the paper in question. I am trying to understand the construction of the quadratic function $$\psi$$ in section 1. Here is the setup. All homology and cohomology groups are with mod $$2$$ coefficients. Let $$M=M^{2q}$$ be a manifold of dimension $$2q$$, or more generally a Poincaré duality space (using mod 2 coefficients). Browder assumes that there is a space $$X=X_{2q+k}$$ and a map $$\eta\colon X\to \Sigma^k M_+$$ with the following properties:

1. $$H_{2q+k}(X)\cong {\mathbb Z}/2$$

2. $$\eta_*$$ induces an isomorphism on $$H_{2q+k}$$.

3. The Steenrod operation $$Sq^{q+1}\colon H^{q+k-1}(X)\to H^{2q+k}(X)$$ is zero.

$$\eta$$ induces a homomorphism $$(\eta^*)^{q+k}\colon H^{q+k}(\Sigma^kM_+)\to H^{q+k}(X).$$

Of course there is an isomorphism $$H^{q+k}(\Sigma^kM_+)=H^q(M)$$

Next, we want to define a function $$\ker[(\eta^*)^{q+k}] \to {\mathbb Z}/2$$. We do it by means of a functional Steenrod operation. Suppose $$x\in \ker[(\eta^*)^{q+k}]$$. Then there is a functional Steenrod operation that associates to $$x$$ an element in a quotient group of $$H^{2q+k}(X)$$. My problem is with the indeterminacy. I do not understand the proof of Lemma 1.1, which says that the indeterminacy is zero.

Let me try to explain further where exactly my difficulty is. If I am not mistaken, one way to construct the functional operation is to consider the following sequence of maps $$X \stackrel{\eta}{\to} \Sigma^k M_+ \stackrel{f_x}{\to} K({\mathbb Z}/2, q+k)\stackrel{Sq^{q+1}}{\to} K({\mathbb Z}/2, 2q+k+1)$$ Here $$K(A,n)$$ denotes the Eilenberg–MacLane space, $$f_x$$ is a map representing $$x$$, etc. By our assumptions the composites $$f_x\eta$$ and $$Sq^{q+1}f_x$$ are null-homotopic. So we may form the Toda bracket $$\langle\eta, f_x, Sq^{q+1}\rangle$$. According to my understanding, the Toda bracket lands in the following quotient group $$H^{2q+k}(X)/\left(\eta^*(H^{2q+k}(\Sigma^k M_+))+ Sq^{q+1}H^{q+k-1}(X)\right)$$

The image group $$Sq^{q+1}H^{q+k-1}(X)$$ is zero by assumption (3) above. But $$\eta^*(H^{2q+k}(\Sigma^k M_+))$$ is not zero, by assumption 2. So why is the indeterminacy zero?

Browder says that the indeterminacy is given by the subgroup $$Im(Sq^{q+1}) + Im(g^*)$$, where $$g=f_x\circ\eta$$, so maybe the question is: why can one replace $$Im(\eta^*)$$ with $$Im(g^*)$$?

Browder refers to a paper of Steenrod for the construction of the functional operation, but reading Steenrod's description of the indeterminacy did not dispel my confusion.

## Casson invariant and Euler characteristic

A slogan I frequently hear is: "the Casson invariant is the Euler characteristic of the Floer homology of flat SU(2)-connections on an integral homology sphere". Is there a single paper/reference that essentially states this as a result? It has been difficult to locate precise definitions. In addition, a sketch of how this works would be very helpful.

## Loop Invariant Code Motion – am I missing something?

I’ve been implementing LICM for a project, and came upon a strange observation.

Let’s say we have a loop

```
int i = 10;
while (i > 0) {
    a = 2;
    i--;
}
```

One can easily see that a = 2 is invariant. What we do next is check (for each invariant candidate) whether the containing basic block dominates all uses of said variable and all exits of the loop.

The former is clearly true, as a is only used once. The latter is what I don't seem to get. Looking at the CFG of the example above, we can clearly see that the loop body (which is just a single basic block in this case) does not dominate all exits!

In the general case, we cannot know for sure whether the body of a loop will execute at least once. This implies that one cannot simply move invariant instructions out of the loop, as they would then be executed regardless of the actual value of the loop termination condition.

I have not seen this addressed in any of the material covering LICM, nor in the Wikipedia article. In fact, their example, in which they transform

```
for (int i = 0; i < n; ++i) {
    x = y + z;
    a[i] = 6 * i + x * x;
}
```

into

```
x = y + z;
t1 = x * x;
for (int i = 0; i < n; ++i) {
    a[i] = 6 * i + t1;
}
```

seems incorrect, as the loop condition could in general be false on the first iteration, in which case x = y + z should not have been executed at all.

## My proposed solution

All code deemed invariant should be put in an if (expr), where expr is the loop condition. For my original example, that would give us

```
int i = 10;
if (i > 0) {
    a = 2;
}
while (i > 0) {
    i--;
}
```
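For what it is worth, this guard is essentially what compilers produce via loop inversion: rewriting `while (c) body` into `if (c) { do body while (c); }` so that hoisted code lands in a block dominated by the guard. A small Python check (illustrative only, modeling the Wikipedia example's semantics) that the guarded-hoist version agrees with the original even when the loop runs zero times:

```python
def original(i, y, z):
    """Baseline: recompute the invariant expression inside the loop."""
    x = None
    acc = []
    for k in range(i):
        x = y + z
        acc.append(6 * k + x * x)
    return x, acc

def hoisted_guarded(i, y, z):
    """Hoisted version guarded by the loop condition, as loop inversion produces."""
    x = None
    acc = []
    if i > 0:                # guard: the body runs at least once
        x = y + z            # invariant code, executed only if the loop runs
        t1 = x * x
        for k in range(i):
            acc.append(6 * k + t1)
    return x, acc

for i in (0, 1, 5):
    assert original(i, 2, 3) == hoisted_guarded(i, 2, 3)
```

So the observation in the question is correct, and the standard fix is exactly the proposed guard, folded into the loop structure itself.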