## C++ Help with Separating Axis Theorem

I am trying to detect collisions between two triangles using the Separating Axis Theorem, but I don't know what is wrong with my code. `CollisionHelper::isTriangleIntersectingTriangle` is called every frame and is passed the vertices of both triangles. It never returns true, however. I've been stuck on this for days now. Any help is appreciated.

```cpp
glm::vec3 CalcSurfaceNormal(glm::vec3 tri1, glm::vec3 tri2, glm::vec3 tri3) {
    //Subtracts each coordinate respectively
    glm::vec3 u = tri2 - tri1;
    glm::vec3 v = tri3 - tri1;

    glm::vec3 nrmcross = glm::cross(u, v);
    nrmcross = glm::normalize(nrmcross);
    return nrmcross;
}

bool SATTriangleCheck(glm::vec3 axis,
                      glm::vec3 tri1vert1, glm::vec3 tri1vert2, glm::vec3 tri1vert3,
                      glm::vec3 tri2vert1, glm::vec3 tri2vert2, glm::vec3 tri2vert3) {
    int t1v1 = glm::dot(axis, tri1vert1);
    int t1v2 = glm::dot(axis, tri1vert2);
    int t1v3 = glm::dot(axis, tri1vert3);
    int t2v1 = glm::dot(axis, tri2vert1);
    int t2v2 = glm::dot(axis, tri2vert2);
    int t2v3 = glm::dot(axis, tri2vert3);

    int t1min = glm::min(t1v1, glm::min(t1v2, t1v3));
    int t1max = glm::max(t1v1, glm::max(t1v2, t1v3));
    int t2min = glm::min(t2v1, glm::min(t2v2, t2v3));
    int t2max = glm::max(t2v1, glm::max(t2v2, t2v3));

    if ((t1min < t2max && t1min > t2min) || (t1max < t2max && t1max > t2min))
        return true;
    if ((t2min < t1max && t2min > t1min) || (t2max < t1max && t2max > t1min))
        return true;

    return false;
}

bool CollisionHelper::isTriangleIntersectingTriangle(glm::vec3 tri1, glm::vec3 tri2, glm::vec3 tri3,
                                                     glm::vec3 otherTri1, glm::vec3 otherTri2, glm::vec3 otherTri3) {
    //Triangle surface normals, 2 axes to test
    glm::vec3 tri1FaceNrml = CalcSurfaceNormal(tri1, tri2, tri3);
    glm::vec3 tri2FaceNrml = CalcSurfaceNormal(otherTri1, otherTri2, otherTri3);

    glm::vec3 tri1Edge1 = tri2 - tri1;
    glm::vec3 tri1Edge2 = tri3 - tri1;
    glm::vec3 tri1Edge3 = tri3 - tri2;
    glm::vec3 tri2Edge1 = otherTri2 - otherTri1;
    glm::vec3 tri2Edge2 = otherTri3 - otherTri1;
    glm::vec3 tri2Edge3 = otherTri3 - otherTri2;

    //axes
    //TODO: may need to (un)normalize the cross products
    glm::vec3 axis1 = tri1FaceNrml;
    glm::vec3 axis2 = tri2FaceNrml;
    glm::vec3 axis3 = glm::normalize(glm::cross(tri1Edge1, tri2Edge1));
    glm::vec3 axis4 = glm::normalize(glm::cross(tri1Edge1, tri2Edge2));
    glm::vec3 axis5 = glm::normalize(glm::cross(tri1Edge1, tri2Edge3));
    glm::vec3 axis6 = glm::normalize(glm::cross(tri1Edge2, tri2Edge1));
    glm::vec3 axis7 = glm::normalize(glm::cross(tri1Edge2, tri2Edge2));
    glm::vec3 axis8 = glm::normalize(glm::cross(tri1Edge2, tri2Edge3));
    glm::vec3 axis9 = glm::normalize(glm::cross(tri1Edge3, tri2Edge1));
    glm::vec3 axis10 = glm::normalize(glm::cross(tri1Edge3, tri2Edge2));
    glm::vec3 axis11 = glm::normalize(glm::cross(tri1Edge3, tri2Edge3));

    //Perform SAT
    if (SATTriangleCheck(axis1, tri1, tri2, tri3, otherTri1, otherTri2, otherTri3)) return true;
    if (SATTriangleCheck(axis2, tri1, tri2, tri3, otherTri1, otherTri2, otherTri3)) return true;
    if (SATTriangleCheck(axis3, tri1, tri2, tri3, otherTri1, otherTri2, otherTri3)) return true;
    if (SATTriangleCheck(axis4, tri1, tri2, tri3, otherTri1, otherTri2, otherTri3)) return true;
    if (SATTriangleCheck(axis5, tri1, tri2, tri3, otherTri1, otherTri2, otherTri3)) return true;
    if (SATTriangleCheck(axis6, tri1, tri2, tri3, otherTri1, otherTri2, otherTri3)) return true;
    if (SATTriangleCheck(axis7, tri1, tri2, tri3, otherTri1, otherTri2, otherTri3)) return true;
    if (SATTriangleCheck(axis8, tri1, tri2, tri3, otherTri1, otherTri2, otherTri3)) return true;
    if (SATTriangleCheck(axis9, tri1, tri2, tri3, otherTri1, otherTri2, otherTri3)) return true;
    if (SATTriangleCheck(axis10, tri1, tri2, tri3, otherTri1, otherTri2, otherTri3)) return true;
    if (SATTriangleCheck(axis11, tri1, tri2, tri3, otherTri1, otherTri2, otherTri3)) return true;

    return false;
}
```
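For reference, here is a minimal, self-contained sketch (plain structs instead of glm; all names here are my own and purely illustrative) of projecting triangle vertices onto a candidate axis in floating point and testing the two projection intervals for overlap, including the case where one interval fully contains the other:

```cpp
#include <algorithm>

// Minimal stand-in for glm::vec3, just for illustration.
struct Vec3 { float x, y, z; };

float dot(const Vec3& a, const Vec3& b) {
    return a.x * b.x + a.y * b.y + a.z * b.z;
}

// Projection of three vertices onto an axis, kept as a [min, max] interval.
struct Interval { float min, max; };

Interval project(const Vec3& axis, const Vec3& a, const Vec3& b, const Vec3& c) {
    float pa = dot(axis, a), pb = dot(axis, b), pc = dot(axis, c);
    return { std::min({pa, pb, pc}), std::max({pa, pb, pc}) };
}

// Two intervals fail to overlap only when one ends before the other begins;
// phrased this way, the test also covers containment and touching endpoints.
bool overlaps(const Interval& s, const Interval& t) {
    return s.min <= t.max && t.min <= s.max;
}
```

Note that the projections stay `float` rather than being truncated to integers, and disjointness is tested non-strictly, so nested or touching intervals count as overlapping.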

## Incomputable sets of low degree vs Rice's theorem?

I have heard that there are sets that are not computable, but are lower in degree than the halting problem.

How does this not contradict Rice's theorem? Are there any concrete examples of such sets?

## Difficulty in understanding a portion of the proof of the white-path theorem in the CLRS text

I was going through the $$\text{DFS}$$ section of Introduction to Algorithms by Cormen et al., and I had difficulty understanding the $$\Leftarrow$$ direction of the proof of the white-path theorem. The theorem in question depends on two other results, so I present those dependencies before the theorem itself and the difficulty I face with it.

Dependencies:

Theorem 22.7 (Parenthesis theorem) In any depth-first search of a (directed or undirected) graph $$G = (V, E)$$, for any two vertices $$u$$ and $$v$$, exactly one of the following three conditions holds:

• the intervals $$[d[u], f[u]]$$ and $$[d[v], f[v]]$$ are entirely disjoint, and neither $$u$$ nor $$v$$ is a descendant of the other in the depth-first forest,

• the interval $$[d[u], f[u]]$$ is contained entirely within the interval $$[d[v], f[v]]$$, and $$u$$ is a descendant of $$v$$ in a depth-first tree,

• the interval $$[d[v], f[v]]$$ is contained entirely within the interval $$[d[u], f[u]]$$, and $$v$$ is a descendant of $$u$$ in a depth-first tree.

Corollary 22.8 (Nesting of descendants’ intervals) Vertex $$v$$ is a proper descendant of vertex $$u$$ in the depth-first forest for a (directed or undirected) graph $$G$$ if and only if $$d[u] < d[v] < f[v] < f[u]$$.

Theorem 22.9 (White-path theorem)

In a depth-first forest of a (directed or undirected) graph $$G = (V, E)$$, vertex $$v$$ is a descendant of vertex $$u$$ if and only if at the time $$d[u]$$ that the search discovers $$u$$, vertex $$v$$ can be reached from $$u$$ along a path consisting entirely of white vertices.

Proof

$$\Rightarrow$$ : Assume that $$v$$ is a descendant of $$u$$. Let $$w$$ be any vertex on the path between $$u$$ and $$v$$ in the depth-first tree, so that $$w$$ is a descendant of $$u$$. By Corollary 22.8, $$d[u] < d[w]$$, and so $$w$$ is white at time $$d[u]$$.

$$\Leftarrow$$:

1. Suppose that vertex $$v$$ is reachable from $$u$$ along a path of white vertices at time $$d[u]$$, but $$v$$ does not become a descendant of $$u$$ in the depth-first tree.
2. Without loss of generality, assume that every other vertex along the path becomes a descendant of $$u$$. (Otherwise, let $$v$$ be the closest vertex to $$u$$ along the path that doesn’t become a descendant of $$u$$.)
3. Let $$w$$ be the predecessor of $$v$$ in the path, so that $$w$$ is a descendant of $$u$$ ($$w$$ and $$u$$ may in fact be the same vertex) and, by Corollary 22.8, $$f[w] \leq f[u]$$.
4. Note that $$v$$ must be discovered after $$u$$ is discovered, but before $$w$$ is finished.$$^\dagger$$ Therefore, $$d[u] < d[v] < f[w] \leq f[u]$$.
5. Theorem 22.7 then implies that the interval $$[d[v], f[v]]$$ is contained entirely within the interval $$[d[u], f[u]]$$.$$^{\dagger\dagger}$$
6. By Corollary 22.8, $$v$$ must after all be a descendant of $$u$$. $$^\ddagger$$

$$\dagger$$ : It is clear that since $$u$$ is the first vertex to be discovered, any other vertex (including $$v$$) is discovered after it. In step $$1$$ we assume that $$v$$ does not become a descendant of $$u$$, but the claim "before $$w$$ is finished" seems to result from exploring the edge $$(w,v)$$ (and this exploration ultimately makes $$v$$ a descendant of $$u$$, so the proof could have ended here $$^\star$$).

$$\dagger\dagger$$ : Considering the exact statement of Theorem 22.7, I do not see which of its conditions leads to the implication in step $$5$$.

$$\ddagger$$ : The proof could have ended at $$\star$$; why the stretch to step $$6$$?

In short, I am unable to follow the proof of the $$\Leftarrow$$ direction. I assume the authors are arguing by contradiction.

(I thought of an alternative inductive proof. Suppose vertex $$v$$ is reachable from $$u$$ along a path of white vertices at time $$d[u]$$. We apply induction on the vertices of the white path. As a base case, $$u$$ is an improper descendant of itself. As the inductive hypothesis, let all vertices from $$u$$ to $$w$$ be descendants of $$u$$, where $$w$$ is the predecessor of $$v$$ in the white path. The inductive step follows from the exploration of the edge $$(w,v)$$. But I want to understand the proof in the text.)

## Rice’s Theorem for Turing machine with fixed output

I was supposed to use Rice's theorem to determine whether the language $$L_{5} = \{w \in \{0,1\}^{*} \mid \forall x \in \{0,1\}^{*}, M_{w}(w) = x\}$$ is decidable.

First of all, I don't understand why we can use Rice's theorem in the first place: if I chose two Turing machines $$M_{w}$$ and $$M_{w'}$$ with $$w \neq w'$$, then I would get $$M_{w}(w) = M_{w'}(w) = x$$. But this could lead to $$w'$$ not being in $$L_{5}$$ while $$w \in L_{5}$$. Or am I misunderstanding something?

Second: the solution says that the language $$L_{5}$$ is decidable because $$L_{5} = \emptyset$$, since the output is uniquely determined for a fixed input. But why is that so? I thought that $$L_{5}$$ is not empty, because there are TMs that output $$x$$ on their own input and there are some that do not.

## A bit confused about the time complexity of this theorem vs. my situation

In this paper, the polynomiality of the n-fold integer programming problem is discussed.

(My question is very simple and is stated at the end. There is no need to understand the excerpts deeply; they are provided for completeness.)

Excerpt 1:
The n-fold integer programming problem. Fix a $$p \times q$$ integer matrix $$A$$. Given positive integer $$n$$ and integer vectors $$b = (b^0, b^1, \ldots , b^n)$$ and $$c = (c^1, \ldots , c^n)$$, where $$b^0 \in \mathbb{Z}^q$$, and $$b^k \in \mathbb{Z}^p$$ and $$c^k \in \mathbb{Z}^q$$ for $$k = 1, \ldots, n$$, find a nonnegative integer vector $$x = (x^1, \ldots , x^n)$$, where $$x^k \in \mathbb{N}^q$$ for $$k = 1, \ldots, n$$, which minimizes $$cx= \sum_{k=1}^{n}c^k x^k$$ subject to the equations $$\sum_{k=1}^{n} x^k = b^0$$ and $$Ax^k = b^k$$ for $$k = 1, \ldots, n$$.

Excerpt 2:
In this article, we establish the following theorem. Naturally, the input size is $$n$$ plus the bit size of the integer objective vector $$c \in \mathbb{Z}^{nq}$$ and the integer right-hand side vector $$b \in \mathbb{Z}^{q+np}$$.
Theorem 1.1. Fix any integer matrix $$A$$. Then there is a polynomial-time algorithm that, given any $$n$$ and any integer vectors $$b$$ and $$c$$, solves the corresponding n-fold integer programming problem.

My Question:
I have an integer program at hand. For a given integer $$k$$, my $$A$$ (as above) is $$3 \times 2k$$ and my $$n$$ (as above) is $$k$$, and let us assume that the bit size of my integers is $$O(\log k)$$. Is the aforementioned theorem still applicable to my situation, especially since $$q$$ in my situation depends on $$k$$?

## Help understanding a theorem Kleinberg proves related to sequence alignment

This is from Kleinberg’s Algorithm Design text (Theorem 6.14)

Let $$M$$ be an alignment of $$X$$ and $$Y$$. If $$(m, n) \notin M$$, then either the $$m^{\text{th}}$$ position of $$X$$ or the $$n^{\text{th}}$$ position of $$Y$$ is not matched in $$M$$.

The theorem does not state that $$m$$ and $$n$$ must be the last positions of $$X$$ and $$Y$$ respectively, but earlier, when introducing the topic, he uses $$m$$ and $$n$$ to denote the lengths of the two strings. Which of these readings is correct?

## Consistent theory based on L in which not(A->A) is a theorem

I am working on a problem in which I have a theory T based on a language L, and the only information we have is that T is consistent and that |- not(A -> A). Given this information, how can I tell whether this theory is sound, complete, and/or decidable?

My only guess is that T is sound: since T is consistent and we can derive not(A->A) from the axioms and inference rules, not(A->A) is a theorem, and because of that we can assume that T is sound (because the premises and conclusions are true).

Thank you!

## Regarding space of linear speed-up theorem

I was reading the proof of the speed-up lemma from this slide deck (pages 10 to 13), but I could not understand why the plus-two term appears in the new space bound. Would anybody elaborate?

Furthermore, for a Turing machine that uses a linear amount of space, isn't it possible to reduce the amount of space used by a constant factor without the additive constant overhead (i.e., to have only the εf(n) part as the new space bound)?

Theorem: Suppose TM M decides language L in space f(n). Then for any ε > 0, there exists a TM M' that decides L in space εf(n) + 2.

## Euclidean geometry theorem proving complexity

Euclidean geometry is complete, so the problem of determining whether a statement $$A$$ is provable is computable. Do we know its time complexity?

## Clarification of the proof involving the regularity condition in Master Theorem

I was going through the text Introduction to Algorithms by Cormen et al., where I came across the following statement in the proof of the third case of the master theorem.

(Master theorem) Let $$a \geqslant 1$$ and $$b > 1$$ be constants, let $$f(n)$$ be a function, and let $$T(n)$$ be defined on the nonnegative integers by the recurrence (the recursion divides a problem of size $$n$$ into $$a$$ subproblems of size $$n/b$$ each and takes $$f(n)$$ time for the divide and combine steps)

$$T(n) = aT(n/b)+ f(n),$$

where we interpret $$n/b$$ to mean either $$\lceil n/b \rceil$$ or $$\lfloor n/b \rfloor$$. Then $$T(n)$$ has the following asymptotic bounds:

1. If $$f(n)=O(n^{\log_b a - \epsilon})$$ for some constant $$\epsilon > 0$$, then $$T(n)=\Theta(n^{\log_b a})$$.

2. If $$f(n)=\Theta(n^{\log_b a})$$, then $$T(n)=\Theta(n^{\log_b a}\lg n)$$.

3. If $$f(n)=\Omega(n^{\log_b a + \epsilon})$$ for some constant $$\epsilon > 0$$, and if $$af(n/b) \leqslant cf(n)$$ for some constant $$c < 1$$ and all sufficiently large $$n$$, then $$T(n)=\Theta(f(n))$$.
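As a concrete instance of case 3 (my own example, not from the text), take $$T(n) = 2T(n/2) + n^2$$:

```latex
% T(n) = 2T(n/2) + n^2: here a = 2, b = 2, so n^{\log_b a} = n.
% f(n) = n^2 = \Omega(n^{1 + \epsilon}) with \epsilon = 1.
% The regularity condition holds with c = 1/2 < 1:
a f(n/b) = 2\left(\frac{n}{2}\right)^{2} = \frac{n^2}{2} = \frac{1}{2} f(n) \leqslant c\, f(n),
% so case 3 applies and T(n) = \Theta(n^2).
```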

Now, in the proof of the master theorem with $$n$$ an exact power of $$b$$, the expression for $$T(n)$$ reduces to:

$$T(n)=\Theta(n^{\log_b a})+\sum_{j=0}^{\log_b n -1} a^j f(n/b^j)$$

Let us assume,

$$g(n)=\sum_{j=0}^{\log_b n -1} a^j f(n/b^j)$$

Then, for the proof of the third case of the master theorem, the authors prove that:

If $$a f(n/b)\leqslant c f(n)$$ for some constant $$c<1$$ and for all $$n\geqslant b$$, then $$g(n)=\Theta(f(n))$$.

They say that since $$a f(n/b)\leqslant c f(n)$$ implies $$f(n/b)\leqslant (c/a) f(n)$$, iterating $$j$$ times yields $$f(n/b^j)\leqslant (c/a)^j f(n)$$.
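Spelling out the iteration myself (this assumes the bound $$a f(m/b) \leqslant c f(m)$$ is available at every intermediate scale $$m = n, n/b, \ldots, n/b^{j-1}$$, which is what the $$n \geqslant b$$ requirement is meant to guarantee):

```latex
f(n/b^j) \leqslant \frac{c}{a}\, f(n/b^{j-1})
         \leqslant \left(\frac{c}{a}\right)^{2} f(n/b^{j-2})
         \leqslant \cdots
         \leqslant \left(\frac{c}{a}\right)^{j} f(n)
```

Substituting this into the sum then gives $$g(n) = \sum_{j=0}^{\log_b n - 1} a^j f(n/b^j) \leqslant f(n) \sum_{j \geqslant 0} c^j = \frac{f(n)}{1-c} = O(f(n))$$, since $$c < 1$$.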

I could not quite follow the mathematics behind iterating $$j$$ times.

Moreover, I could not quite get the logic behind assuming $$n\geqslant b$$ to capture the requirement that $$n$$ be sufficiently large (as the third case of the master theorem states).

Moreover, in the analogous proof of the third case of the general master theorem (not assuming $$n$$ to be an exact power of $$b$$), the book assumes $$n\geqslant b+b/(b-1)$$ to satisfy the "sufficiently large $$n$$" condition.

I do not quite understand why this specific value is taken to stand for sufficiently large $$n$$.

(I did not give the details of the second situation, as I feel it is similar to the first.)