Proving that $NPSPACE\subseteq PSPACE$ using the proof of Savitch’s Theorem

We were shown a proof of $ NPSPACE\subseteq PSPACE$ in class. In short, the proof says:

  • Let $ L\in NPSPACE$ .
  • Then there exists a non-deterministic polynomial space bounded Turing machine $ M$ that accepts $ L$ .
  • For every input word $ w$ , the number of vertices in the configuration graph of $ M$ is exponential in $ |w|$ .
  • Nevertheless, using the algorithm from Savitch’s proof, we can check whether there exists a path in the graph from the initial configuration to an accepting configuration, using space polynomial in $ |w|$ .

My problem is the memory required to store the graph. How can we store the graph using space polynomial in $ |w|$ ?
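For intuition, the key point is that Savitch’s algorithm never materializes the configuration graph: each vertex is a configuration (writable in polynomial space), edges are recomputed on demand from the transition function, and only the recursion stack is stored. Here is a toy sketch in Python; the names `successors` and `all_configs` are my own stand-ins for "one step of $ M$" and "enumerate all configurations":

```python
def reachable(c1, c2, t, successors, all_configs):
    # Savitch-style test: can c2 be reached from c1 in at most 2**t steps?
    # The graph is never stored: edges come from successors() on demand,
    # and memory use is one (c1, c2, mid) triple per recursion level,
    # i.e. O(t) configurations in total.
    if t == 0:
        return c1 == c2 or c2 in successors(c1)
    return any(
        reachable(c1, mid, t - 1, successors, all_configs)
        and reachable(mid, c2, t - 1, successors, all_configs)
        for mid in all_configs()
    )
```

On a toy path graph, e.g. `successors = lambda c: {c + 1} if c < 7 else set()` with `all_configs = lambda: range(8)`, `reachable(0, 7, 3, ...)` asks whether vertex 7 is reachable from 0 in at most $ 2^3$ steps. For a polynomial-space machine, `t` is polynomial in $ |w|$ and each configuration takes polynomial space, so the whole stack fits in polynomial space even though the graph has exponentially many vertices.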

What’s the proof complexity of E-KRHyper (E-hyper tableau calculus)?

Before the question, let me explain what E-KRHyper is:

E-KRHyper is a theorem proving and model generation system for first-order logic with equality. It is an implementation of the E-hyper tableau calculus, which integrates a superposition-based handling of equality into the hyper tableau calculus (source: System Description: E-KRHyper).

I am interested in the complexity of the E-KRHyper system because it is used in the question answering system LogAnswer (LogAnswer – A Deduction-Based Question Answering System (System Description)).

I have found a partial answer:

our calculus is a non-trivial decision procedure for this fragment (with equality), which captures the complexity class NEXPTIME (source: Hyper Tableaux with Equality).

I don’t understand much of complexity theory so my question is:

What is the complexity of proving a theorem, in terms of the number of axioms in the database and in terms of some parameter of the question to be answered?

How to prove an implication about an upper bound mentioned in the proof of master theorem?


How can we rigorously prove the proposition “Suppose the condition in case 1 holds; then equation (4.23) is true”? For fixed constants $ b$ and $ j$ , the implication highlighted in green makes sense. If the upper bound on $ j$ were fixed, equation (4.23) would follow directly. However, as $ n$ increases, the upper bound on $ j$ also increases, though more slowly. This is where I find it difficult to prove that there always exists a value $ m > 0$ such that for all $ n \ge m$ , equation (4.23) holds.
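I cannot see the screenshot, but assuming this is the master-theorem proof from CLRS, where case 1 assumes $ f(n) = O(n^{\log_b a - \epsilon})$ and (4.23) bounds $ g(n) = \sum_{j=0}^{\log_b n - 1} a^j f(n/b^j)$ by $ O(n^{\log_b a})$ , the usual way around the growing range of $ j$ is to split the sum at the threshold $ n_0$ of the asymptotic bound on $ f$ (a sketch, not the book’s exact wording):

```latex
g(n) =
\underbrace{\sum_{j \,:\, n/b^j \ge n_0} a^j f\!\left(\frac{n}{b^j}\right)}_{\text{bound on } f \text{ applies termwise}}
\;+\;
\underbrace{\sum_{j \,:\, n/b^j < n_0} a^j f\!\left(\frac{n}{b^j}\right)}_{\text{at most } \lceil \log_b n_0 \rceil \text{ terms}}
```

In the first sum every argument satisfies $ n/b^j \ge n_0$ , so $ f(n/b^j) \le c\,(n/b^j)^{\log_b a - \epsilon}$ holds for every term and the geometric-series bound goes through unchanged. The second sum has a constant number of terms (independent of $ n$ ), each at most $ a^{\log_b n} \cdot \max_{m \le n_0} f(m) = O(n^{\log_b a})$ , so it is absorbed into the $ O(n^{\log_b a})$ bound. No single $ m$ valid for all $ j$ at once is needed.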

Adcash.com (Cash traffic) is a Big Scam With Proof (Warning)

This affiliate network, Adcash.com, just scammed me out of 500 Euro and closed my account. They did not pay me for my work, and when I asked them about it they refused to answer. I spoke to a support agent named Maxime on MSN Live Chat, and when I asked why they had done this, the answer was to block me.


And here's my account, suspended without any reason :(


I sent them about 5 messages and got no response :(

Warning: don't trust this…


Proof of the average case of the Heap Sort algorithm

Consider the following python implementation of the Heap Sort algorithm:

def heapsort(lst):
    length = len(lst) - 1
    leastParent = length // 2
    for i in range(leastParent, -1, -1):
        moveDown(lst, i, length)
    for i in range(length, 0, -1):
        if lst[0] > lst[i]:
            swap(lst, 0, i)
            moveDown(lst, 0, i - 1)

def moveDown(lst, first, last):
    largest = 2 * first + 1
    while largest <= last:
        # pick the right child if it is larger than the left
        if (largest < last) and (lst[largest] < lst[largest + 1]):
            largest += 1
        # larger child is bigger than parent
        if lst[largest] > lst[first]:
            swap(lst, largest, first)
            # move down to the larger child
            first = largest
            largest = 2 * first + 1
        else:
            return  # exit

def swap(lst, i, j):
    tmp = lst[i]
    lst[i] = lst[j]
    lst[j] = tmp

I have been able to formally prove that the worst case is in $ \Theta(n \log n)$ and that the best case is in $ \Theta(n)$ (some might argue the best case is in $ \Theta(n \log n)$ as well, since that’s what most internet searches return, but consider what happens with an input list in which all of the elements are equal).

I have shown both upper and lower bounds for the worst and best cases by observing that the path taken by the moveDown function depends on the height of the heap/tree and on whether the elements of the list are distinct or all equal.

I have not been able to prove the average case of this algorithm, which I know is also in $ \Theta(n \log n)$ . I do know, however, that I am supposed to consider a family of input lists of length $ n$ , and that I am allowed to make an assumption such as that all of the elements in the list are distinct. I confess that I am not good at average-case analysis and would really appreciate it if someone could give a complete and thorough proof (including the exact expressions, especially for the number of inputs), as it would help me understand the concept a great deal.
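While not a substitute for a proof, it may help to see the quantity being analyzed. The sketch below instruments the implementation above with a comparison counter (`heapsort_count` and `comps` are my own names), so that averages over random permutations of distinct elements can be compared against $ c \cdot n \log_2 n$ :

```python
import math
import random

def heapsort_count(lst):
    """Heap sort as above, mutating lst in place and returning the
    number of element comparisons performed."""
    comps = 0

    def move_down(first, last):
        nonlocal comps
        largest = 2 * first + 1
        while largest <= last:
            if largest < last:
                comps += 1                       # child vs child
                if lst[largest] < lst[largest + 1]:
                    largest += 1
            comps += 1                           # child vs parent
            if lst[largest] > lst[first]:
                lst[first], lst[largest] = lst[largest], lst[first]
                first = largest
                largest = 2 * first + 1
            else:
                return

    length = len(lst) - 1
    for i in range(length // 2, -1, -1):         # build phase
        move_down(i, length)
    for i in range(length, 0, -1):               # extraction phase
        comps += 1                               # root vs last element
        if lst[0] > lst[i]:
            lst[0], lst[i] = lst[i], lst[0]
            move_down(0, i - 1)
    return comps
```

Averaging `heapsort_count(random.sample(range(n), n))` over many trials for several values of $ n$ traces out the empirical average-case curve; the formal average-case proof then amounts to showing that the expectation of this counter over all $ n!$ equally likely orderings is $ \Theta(n \log n)$ , with the $ \Omega(n \log n)$ lower bound being the interesting direction.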

Proof of concept for enterprise level Microservices

I’ve been studying gRPC a lot and have gone through several presentations on YouTube. I’m presenting two options. The idea is to develop a minimalistic but comprehensive demo in Scala that shows the right way of doing things with microservices.

At first I thought gRPC was for inter-service communication only, until I saw this video (5:40) (screenshot); it turns out there are libraries for the browser as well. Does that mean REST, or HTTP+JSON, is out of the window?

I’ve read that companies that used REST for inter-service communication are struggling.

I’d appreciate it if someone could even scale this up to enterprise level and suggest more ideas and features to add; it doesn’t matter if it’s over-engineered, because it’s just a proof of concept.

I’ve also read that each microservice should have a separate database, but I’m using a shared one at the moment. I’d like more insight into connecting the pieces of the middleware.


Proof (by contradiction) of the emptiness problem

I fail to understand the proof of undecidability of the emptiness problem:

$ E_{TM} = \{\langle M \rangle | M $ is a TM and $ L(M) = \emptyset\}$



1) Use the description of $ M$ and $ w$ to construct $ M_1$ , which on Input $ x$ behaves as follows:

  1. If $ x \neq w$ , reject
  2. If $ x = w$ , run $ M$ on input $ w$ and accept if $ M$ does

2) Run $ R$ on input $ \langle M_1\rangle$

3) If $ R$ accepts, reject; if $ R$ rejects, accept


I do understand the basic idea of a reduction and in particular the reduction of $ A_{TM}$ to $ Halt_{TM}$ , however,

  • I do not see how $ E_{TM}$ could be used as a subroutine to solve $ A_{TM}$ . The whole construction of $ M_1$ confuses me a lot. To me it looks like $ M_1$ is just a filter that rejects everything except $ w$ .

  • But why does $ M_1$ even have to check if $ x$ equals $ w$ ? As soon as $ S$ is fed a particular pair $ \langle M,w\rangle$ , $ x$ will be equal to $ w$ , no? How can it be anything different from $ w$ ?
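To see the shape of the reduction concretely, here is a toy sketch in Python, where "machines" are plain predicates and `toy_R` is a stand-in for the hypothetical decider $ R$ for $ E_{TM}$ (all names are mine, and real Turing machines cannot be decided by sampling like this; it only illustrates the wiring):

```python
def make_M1(M, w):
    # Build M1 from <M, w>: the filter is the whole point.
    # L(M1) is either {w} (if M accepts w) or empty (if it does not).
    def M1(x):
        if x != w:
            return False      # reject everything except w
        return M(w)           # accept w iff M accepts w
    return M1

def S(M, w, R):
    # Would-be decider for A_TM: L(M1) is empty iff M does not
    # accept w, so we just flip R's answer.
    return not R(make_M1(M, w))

def toy_R(machine, domain=("a", "b", "ab")):
    # Toy 'emptiness decider': in this finite demo a machine's
    # language is empty iff it accepts none of the sample inputs.
    return not any(machine(x) for x in domain)
```

Note that $ x$ is not something $ S$ feeds to $ M_1$ : it is the input that $ R$ conceptually quantifies over when it asks whether $ M_1$ accepts *anything at all*. That is why $ M_1$ must compare its input $ x$ against the fixed $ w$ baked in at construction time. For instance, with `M = lambda x: x == "ab"`, `S(M, "ab", toy_R)` is `True` while `S(M, "a", toy_R)` is `False`.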

Proof that $G=(V,E)$ is connected if every node has at least one incident edge, $|E|\ge n-1$ and $|V|=n$

Let $ G=(V,E)$ be an undirected graph without self-loops or parallel edges.

Does the statement:
If $ |V|=n$ , $ |E|\ge n-1$ and every node has at least one incident edge, then $ G$ is connected;
hold?

I’ve proved it for $ |E|=n-1$ :

Per induction:
Start:
For $ \left|V\right|=1$ the graph is trivially connected.

Induction step:
Let the statement be shown for all graphs $ G=\left(V,E\right)$ where $ \left|V\right|=n-1$ and $ |E| = n-2$ .

Let further $ G=\left(V,E\right)$ with $ \left|V\right|=n$ and $ |E| = n-1$ be given.

We’re now looking for an induced subgraph $ G|_{V^\prime}$ where $ V^\prime\subset V, \left|V^\prime\right|=n-1$ , such that $ G|_{V^\prime}$ has at least $ n-2$ edges.

(Any such subgraph can have at most $ n-2 $ edges, as there’ll always be at least one edge that originally led to the removed node.)

Let’s now assume that every subgraph $ G|_{V^\prime}$ has fewer than $ n-2$ edges.
Then the removed node in every such subgraph would have at least $ 2$ incident edges.

Thus, every node must have at least $ 2$ incident edges, and therefore there’d have to exist at least $ n$ edges in the graph, contradicting $ |E| = n-1$ .

Therefore, there’s at least one subgraph $ G|_{V^\prime}$ with $ n-2$ edges, for which our induction assumption holds. And because there is an edge from $ G|_{V^\prime}$ to the removed node, we get that $ G$ is connected.

Therefore, the induction is proven.

However, if I try to generalize the above proof, the same approach leads to an inequality that only holds if $ |E|>|V|$ .

Therefore, if the above proof can be generalized, how would it look? If not, what’s an example where it fails?
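Separately from the induction, one way to probe the generalization is to test candidate graphs mechanically. A sketch (the helper names are my own):

```python
from collections import deque

def is_connected(n, edges):
    # BFS from vertex 0 over an undirected graph on vertices 0..n-1.
    adj = {v: set() for v in range(n)}
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    seen, queue = {0}, deque([0])
    while queue:
        u = queue.popleft()
        for v in adj[u] - seen:
            seen.add(v)
            queue.append(v)
    return len(seen) == n

def satisfies_hypotheses(n, edges):
    # |E| >= n - 1 and every vertex has at least one incident edge.
    degree = [0] * n
    for u, v in edges:
        degree[u] += 1
        degree[v] += 1
    return len(edges) >= n - 1 and all(d >= 1 for d in degree)

# Candidate to test: a triangle plus a disjoint edge (n = 5, |E| = 4).
example = (5, [(0, 1), (1, 2), (0, 2), (3, 4)])
```

Comparing `satisfies_hypotheses(*example)` with `is_connected(*example)` on small candidates like this one is a quick way to settle the "if not, what's an example where it fails?" half of the question.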

Complexity guess and induction proof

I was trying to prove by induction that
$ $ T(n) = \begin{cases} 1 &\quad\text{if } n\leq 1\ T\left(\lfloor\frac{n}{2}\rfloor\right) + n &\quad\text{if } n\gt1 \ \end{cases} $ $ is $ \Omega(n)$ implying that $ \exists c>0, \exists m\geq 0\,\,|\,\,T(n) \geq cn \,\,\forall n\geq m$

Base case: $ T(1) \geq c \cdot 1 \implies c \leq 1$

Now we shall assume that $ T(k) \geq ck \,\,\forall k < n$ and prove that $ T(n) \geq cn$ .
$$ T(n) = T\left(\lfloor{\tfrac{n}{2}}\rfloor\right) + n \geq c\lfloor{\tfrac{n}{2}}\rfloor + n \geq c \tfrac{n}{2} - 1 + n = n\left(\tfrac{c}{2} - \tfrac{1}{n} + 1\right) \stackrel{?}{\geq} cn $$
which requires $ c \leq 2 - \frac{2}{n}$ .
So we have proved that $ T(n) \geq c n$ in:

1) The base case for $ c \leq 1$

2) The inductive step for $ c \leq 2 - \frac{2}{n}$

Yet we have to find a value that satisfies them both for all $ n\geq 1$ ; the book suggests such a value is $ c = 1$ , which to me is not true since:

$$ 1 \leq 1 \\ 1 \leq 2 - \frac{2}{n} \implies 1 \leq 0 \quad \text{for } n = 1 $$
My guess would be $ 0$ , but that is not an acceptable value. So do we just say it is $ \Omega(n)$ but only for $ n \gt 1$ ? Or how can we deal with this?
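One possible resolution (my own sketch, not the book’s): the second case of the recurrence, and hence the inductive constraint, only applies for $ n \geq 2$ , while $ n = 1$ is covered by the base case alone, so the two constraints never need to hold at the same $ n$ :

```latex
\min_{n \ge 2}\left(2 - \frac{2}{n}\right) = 2 - \frac{2}{2} = 1
\quad\Longrightarrow\quad
c = 1 \text{ works with } m = 1:\qquad
T(1) \ge 1, \qquad T(n) \ge n \ \ (n \ge 2).
```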

Don’t understand one step for AVL tree height log n proof

I came across a proof that an AVL tree has $ O(\log n)$ height, and there’s one step which I do not understand.

Let $ N_h$ represent the minimum number of nodes that can form an AVL tree of height $ h$ . Since we’re looking for the minimum number of nodes, let its children’s minimum node counts be $ N_{h-1}$ and $ N_{h-2}$ .

Proof:

$$ N_h = N_{h-1} + N_{h-2} + 1 \tag{1}$$
$$ N_{h-1} = N_{h-2} + N_{h-3} + 1 \tag{2}$$
$$ N_h = (N_{h-2} + N_{h-3} + 1) + N_{h-2} + 1 \tag{3}$$
$$ N_h > 2N_{h-2} \tag{4}$$
$$ N_h > 2^{h/2} \tag{5}$$

I do not understand how we went from (4) to (5). If anyone could explain, that’d be great. Thanks!
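For what it’s worth, the step from (4) to (5) usually comes from applying (4) repeatedly until the index reaches the bottom of the recurrence (a sketch, ignoring floors and assuming $ N_0 \ge 1$ ):

```latex
N_h > 2N_{h-2} > 2^2 N_{h-4} > \cdots > 2^{k} N_{h-2k}
\;\xrightarrow{\;k = h/2\;}\;
N_h > 2^{h/2} N_0 \ge 2^{h/2}.
```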