Understanding the proofs of Lemmas 10 and 11 in a specific computer science scheduling paper

I’m having a hard time wrapping my head around the proofs of Lemma 10 and Lemma 11 (pages 10 and 11) in the paper Preemptive and Non-Preemptive Real-Time UniProcessor Scheduling.

In general, the proofs have very few intermediate steps, and I can’t see how the author gets from one step to the next. I would therefore be very grateful for a more detailed, step-by-step explanation of these proofs.

Thank you in advance!

Do all languages in $P$ have polynomial proofs that they are in $P$?

A proof that a language $L$ belongs to a complexity class $C$, of the kind accepted by a mathematical journal, can be framed as the existence of a verifier $V$ that takes the proof as the first part of its input and the language as the second. The verifier (the referee) checks that this language is a member (a word) of the language representing the complexity class.

$\mathrm{Verifier} \colon (\text{proof that } L \in C,\ L) \to \{0,1\}$
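As a minimal sketch of this framing (the type names below are illustrative assumptions, not part of any actual refereeing workflow):

    (* A proof is some finite encoding of the argument that L is in C. *)
    type proof = string

    (* A language is given by some finite description, e.g. a machine deciding it. *)
    type language_description = string

    (* The referee: takes the proof and the language description and accepts or rejects. *)
    type verifier = proof -> language_description -> bool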

Do all languages in $P$ have a proof of the fact that they are in $P$ that can be verified in polynomial time? Determining whether an arbitrary language $L$ is in $P$ is undecidable; however, given a proof that a language is in $P$, can that proof be verified in polynomial time?

Are NP proofs limited to polynomial length?


In computational complexity theory, NP (nondeterministic polynomial time) is a complexity class used to classify decision problems. NP is the set of decision problems for which the problem instances, where the answer is “yes”, have proofs verifiable in polynomial time by a deterministic Turing machine.

The proofs for an NP decision problem are verified in polynomial time.

Does this imply the proofs are at most polynomial length?

“Well you have to read the whole input. If the input is longer than polynomial, then the time is greater than polynomial.”

The decision problem “Is the first bit of the input a 0?” can be solved in constant time and space – without reading the whole input.

Therefore, perhaps some NP problem has candidate proofs that are longer than polynomial length but can still be checked in polynomial time.
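As a minimal sketch of that last point (OCaml, with illustrative names; this is not any particular NP verifier), here is a checker that inspects a single character and therefore runs in constant time no matter how long its input is:

    (* Decides "is the first bit of the input a 0?" by reading one character. *)
    let first_bit_is_zero (input : string) : bool =
      String.length input > 0 && input.[0] = '0'

    (* Only input.[0] is examined, even for a very long input string. *)
    let () =
      let huge = "0" ^ String.make 1_000_000 '1' in
      Printf.printf "%b\n" (first_bit_is_zero huge)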

Resolution exponential lower bound… alternative proofs?

I am reading the Resolution proof system exponential lower bound via Haken’s bottleneck method for the Pigeonhole Principle as presented in Arora and Barak’s Computational Complexity: A Modern Approach, Chapter 15. However, I don’t like how the proof is presented in the book and I am having some difficulties following it.

Does somebody know of alternative sources where this same proof is presented? I know there are different techniques to show exponential lower bounds for Resolution, but I want something based on the Pigeonhole Principle. It’s just that the phrasing in this book is truly confusing.

How to tackle Big O proofs that involve multiple parameters

I am getting more and more familiar with the concept of time complexity, but I have never encountered an example where more than one parameter is involved. So, is it possible (well, I am sure it is), and how would I prove

$a^n = \Theta(\log n)$

or any other, similar-looking expression?
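What I believe I have to work with is the standard definition of $\Theta$:

$f(n) = \Theta(g(n)) \iff \exists\, c_1, c_2 > 0,\ \exists\, n_0 \in \mathbb N\ \forall n \ge n_0 : c_1 \cdot g(n) \le f(n) \le c_2 \cdot g(n).$

Applying this to the expression above, I would need, for all sufficiently large $n$,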

$c_1 \cdot \log n \le a^n \le c_2 \cdot \log n$

where, e.g., $c_1 = 1$ and $c_2 = 2$,

$\log n \le a^n \le 2 \cdot \log n.$

Can I go one step further and set $n$ equal to, e.g., 2? This way I will get

$\log(2) \le a^2 \le \log(4),$

which is surely true (for $a$ roughly between 0.55 and 0.77, if the logarithm is base 10)…

…but isn’t that too specific, and doesn’t it interfere with the inequality too much? Sorry if the answer is trivial, but Google is not helping and I have nobody to ask for an explanation.

How come correctness proofs aren’t tautological?

Consider the following function on binary trees, which is supposed to tell whether a given int is a member of a binary tree t:

    type tree = Leaf | Node of int * tree * tree;;

    let rec tmember (t:tree) (x:int) : bool =
      match t with
          Leaf -> false
        | Node (j,left,right) -> j = x || tmember left x || tmember right x;;

If one wants to prove that this function is correct, one would first need to define what tree membership actually means, but I can find no formal way of doing this except to say that x is a member of t if and only if it is either equal to the root of t, or it is a member of the left or right subtree of t. This is essentially saying that x is a member of t if and only if tmember t x returns true.
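Written out, the only candidate definition I can come up with is the inductive one below, which mirrors the code almost symbol for symbol (mem is just a name I am introducing here):

$\mathrm{mem}(x, \mathrm{Leaf})$ never holds, and $\mathrm{mem}(x, \mathrm{Node}(j, \mathit{left}, \mathit{right})) \iff x = j \,\lor\, \mathrm{mem}(x, \mathit{left}) \,\lor\, \mathrm{mem}(x, \mathit{right}).$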

What am I missing here?

Induction proofs in Big-O notation

I’m not sure how to go about this question: prove the following inequality. For a correct proof, we require a value of the constant $c > 0$ and an $N \in \mathbb N$ such that $\forall n > N : f(n) < c \cdot g(n)$.

$\mathcal O(2^n) < \mathcal O(n!)$.

I’m well aware of how to prove $2^n < n!$ by induction; I just don’t understand how one is supposed to find a constant, etc. The only thing that springs to mind here is choosing $N = 4$, since that is when $2^n < n!$ begins to hold. If someone could clarify how I can apply the definition of Big-O notation to solve this, I would be grateful.
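My best guess is that the statement to be shown is the instantiation below, but I am not sure whether this is the intended way of using the definition: choose $c = 1$ and $N = 4$; then for all $n > N$,

$2^n < c \cdot n! = n!,$

which is exactly the inequality the induction establishes.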

Algorithm for automatic construction of natural deduction proofs

I was wondering if there exists any algorithm for the automatic construction of natural deduction proofs. I’m interested in propositional logic and first-order logic.

If there is no such algorithm, can you provide some proof of this fact?

PD0: I’m not interested in any website for solving these kinds of problems. My question is more theoretical.

PD1: This is not homework, just personal interest.

Cut elimination proofs of the consistency of arithmetic

It is well known that one can use cut elimination to establish the consistency of arithmetic (though this involves assuming transfinite induction up to $ \epsilon_0$ .) Most proofs, however, work within an infinitary system with an omega rule. I am looking for proofs of cut elimination in arithmetic that avoid this, and that just work using only the ordinary rules of first-order logic, without the omega-rule. I realize that induction etc. may have to be formulated as rules, rather than axioms, for this to be possible. Are there straightforward proofs of this sort?

I posted this question on mathstackexchange: https://math.stackexchange.com/questions/3269309/cut-elimination-proofs-of-the-consistency-of-arithmetic . I received a reference to Takeuti there, which I am currently looking at. It would be great if there were other options too.

Proofs for red-black tree insertion and deletion

I need to prove that insertion into a red-black tree requires at most O(log n) recolorings and at most one trinode restructuring.

Additionally, I need to show that deletion from a red-black tree requires at most O(log n) recolorings and at most two restructurings.

Insertion

It seems logic that for the insertion and deletion takes up to log(n) recoloring because the tree has always a height of log(n). And also 1 reconstruction seem plausible, becuase the tree is already balanced before we insert a new element. However I’m am not sure how to show that mathematically and proof the 2 theorems. Can you help me with a sketch of the proofs?