Assuming $P \neq NP$, proof that $L = \{a \mid a \in SAT \text{ and every clause consists of } \log_2|a| \text{ literals}\}$ is in $P$ [closed]

I'm really stuck on the following question (assuming $P \neq NP$):

$$L = \{a \mid a \in SAT \text{ and every clause consists of } \log_2|a| \text{ literals}\}$$

I don't understand how $L$ could be in $P$ when we know that $SAT \not\in P$. How can one verify that $a$ is satisfiable without using the Turing machine that decides whether $a \in SAT$?
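A counting sketch that seems relevant here (assuming, as the notation suggests, that $|a|$ is the length of the encoding of $a$ and that each clause contains $\log_2|a|$ literals over distinct variables):

```latex
% A uniformly random assignment falsifies a fixed clause C with
% log2|a| independent literals with probability
\Pr[C \text{ falsified}] = 2^{-\log_2|a|} = \frac{1}{|a|}.
% A formula of length |a| has m \le |a| / \log_2|a| < |a| clauses,
% so by the union bound
\Pr[\text{some clause falsified}] \le \frac{m}{|a|} < 1,
% hence every formula of this syntactic shape is satisfiable, and
% deciding L would reduce to a polynomial-time syntactic check.
```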

Proof that L^2 is regular => L is regular

I'm trying to show $L^2 \in \mathsf{REG} \implies L \in \mathsf{REG}$, where $L^2 = \{w = w_1w_2 \mid w_1, w_2 \in L\}$, but I can't seem to find a proof that feels right.

I first tried to show $L \in \mathsf{REG} \implies L^2 \in \mathsf{REG}$ by constructing a machine $M$ that consists of two machines $A = A'$, with $A$ recognizing $L$. $M$ has the same start states as $A$, but the final states of $A$ are glued together with the start states of $A'$. Further, $M$ uses the same accepting states as $A'$. Hope that makes sense so far 😀
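A minimal sketch of that construction (the Python `NFA` encoding and the state numbering are my own invention, purely for illustration; the epsilon-style gluing is simulated by also entering $A'$'s start states whenever $A$ could accept):

```python
class NFA:
    """Tiny NFA: states are ints; trans(state, symbol) returns a set of states."""

    def __init__(self, start, accept, trans):
        self.start = set(start)
        self.accept = set(accept)
        self.trans = trans

    def accepts(self, word):
        states = self.start
        for c in word:
            states = {q2 for q in states for q2 in self.trans(q, c)}
        return bool(states & self.accept)


def concat_nfa(a, b):
    """NFA for L(a)L(b); assumes the state sets of a and b are disjoint
    and that each trans function returns set() on states it does not know.

    Gluing a's final states to b's start states is done without epsilon
    moves: whenever a step could land in an accepting state of a, the
    combined machine also enters b's start states.
    """
    start = a.start | (b.start if a.start & a.accept else set())

    def trans(q, c):
        out = a.trans(q, c) | b.trans(q, c)
        if out & a.accept:          # a could accept here -> also start b
            out |= b.start
        return out

    return NFA(start, b.accept, trans)
```

For example, with $A$ recognizing $\{a\}$ on states $\{0, 1\}$ and $A'$ a disjoint copy on $\{2, 3\}$, the combined machine accepts exactly `aa`.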

Now to show $L^2 \in \mathsf{REG} \implies L \in \mathsf{REG}$, I'd argue the same way, but:

The machine $M'$ that accepts $L^2$ has to recognize $w_i \in L$ in some way, and because $L^2$ is regular, $M'$ has to be an NFA/DFA. So the machine has to check whether $w_i \in L$, and this can't be done with anything other than an NFA/DFA.

This feels wrong and not very mathematical, so maybe somebody knows how to do this?

Is my understanding of strictness correct in this proof of a `foldl` rule?

Exercise G in Chapter 6 of Richard Bird’s Thinking Functionally with Haskell asks the reader to prove

foldl f e . concat  =  foldl (foldl f) e 

given the rule

foldl f e (xs ++ ys)  =  foldl f (foldl f e xs) ys 

There's no mention of whether the given rule applies to infinite lists, nor does the answer in the book mention infinite lists. If my reasoning is correct, both are indeed valid for infinite lists, provided f is strict.
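For finite, fully defined lists the given rule is at least easy to sanity-check concretely; here is a quick sketch in Python, with `functools.reduce` playing the role of `foldl` (the particular `f` is an arbitrary choice, but deliberately non-commutative so the equation is not trivially true):

```python
from functools import reduce

def foldl(f, e, xs):
    # reduce with an initializer is exactly Haskell's foldl on finite lists
    return reduce(f, xs, e)

f = lambda acc, x: 2 * acc + x      # arbitrary non-commutative step
e, xs, ys = 0, [1, 2, 3], [4, 5]

# foldl f e (xs ++ ys)  ==  foldl f (foldl f e xs) ys
assert foldl(f, e, xs + ys) == foldl(f, foldl(f, e, xs), ys)
```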

The undefined case for the given rule requires this additional proof:

-- f is strict  ⟹  foldl f is strict  (i.e., foldl f undefined = undefined)

     foldl f undefined xs       =        undefined xs

-- case undefined
     foldl f undefined undefined         undefined undefined
--   =  {foldl.0}                        =  {?}
     undefined                           undefined

-- case []
     foldl f undefined []                undefined []
--   =  {foldl.1}                        =  {?}
     undefined                           undefined

-- case (x:xs)
     foldl f undefined (x:xs)            undefined (x:xs)
--   =  {foldl.2}                        =  {?}
     foldl f (f undefined x) xs          undefined
--   =  {f is strict}
     foldl f (undefined x) xs
--   =  {?}
     foldl f undefined xs
--   =  {induction}
     undefined xs
--   =  {?}
     undefined

As an aside, my proof for the dual of the above rule for foldr:

-- f x is strict  ⟹  foldr f is strict  (i.e., foldr f undefined = undefined)

     foldr f undefined xs       =        undefined xs

-- case undefined
     foldr f undefined undefined         undefined undefined
--   =  {foldr.0}                        =  {?}
     undefined                           undefined

-- case []
     foldr f undefined []                undefined []
--   =  {foldr.1}                        =  {?}
     undefined                           undefined

-- case (x:xs)
     foldr f undefined (x:xs)            undefined (x:xs)
--   =  {foldr.2}                        =  {?}
     f x (foldr f undefined xs)          undefined
--   =  {induction}
     f x (undefined xs)
--   =  {?}
     f x undefined
--   =  {f x is strict}
     undefined

The given rule:

-- f is strict  ⟹  foldl f e (xs ++ ys)  =  foldl f (foldl f e xs) ys

     foldl f e (xs ++ ys)       =        foldl f (foldl f e xs) ys

-- case undefined
     foldl f e (undefined ++ ys)         foldl f (foldl f e undefined) ys
--   =  {++.0}                           =  {foldl.0}
     foldl f e undefined                 foldl f undefined ys
--   =  {foldl.0}                        =  {f is strict  ⟹  foldl f is strict}
     undefined                           undefined ys
--                                       =  {?}
                                         undefined

-- case []
     foldl f e ([] ++ ys)                foldl f (foldl f e []) ys
--   =  {++.1}                           =  {foldl.1}
     foldl f e ys                        foldl f e ys

-- case (x:xs)
     foldl f e ((x:xs) ++ ys)            foldl f (foldl f e (x:xs)) ys
--   =  {++.2}                           =  {foldl.2}
     foldl f e (x : (xs ++ ys))          foldl f (foldl f (f e x) xs) ys
--   =  {foldl.2}                        =  {induction}
     foldl f (f e x) (xs ++ ys)          foldl f (f e x) (xs ++ ys)

My solution to the exercise:

-- f is strict  ⟹  foldl f e . concat  =  foldl (foldl f) e

     foldl f e (concat xs)       =       foldl (foldl f) e xs

-- case undefined
     foldl f e (concat undefined)        foldl (foldl f) e undefined
--   =  {concat.0}                       =  {foldl.0}
     foldl f e undefined                 undefined
--   =  {foldl.0}
     undefined

-- case []
     foldl f e (concat [])               foldl (foldl f) e []
--   =  {concat.1}                       =  {foldl.1}
     foldl f e []                        e
--   =  {foldl.1}
     e

-- case (x:xs)
     foldl f e (concat (x:xs))           foldl (foldl f) e (x:xs)
--   =  {concat.2}                       =  {foldl.2}
     foldl f e (x ++ concat xs)          foldl (foldl f) (foldl f e x) xs
--   =  {f is strict  ⟹  foldl f e (xs ++ ys)  =  foldl f (foldl f e xs) ys}
     foldl f (foldl f e x) (concat xs)
--   =  {induction}
     foldl (foldl f) (foldl f e x) xs

Does this work? If so, is it often the case that rules restricted to finite lists can be made to work for all lists given additional strictness requirements like this?

There are several lines above with {?} given as my reasoning. They could be replaced by {undefined x = undefined}, but I am just guessing there. If that is true, how could it be justified?
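As a finite-list sanity check of the main equation, with Python's `reduce` again standing in for `foldl` (strictness plays no role for finite, fully defined inputs):

```python
from functools import reduce
from itertools import chain

def foldl(f, e, xs):
    return reduce(f, xs, e)

f = lambda acc, x: 2 * acc + x      # arbitrary non-commutative step
e = 0
xss = [[1, 2], [], [3, 4, 5]]

# foldl f e (concat xss)  ==  foldl (foldl f) e xss
lhs = foldl(f, e, list(chain.from_iterable(xss)))
rhs = foldl(lambda acc, xs: foldl(f, acc, xs), e, xss)
assert lhs == rhs
```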

On the proof techniques of Udi Manber

I was familiar with the approach of first coming up with an algorithm and then proving a loop invariant for it, as elucidated in CLRS (Introduction to Algorithms, Thomas H. Cormen, Charles E. Leiserson, Ronald L. Rivest, and Clifford Stein). Lately, on reading Udi Manber's Introduction to Algorithms: A Creative Approach, I have come across the idea of positing an algorithm and then proving it by induction, obtaining the algorithm itself in the process. It's like having your cake and eating it too.

There is one question which remains inexplicable to me. When I am proving an algorithm using Udi Manber's approach, am I arguing in the object language or in the metalanguage? In either case, how am I generating a proof at all? From the metalanguage it seems sensible to generate a proof in the object language, but arguing the soundness/completeness of this class of arguments appears difficult. But how do I guarantee that the proof is correct in the object language if it is generated by the object language itself? It is unclear to me whether it is the metalanguage or the object language that is involved here.

This question might seem poorly phrased, but I cannot find a better way to express it. Udi Manber's approach seems to generate an algorithm despite not knowing a priori what the algorithm is. This is counterintuitive to me. Please kindly explain.

Confused by proof of correctness of Majority

I have been studying a streaming algorithm that determines whether there is a majority element in a stream, but I am confused by a proof of its correctness.

The algorithm works as follows. You keep one counter $c$ and a store for one item, called $a^*$. When a new item arrives, you first check whether $c = 0$. If so, you set $c = 1$ and store the arriving item in $a^*$. Otherwise, if $c > 0$ and the arriving item is the same as $a^*$, you increment $c$; if not, you decrement $c$.

If there is a majority element, then it will be stored in $a^*$ at the end.
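The algorithm described above is the Boyer–Moore majority vote; a direct Python transcription (the function name is mine):

```python
def majority_candidate(stream):
    """One pass, O(1) memory: the counter c and the stored item a*.

    If a majority element exists, it is returned; if none exists, the
    returned value is an arbitrary candidate and would need a second
    pass over the stream to verify.
    """
    c, a_star = 0, None
    for item in stream:
        if c == 0:
            a_star, c = item, 1     # adopt the arriving item
        elif item == a_star:
            c += 1                  # same as stored item
        else:
            c -= 1                  # different item cancels one vote
    return a_star
```

For example, on the stream `ABABBB` (where `B` occurs 4 times out of 6) the candidate left in $a^*$ is `B`.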

In the notes from http://www.cs.toronto.edu/~bor/2420f17/L11.pdf there is a proof of this fact (called simpler proof).


I can see that if there is a majority item then $c'$ will be positive at the end of the stream. But:

  • How do we know that $a^*$ will hold the majority item at the end?
  • Does $c'$ being positive imply that $c$ will be positive too?

Difference between the logic and the type system of a proof assistant?

In Comparing Mathematical Provers (section 4.1), Wiedijk classifies the logics and type systems of different proof assistants. I do not see what he means by the type system of an assistant. He only says:

A system is only considered typed when the types are first class objects that occur in variable declarations and quantifiers.

I can only think of types in goals. For instance, in Isabelle, if you write a goal using variables (I don't think you "declare" variables), you can check the type of these variables. But this type is certainly a type in the logic I'm using.

It would be interesting to clarify this and apply it to the cases of Isabelle, Coq, and Metamath (which is untyped and apparently based on proof trees, which could give a hint).

Use of the pumping lemma for non-regular languages (proof verification)

$L = \{w \in \{0,1,a\}^* \mid \#_0(w) = \#_1(w)\}$

We show that $L$ is not regular using the pumping lemma.

We choose $w = 0^p 1^p a$.

$|w| = 2p + 1$

Now $|xy|$ has to be $\leq p$.

So $x$ and $y$ can only contain zeros, and $z = 1^p a$.

$xyz = 0^p 1^p a$

Now let $i = 0$:

$xy^0z = xz = 0^{p - |y|} 1^p a$

Since $|y| \geq 1$, we have $p - |y| \neq p$, so this choice of $i$ leads to a word that is not in $L$. So we cannot pump $y$ and stay in the language.

So L is not regular.
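As a concrete sanity check of the case analysis, the sketch below (with an assumed small pumping length `p = 4`) confirms that every admissible decomposition pumps down to a word outside $L$:

```python
def in_L(w):
    # membership test for L: equally many 0s and 1s (a's are unconstrained)
    return w.count('0') == w.count('1')

p = 4                          # stand-in for the pumping length
w = '0' * p + '1' * p + 'a'
assert in_L(w)

# every decomposition w = xyz with |xy| <= p and |y| >= 1:
for lx in range(p):
    for ly in range(1, p - lx + 1):
        y = w[lx:lx + ly]
        assert set(y) == {'0'}                  # y lies inside the leading 0s
        assert not in_L(w[:lx] + w[lx + ly:])   # pumping down (i = 0) leaves L
```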

I’m trying to learn the usage of the pumping lemma. Is my proof correct?

Any suggestions are welcome. Thanks!

Complete proof of PAC learning of axis-aligned rectangles

I have already read PAC learning of axis-aligned rectangles and understand every other part of the example.

From Foundations of Machine Learning by Mohri, 2nd ed., p. 13 (book) or p. 30 (PDF), I am struggling to understand the following sentence of Example 2.4, which is apparently the result of a contrapositive argument:

… if $R(\text{R}_S) > \epsilon$, then $\text{R}_S$ must miss at least one of the regions $r_i$, $i \in [4]$.

i.e., $ i = 1, 2, 3, 4$ . Could someone please explain why this is the case?

The way I see it is this: given $\epsilon > 0$, if $R(\text{R}_S) > \epsilon$, then $\mathbb{P}_{x \sim D}(\text{R} \setminus \text{R}_S) > \epsilon$. We also know from this stage of the proof that $\mathbb{P}_{x \sim D}(\text{R}) > \epsilon$ as well. Beyond this, I'm not sure how the sentence above is reached.
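For reference, the contrapositive that I believe is intended (a sketch; this assumes, as in the usual presentation of this example, that each region $r_i$ is a strip along one side of $\text{R}$ chosen with probability mass exactly $\epsilon/4$):

```latex
% If R_S \subseteq R meets all four strips r_1, ..., r_4, then the error
% region is covered by their union:
\text{R} \setminus \text{R}_S \subseteq \bigcup_{i=1}^{4} r_i
\;\Longrightarrow\;
R(\text{R}_S) = \mathbb{P}_{x \sim D}(\text{R} \setminus \text{R}_S)
  \le \sum_{i=1}^{4} \mathbb{P}_{x \sim D}(r_i) = 4 \cdot \frac{\epsilon}{4} = \epsilon.
% Taking the contrapositive: R(R_S) > \epsilon implies that R_S misses
% at least one of the r_i.
```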