Reducing the Dominating Set Problem to SAT

I am trying to solve a problem and I am really struggling; I would appreciate any help.

Given a graph $G$ and an integer $k$, recognize whether $G$ contains a dominating set $X$ with no more than $k$ vertices. This should be done by finding a propositional formula $\phi_{G,k}$ that is satisfiable if and only if there exists a dominating set with no more than $k$ vertices, so basically reducing the problem to SAT.

What I have so far is this boolean formula:

$\phi=\bigwedge\limits_{i\in V} \Big(x_i \vee \bigvee\limits_{j\in V:(i,j)\in E} x_j\Big)$

So basically I define a variable $x_i$ that is set to true when vertex $v_i$ is in the dominating set $X$. The formula says that for each node in $G$, either the vertex itself is in $X$ or one of its adjacent vertices is. This is basically a weighted satisfiability problem: we want the formula to be satisfiable with at most $k$ variables set to true.

My issue now is that I couldn’t come up with a Boolean formula $\phi_{G,k}$ that uses not only the graph $G$ but also the integer $k$ as input. So my question is: how can I modify this formula so that it features $k$, or, if it cannot be modified, how can I come up with a new one?
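For what it’s worth, here is a minimal sketch in Python (all names are mine; clauses are DIMACS-style lists of signed integers) of one way $k$ can enter the formula: keep the domination clauses above and add cardinality clauses stating that among any $k+1$ variables, at least one is false. This naive binomial encoding produces $\binom{n}{k+1}$ clauses, so for a genuinely polynomial reduction you would swap it for a polynomial-size at-most-$k$ encoding such as a sequential counter, but it shows directly where $k$ appears.

    from itertools import combinations

    def dominating_set_cnf(n, edges, k):
        # Variable i (1..n) is true iff vertex i is in the dominating set X.
        closed = {i: {i} for i in range(1, n + 1)}  # closed neighbourhoods
        for u, v in edges:
            closed[u].add(v)
            closed[v].add(u)
        # Domination: each vertex is in X or has a neighbour in X.
        clauses = [sorted(closed[i]) for i in range(1, n + 1)]
        # Cardinality: among any k+1 vertices, at least one is NOT in X,
        # i.e. no more than k variables can be true simultaneously.
        for subset in combinations(range(1, n + 1), k + 1):
            clauses.append([-v for v in subset])
        return clauses

    # Example: the path 1-2-3 with k = 1 is satisfiable by X = {2}.
    print(dominating_set_cnf(3, [(1, 2), (2, 3)], 1))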

How to solve the balancing parentheses problem?

I’m new to dynamic programming and found this problem on http://www.hackerrank.com which seems impossible to do. I’ve been trying to solve it for 3 days now. The main issue is that the input string size is on the order of 10^9 (creating a dynamic-programming 2D array of size 10^9 * 10^9 takes forever), and no matter what I try the execution times out. Could someone please help?

the problem : https://www.hackerrank.com/contests/moraxtreme-4-0/challenges/balancing-parentheses-with-a-twist/problem
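The exact twist in the linked contest problem isn’t reproduced here, so the following is only a hedged sketch of the classic one-pass counter that replaces the 2D DP for plain balancing questions. With inputs around 10^9 characters, whatever the twist is, the intended solution almost certainly builds on an O(n)-time, O(1)-space scan like this rather than on a quadratic table:

    def min_insertions_to_balance(s):
        # Classic one-pass counter: O(n) time, O(1) extra space.
        open_unmatched = 0  # '(' still waiting for a partner
        insertions = 0      # characters we would have to add
        for ch in s:
            if ch == '(':
                open_unmatched += 1
            elif open_unmatched > 0:
                open_unmatched -= 1   # ')' matches an earlier '('
            else:
                insertions += 1       # ')' with no partner: need a '('
        return insertions + open_unmatched  # also close leftover '('

    print(min_insertions_to_balance("(()))("))  # -> 2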

Problem with pwndbg

I am trying to write an exploit for Dovecot (CVE-2019-11500). I downloaded an older version of Dovecot and got it to work (I just did the basics). Now I wanted to debug the program with pwndbg to inspect the heap, but there was a failure:

    pwndbg> r
    Starting program: /usr/local/sbin/dovecot
    process 6484 is executing new program: /usr/local/bin/doveconf
    process 6484 is executing new program: /usr/local/sbin/dovecot
    [New process 6488]
    [New process 6489]
    process 6489 is executing new program: /usr/local/libexec/dovecot/anvil
    [tcsetpgrp failed in terminal_inferior: Kein passender Prozess gefunden]

(Kein passender Prozess gefunden translates to No appropriate process found)

I am pretty new to this whole heap-exploitation and gdb thing, so could somebody help?
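Not a full answer, but a common first step when gdb loses the terminal to a forking daemon like Dovecot (the commands below are standard gdb settings; the pid placeholder is hypothetical):

    # Stay attached to forked children instead of the parent, and keep
    # both sides of each fork under gdb's control:
    pwndbg> set follow-fork-mode child
    pwndbg> set detach-on-fork off

    # Alternatively, start dovecot outside gdb and attach to the
    # specific worker process you want to inspect:
    gdb -p <worker-pid>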

Confusion about the halting problem

Show that the following problem is solvable: given two programs with their inputs, and the knowledge that exactly one of them halts, determine which one halts.

Let P be a program that determines which one of the two programs halts.

    P(Program x, Program y) {
        if (x halts)
            return 1;
        else
            return 2;
    }

Since we know that exactly one of them halts: if P returns 1, then program x halts; otherwise program y halts.

Then we construct a new program called D:

    D(X, Y) {
        if (P(X, Y) == 2)
            halt;
        else
            while (1);  // D does not halt
    }

Let S be an arbitrary program.

Now consider D(D, S):

If D(D, S) halts, then P(D, S) returned 2, which by P’s specification means that D(D, S) does not halt.

If D(D, S) does not halt, then P(D, S) returned 1, which by P’s specification means that D(D, S) halts.

This implies a contradiction, the same as in the halting problem.

But the question states that it is solvable.

Is there any good method to find out whether a grammar is optimal for a problem?

I’ve been thinking about grammatical evolution problems and how the grammar influences the algorithm’s performance. It occurred to me that the grammar you use has a huge impact on the time it takes an algorithm to reach an optimal solution.

The simplest example would be a problem that doesn’t involve trigonometric operations. If you’re trying to find f(x) = 3x - 1/2, including sines, tangents or square roots in your grammar will almost certainly slow down your algorithm, as the population’s complexity will grow. Other not-so-evident simplifications of a grammar would be trigonometric identities:

tan(x) = sin(x) / cos(x) 

Talking about this last example, I don’t know how to determine the impact of including tan(x) among the grammar rules used to produce valid solutions. In other words, is adding tan(x) better in terms of performance than leaving it out and thus forcing the evolution to combine two or more operators and terminals to express that operation, at the cost of making the grammar ambiguous?
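To make the trade-off concrete, here is a hedged sketch in Python (the grammars and names are made up for illustration, not taken from any GE library): adding trigonometric productions enlarges the set of choices per nonterminal, and with it the search space a fixed-length genome has to cover.

    # Two hypothetical symbolic-regression grammars, written as plain
    # dicts mapping a nonterminal to its list of productions.
    GRAMMAR_PLAIN = {
        "<expr>": ["<expr>+<expr>", "<expr>-<expr>",
                   "<expr>*<expr>", "<expr>/<expr>", "x", "<const>"],
        "<const>": [str(d) for d in range(10)],
    }
    GRAMMAR_TRIG = {
        "<expr>": GRAMMAR_PLAIN["<expr>"]
                  + ["sin(<expr>)", "cos(<expr>)", "tan(<expr>)"],
        "<const>": GRAMMAR_PLAIN["<const>"],
    }

    # More productions per nonterminal -> more derivations of the same
    # depth -> a larger space the evolution has to search.
    for name, g in [("plain", GRAMMAR_PLAIN), ("with trig", GRAMMAR_TRIG)]:
        print(name, "choices for <expr>:", len(g["<expr>"]))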

So these two are my questions:

  1. Is there any way of knowing if a grammar is optimal for finding a solution?
  2. Which evolutionary algorithm or machine learning method (considering that I’m almost a layman in this discipline, some explanation is welcome) would you use for finding optimal or sub-optimal grammars?

Thanks

Proof of Co-Problem being in NP if Problem is in NP using negated output

Given any problem $P$ that we know to be $NP$-complete, where is the flaw in the following proof?

Given a problem $\text{Co-}P$, the co-problem of the $NP$-complete problem $P$: $\text{Co-}P$ is at least in $NP$ because the following algorithm can always be given:

    Co-P(J):
        bool res = P(J)
        return !res

Here $\text{Co-}P(J)$ is the algorithm solving $\text{Co-}P$ and $P(J)$ is the nondeterministic polynomial-time algorithm solving $P$.

Why is this not correct?

Buckets of Water Problem – Part 2

Continuing from this question: The buckets of water problem

(All the definitions can be found there, so I will not repeat them).

As seen in Yuval’s answer there, the problem is NP-hard. I was attempting to prove its NP-completeness, and while doing so I suddenly became unsure whether or not it belongs to NP.

This is because the witness is most likely a series of actions (filling buckets etc.), and that might be too long.

Of course, we can change the definition of the language in such a way that we limit the number of actions to be polynomial, or make it part of the input (with a slight adjustment to represent the number of actions in unary, so it won’t be logarithmic in the number’s value).

But I find it interesting to ask whether this is a must.

And if we do not change anything, can we tell for sure that it is not in NP? That is, that there is no better (polynomial-size) witness?

Smallest subarray problem

Say you have an array of integers like [1, 2, 3, 4, 5, 6]. The problem is to break the array into the smallest number of sub-arrays, where each sub-array satisfies the following requirement:

  • sub-array.first_integer and sub-array.last_integer must have a common divisor that is not 1.

So for [1, 2, 3, 4, 5, 6] the answer would be 2, because you can break it up into [1] and [2, 3, 4, 5, 6], where 2 and 6 have a common divisor of 2 (which is > 1, so it meets the requirement).

You can assume the array can be huge but the numbers are not too big. Is there a way to do this in n or n*log(n) time? I think n^2 is possible with DP and caching, but I’m not sure how to do it faster.
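For reference, here is a minimal sketch of the n^2 DP mentioned above (plain Python; I assume length-1 sub-arrays are always allowed, since the example accepts [1] even though gcd(1, 1) = 1). The hint that the numbers are small suggests the usual speed-up: for each prime p, track the best dp value among earlier positions whose element is divisible by p, which shrinks the inner loop to the number of distinct prime factors of each element.

    from math import gcd

    def min_subarrays(arr):
        n = len(arr)
        INF = float("inf")
        dp = [INF] * (n + 1)  # dp[j] = fewest sub-arrays covering arr[:j]
        dp[0] = 0
        for j in range(1, n + 1):      # candidate sub-array ends at j-1
            dp[j] = dp[j - 1] + 1      # length-1 sub-array, assumed legal
            for i in range(1, j):      # sub-array arr[i-1 .. j-1], length >= 2
                if gcd(arr[i - 1], arr[j - 1]) > 1:
                    dp[j] = min(dp[j], dp[i - 1] + 1)
        return dp[n]

    print(min_subarrays([1, 2, 3, 4, 5, 6]))  # -> 2, matching the example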