Prove that this language is NP-Hard

Given #3SAT =

$$\{\, w, y \mid w \text{ is a 3SAT instance which has at least } y \text{ satisfying assignments} \,\}$$

Prove that #3SAT is NP-Hard.

I am currently stuck with this one.

Basically #3SAT is the counting version of 3SAT, and it seems clear to me that establishing membership in this language is more complex than establishing classic 3-SAT membership.
I’m trying to reason through the prover-verifier paradigm: say you have a 3-SAT instance, and suppose the certificate for that instance is $$n$$ bits. It seems obvious that the certificate for the same instance translated into #3SAT with, say, $$y=10$$ would be at least $$10n$$ bits. So if the first can be examined in polynomial time $$t$$, then the second will take at least $$10t$$, and so on. However, I am not satisfied with this reasoning; it seems incomplete and superficial. I would greatly appreciate your advice on how to proceed with this proof. Thanks in advance.

Quantum vs. classical in NP-hard problems

Is there a quantum algorithm (an algorithm for quantum computers) for any NP-hard problem with a better runtime than that of the best known classical algorithm?

Proof that the maximum coverage problem is NP-hard

How could I prove that the maximum coverage problem is NP-hard?


What is the current state of the art in approximation algorithms for NP-hard problems?

I came across some works that try to use deep learning to approximate solutions to NP-hard problems:

https://arxiv.org/pdf/1810.10659.pdf

Though the paper seems to have very good results, judging by the citations I doubt that it is really the state of the art in approximation for NP-hard problems.

I’m wondering: what are some state-of-the-art approximation algorithms for NP-hard problems, for each problem?


Is retrospective inference on a spatial Bayesian network NP-hard?

Cooper (1990) showed that probabilistic inference on general Bayesian networks is NP-hard. However, for many “spatial” Bayesian networks (where nodes represent small regions of spacetime), we can reasonably assume that a node is only influenced by other nodes in its local vicinity. If I directly observe the states of the nodes in a particular region of this spatial Bayesian network, then I can compute all future node states implied by this observation in linear time. But can I also efficiently compute all past states that are retroactively implied by this observation?

Here is a precise, CS-friendly version of this question:

Consider a network with nodes arranged in rows: $$1$$ node in the first row, $$1+d$$ nodes in the second, $$1+2d$$ nodes in the third, and so on, for $$m$$ rows. (This gives a total of $$n = m + (m-1)md/2$$ nodes in the network.) Each node takes a state of either 0 or 1. Additionally, each node $$v$$ has up to $$1+d$$ “parent” nodes and, if $$v$$ is the $$k$$th node in its row, all of these parents must lie among the $$k$$th through $$(k+d)$$th nodes in the subsequent row (when there is such a row). (This is the locality constraint.)

For each node $$v$$, there is a table $$T_v$$ that lists all possible combinations of states of the parents of $$v$$ and, for each combination, assigns a state of 0 or 1 to $$v$$. (We avoid probabilities for simplicity.)

Let $$d\text{-LBAYES}$$ denote the decision problem of determining whether there is an assignment of states to the nodes of the network that assigns state 1 to the node in the first row and is consistent with all $$T_v$$.
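For concreteness, checking a candidate assignment against every table takes linear time, which is what makes a verifier plausible here. Below is a minimal Python sketch of that check, with hypothetical data structures (`parents[v]` lists $$v$$'s parents in the subsequent row, and `table[v]` maps a tuple of parent states to the state that $$T_v$$ forces on $$v$$):

```python
# Minimal sketch: checking that an assignment is consistent with all T_v.
# Data structures are hypothetical: parents[v] is the list of v's parent
# nodes (all in the subsequent row), and table[v] maps a tuple of parent
# states to the state that T_v forces on v. Nodes with no parents
# (last row) are unconstrained.

def consistent(assignment, parents, table):
    """Return True iff `assignment` (dict node -> 0/1) agrees with every T_v."""
    for v, ps in parents.items():
        if not ps:                      # last-row node: free choice
            continue
        parent_states = tuple(assignment[p] for p in ps)
        if assignment[v] != table[v][parent_states]:
            return False
    return True

# Tiny example: node 'r' (first row) has parents 'a', 'b'; T_r is OR.
parents = {'r': ['a', 'b'], 'a': [], 'b': []}
table = {'r': {(0, 0): 0, (0, 1): 1, (1, 0): 1, (1, 1): 1}}
print(consistent({'r': 1, 'a': 1, 'b': 0}, parents, table))  # True
print(consistent({'r': 0, 'a': 1, 'b': 0}, parents, table))  # False
```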

Are there $$d$$ for which $$d\text{-LBAYES}$$ is NP-hard?

It seems unlikely that I am the first person to contemplate this. Is a direct proof possible (e.g., by a polynomial-time reduction from a known NP-complete problem)? Or, if not, is there relevant literature?

Posted on Categories proxies

Is a “stacked”, “local” version of 3-SAT NP-hard?

In this previous question, I learned that if each variable in a string $$C \in 3\text{-SAT}$$ appears only “locally”, then finding a satisfying assignment is no longer NP-hard. My question below builds on this to ask whether NP-hardness is recovered when, loosely speaking, there are multiple “layers” of variables leading up to the formula of interest. (This elaboration emerges naturally from a study of spatial Bayesian networks.)

Question

Consider a graph with nodes arranged in rows. The number of nodes doubles in each subsequent row; the graph has one node in the first row, two in the second, four in the third, and so on, for $$m$$ rows. (There are $$n = 2^m - 1$$ nodes total in the graph.) Each node takes a state of either 0 or 1 and has up to three “parent” nodes in the subsequent row (when there is such a row). Each parent is “$$k$$-local”: there are fewer than $$k$$ nodes between the first and last nodes for which it is a parent. ($$k$$ is a fixed natural number that does not change with $$n$$.)

For each node $$v$$, there is a table $$T_v$$ that lists all possible combinations of states of the parents of $$v$$ and, for each combination, assigns a state of 0 or 1 to $$v$$ in a fashion consistent with $$v$$ being a disjunctive clause of literals of its parents.

Let $$(3,k)\text{-SATSTACK}$$ denote the decision problem of determining whether there is an assignment of states to the nodes of the graph that assigns state 1 to the first node and is consistent with all $$T_v$$.

Are there $$k$$ for which $$(3,k)\text{-SATSTACK}$$ is NP-hard?

Notes

(1) It appears that $$(3,k)\text{-SATSTACK}$$ is in NP, since we can choose an assignment for each node in the last row, then work backward through the rows, computing the assignment for the rest of the graph in linear time and accepting it if and only if it assigns state 1 to the first node.

(2) It also seems plausible that $$(3,k)\text{-SATSTACK}$$ is NP-hard, at least for $$k$$ large enough, since it seems difficult to do better than a procedure in which we work forward from the first row, applying the polynomial-time algorithm noted above to each row and keeping inventory of all satisfying assignments out to that row (which may be an exponentially growing collection).

(3) Nevertheless, it appears challenging to find a polynomial-time reduction showing that $$(3,k)\text{-SATSTACK}$$ is NP-hard, because many of the subsets of $$3\text{-SAT}$$ (and related problems) that map naturally into $$(3,k)\text{-SATSTACK}$$ are not NP-complete (see again the previous question). So how should we proceed?
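The backward pass described in note (1) can be sketched as follows (a minimal sketch with hypothetical structures: `rows[i]` lists the node ids in row $$i$$, `parents[v]` lists $$v$$'s parents in the subsequent row, and `table[v]` maps parent-state tuples to $$v$$'s state):

```python
# Sketch of note (1): guess an assignment for the last row, then work
# backward row by row; every earlier node's state is determined by its
# parents' states via its table. Structures are hypothetical: rows[i]
# lists the node ids in row i (row 0 holds the single first node),
# parents[v] lists v's parents in row i+1, and table[v] maps a tuple
# of parent states to v's state.

def decide_from_last_row(last_row_states, rows, parents, table):
    """Propagate a last-row guess backward; return the state computed
    for the single node in the first row."""
    state = dict(zip(rows[-1], last_row_states))
    for row in reversed(rows[:-1]):        # rows m-2, ..., 0
        for v in row:
            ps = tuple(state[p] for p in parents[v])
            state[v] = table[v][ps]
    return state[rows[0][0]]

# Tiny instance: the first node r is the OR of its two parents a, b.
rows = [['r'], ['a', 'b']]
parents = {'r': ['a', 'b']}
table = {'r': {(0, 0): 0, (0, 1): 1, (1, 0): 1, (1, 1): 1}}
print(decide_from_last_row((1, 0), rows, parents, table))  # 1
print(decide_from_last_row((0, 0), rows, parents, table))  # 0
```

The sketch runs in time linear in the number of nodes, matching note (1); the guess of the last row is where the nondeterminism lives.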

Is a “local” version of 3-SAT NP-hard?

Below is my simplification of part of a larger research project on spatial Bayesian networks:

Say a variable is “$$k$$-local” in a string $$C \in \text{3-CNF}$$ if there are fewer than $$k$$ clauses between the first and last clause in which it appears (where $$k$$ is a natural number).

Now consider the subset $$\text{3-LSAT} \subseteq \text{3-SAT}$$ defined by the criterion that for any $$C \in \text{3-LSAT}$$, every variable in $$C$$ is $$k$$-local. For what $$k$$ (if any) is $$\text{3-LSAT}$$ NP-hard?

Here is what I have considered so far:

(1) Variations on the method of showing that $$\text{2-SAT}$$ is in P by rewriting each disjunction as an implication and examining directed paths on the directed graph of these implications (noted here and presented in detail on pp. 184-185 of Papadimitriou’s Computational Complexity). Unlike in $$\text{2-SAT}$$, there is branching of the directed paths in $$\text{3-LSAT}$$, but perhaps the number of directed paths is limited by the spatial constraints on the variables. No success with this so far though.

(2) A polynomial-time reduction of $$\text{3-SAT}$$ (or other known NP-complete problem) to $$\text{3-LSAT}$$. For example, I’ve tried various schemes of introducing new variables. However, bringing together the clauses that contain the original variable $$x_k$$ generally requires that I drag around “chains” of additional clauses containing the new variables and these interfere with the spatial constraints of the other variables.
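For reference, the implication-graph method from (1) can be sketched for plain $$\text{2-SAT}$$ as follows. It is the standard construction: rewrite each clause $$(a \lor b)$$ as the implications $$\lnot a \to b$$ and $$\lnot b \to a$$, then the formula is satisfiable iff no variable shares a strongly connected component with its negation. The literal encoding and SCC routine below are my own choices for the sketch:

```python
# Standard 2-SAT via the implication graph and Kosaraju's SCC algorithm.
# Encoding (illustrative): variables are 1..n; literal +v maps to graph
# node 2*(v-1), literal -v to 2*(v-1)+1. Recursive DFS keeps the sketch
# short (fine for small instances).

def two_sat(n, clauses):
    """Return True iff the 2-CNF over variables 1..n is satisfiable.
    Each clause is a pair of nonzero ints (+v or -v)."""
    def node(lit):
        return 2 * (abs(lit) - 1) + (lit < 0)

    N = 2 * n
    g, gr = [[] for _ in range(N)], [[] for _ in range(N)]
    for a, b in clauses:
        # (a OR b)  ==  (~a -> b)  and  (~b -> a)
        g[node(-a)].append(node(b)); gr[node(b)].append(node(-a))
        g[node(-b)].append(node(a)); gr[node(a)].append(node(-b))

    order, seen = [], [False] * N
    def dfs1(u):
        seen[u] = True
        for w in g[u]:
            if not seen[w]:
                dfs1(w)
        order.append(u)
    for u in range(N):
        if not seen[u]:
            dfs1(u)

    comp = [-1] * N
    def dfs2(u, c):
        comp[u] = c
        for w in gr[u]:
            if comp[w] == -1:
                dfs2(w, c)
    c = 0
    for u in reversed(order):
        if comp[u] == -1:
            dfs2(u, c)
            c += 1

    # Satisfiable iff no variable is in the same SCC as its negation.
    return all(comp[2 * v] != comp[2 * v + 1] for v in range(n))

print(two_sat(2, [(1, 2), (-1, 2)]))           # True  (x2 = 1 works)
print(two_sat(2, [(1, 2), (-1, 2), (-2, -2)])) # False (forces x2 = 0)
```

The branching that (1) worries about appears exactly when clauses have three literals: $$(a \lor b \lor c)$$ yields implications with disjunctive heads, which this SCC argument cannot absorb.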

Surely I’m not in new territory here. Is there a known NP-hard problem that can be reduced to $$\text{3-LSAT}$$ or do the spatial constraints prevent the problem from being that difficult?

Is this possible when it comes to the relations of P, NP, NP-Hard and NP-Complete?

I saw an image that describes the relations of P, NP, NP-Hard and NP-Complete which look like this :

https://en.wikipedia.org/wiki/NP-hardness#/media/File:P_np_np-complete_np-hard.svg

I wonder if the following is possible: P = NP, but not all of those problems are NP-hard. Edit: I want to add this: I’m not here to say whether the original image is wrong or right; I’m just asking whether my image depicts a possible situation. In other words, is it correct to assume that all 3 images are possible?


League and Divisions problem (np-hard)

There is a League, and there are Divisions, which are disjoint subsets of this League. There are n teams (each with a unique location, given as x and y coordinates for simplicity). Every team must belong to exactly one Division. There are min and max, the minimal and maximal number of teams per division. Every team travels once to every other team inside its division. The goal is to divide the league into divisions so that the total traveling distance for the whole league is minimized.

Input:
n – amount of teams
min – minimum teams per division
max – maximum teams per division
teams – array of teams (name,x,y)

Input example:
n = 10
min = 2
max = 3
teams = [{a,1,1},{b,2,1},{c,5,3},{d,9,8},{e,6,8},{f,5,1},{g,1,7},{h,6,6},{i,7,2},{j,2,7}]

Output:
League

Output example:
League = [[a,b,c],[d,e],[f,g,h],[i,j]]
One sub array stands for one division. It means that the teams a,b and c belong to one division and each of them will visit every other team from this division.
This is not the optimal solution; it is just what the output of a real-world case could look like.

I am struggling to select an algorithm for this problem. It is not the traveling salesman problem, since we are not interested in simply visiting all of the locations. Nor is a clustering approach applicable, since we need to respect min and max, and that is not what clustering does. What kinds of algorithms can be used to tackle this problem?
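Whatever algorithm ends up being used, the objective itself is straightforward to evaluate. Below is a minimal sketch based on my reading of the statement: each team travels once to every other team in its division, so each division contributes the sum of Euclidean distances over all ordered pairs of distinct teams. Names and structures are illustrative, mirroring the example input:

```python
# Sketch of the objective to minimize: total traveling distance for a
# candidate division of the league. Interpretation (an assumption): each
# team travels once to every other team in its division, so each
# unordered pair of teams in a division is counted twice, once per
# direction.
import math

def league_cost(divisions, coords):
    """divisions: list of lists of team names; coords: name -> (x, y)."""
    total = 0.0
    for div in divisions:
        for a in div:
            for b in div:
                if a != b:
                    (ax, ay), (bx, by) = coords[a], coords[b]
                    total += math.hypot(ax - bx, ay - by)
    return total

# Tiny example using the first three teams from the input above.
coords = {'a': (1, 1), 'b': (2, 1), 'c': (5, 3)}
print(league_cost([['a', 'b'], ['c']], coords))  # 2.0 (a->b and b->a)
```

With an evaluator like this in hand, candidate approaches (exhaustive search for tiny n, local search, or an integer-programming formulation) can at least be compared on the same footing; min/max feasibility would be enforced separately.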

MILP problem NP-hard proof

Since a MILP problem is not necessarily NP-hard, how could I demonstrate that a particular MILP problem actually is NP-hard? Is there some smart and easy method to do that? Many thanks!