Computation of “maximal” answer sets in first-order resolution without constraints

I am not familiar with logic programming, but I would like to know whether the following setting has been studied and whether it corresponds to a known system in logic programming.

  • I work with first-order resolution, where we have clauses $ c = \{p_1(t_1), …, p_k(t_k), \lnot p_{k+1}(t_{k+1}), …, \lnot p_n(t_n)\}$ : a disjunction of (positive or negative) first-order predicates (for instance $ p(f(x, y))$ ).
  • When we have a program $ P = \{c_1, …, c_n\}$ , using Robinson’s resolution between two clauses $ c_i$ and $ c_j$ , we would like to compute all the predicates we can infer from $ P$ . Depending on how we connect the predicates, we can obtain different sets of predicates, and we would like all such sets.
  • We would like all these connections to be maximal, in the sense that we connect predicates in $ P$ until no more predicates in $ P$ can be added. This should represent a “full computation”.

For instance, let $ $ P = \{add(0,y,y)\} \quad \{\lnot add(s(x),y,s(z)), add(x,y,z)\} \quad \{add(s^2(0),s^2(0),s^4(0))\}$ $ be a program, with $ s^n(0)$ denoting $ n$ applications of the unary function symbol $ s$ to $ 0$ . If the clauses are labelled $ c_1, c_2, c_3$ , the only way of constructing such “maximal connections” is $ c_1-c_2^m-c_3$ for some $ m$ , but only one is correct: $ c_1-c_2^2-c_3$ , corresponding to checking $ 2+2=4$ .
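As a sanity check, the chain $ c_1-c_2^m-c_3$ can be simulated numerically. A minimal sketch, under the assumption (for illustration only) that the term $ s^n(0)$ is modelled as the integer $ n$ : each resolution step with $ c_2$ peels one $ s$ off the first and third arguments, and $ c_1$ closes the chain.

```python
def add_chain_succeeds(x, y, z):
    """Simulate the resolution chain c1 - c2^m - c3 on add(x, y, z).

    Terms s^n(0) are modelled as the integer n.  Each step with
    c2 = {not add(s(x), y, s(z)), add(x, y, z)} strips one s from x and z;
    the chain closes with c1 = {add(0, y, y)}.
    Returns (whether the chain closes, the number m of c2 steps used).
    """
    steps = 0
    while x > 0 and z > 0:
        x, z, steps = x - 1, z - 1, steps + 1   # one resolution step with c2
    return (x == 0 and y == z), steps

print(add_chain_succeeds(2, 2, 4))  # (True, 2): c1 - c2^2 - c3 checks 2 + 2 = 4
print(add_chain_succeeds(2, 2, 5))  # (False, 2): no maximal connection closes
```

The function names and integer encoding are mine, not part of any standard system; the point is only that exactly one value of $ m$ yields a closed, “maximal” chain.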

To give more context, I work in another field with a system with (at first) no connections to logic programming but which later showed strong similarities (for instance with answer sets) so I wanted to connect it to known concepts in logic programming.

LP – given m constraints on 2 variables, find the maximal radius of a circle

Given $ m$ constraints for 2 variables $ x_1,x_2$ :

$ d_ix_1 + e_ix_2 \leq b_i$ for $ i = 1,…m$

I need to create a linear program that finds the maximal radius of a circle such that all the points inside the circle are in the feasible region of the above constraints.

I know the formula for the distance between a point $ (x,y)$ and a line $ ax + by + c = 0$ ,

so I have tried:

  • Maximize $ R$ subject to
  • $ d_ix_1 + e_ix_2 \leq b_i$ for every $ i$
  • $ R \leq |d_ix_1 + e_ix_2 - b_i| / \sqrt{{d_i}^2 + {e_i}^2}$ for every $ i$
  • $ R \geq 0$

I know a standard linear program cannot contain the absolute value function, but here the first family of constraints forces $ d_ix_1 + e_ix_2 - b_i \leq 0$ , so the absolute value is simply $ b_i - d_ix_1 - e_ix_2$ and the constraint is already linear.

What do you think? And how can I eventually obtain the point $ (x_0,y_0)$ that is the centre of the circle whose radius is $ R$ ? I guess they are going to be the specific values of $ x_1,x_2$ that make $ R$ maximal?
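For feasible points $ d_ix_1 + e_ix_2 - b_i$ is non-positive, so the constraint $ R \leq (b_i - d_ix_1 - e_ix_2)/\sqrt{d_i^2 + e_i^2}$ is linear in $ (x_1, x_2, R)$ , and the optimal $ (x_1, x_2)$ is indeed the centre. A minimal sketch with `scipy.optimize.linprog`; the unit square is a hypothetical set of constraints of my choosing, not part of the question:

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical constraints d_i*x1 + e_i*x2 <= b_i describing the unit square:
#   x1 <= 1,  -x1 <= 0,  x2 <= 1,  -x2 <= 0
D = np.array([[1.0, 0.0], [-1.0, 0.0], [0.0, 1.0], [0.0, -1.0]])
b = np.array([1.0, 0.0, 1.0, 0.0])
norms = np.linalg.norm(D, axis=1)  # sqrt(d_i^2 + e_i^2)

# Variables: (x1, x2, R).  Maximize R  <=>  minimize -R.
c = [0.0, 0.0, -1.0]
# d_i*x1 + e_i*x2 + ||(d_i, e_i)|| * R <= b_i  encodes  R <= distance to line i.
A_ub = np.column_stack([D, norms])
res = linprog(c, A_ub=A_ub, b_ub=b,
              bounds=[(None, None), (None, None), (0, None)])

x0, y0, R = res.x  # centre (x0, y0) and maximal radius R
print(x0, y0, R)   # centre (0.5, 0.5), radius 0.5 for the unit square
```

This is the classical Chebyshev-center formulation; the centre comes out of the same solve as $ R$ , as the first two components of the solution vector.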

Search for maximal value smaller than V in an array composed of K linear functions

Sorry for the lack of clarity in the question description.

I have an array of length N composed of K linear subarrays. We define a linear subarray as a contiguous subarray [l,r] of the array where A[i] - A[i-1] = C, a constant, for all l < i <= r. (Note: C may differ between subarrays.)

I wish to answer multiple queries for the maximal value that is less than a queried value V, in sublinear time per query, such as O(log K) or even O(log N). I am fine with any preprocessing in time such as O(K) or O(K log K), but not O(N). (Assume that the array has already been stored in terms of the K linear subarrays, perhaps as the constant C and the length of each subarray, with the subarrays ordered.)

Of course, a balanced binary search tree (BBST) or simply sorting achieves the purpose, but it requires O(NlogN) preprocessing, which is too much. Checking the largest valid value within each subarray takes O(K) per query, which is again too much.
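For reference, the O(K)-per-query baseline mentioned above can be sketched as follows; the (start, step, length) encoding of a subarray and the function names are my assumptions, and integer values are assumed so the best candidate inside one arithmetic subarray is found in O(1):

```python
def best_below(sub, V):
    """Largest value < V inside one linear subarray, or None.

    A subarray (start, step, length) holds the values
    start, start + step, ..., start + (length - 1) * step.
    """
    start, step, length = sub
    if step == 0:
        return start if start < V else None
    if step > 0:
        if start >= V:
            return None
        i = min(length - 1, (V - 1 - start) // step)  # last index still below V
        return start + i * step
    # step < 0: values decrease, so take the first index that drops below V.
    if start < V:
        return start
    i = -((V - 1 - start) // (-step))  # ceil((start - (V-1)) / -step)
    return start + i * step if i <= length - 1 else None

def query(subarrays, V):
    """O(K) per query: best candidate of each subarray, then the maximum."""
    candidates = [c for c in (best_below(s, V) for s in subarrays) if c is not None]
    return max(candidates) if candidates else None

subarrays = [(0, 2, 5), (10, -3, 4)]  # models the array [0,2,4,6,8, 10,7,4,1]
print(query(subarrays, 5))   # 4
```

This is only the too-slow baseline from the question, written out so the per-subarray O(1) step is explicit; the open part is organising the K candidates to beat O(K) per query.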

Randomised algorithms are okay as long as they always achieve the correct answer and work fast enough in the average case, though deterministic algorithms are preferred.

Thanks for any and all responses.

Was there an attempt to create a programming language aimed at maximal abstraction?

I am trying to understand whether a programming language can totally, or almost totally, parallel the abstraction level shared by generally all human languages.

Pseudocode for this, perhaps the “highest” programming language, could look like the following example.


open (shop) at (06:00);
if no (visitor), then (air condition) == off,
else (air condition) == on

while (opening-day), for each (visitor), since (08:00) until (20:00) accept customers;
Exceptionally, since (12:00) until (13:00) close (shop);
also, if (buying is finished)
give (generally every customer)
the message "thank you for buying with our store and goodbye"
since (20:00) until (08:00) close (shop);


hello is a kind of opening declaration before parsing, styling and behavior take place, quite like an HTML <!DOCTYPE> saying whether it starts “not bad” (and maybe whether it also starts “good”).
goodbye is like a termination command (such as exit, common in CLI programs)

I have no idea what such a programming language might be used for; humanoid robots come to mind, though.
Was there an attempt to create a programming language aimed at maximal abstraction?

Finding a maximum-cardinality independent set given an oracle

The problem: given an oracle $ O(G, k)$ that says whether graph $ G$ contains an independent set of size $ k$ , devise an algorithm that finds an independent set of maximum cardinality while making a polynomial number of calls to the oracle. My attempt has been to first find the maximum possible size and then try to find a set of that size by removing vertices one at a time. I understand that a given node either has to be in the set or not; then I noticed that there are multiple overlaps between chains of removals, so I thought I could devise a DP algorithm of sorts. But I’m really stuck after that and was wondering if any hint could be given.
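For what it’s worth, the “removing vertices one at a time” idea can be pushed through without any DP: drop a vertex whenever the oracle confirms the rest still contains an independent set of the maximum size. A sketch, with a brute-force stand-in for the oracle $ O(G,k)$ (the graph encoding and all names here are my own, not from the question):

```python
from itertools import combinations

def oracle(vertices, edges, k):
    """Brute-force stand-in for O(G, k): is there an independent set of size k?"""
    return any(
        all((u, v) not in edges and (v, u) not in edges
            for u, v in combinations(S, 2))
        for S in combinations(sorted(vertices), k)
    )

def max_independent_set(vertices, edges):
    vertices = set(vertices)
    n = len(vertices)
    # Maximum size k* found with at most n + 1 oracle calls.
    k = max(k for k in range(n + 1) if oracle(vertices, edges, k))
    # Self-reduction: drop v whenever an IS of size k* survives without it.
    for v in sorted(vertices):
        if oracle(vertices - {v}, edges, k):
            vertices.discard(v)
    # Now no vertex can be dropped, so every maximum IS contains all that
    # remain; hence the remaining k* vertices themselves form one.
    return vertices

# Path a-b-c-d: a maximum independent set has size 2, e.g. {b, d}.
path_edges = {("a", "b"), ("b", "c"), ("c", "d")}
print(max_independent_set({"a", "b", "c", "d"}, path_edges))
```

The total number of oracle calls is at most $ 2n + 1$ , which is polynomial; with a real oracle the brute-force helper would of course be replaced.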

A problem with the greedy approach to finding a maximal matching

Suppose I have an undirected graph with four vertices $ a,b,c,d$ connected as a simple path from $ a$ to $ d$ , i.e. the edge set $ \{(a,b), (b,c), (c,d)\}$ . Then I have seen the following proposed as a greedy algorithm to find a maximal matching here (page 2, middle of the page):

Maximal-Matching(G, V, E):
    M = []
    while (more edges can be added):
        select an edge e which does not have any vertex in common with the edges in M
        M.append(e)
    end while
    return M

It seems that this algorithm’s output depends entirely on the order in which edges are chosen. For instance, in my example, if you choose edge $ (b,c)$ first, then you will have a matching that consists only of $ (b,c)$ .

Whereas if you choose $ (a,b)$ as your starting edge, then the next edge chosen will be $ (c,d)$ and you have a matching of cardinality 2.

Am I missing something, as this seems wrong? I have also seen this described as an algorithm for finding a maximal matching in the context of proving that the vertex cover approximation algorithm selects a vertex cover by choosing edges according to a maximal matching. Any insights appreciated.
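To make the order-dependence concrete, here is a minimal sketch of the greedy loop on the path example (the function name and edge encoding are mine); both outputs are maximal matchings in the sense that no further edge can be added, but only one is maximum:

```python
def greedy_matching(edges):
    """Greedily add any edge sharing no endpoint with the matching so far."""
    matching, matched = [], set()
    for u, v in edges:
        if u not in matched and v not in matched:
            matching.append((u, v))
            matched.update((u, v))
    return matching

path = [("a", "b"), ("b", "c"), ("c", "d")]
print(greedy_matching(path))                 # [('a', 'b'), ('c', 'd')] -- maximum
print(greedy_matching([("b", "c")] + path))  # [('b', 'c')] -- maximal, not maximum
```

The iteration order here stands in for the algorithm’s nondeterministic “select an edge” step; the vertex cover argument only needs maximality, which both runs provide.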

The optimal asymptotic behavior of the coefficient in the Hardy-Littlewood maximal inequality

It is well known that for $ f \in L^1(\mathbb{R}^n)$ , $ \mu(\{x \in \mathbb{R}^n : Mf(x) > \lambda\}) \le \frac{C_n}{\lambda} \int_{\mathbb{R}^n} |f| \, \mathrm{d}\mu$ , where $ C_n$ is a constant depending only on $ n$ .

It is easy to see that $ C_n \le 2^n$ , but how does one determine its optimal asymptotic behavior? For example, is $ C_n$ bounded in $ n$ ? Is $ C_n$ bounded by a polynomial in $ n$ ?

Proving NP-completeness of a maximal-length path problem

I have this question to answer:

For each node $ i$ in an undirected network $ G = (N,E)$ , let $ N(i) = \{j \in N : \{i, j\} \in E\}$ denote the set of neighbors of node $ i$ and let $ c_e \geq 0$ denote the length of edge $ e \in E$ . For each node $ i \in N$ , suppose the set $ N(i)$ is partitioned into two subsets, $ N^+(i)$ and $ N^-(i)$ , such that $ j \in N^+(i)$ ($ j \in N^-(i)$ ) is referred to as a positive (negative) neighbor of $ i$ . (Note: Regardless of whether $ j$ is a positive or negative neighbor of $ i$ , $ i$ can be either a positive or negative neighbor of $ j$ .) Consider the problem of finding a maximum-length path $ (s =)\, i(0)–i(1)–\cdots–i(h)\, (= t)$ in $ G$ between two nodes $ s \in N$ and $ t \in N$ subject to the following restriction: for each internal node $ i(k)$ ($ k \in \{1,…,h-1\}$ ) on the path, the set $ \{i(k-1), i(k+1)\}$ must contain exactly one positive neighbor and one negative neighbor of $ i(k)$ . Prove NP-completeness of the decision problem and state whether or not it is strongly NP-complete.

I wonder about the steps of the proof, and whether I should start from the longest path problem or from another problem.

How can I find explicit examples of maximal orders of quaternion algebras that are not isomorphic?

I know that there exist algorithms that will construct maximal orders of a quaternion algebra over, say, $ \mathbb{Q}$ . However, the implemented algorithms that I know of require that you input an order that is not necessarily maximal, which the algorithm then completes. Unfortunately, this does not help if you want examples of orders that are not isomorphic to one another.

The more examples (at least over $ \mathbb{Q}$ , but preferably over other number fields) that I can get, the better, but I would be happy even with a table of some known examples.