## Find maximal subset with interesting weight function

You are given $$n$$ rows of positive integers, each of length $$k$$. We define a weight function for every subset of the given $$n$$ rows as follows: for every $$i = 1, 2, \dots, k$$ take the maximum value in the $$i$$-th column over the rows of the subset, then add up all the maxima.

For example, for $$n = 4$$, $$k = 2$$ and rows $$(1, 4), (2, 3), (3, 2), (4, 1)$$ the weight of subset $$(1, 4), (2, 3), (3, 2)$$ is $$\max\{1, 2, 3\} + \max\{4, 3, 2\} = 3 + 4 = 7$$.

The question is: given $$m \leq n$$, find the subset of size $$m$$ (from the given $$n$$ rows) with maximal weight.

The problem looks trivial when $$m \geq k$$, but how can one solve it for $$m < k$$? It looks like dynamic programming on subsets could work for small $$k$$, doesn't it? Are there other ways to do it?
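The subset DP hinted at above can be sketched as follows. It rests on the observation that each column's maximum in the chosen subset is attained by one of the chosen rows, so the $$k$$ columns can be partitioned among at most $$\min(m, k)$$ rows. This is a sketch, not a definitive solution; the function and variable names are mine. Precomputation costs $$O(2^k \cdot n \cdot k)$$ and the DP costs $$O(\min(m,k) \cdot 3^k)$$, so it is only practical for small $$k$$.

```python
def max_weight_subset(rows, m):
    """Maximal weight of an m-row subset via bitmask DP over column sets.

    dp[S] = best total weight when the columns in bitmask S have been
    assigned to at most t rows, where t is the number of DP rounds so far.
    Adding rows never decreases column maxima, so "at most m rows" gives
    the same answer as "exactly m rows".
    """
    k = len(rows[0])
    full = (1 << k) - 1
    # best_row_sum[T] = max over rows of the sum of that row's entries on T
    best_row_sum = [0] * (1 << k)
    for T in range(1, 1 << k):
        best_row_sum[T] = max(
            sum(r[i] for i in range(k) if T >> i & 1) for r in rows
        )
    NEG = float("-inf")
    dp = [NEG] * (1 << k)
    dp[0] = 0
    for _ in range(min(m, k)):  # more than k rows can never be needed
        new = dp[:]
        for S in range(1, 1 << k):
            # enumerate nonempty sub-masks T of S handed to one new row
            T = S
            while T:
                if dp[S ^ T] != NEG:
                    cand = dp[S ^ T] + best_row_sum[T]
                    if cand > new[S]:
                        new[S] = cand
                T = (T - 1) & S
        dp = new
    return dp[full]
```

On the example above, the best subset of size 2 is $$(1,4), (4,1)$$ with weight $$4 + 4 = 8$$, which is also the optimum for size 3 (the size-3 subset shown in the example only reaches 7).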

## Maximal subsets of a point set which fit in a unit disk

Suppose there is a set $$P$$ of $$n$$ points in the plane, and let $$P_1, \dots, P_k$$ be (not necessarily disjoint) subsets of $$P$$ such that each $$P_i$$ ($$1 \leq i \leq k$$) fits inside a unit disk $$D_i$$.

Moreover, each $$P_i$$ is maximal: if the corresponding unit disk $$D_i$$ is moved to cover another point, then some point that was inside the disk becomes uncovered.

Here is an example (figure omitted): in the figure, there are three maximal subsets.

I don’t know whether this problem has a name or was studied before, but my question is:

1. Can $$k$$ be exponential in $$n$$, i.e. $$c^n$$ for some constant $$c > 1$$?
2. If not, then can we find those subsets in polynomial time w.r.t. $$n$$?
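Regarding both questions, a standard observation from unit-disk covering may help: any inclusion-maximal covered subset can be realized by a unit disk that either passes through two of the points or is centred on a single point, which bounds $$k$$ by $$O(n^2)$$ and suggests a polynomial-time enumeration. The sketch below (naive $$O(n^3)$$; function name and epsilon tolerance are mine, exact arithmetic is assumed away) enumerates candidate centres and keeps the inclusion-maximal covered sets.

```python
from itertools import combinations
from math import hypot, sqrt

def maximal_unit_disk_subsets(points, r=1.0):
    """Enumerate the inclusion-maximal subsets coverable by a radius-r disk.

    Candidate centres: each input point, plus, for every pair of points at
    distance <= 2r, the two centres of radius-r circles through both points.
    """
    centers = list(points)  # a disk centred on a point covers at least it
    for (x1, y1), (x2, y2) in combinations(points, 2):
        d = hypot(x2 - x1, y2 - y1)
        if d == 0 or d > 2 * r:
            continue
        mx, my = (x1 + x2) / 2, (y1 + y2) / 2
        h = sqrt(r * r - (d / 2) ** 2)       # centre offset from midpoint
        ux, uy = (y2 - y1) / d, (x1 - x2) / d  # unit normal to the segment
        centers.append((mx + h * ux, my + h * uy))
        centers.append((mx - h * ux, my - h * uy))
    eps = 1e-9
    subsets = set()
    for cx, cy in centers:
        subsets.add(frozenset(
            i for i, (x, y) in enumerate(points)
            if hypot(x - cx, y - cy) <= r + eps
        ))
    # keep only the inclusion-maximal covered sets
    return [s for s in subsets if not any(s < t for t in subsets)]
```

If this observation is right, the answer to question 1 is no ($$k = O(n^2)$$), and question 2 has a polynomial-time answer.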

## What is the maximal difference between the depths of two leaves in an AVL tree?

I’m wondering about the answer to the following question:

What is the maximal difference between the depths of 2 leaves in an AVL tree?

Intuitively I think that it shouldn’t exceed $$\log n$$, but I have no idea how to prove that. On the other hand, I have seen multiple articles claiming the difference can grow larger and larger, without any formal proof (I always reach a contradiction when trying to draw such trees).

I would really appreciate it if someone could explain why this is correct or incorrect.
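One can experiment with the extremal shape: the Fibonacci tree, the height-$$h$$ AVL-balanced tree with the fewest nodes. In any AVL tree of height $$h$$, heights drop by at most 2 per level along any root-to-leaf path, so every leaf has depth at least $$\lceil h/2 \rceil$$, and the leaf-depth difference is at most $$\lfloor h/2 \rfloor$$; the Fibonacci tree attains this bound. Since $$h \leq \approx 1.44 \log_2 n$$, the difference is $$\Theta(\log n)$$ and cannot grow without bound relative to $$n$$. A quick sketch (the tuple representation and names are mine):

```python
def fib_tree(h):
    """Height-h AVL-balanced tree with the fewest nodes (Fibonacci tree).
    A node is a (left, right) tuple; a leaf is the empty tuple ()."""
    if h == 0:
        return ()
    if h == 1:
        return ((), None)  # one missing child: balance factor 1
    return (fib_tree(h - 1), fib_tree(h - 2))

def leaf_depths(t, d=0):
    """Depths of all leaves (nodes with no children)."""
    if t == ():
        return [d]
    out = []
    for child in t:
        if child is not None:
            out.extend(leaf_depths(child, d + 1))
    return out
```

For every height $$h$$, the deepest leaf of `fib_tree(h)` sits at depth $$h$$ and the shallowest at $$\lceil h/2 \rceil$$, so the measured spread is exactly $$\lfloor h/2 \rfloor$$.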

## Computation of “maximal” answer sets in first-order resolution without constraints

I am not familiar with logic programming, but I would like to know whether the following setting has been studied and whether it corresponds to a known system in logic programming.

• I work with first-order resolution, where we have clauses $$c = \{p_1(t_1), \dots, p_k(t_k), \lnot p_{k+1}(t_{k+1}), \dots, \lnot p_n(t_n)\}$$: a disjunction of (positive or negative) first-order predicates (for instance $$p(f(x, y))$$).
• When we have a program $$P = \{c_1, \dots, c_n\}$$ and use Robinson’s resolution between pairs of clauses $$c_i$$ and $$c_j$$, we would like to compute all the predicates we can infer from $$P$$. We can obtain different sets of predicates depending on how we connect the predicates, and we would like all such sets.
• We would like all these connections to be maximal, in the sense that we connect predicates in $$P$$ until no more predicates in $$P$$ can be added. This should represent a “full computation”.

For instance, let $$P = \{add(0,y,y)\} \quad \{\lnot add(s(x),y,s(z)), add(x,y,z)\} \quad \{add(s^2(0),s^2(0),s^4(0))\}$$ be a program, with $$s^n(0)$$ denoting $$n$$ applications of the unary function symbol $$s$$. If the clauses are labelled $$c_1, c_2, c_3$$, the only way of constructing such “maximal connections” is $$c_1-c_2^m-c_3$$ for some $$m$$, but only one is correct: $$c_1-c_2^2-c_3$$, corresponding to checking $$2+2=4$$.
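The example derivation can be checked mechanically. Below is a minimal sketch (my own representation: variables are capitalized strings, compound terms are tuples; occurs-check omitted; not a full resolution engine) that proves the goal $$add(s^2(0), s^2(0), s^4(0))$$ by backward chaining with $$c_1$$ and $$c_2$$, which corresponds exactly to the chain $$c_1-c_2^2-c_3$$:

```python
def walk(t, s):
    """Follow variable bindings in substitution s."""
    while isinstance(t, str) and t[0].isupper() and t in s:
        t = s[t]
    return t

def unify(a, b, s):
    """Robinson unification; returns an extended substitution or None."""
    a, b = walk(a, s), walk(b, s)
    if a == b:
        return s
    if isinstance(a, str) and a[0].isupper():
        return {**s, a: b}
    if isinstance(b, str) and b[0].isupper():
        return {**s, b: a}
    if (isinstance(a, tuple) and isinstance(b, tuple)
            and len(a) == len(b) and a[0] == b[0]):
        for x, y in zip(a[1:], b[1:]):
            s = unify(x, y, s)
            if s is None:
                return None
        return s
    return None

def num(n):
    """The term s^n(0)."""
    t = "0"
    for _ in range(n):
        t = ("s", t)
    return t

def prove_add(goal, s, depth=0):
    """Backward chaining with c1 = add(0,Y,Y) and
    c2 = add(s(X),Y,s(Z)) <- add(X,Y,Z). Returns a substitution or None;
    depth-indexed names stand in for fresh variable renaming."""
    y = f"Y{depth}"
    s1 = unify(goal, ("add", "0", y, y), dict(s))  # try c1
    if s1 is not None:
        return s1
    x, z = f"X{depth}", f"Z{depth}"                # try c2
    s2 = unify(goal, ("add", ("s", x), y, ("s", z)), dict(s))
    if s2 is not None:
        return prove_add(("add", x, y, z), s2, depth + 1)
    return None
```

With this, the goal $$add(s^2(0), s^2(0), s^4(0))$$ succeeds after two uses of $$c_2$$ and one of $$c_1$$, while the variant with $$s^3(0)$$ in the last position fails, matching the claim that only $$m = 2$$ is correct.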

To give more context: I work in another field, on a system with (at first) no connection to logic programming, which later showed strong similarities (for instance with answer sets), so I wanted to relate it to known concepts in logic programming.

## LP – given m constraints on 2 variables, find the maximal radius of a circle

Given $$m$$ constraints on 2 variables $$x_1, x_2$$:

$$d_i x_1 + e_i x_2 \leq b_i$$ for $$i = 1, \dots, m,$$

I need to write a linear program that finds the maximal radius of a circle such that all the points inside the circle lie in the feasible region of the above constraints.

I know the formula for the distance between a point $$(x, y)$$ and a line $$ax + by + c = 0$$,

so I have tried:

• Maximize $$R$$ s.t.
• $$d_i x_1 + e_i x_2 \leq b_i$$ for every $$i$$
• $$R \leq |d_i x_1 + e_i x_2 - b_i| / \sqrt{e_i^2 + d_i^2}$$ for every $$i$$
• $$R \geq 0$$

I know a standard linear program cannot contain an absolute value, and splitting $$R \leq |t|$$ into two inequalities does not actually work, since that constraint describes a non-convex region. However, the absolute value is unnecessary here: the centre $$(x_1, x_2)$$ must itself satisfy the constraints, so $$d_i x_1 + e_i x_2 - b_i \leq 0$$ and the absolute value equals $$b_i - d_i x_1 - e_i x_2$$, making the constraint linear as it stands.

What do you think? And how do I eventually obtain the centre $$(x_0, y_0)$$ of the circle whose radius is $$R$$? I guess it is simply the optimal $$(x_1, x_2)$$ that makes $$R$$ maximal?
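This is the classic Chebyshev-centre LP: maximize $$R$$ subject to $$d_i x_1 + e_i x_2 + R\sqrt{d_i^2 + e_i^2} \leq b_i$$ for every $$i$$ and $$R \geq 0$$, and yes, the optimal $$(x_1, x_2)$$ is the centre $$(x_0, y_0)$$. A sketch using `scipy.optimize.linprog` (the function name and data layout are mine; it assumes the feasible region is nonempty and bounded):

```python
import numpy as np
from scipy.optimize import linprog

def chebyshev_center(d, e, b):
    """Largest circle inscribed in {(x1,x2) : d_i*x1 + e_i*x2 <= b_i}.

    Decision variables are (x1, x2, R); linprog minimizes, so maximizing
    R means using objective coefficients (0, 0, -1).
    """
    d, e, b = (np.asarray(a, dtype=float) for a in (d, e, b))
    norms = np.sqrt(d ** 2 + e ** 2)
    # constraint rows: d_i*x1 + e_i*x2 + R*||(d_i, e_i)|| <= b_i
    A_ub = np.column_stack([d, e, norms])
    res = linprog(c=[0.0, 0.0, -1.0], A_ub=A_ub, b_ub=b,
                  bounds=[(None, None), (None, None), (0.0, None)])
    x1, x2, R = res.x
    return (x1, x2), R
```

For the unit square ($$x_1 \leq 1$$, $$-x_1 \leq 0$$, $$x_2 \leq 1$$, $$-x_2 \leq 0$$) this returns centre $$(0.5, 0.5)$$ and radius $$0.5$$, as expected.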

## Search for maximal value smaller than V in an array composed of K linear functions

Sorry for the lack of clarity in the question description.

I have an array of length N composed of K linear subarrays. We define a linear subarray as a contiguous subarray [l, r] of the array where A[i] - A[i-1] = C, a constant, for all l < i <= r. (Note: C may be different for different subarrays.)

I wish to answer multiple queries, each asking for the maximal array value that is less than a queried value V, in sublinear time per query, such as O(log K) or even O(log N). I am fine with any preprocessing in time such as O(K) or O(K log K), but not O(N). (Assume that the array is already stored in terms of the K linear subarrays, perhaps by the constant C and the length of each subarray, with the subarrays in order.)

Of course, a balanced binary search tree (BBST), or simply sorting, achieves the purpose, but it requires O(N log N) preprocessing, which is too much. Checking the largest valid value within each subarray takes O(K) per query, which is again too much.

Randomised algorithms are okay as long as they always achieve the correct answer and work fast enough in the average case, though deterministic algorithms are preferred.

Thanks for any and all responses.

## Was there an attempt to create a programming language aimed for maximal abstraction?

I am trying to understand whether a programming language can totally, or almost totally, parallel the level of abstraction shared by human languages in general.

A pseudocode example of this perhaps “highest programming language” could be the following.

```text
hello

open (shop) at (06:00);
if no (visitor), then (air condition) == off,
else (air condition) == on

while (opening-day), for each (visitor), since (08:00) until (20:00) accept customers;
exceptionally, since (12:00) until (13:00) close (shop);
also, if (buying is finished)
    give (generally every customer)
    the message "thank you for buying with our store and goodbye"
since (20:00) until (08:00) close (shop);

goodbye
```

hello is a kind of opening declaration, before parsing, styling and behavior take place, quite like an HTML <!DOCTYPE> declaration at the start of a document.
goodbye is like a termination command (such as exit, common in CLI programs).

I have no idea what such a programming language might be used for, though humanoid robots come to mind.
Was there an attempt to create a programming language aimed at maximal abstraction?

## Finding a maximum-cardinality independent set given an oracle

The problem: given an oracle $$O(G, k)$$ that says whether graph $$G$$ contains an independent set of size $$k$$, devise an algorithm that finds an independent set of maximum cardinality while making only polynomially many calls to the oracle. My attempt has been to first find the maximum possible size and then try to find a set of that size by removing vertices one at a time. I understand that a given vertex either is or is not in the set, and I noticed that there are many overlaps between chains of removals, so I could devise a DP algorithm of sorts. But I’m really stuck after that and was wondering if a hint could be given.
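A standard self-reducibility approach avoids the DP entirely: first find the maximum size $$k^*$$ with at most $$n$$ oracle calls, then test each vertex by deleting it; if an independent set of size $$k^*$$ survives the deletion, the vertex can be discarded, otherwise it belongs to every maximum independent set of the current graph, so keep it, delete its closed neighbourhood, and look for $$k^* - 1$$ more vertices. This makes $$O(n)$$ calls in total. A sketch with a brute-force stand-in for the oracle (all names are mine):

```python
from itertools import combinations

def has_is(G, k):
    """Brute-force stand-in for the oracle O(G, k).
    G is a dict: vertex -> set of neighbours."""
    return any(
        all(u not in G[v] for u, v in combinations(S, 2))
        for S in combinations(list(G), k)
    )

def max_independent_set(G, oracle=has_is):
    G = {v: set(G[v]) for v in G}
    # Step 1: find k* with at most n oracle calls (binary search also works).
    k = 0
    while oracle(G, k + 1):
        k += 1
    # Step 2: self-reduction, O(n) further calls.
    result = []
    for v in list(G):
        if v not in G:
            continue          # already deleted as someone's neighbour
        H = {u: G[u] - {v} for u in G if u != v}
        if oracle(H, k):      # a maximum IS survives without v
            G = H
        else:                 # v is in every maximum IS of the current G
            result.append(v)
            dead = G[v] | {v} # remove v's closed neighbourhood
            G = {u: G[u] - dead for u in G if u not in dead}
            k -= 1
    return result
```

On the path $$a-b-c-d$$ this returns an independent set of size 2 (for example $$\{b, d\}$$).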

## A problem with the greedy approach to finding a maximal matching

Suppose I have an undirected graph with four vertices $$a,b,c,d$$ connected as a simple path from $$a$$ to $$d$$, i.e. with edge set $$\{(a,b), (b,c), (c,d)\}$$. I have seen the following proposed as a greedy algorithm to find a maximal matching here (page 2, middle of the page):

```text
MaximalMatching(G = (V, E)):
    M = []
    while more edges can be added:
        select an edge e that has no vertex in common with the edges in M
        M.append(e)
    return M
```

It seems that this algorithm is entirely dependent on the order in which edges are chosen. For instance, in my example, if you choose edge $$(b,c)$$ first, then you will have a matching that consists only of $$(b,c)$$.

Whereas if you choose $$(a,b)$$ as your starting edge, then the next edge chosen will be $$(c,d)$$ and you have a matching of cardinality 2.

Am I missing something, as this seems wrong? I have also seen this described as an algorithm for finding a maximal matching in the context of proving that the vertex cover approximation algorithm selects a vertex cover by choosing edges according to a maximal matching. Any insights appreciated.
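The order dependence you observe is easy to make concrete; a quick sketch (function name mine) runs the greedy scan under the two orders:

```python
def greedy_maximal_matching(edges):
    """Scan edges in the given order; add an edge whenever it shares no
    endpoint with the matching built so far. The result is always maximal
    (no further edge can be added), but its size depends on the order."""
    matched, M = set(), []
    for u, v in edges:
        if u not in matched and v not in matched:
            M.append((u, v))
            matched.update((u, v))
    return M
```

On the path $$a-b-c-d$$, scanning $$(b,c)$$ first yields the matching $$\{(b,c)\}$$ of size 1, while scanning $$(a,b)$$ first yields $$\{(a,b), (c,d)\}$$ of size 2. Both are maximal in the sense the algorithm guarantees, even though only the second is maximum.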

## The optimal asymptotic behavior of the coefficient in the Hardy-Littlewood maximal inequality

It is well known that for $$f \in L^1(\mathbb{R}^n)$$, $$\mu\left(\{x \in \mathbb{R}^n : Mf(x) > \lambda\}\right) \le \frac{C_n}{\lambda} \int_{\mathbb{R}^n} |f| \, \mathrm{d}\mu,$$ where $$M$$ is the Hardy–Littlewood maximal operator and $$C_n$$ is a constant that depends only on $$n$$.

It is easy to see that $$C_n \le 2^n$$, but how can one determine its optimal asymptotic behavior? For example, is $$C_n$$ bounded in $$n$$? Is $$C_n$$ bounded by a polynomial in $$n$$?