Proving injectivity for an algorithm computing a function between sets of different types of partitions

I am attempting to solve the following problem:

Let $A$ be the set of partitions of $n$ into parts $(a_1, \dots, a_s)$ such that $a_i > a_{i+1}+a_{i+2}$ for all $i < s$, taking $a_{s+1} = 0$. Define $g_k = F_{k+2}-1$, where $F_k$ is the $k$-th Fibonacci number, and let $B$ be the set of partitions of $n$ as $b_1 \ge \dots \ge b_s$ such that every $b_i$ equals some $g_j$, and if $b_1 = g_k$ for some $k$, then $g_1, \dots, g_k$ all appear as some $b_i$. Prove $|A|=|B|$.

Attempt: Let $e_i$ be the vector with $1$ at position $i$ and $0$ elsewhere. If $b_1 = g_k$, let $c=(c_k, \dots, c_1)$, where $c_i$ counts how many times $g_i$ appears in $b$. We compute $f: B \to A$ as follows:

Let $c=(c_k,\dots,c_1)$ and $a=(0,\dots,0)$. While $c \ne 0$, let $d_1 > \dots > d_m$ be the indices such that $c_{d_j} \ne 0$, and replace $c, a$ with $c-(e_{d_1}+\dots+e_{d_m})$ and $a+(g_{d_1} e_1 + \dots + g_{d_m} e_m)$ respectively. After the while loop ends, let $f(b)=a$.
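(To fix ideas, here is a small Python sketch of this loop; the function name and the storage order of $c$ are my own choices, not part of the problem.)

def f_of_b(b):
    # b: a partition of n, largest part first, whose parts are numbers
    # g_i = F_{i+2} - 1 and whose largest part g_k forces g_1, ..., g_k to appear.
    # Build g_1, ..., g_k (using g_{i+1} = g_i + g_{i-1} + 1) until g_k = b[0].
    g = [1]
    while g[-1] < b[0]:
        g.append(g[-1] + (g[-2] if len(g) > 1 else 0) + 1)
    k = len(g)
    # c[i] counts how many parts of b equal g_{i+1} (stored smallest index first).
    c = [b.count(g[i]) for i in range(k)]
    a = [0] * k
    while any(c):
        # the non-zero indices of c, largest first: these play the role of d_1 > ... > d_m
        d = [i for i in reversed(range(k)) if c[i] > 0]
        for pos, i in enumerate(d):
            c[i] -= 1           # c <- c - (e_{d_1} + ... + e_{d_m})
            a[pos] += g[i]      # a <- a + (g_{d_1} e_1 + ... + g_{d_m} e_m)
    return a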

Let $\sum a, \sum b, \sum c$ denote the sums of the components of $a, b, c$ respectively. Since $\sum c$ decreases after every iteration, the algorithm terminates and $f(b)$ is well-defined. The quantity $c_k g_k + \dots + c_1 g_1 + \sum a$ is unchanged by each iteration; it equals $\sum b$ at the start and $\sum a$ at the end, so $\sum f(b) = \sum b = n$ and $f(b)$ is also a partition of $n$. Now $a = (g_k, \dots, g_1)$ after the first iteration, which satisfies the required condition since $g_i = F_{i+2}-1 = (F_{i+1}-1)+(F_i-1)+1 > g_{i-1}+g_{i-2}$. Furthermore, after every subsequent iteration, each difference $a_i - (a_{i+1}+a_{i+2})$ changes by $0$, by $g_{d_j}-g_{d_{j+1}} > 0$, or by $g_{d_j}-(g_{d_{j+1}}+g_{d_{j+2}}) > 0$, so we still have $a_i > a_{i+1} + a_{i+2}$ at the end, and hence $f(b) \in A$. Thus $f: B \to A$ is well-defined.

In order to prove the injectivity of $f$, it suffices to prove that each loop iteration, viewed as a mapping $(c,a) \to (c',a')$, is injective; this would imply that the mapping $(c,0) \to (0,a)$ created by the while loop is injective. Indeed, if $f(b_1) = f(b_2) = a$, with $(c_1, 0), (c_2, 0)$ being sent to $(0, f(b_1)) = (0,a)$ and $(0, f(b_2)) = (0,a)$ respectively, then we have $(c_1, 0) = (c_2, 0) \Rightarrow c_1 = c_2 \Rightarrow b_1 = b_2$.

Suppose $d_1 > \dots > d_i$ and $f_1 > \dots > f_j$ are the non-zero indices of $c_1, c_2$ respectively, and that $c_1 - (e_{d_1}+\dots+e_{d_i}) = c_2 - (e_{f_1}+\dots+e_{f_j})$ and $a_1+g_{d_1}e_1 + \dots+ g_{d_i} e_i = a_2 + g_{f_1} e_1 + \dots + g_{f_j} e_j$. If $x \ge 2$ is an entry of $c_1$, it decreases by $1$, so the corresponding entry of the modified $c_2$ is also $x-1$, which means it must have been $(x-1)+1 = x$ before, since $x-1>0$. Thus, if two corresponding positions of $c_1, c_2$ differ, one is $1$ and the other is $0$. However, if $c_1 = (1,0)$, $a_1 = (3,1)$, $c_2 = (0,1)$, $a_2 = (4,1)$, then $(a_1, c_1)$ and $(a_2, c_2)$ both get sent to $((5,1), (0,0))$. I can rule out this specific example by arguing that one of the pairs is illegal and could not have arisen from any choice of initial $c$, but I have no idea how to do this in general.

What should I do next in order to show $f$ is injective? Furthermore, since the statement I'm trying to prove is true, injectivity would imply $f$ is secretly a bijection. But I have no clue how to even start on the surjectivity of $f$, so I just constructed a similar algorithm for $g: A \to B$ in the hope of proving $g$ is injective too. If I can show $f$ is injective, I will probably know how to show $g$ is.

Here is an example of $ f, g$ in practice:

Let $ n = 41, b = (12, 7, 7, 4, 4, 2, 2, 2, 1) \Rightarrow c = (1, 2, 2, 3, 1).$

$$((1, 2, 2, 3, 1), (0,0,0,0,0)) \to ((0, 1, 1, 2, 0), (12, 7, 4, 2, 1)) \to ((0, 0, 0, 1, 0), (19,11,6,2,1)) \to ((0,0,0,0,0),(21,11,6,2,1)),$$ so $f(b) = (21,11,6,2,1)$.

Let $ a = (21, 11, 6, 2, 1).$

$$((21,11,6,2,1),(0,0,0,0,0)) \to ((9,4,2,0,0), (1,1,1,1,1)) \to ((2,0,0,0,0),(1,2,2,2,1)) \to ((0,0,0,0,0),(1,2,2,3,1)),$$ so $g(a) = (12, 7, 7, 4, 4, 2, 2, 2, 1)$.
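For what it is worth, the small sketch of $f$ included above reproduces this example:

b = [12, 7, 7, 4, 4, 2, 2, 2, 1]   # the n = 41 example
print(f_of_b(b))                   # -> [21, 11, 6, 2, 1]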

Same notation/terminology for union of sets and concatenation (Kleene star)?


For the union of sets we use the union operator $\cup$ (or $\bigcup$). For concatenation (the Kleene star) we also use the union operator. The operations are different, so why the same terminology and operator?

The following is my understanding of the union of sets versus the concatenation of sets (Kleene star). Please correct me if I’m wrong.

Union of sets

For the two sets $ \{a,b\}$ and $ \{a,b\}$ we have the union \begin{align} \{a,b\}\cup\{a,b\}=\{a,b\} \end{align}

Concatenation of sets (Kleene star)

The concatenation of $\{a,b\}$ with itself, repeated (the Kleene star), is also a union (same notation?!) of sets \begin{align} \{a,b\}^* &= \bigcup_{i=0}^{\infty} \{a,b\}^i = \{a,b\}^0 \cup \{a,b\}^1 \cup \{a,b\}^2 \cup \dots \\ &= \{\epsilon,a,b,aa,ab,ba,bb,aaa,aab,aba,abb,baa,\dots\} \end{align}
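To make the difference concrete for myself, here is a small Python sketch (the helper names are my own) that builds the union of the concatenation powers $\{a,b\}^0 \cup \{a,b\}^1 \cup \dots$ up to a bound, which is what the big union in the Kleene star ranges over:

def concat(L1, L2):
    # concatenation of two languages: every word of L1 followed by every word of L2
    return {u + v for u in L1 for v in L2}

def kleene_star_up_to(alphabet, max_power):
    # ordinary set union of the powers A^0, A^1, ..., A^max_power;
    # the true Kleene star is the same union taken over all powers (infinitely many)
    power = {""}                  # A^0 contains only the empty word (epsilon)
    result = set(power)
    for _ in range(max_power):
        power = concat(power, alphabet)
        result |= power
    return result

print(sorted(kleene_star_up_to({"a", "b"}, 3), key=lambda w: (len(w), w)))
# -> ['', 'a', 'b', 'aa', 'ab', 'ba', 'bb', 'aaa', 'aab', ...]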

Algorithm to generate combinations of n elements from n sets of m elements

Suppose I have 3 sets of 2 elements: [A, B], [C, D], [E, F], and I wanted to generate all possible combinations of 1 element from each set, such that the result of the algorithm would be:

[A, C, E], [A, C, F], [A, D, E], [A, D, F], [B, C, E], [B, C, F], [B, D, E], [B, D, F] 

What algorithm can I use to generate all combinations? Keep in mind that I'm looking for an algorithm that will work on any number of sets with any number of elements; the above is just an example. Also, remember that I'm looking for an algorithm that actually generates the combinations, not one that just counts them.
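For what it is worth, this is the Cartesian product of the sets, taking one element per set. A sketch in Python: itertools.product generates exactly these tuples, and a short recursive version (with the example sets hard-coded below) shows the idea for any number of sets of any size.

from itertools import product

sets = [["A", "B"], ["C", "D"], ["E", "F"]]

# Library route: itertools.product draws one element from each iterable.
for combo in product(*sets):
    print(list(combo))            # ['A', 'C', 'E'], ['A', 'C', 'F'], ...

# Hand-rolled recursive version of the same idea, for any number of sets.
def combinations_one_from_each(sets):
    if not sets:
        return [[]]               # exactly one empty combination for zero sets
    rest = combinations_one_from_each(sets[1:])
    return [[x] + tail for x in sets[0] for tail in rest]

print(combinations_one_from_each(sets))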

Work with sets, filters and loops

I'm trying to represent a set of data with a mathematical expression, but I don't know what the best method for this is.

In programming you have a data set like

var x = [1,2,3,4] 

and you can apply a filter like:

x.filter( element => element > 2 ) 

or loops like:

x.forEach( element => { /* some filter conditions */ } ) 

Does a mathematical expression exist for these cases?
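For instance, I imagine the filter above corresponds to something like ordinary set-builder notation, though I am not sure whether this is the standard way to write it:

$$\{\, x \in X : x > 2 \,\}$$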

Apologies if the question is very ambiguous, but I would like your guidance to begin to understand programming processes at a more abstract level and to be able to represent cases with clear and universal mathematical expressions that can later be translated into programming languages.

About computable sets

Let TOT be the set of all Turing Machines that halt on all inputs. Find a computable set B of ordered triples such that:

TOT $= \{e : (\forall x)(\exists y)[(e, x, y) \in B]\}$

This definition means that TOT is the set of all Turing machines $e$ that halt on all inputs. The "for all $x$" ranges over all inputs to that machine, and the "there exists $y$" says that $e$ halts within $y$ steps. Here $x$ consists of 0s and 1s, while $y$ and $e$ are natural numbers ($e$ denotes the Turing machine $T_e$ once we number all our Turing machines).

I had a very fundamental doubt because of which I couldn’t progress at all. How do we construct B?

Thank you in advance.

Maximizing integer sets intersection (with integer delta)

There are two sets of integers with different numbers of items in them.

X = { x_0, x_1, ..., x_n },  x_0 < x_1 < ... < x_n 
Y = { y_0, y_1, ..., y_m },  y_0 < y_1 < ... < y_m 

And there is a function of a single integer defined as

F(delta) = CountOfItems( Intersection( X, { y_0+delta, y_1+delta, ..., y_m+delta } ) ) 

That is, I add the integer delta to every element of Y and then count how many of the same integers appear in both X and the modified Y.

And then the problem – find delta that maximizes F(delta).

max( F(delta) ), where delta is integer 

Is there some "mathematical" name for such a task, and an optimal algorithm for it? Obviously I can use brute force here and enumerate all possibilities, but that does not work for big n and m.
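One observation that narrows the brute force (a sketch, with my own function names): any delta achieving a non-zero overlap must equal x - y for some x in X and y in Y, so it suffices to count how often each such difference occurs, which is roughly O(n*m) work instead of trying every integer shift.

from collections import Counter

def best_delta(X, Y):
    # F(delta) = |X ∩ (Y + delta)| can only be non-zero when delta = x - y
    # for some x in X, y in Y, so count how often each difference occurs.
    diffs = Counter(x - y for x in X for y in Y)
    if not diffs:
        return 0, 0               # one of the sets is empty
    delta, count = diffs.most_common(1)[0]
    return delta, count           # count equals the maximal F(delta)

print(best_delta({1, 5, 9, 12}, {3, 7, 11}))   # -> (-2, 3): Y shifted by -2 is {1, 5, 9}, all in X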

Incremental dynamic on-disk disjoint sets (incremental on-disk dynamic forest)


Problem statement

I am looking for an algorithm to maintain a very large number of disjoint sets under node and edge additions. Due to the data size, keeping everything in memory is not feasible, so the algorithm needs to work efficiently with SSD storage.

Ideally, the algorithm should:

  • support link(v1, v2) operation which either merges two sets or does nothing if v1 and v2 already belong to the same set. If either v1 or v2 did not exist prior to link operation, the new vertex(es) should be added to a set
  • support get_set(v) operation which will return all elements in a set
  • be IO efficient in terms of SSD access
  • allow concurrent link and get_set operations

Some notes:

  • only edge additions are allowed, no removals
  • consecutive link operations 1..N operate on a small number of disjoint sets K, K << N

Why I need such an algorithm

There is a stream of events (~100M events per day) in which each event may link to zero or several "parent" events. When a new event arrives, I need to run some aggregations on the graph that this event belongs to. Events are generated by a set of services, so this is basically a distributed tracing problem.
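For concreteness, the in-memory baseline I have in mind is ordinary union-find (path compression plus union by size), roughly the Python sketch below; my question is essentially how to get the same link/get_set behaviour IO-efficiently and concurrently once the parent/size/member maps no longer fit in RAM.

class DisjointSets:
    # Plain in-memory union-find; parent/size/members are dicts in RAM,
    # which is exactly what does not scale to the SSD-resident setting.
    def __init__(self):
        self.parent = {}      # vertex -> parent vertex
        self.size = {}        # root -> number of vertices in its set
        self.members = {}     # root -> set of vertices (to answer get_set)

    def _add(self, v):
        if v not in self.parent:
            self.parent[v] = v
            self.size[v] = 1
            self.members[v] = {v}

    def _find(self, v):
        root = v
        while self.parent[root] != root:
            root = self.parent[root]
        while self.parent[v] != root:     # path compression
            self.parent[v], v = root, self.parent[v]
        return root

    def link(self, v1, v2):
        self._add(v1)
        self._add(v2)
        r1, r2 = self._find(v1), self._find(v2)
        if r1 == r2:
            return                        # already in the same set
        if self.size[r1] < self.size[r2]:
            r1, r2 = r2, r1               # attach the smaller tree under the larger
        self.parent[r2] = r1
        self.size[r1] += self.size[r2]
        self.members[r1] |= self.members.pop(r2)

    def get_set(self, v):
        return self.members[self._find(v)]

ds = DisjointSets()
ds.link("a", "b"); ds.link("c", "d"); ds.link("b", "c")
print(ds.get_set("a"))    # -> {'a', 'b', 'c', 'd'} (in some order)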

Computation of "maximal" answer sets in first-order resolution without constraints

I am not familiar with logic programming, but I would like to know if the following setting has been studied and if it corresponds to a known system in logic programming.

  • I work with first-order resolution, where we have clauses $c = \{p_1(t_1), \dots, p_k(t_k), \lnot p_{k+1}(t_{k+1}), \dots, \lnot p_n(t_n)\}$: a disjunction of (positive or negative) first-order predicates (for instance $p(f(x, y))$).
  • When we have a program $P = \{c_1, \dots, c_n\}$, we would like to compute, using Robinson's resolution between pairs of clauses $c_i$ and $c_j$, all the predicates we can infer from $P$. We can obtain different sets of predicates depending on how we connect the predicates, but we would like all such sets.
  • We would like all these connections to be maximal, in the sense that we connect predicates in $P$ until no more predicates in $P$ can be added. It should represent a "full computation".

For instance, let $$P = \{add(0,y,y)\} \quad \{\lnot add(s(x),y,s(z)), add(x,y,z)\} \quad \{add(s^2(0),s^2(0),s^4(0))\}$$ be a program, with $s^n(0)$ being $n$ applications of the unary symbol $s$ to $0$. If the clauses are labelled $c_1, c_2, c_3$, the only way of constructing such "maximal connections" is to do $c_1-c_2^m-c_3$ for some $m$, but only one is correct: $c_1-c_2^2-c_3$, corresponding to checking $2+2=4$.

To give more context, I work in another field on a system that at first had no connection to logic programming but later showed strong similarities (for instance with answer sets), so I wanted to relate it to known concepts in logic programming.

Options for approaching stable marriage problem with unequally sized sets of elements/preferences

I am looking for an algorithm/code that will provide a stable matching for two unequally sized sets of elements (clubs and students) with unequal sets of preferences. There is a large pool of students looking to join a club, and a comparatively small pool of clubs for those students to join. Each student ranks only the clubs they wish to join, in order of preference; in other words, a student is not required to rank every club available. At the same time, each club has a maximum number of students it can accept, and this number differs across clubs. Therefore, both pools for the algorithm are unequally sized, and each element within those pools (i.e. every individual club and student) can have a different number of preferences.

I have looked into the Gale-Shapley algorithm and envy-free matching, but I have not found any code that provides a stable match when there is so much variation in the elements/preferences. Does anyone know of any code that can accomplish this (preferably in something like Python or Java)?
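Not a pointer to existing library code, but for what it is worth the college-admissions (hospitals/residents) variant of Gale-Shapley handles exactly this setup: per-club capacities, incomplete preference lists on both sides, and students or seats that may remain unmatched. A rough Python sketch under those assumptions (all names are my own):

def deferred_acceptance(student_prefs, club_prefs, capacities):
    # student_prefs: student -> ordered list of acceptable clubs (may omit clubs)
    # club_prefs:    club -> ordered list of acceptable students (may omit students)
    # capacities:    club -> maximum number of students it can accept
    # Student-proposing deferred acceptance; a pair forms only if both sides
    # rank each other, so unranked clubs/students are never matched together.
    rank = {c: {s: i for i, s in enumerate(prefs)} for c, prefs in club_prefs.items()}
    next_choice = {s: 0 for s in student_prefs}
    admitted = {c: [] for c in club_prefs}
    free = [s for s in student_prefs if student_prefs[s]]

    while free:
        s = free.pop()
        while next_choice[s] < len(student_prefs[s]):
            c = student_prefs[s][next_choice[s]]
            next_choice[s] += 1
            if s not in rank.get(c, {}):
                continue                      # club does not rank this student
            admitted[c].append(s)
            if len(admitted[c]) <= capacities[c]:
                break                         # provisionally accepted
            # over capacity: the club rejects its least preferred provisional admit
            worst = max(admitted[c], key=lambda t: rank[c][t])
            admitted[c].remove(worst)
            if worst == s:
                continue                      # s was the reject and keeps proposing
            free.append(worst)                # displaced student proposes again later
            break
    return admitted

students = {"s1": ["chess", "drama"], "s2": ["chess"], "s3": ["drama", "chess"]}
clubs = {"chess": ["s2", "s1", "s3"], "drama": ["s3", "s1"]}
caps = {"chess": 1, "drama": 2}
print(deferred_acceptance(students, clubs, caps))
# -> {'chess': ['s2'], 'drama': ['s3', 's1']}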

Show that infinite decidable sets $A$ and $B$ exist

I am stuck on this problem, so any help is appreciated. The problem asks to show that there exist decidable sets $A$ and $B$ such that $A \leq_{m}^{p} B$ but $B \not\leq_{m}^{p} A$, and such that $A$, $B$, $\bar{A}$, and $\bar{B}$ are all infinite.

Here, $\leq_{m}^{p}$ refers to polynomial-time many-one reducibility. I have a hunch that this may have something to do with letting $A$ be a decidable set such that $A \in EXP$ but $B \in P$, so that the reduction cannot be done in polynomial time.