Creating a block matrix from arrays of blocks

I am trying to generate a matrix from square blocks. Effectively, I have an $n \times n$ matrix polynomial $P(l)$, the $q$-th derivative of $P(l)$ with respect to $l$, which is denoted by $P^{(q)}(l)$, and a block of zeroes, which I'll just call $0$. I have some integer $k$ such that if $k=1$ then I am generating the matrix

$$ R = \begin{pmatrix} P(l) \end{pmatrix} $$

If $ k=2$ then I should generate

$$ R = \begin{pmatrix} P(l) & 0 \\ \frac{1}{1!} P^{(1)}(l) & P(l) \end{pmatrix} $$

If $ k=3$ then

$$ R = \begin{pmatrix} P(l) & 0 & 0 \\ \frac{1}{1!} P^{(1)}(l) & P(l) & 0 \\ \frac{1}{2!} P^{(2)}(l) & \frac{1}{1!} P^{(1)}(l) & P(l) \end{pmatrix} $$

and so forth. Generally,

$$ R = \begin{pmatrix} P(l) & 0 & \cdots & 0 & 0 \\ \frac{1}{1!} P^{(1)}(l) & P(l) & \cdots & 0 & 0 \\ \frac{1}{2!} P^{(2)}(l) & \frac{1}{1!} P^{(1)}(l) & \cdots & 0 & 0 \\ \vdots & \vdots & \ddots & \vdots & \vdots \\ \frac{1}{(k-1)!} P^{(k-1)}(l) & \frac{1}{(k-2)!} P^{(k-2)}(l) & \cdots & \frac{1}{1!} P^{(1)}(l) & P(l) \end{pmatrix} $$

is an $nk \times nk$ matrix.

I would prefer a simple, understandable approach, so my idea was to start with a zero matrix $R$ of dimensions $nk \times nk$ and then use two "for" loops to fill it, placing the appropriate derivative in each block. I'm not sure what the bounds and indexing in the "for" loops should be. I found other questions which were similar but more complicated and specific. Any help appreciated, thank you.
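A minimal NumPy sketch of that two-loop idea (the function and variable names are mine; it assumes each derivative $P^{(q)}(l)$ has already been evaluated as an $n \times n$ array, with `P_derivs[0]` being $P(l)$ itself):

```python
import numpy as np
from math import factorial

def build_R(P_derivs, n, k):
    # P_derivs[q] holds the n x n block P^(q)(l), already evaluated.
    R = np.zeros((n * k, n * k))
    for i in range(k):              # block row
        for j in range(i + 1):      # block column (lower triangle only)
            q = i - j               # derivative order of block (i, j)
            R[i*n:(i+1)*n, j*n:(j+1)*n] = P_derivs[q] / factorial(q)
    return R
```

Block $(i, j)$ of the lower triangle holds $\frac{1}{(i-j)!}P^{(i-j)}(l)$, which matches the general pattern above; the upper triangle stays zero.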

2-dimensional ranking of multiple arrays

Given a constant dimension $d$, say $d=2$, we want the following:

Input: $ A_1\ldots A_m$ : $ m$ arrays of length $ n$ of integers

Each input array $ A_i$ must be a permutation of the numbers $ 1..n$ , so in each array each number from $ 1$ to $ n$ appears exactly once.

Output: For each pair (in the case $d=2$; triplets in the case of $d=3$, etc.) of numbers $(1,1),(1,2),\dots,(n,n)$, we want a count of how many input arrays have the first number of the pair as the first to appear in the array (among the numbers of that pair).

Question: Can this be done quicker than $ O(mn^d)$ in the worst case?

Upper and lower bounds

The output is represented as a $d$-dimensional array of side length $n$. Therefore a lower bound for the runtime complexity is $\Omega(n^d)$.

The naive approach is to create $m$ mappings, one per input array, from each number to its index. Then, for each of the $n^d$ tuples, walk through the $m$ mappings, yielding a runtime complexity upper bound of $O(dmn^d)$; since $d$ is a constant, this is $O(mn^d)$.
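A sketch of that naive procedure (names are mine; `arrays` holds the $m$ permutations):

```python
from itertools import product

def rank_counts(arrays, d=2):
    # Naive O(m * n^d): precompute, per array, the index of each number,
    # then for every d-tuple count the arrays in which the tuple's first
    # number appears earliest among the tuple's numbers.
    n = len(arrays[0])
    pos = [{v: idx for idx, v in enumerate(a)} for a in arrays]
    return {
        tup: sum(1 for p in pos if p[tup[0]] == min(p[v] for v in tup))
        for tup in product(range(1, n + 1), repeat=d)
    }
```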

Examples

    A = (1,2,3,4),        Output =   1 2 3 4
        (1,2,3,4),                   -------
        (1,2,3,4),    =>         1 | 4 4 4 4
        (1,2,3,4)                2 | 0 4 4 4
                                 3 | 0 0 4 4
    d=2, m=4, n=4                4 | 0 0 0 4

    =======================================

    A = (4,3,2,1),        Output =   1 2 3 4
        (1,2,3,4),                   -------
        (1,2,3,4)     =>         1 | 3 2 2 2
                                 2 | 1 3 2 2
    d=2, m=3, n=4                3 | 1 1 3 2
                                 4 | 1 1 1 3

Application

While writing poker analysis software, I’m particularly interested in the case $ d=3, m\approx 1250, n\approx 1250$ . I estimate that the naive $ O(mn^d)$ solution takes multiple hours but less than a day when using native Java arrays (no hashmaps etc) on a single thread.

Dynamically merge different arrays in javascript

I want to combine two arrays (ranking and matches) that share common properties:

    var ranking = [{
        def: "0.58",
        league: "Scottish Premiership",
        name: "Celtic",
        off: "3.33",
        grank: "3",
        tform: "96.33",
    }, {
        def: "2.52",
        league: "Scottish Premiership",
        name: "Dundee",
        off: "1.28",
        grank: "302",
        tform: "27.51",
    }]

    var matches = [{
        date: "2010-04-22",
        league: "Scottish Premiership",
        home: "0.0676",
        away: "0.8",
        draw: "0.1324",
        goals1: "3",
        goals2: "1",
        tform1: "96.33",
        tform2: "27.51",
        team1: "Celtic",
        team2: "Dundee",
    }]

Expected output looks like this:

    [{
        date: "2010-04-22",
        league: "Scottish Premiership",
        home: "0.0676",
        away: "0.8",
        draw: "0.1324",
        goals1: "3",
        goals2: "1",
        tform1: "96.33",
        tform2: "27.51",
        def1: "0.58",
        def2: "2.52",
        off1: "3.33",
        off2: "1.28",
        grank1: "3",
        grank2: "302",
        team1: "Celtic",
        team2: "Dundee",
    }]

To merge the arrays, I used Lodash's _.merge function:

var result = _.merge(ranking, matches); 

The output it returned merged some objects but mismatched or omitted others.

Please I need some help and insight in achieving this task. I wouldn’t mind any javascript (client-side) solution.
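For what it's worth, `_.merge` combines the two arrays index by index, so `ranking[0]` is merged into `matches[0]` regardless of which teams they describe. One plain-JS sketch (no Lodash; property names are taken from the sample data, and the `"1"`/`"2"` suffix convention is an assumption inferred from the expected output) is to index the ranking entries by team name first:

```javascript
// Index ranking entries by team name, then copy each team's stats onto
// the match under a "1"/"2" suffix for team1/team2.
function mergeRankingIntoMatches(ranking, matches) {
  const byName = Object.fromEntries(ranking.map(r => [r.name, r]));
  return matches.map(m => {
    const merged = { ...m };
    [["team1", "1"], ["team2", "2"]].forEach(([teamKey, suffix]) => {
      const r = byName[m[teamKey]];
      if (!r) return; // no ranking entry for this team
      for (const stat of ["def", "off", "grank"]) {
        merged[stat + suffix] = r[stat];
      }
    });
    return merged;
  });
}
```

Keying on `name` rather than on array position is what makes the merge robust to the two arrays being in different orders.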

Searching for substring in field that contains variable length of arrays of json objects

I am trying to construct an SQL query that searches for a substring within a field. The issue is that the field contains an array of one or more JSON objects.

For example the table looks like so:

    day     |   items
    --------------------
    Sunday  | [{"apples":5, "bananas":2}, {"pears":12, "cucumbers":9}, ...]
    Monday  | [{"apples":6, "bananas":1}, {"watermelon": 1}]
    Tuesday | [{"apples":4, "bananas":3}, {"tomatoes": 1}]

How do I construct an SQL query that searches for a substring in items, given that it is not a string?

Thanks
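The question doesn't name the DBMS, so as one possibility only: if this were PostgreSQL with a `json`/`jsonb` column, the whole array can be cast to text and pattern-matched (the table name below is hypothetical; the column names follow the example above):

```sql
-- Cast the json array to text, then do a plain substring match.
SELECT day, items
FROM produce              -- hypothetical table name
WHERE items::text LIKE '%apples%';
```

Other engines have their own equivalents (e.g. dedicated JSON search functions), so the right spelling depends on which database this is.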

Check for common element in two arrays using FFT

My task asks me to check whether there is a common element in two arrays $(x_1,x_2,\dots,x_n)$, $(y_1,y_2,\dots,y_n)$ with $x_i,y_i\in\mathbb{N}$ using the Fast Fourier Transform (FFT). (I'm aware that there is a simple $O(n\log(n))$ algorithm to solve this problem using sorting and binary search.) The task hints that we should consider the following product to solve the problem: $$ \prod_{i+j=n} (x_i-y_j) $$ The product is obviously zero if there is a common element, but I am still not sure how I could compute it faster via FFT. I know how to use FFT to multiply polynomials efficiently, but somehow I seem to be overlooking something.
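Not the hinted product, but a related FFT technique for this check is indicator polynomials (stated plainly as a different approach, and an assumption that the values are small enough for float convolution): set $A(z)=\sum_i z^{x_i}$ and $B(z)=\sum_j z^{M-y_j}$ with $M=\max$ of all values; the coefficient of $z^M$ in $A\cdot B$ counts pairs with $x_i = y_j$. A sketch:

```python
import numpy as np

def have_common_element(xs, ys):
    # a has a 1 at exponent x_i, b at exponent M - y_j.  The product's
    # coefficient at z^M counts pairs with x_i + (M - y_j) = M, i.e.
    # pairs with x_i = y_j.
    M = max(max(xs), max(ys))
    a = np.zeros(M + 1)
    b = np.zeros(M + 1)
    a[list(xs)] = 1
    b[[M - y for y in ys]] = 1
    size = 1
    while size < 2 * M + 1:       # pad to a power of two >= degree + 1
        size *= 2
    prod = np.fft.irfft(np.fft.rfft(a, size) * np.fft.rfft(b, size), size)
    return bool(prod[M] > 0.5)    # coefficient is a (float) match count
```

The multiplication costs $O(M \log M)$ in the value range $M$, not in $n$, which is the usual caveat of this trick.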

Merge $k$ sorted arrays without heaps/AVL trees in $O(n\log(k))$?


Given $k$ sorted arrays in ascending order, is it possible to merge all $k$ arrays into a single sorted array in $O(n\log(k))$ time, where $n$ denotes the number of elements combined?

The question is definitely aiming towards a Min-heap/AVL tree solution, which can in fact achieve $ O(n\log(k))$ time complexity.

However, I'm wondering if there exists a different approach, like a merge variant, which can achieve the same result.

The closest I've seen is to merge all the arrays into one array, disregarding their given ascending order, then doing a comparison-based sort, which takes $O(n\log(n))$ but not quite $O(n\log(k))$.

Is there an algorithm variant which can achieve this result? Or a different data-structure?
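One heap-free variant that does achieve this bound is iterative pairwise merging: merge the arrays two at a time, halving their number each round. There are $O(\log k)$ rounds and each round touches every element once, giving $O(n\log k)$ overall. A sketch (names mine):

```python
def merge_two(a, b):
    # Standard linear merge of two sorted lists.
    out, i, j = [], 0, 0
    while i < len(a) and j < len(b):
        if a[i] <= b[j]:
            out.append(a[i]); i += 1
        else:
            out.append(b[j]); j += 1
    out.extend(a[i:])
    out.extend(b[j:])
    return out

def merge_k(arrays):
    # O(log k) rounds, each costing O(n) in total  =>  O(n log k).
    if not arrays:
        return []
    while len(arrays) > 1:
        arrays = [merge_two(arrays[i], arrays[i + 1])
                  if i + 1 < len(arrays) else arrays[i]
                  for i in range(0, len(arrays), 2)]
    return arrays[0]
```

This is exactly the merge phase of mergesort applied to $k$ pre-sorted runs, so no heap or balanced tree is needed.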

What is the difference in time-complexity for sorting these 2-d arrays?

Let $A$ have $n/10$ rows, $10$ columns, and $n$ elements overall.

Let $B$ have $10$ rows, $n/10$ columns, and $n$ elements overall.

It is given that each row is sorted in ascending order. Can you sort each of these in $O(n\log(n))$ or better using a comparison sort?

I'm leaning towards a k-way merge using a min-heap, following this implementation of merging sorted arrays, but I can't seem to figure out what the difference between these cases is.

$B$, for example, will constantly have $10$ elements in the min-heap, so the time complexity will be $n\log(10) \in O(n)$? Is this even possible with comparison sorts?

$A$, meanwhile, would have $n/10$ elements in the min-heap; are the running times equivalent?
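For reference, a k-way merge keeping one element per row in the heap looks like this (a sketch; names mine). With $k=10$ rows ($B$), each of the $n$ pops costs $O(\log 10)=O(1)$, so $O(n)$ total; with $k=n/10$ rows ($A$), each pop costs $O(\log n)$, giving $O(n\log n)$:

```python
import heapq

def merge_rows(rows):
    # Min-heap of (value, row, col), one entry per row: heap size k,
    # n total pops/pushes  ->  O(n log k) comparisons.
    heap = [(row[0], r, 0) for r, row in enumerate(rows) if row]
    heapq.heapify(heap)
    out = []
    while heap:
        val, r, c = heapq.heappop(heap)
        out.append(val)
        if c + 1 < len(rows[r]):
            heapq.heappush(heap, (rows[r][c + 1], r, c + 1))
    return out
```

The $O(n)$ result for $B$ does not contradict the comparison-sort lower bound: the rows arrive pre-sorted, so $\Omega(n\log n)$ does not apply to this restricted input.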

best database for storing arrays with set-like sum operation

I need a simple database to store key-array pairs, with the ability to update a stored array by merging a new array into it without duplicating values. In fact, I need to store ordered "sets". For example, records in the database look like:

key : value (an array)

“1” : [ 1, 2, 3, 4]

“2” : [ 1, 3, 5, 7, 11] …

and I want to be able to merge a new array, like [1, 2, 3, 100, 200, 300], into the first record while skipping duplicates, so that the first row becomes:

“1” : [ 1, 2, 3, 4, 100, 200, 300]

I also want it to handle partial selection of arrays, example:

select myArrayColumn[0:2] where key = 2 

should result: [1, 3]

I want a suggestion for a database with such capabilities.
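As one concrete possibility (an assumption, since no DBMS is specified): PostgreSQL `integer[]` columns support both operations. Note that Postgres array slices are 1-based, so the `[0:2]` selection above becomes `[1:2]`. The table and column names below are hypothetical:

```sql
-- Merge a new array into a stored one, dropping duplicates.
UPDATE sets
SET value = ARRAY(SELECT DISTINCT x
                  FROM unnest(value || ARRAY[1, 2, 3, 100, 200, 300]) AS t(x)
                  ORDER BY x)
WHERE key = '1';

-- Partial selection of an array (1-based slice bounds).
SELECT value[1:2] FROM sets WHERE key = '2';
```

A sorted-set store such as Redis (`ZADD`/`ZRANGE`) would be another fit for "ordered sets with duplicate-free merge".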

Given two arrays A and B, how do I answer queries asking the qth minimum sum of A[i]+B[j]?

I am given two arrays A and B of the same size K (K <= 20000). There can be up to 500 offline queries, each asking for the qth minimum sum a+b such that a belongs to A and b belongs to B (q <= 10000). How do I answer these queries efficiently? One way would be to iterate over all pairs, but that is too slow for me.
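Since $q \le 10000 \ll K^2$, one sketch is the classic "k smallest pairs" heap expansion: sort both arrays once, then grow the frontier of candidate index pairs, costing $O(q\log q)$ per query after the initial sort (names mine; assumes $1 \le q \le K^2$):

```python
import heapq

def qth_min_sum(A, B, q):
    # Pop index pairs in order of sum; each popped (i, j) offers
    # (i+1, j) and (i, j+1) as the next candidates.
    A, B = sorted(A), sorted(B)
    heap = [(A[0] + B[0], 0, 0)]
    seen = {(0, 0)}
    for _ in range(q - 1):
        _, i, j = heapq.heappop(heap)
        if i + 1 < len(A) and (i + 1, j) not in seen:
            seen.add((i + 1, j))
            heapq.heappush(heap, (A[i + 1] + B[j], i + 1, j))
        if j + 1 < len(B) and (i, j + 1) not in seen:
            seen.add((i, j + 1))
            heapq.heappush(heap, (A[i] + B[j + 1], i, j + 1))
    return heap[0][0]
```

The sort is shared across all queries, so 500 queries cost roughly $O(K\log K + 500\cdot q\log q)$ in total.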

Proof for an algorithm to minimize $\max(a, b, c) - \min(a, b, c)$, $a \in A, b \in B, c\in C$, where A, B, C are arrays in ascending order


Problem Statement

I came across this problem here. For given arrays $A$, $B$ and $C$ arranged in ascending order, we need to minimize the objective function $f(a, b, c) = \max(a, b, c) - \min(a, b, c)$, $a \in A, b \in B, c\in C$.

It can be thought of as a problem to select a number from each of the three arrays such that the numbers are as close to each other as possible (max element is as close to min element as possible).

Solution

The editorial solution to the problem is based on a greedy approach running in linear time. Here are the steps, summarized:

  1. The algorithm involves three pointers, one for each array.
  2. Initially, all pointers point to the beginning of the arrays.
  3. Until the end of at least one of the arrays is reached, steps 4 and 5 are repeated.
  4. The element combination formed by the current pointer configuration is checked to see if it yields a new minimum value of the objective function.
  5. The pointer pointing to the least element is incremented to get a new configuration.

This is the C++ code for reference and reproducibility:

    int f(int a, int b, int c) { // objective function
        return max(a, max(b, c)) - min(a, min(b, c));
    }

    int solve(vector<int> &A, vector<int> &B, vector<int> &C) {
        int i = 0, j = 0, k = 0;
        int best = INT_MAX;

        while (i < A.size() && j < B.size() && k < C.size()) {
            int mine = min(A[i], min(B[j], C[k]));
            best = min(best, f(A[i], B[j], C[k]));

            if (A[i] == mine)
                i++;
            else if (B[j] == mine)
                j++;
            else
                k++;
        }

        return best;
    }

Observations

While this approach seems reasonable to me (and does work), I cannot convince myself of its correctness. I have made some observations about the nature of the problem and the algorithm, but I cannot seem to arrive at a solid reasoning for why this solution works. Any help towards a proof, or towards a reasoning for why this approach is correct would be greatly appreciated.

I started by thinking along the lines of finding a loop invariant, thinking that the pointers would always point to the best configuration for the subarrays $A[0..i], B[0..j], C[0..k]$. This line of thought is incorrect ($i, j, k$ point to suboptimal configurations as well).

This is what I have come up with so far:

TL;DR: if any element except the minimum element is incremented (moved to the next element), the objective function will increase or stay the same (unfavourable). If the minimum element is incremented, the objective function might decrease, increase, or stay the same. So, the only "hope" of finding a lower objective function is to increment the minimum element in that iteration.

Consider that the elements being pointed to by the pointers are $x, y, z$ such that $x \le y \le z$. $x, y, z$ could belong to any of the three arrays. If the elements following $x, y, z$ in their respective arrays are $x^{+}, y^{+}, z^{+}$, then the solution always increments the pointer pointing to $x$, so that it points to $x^{+}$.

Since $x$ is the minimum element and $z$ is the maximum element, $f(x, y, z) = z - x = f_{old}$.

If we increment $ z$ to $ z^{+}$ :

  • $ f(x, y, z^{+})=z^{+}-x \ge f_{old}$ , as $ z^{+} \ge z$ .

So, $ f_{new}\ge f_{old}$

If we increment $ y$ to $ y^{+}$ :

  • If $y^{+} \le z$, $f(x, y^{+}, z)=z-x = f_{old}$.
  • If $ y^{+}>z$ , $ f(x, y^{+}, z)=y^{+}-x \ge f_{old}$

So, $ f_{new}\ge f_{old}$

If we increment $ x$ to $ x^{+}$ :

  • If $ x^{+} < y$ , $ f(x^{+}, y, z)=z-x^{+} \le f_{old}$
  • If $ y \le x^{+} \le z$ , $ f(x^{+}, y, z)=z-y \le f_{old}$
  • If $ z<x^{+} \le z+(y-x)$ , $ f(x^{+}, y, z) = x^{+}-y \le z-x$ $ (= f_{old})$
  • If $ x^{+}>z+(y-x)$ , $ f(x^{+}, y, z) = x^{+}-y > z-x$ $ (= f_{old})$

So, $ f_{new}\le f_{old}$ as long as $ x^{+} \le z+(y-x)$ .

I have a hunch that for the solution to work, in the case where $f_{new} > f_{old}$ (i.e. when $x^{+} > z+(y-x)$), it must be impossible to get a smaller objective value without incrementing all the pointers; however, I cannot prove this.

Nonetheless, none of these observations convince me that the method is correct (although I know that it is). If someone could make a loop invariant condition for this solution and the configuration of pointers, that would be the most straightforward proof.