Merge Multisites with Shared Network Media Library

So I have a multisite setup which no longer needs to be a multisite, but I’m left with a bit of a mess, since I used the Network Media Library plugin to host images for all sites on the network. I’ll try to break it down:

  • started out with WP multisite
  • created two sites on the network
  • installed Network Media Library
  • site #1 hosted the media library
  • both sites hosted posts
  • (about a year and a lot of posting goes by)
  • pulled site #1 out of multisite to be hosted independently
  • left with multisite running site #2 but still pulling its media from site #1

What I want to do now is combine site #2, which contains all my posts, with site #1, which contains only media. My concerns are:

  • if I merge the tables there will be ID conflicts (some posts will have the same ID as attachments)
  • if I use the import function to bring images into the posts site, the images will be given new IDs and the post thumbnail relations will all break
  • if I use the import function to bring posts into the images site, the post IDs would change, which can’t happen because we use the ID in the post URL

The best idea I have so far is to somehow…

  • use the WordPress import function to import all the attachments into the posts site
  • log the old and new IDs into a new table in the DB as the process runs
  • then iterate over all the posts, switching old for new IDs in the post_meta _thumbnail_id fields
  • ideally then be left with one site which contains all the posts and attachments, so I can reduce the install down to a regular non-multisite.

There are tens of thousands of posts across these combined sites, so performing these operations is no small feat, and I’m really not sure where to start. I wonder if anyone has experience with a process like this, or ideas for alternative solutions.
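For the remapping step (step 3 above), the kind of thing I have in mind is sketched below, assuming the old-to-new ID pairs from step 2 have been written out to a CSV. The file name, credentials, and the default wp_ table prefix are all placeholder assumptions, not anything the WordPress importer produces by itself:

import csv
import mysql.connector  # pip install mysql-connector-python

# placeholder connection details
conn = mysql.connector.connect(
    host="localhost", user="wp", password="secret", database="wordpress"
)
cur = conn.cursor()

# id_map.csv: one "old_id,new_id" pair per line, logged during the import
with open("id_map.csv", newline="") as f:
    for old_id, new_id in csv.reader(f):
        # Point every _thumbnail_id that still references the old
        # attachment ID at the newly imported attachment.
        # Caveat: if a new ID could collide with an old ID that has not
        # been remapped yet, run this against a staging copy instead.
        cur.execute(
            "UPDATE wp_postmeta SET meta_value = %s "
            "WHERE meta_key = '_thumbnail_id' AND meta_value = %s",
            (new_id, old_id),
        )

conn.commit()
cur.close()
conn.close()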

Thanks for reading.

Merge $k$-sorted arrays – without heaps/AVL tree in $O(n\log(k))$?

Given $k$ sorted arrays in ascending order, is it possible to merge all $k$ arrays into a single sorted array in $O(n\log(k))$ time, where $n$ denotes the total number of elements across all arrays?

The question is definitely aiming towards a min-heap/AVL tree solution, which can in fact achieve $O(n\log(k))$ time complexity.

However, I’m wondering if there exists a different approach, like a merge variant, which can achieve the same result.

The closest I’ve seen is to concatenate all the arrays into one array, disregarding their given ascending order, and then do a comparison-based sort, which takes $O(n\log(n))$ but not quite $O(n\log(k))$.

Is there an algorithm variant which can achieve this result? Or a different data structure?
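For what it’s worth, the kind of merge variant I have in mind is pairwise (tournament-style) merging: merge the arrays in pairs, then merge the results in pairs, and so on. A rough, untested sketch:

def merge_k_sorted(arrays):
    # Repeatedly merge adjacent pairs. Each round halves the number
    # of arrays and touches every element once, so there are
    # ceil(log2(k)) rounds of O(n) work each: O(n log k) in total.
    if not arrays:
        return []
    arrays = list(arrays)
    while len(arrays) > 1:
        merged = []
        for i in range(0, len(arrays) - 1, 2):
            merged.append(merge(arrays[i], arrays[i + 1]))
        if len(arrays) % 2 == 1:
            merged.append(arrays[-1])  # odd one out rides along
        arrays = merged
    return arrays[0]

def merge(a, b):
    # Standard two-way merge of sorted lists, O(len(a) + len(b)).
    out, i, j = [], 0, 0
    while i < len(a) and j < len(b):
        if a[i] <= b[j]:
            out.append(a[i])
            i += 1
        else:
            out.append(b[j])
            j += 1
    out.extend(a[i:])
    out.extend(b[j:])
    return out

If I’m counting correctly, each of the $\lceil\log_{2}(k)\rceil$ rounds does $O(n)$ total work, which would give $O(n\log(k))$ with no heap at all, but I’d like confirmation that this counts as a “merge variant”.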

What is the fastest way to merge two B-trees?

Given two B-trees $T_1, T_2$ of some order $m$, such that $y > x$ for every pair $x \in T_1$ and $y \in T_2$, what is the fastest way to create a new tree that is the union of both $T_1$ and $T_2$?

My current solution is naive, in the sense that I create an array and insert all of $T_1$’s elements, then all of $T_2$’s elements. As a result I have a sorted array, from which I can create a new tree at a cost of $O(n \log n)$.

I’m thinking that there must be a better solution, something like the AVL merging question, but I can’t figure it out.
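For concreteness, here is my naive approach in code, with a plain balanced BST standing in for the B-tree (a sketch only, not real B-tree code). One thing I noticed while writing it: the in-order traversals already come out sorted, and since every element of $T_2$ exceeds every element of $T_1$ the concatenation is sorted for free, so rebuilding via midpoints takes $O(n)$ rather than $O(n \log n)$:

class Node:
    def __init__(self, key, left=None, right=None):
        self.key, self.left, self.right = key, left, right

def in_order(t):
    # Yield the keys of tree t in ascending order.
    if t is not None:
        yield from in_order(t.left)
        yield t.key
        yield from in_order(t.right)

def build_balanced(keys, lo=0, hi=None):
    # Build a balanced BST from a sorted list in O(n) by always
    # taking the middle element as the root.
    if hi is None:
        hi = len(keys)
    if lo >= hi:
        return None
    mid = (lo + hi) // 2
    return Node(keys[mid],
                build_balanced(keys, lo, mid),
                build_balanced(keys, mid + 1, hi))

def naive_union(t1, t2):
    # Every key of t2 is larger than every key of t1, so the two
    # traversals concatenate into one sorted list: O(n) overall.
    return build_balanced(list(in_order(t1)) + list(in_order(t2)))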

How To Merge Together Two Circular Doubly-Linked Lists In O(1) Time?

I’m implementing a Fibonacci heap, where the lists are stored as circular doubly-linked lists. What I’m trying to do is this: given a pointer to an arbitrary node in one linked list and a pointer to an arbitrary node in another, merge the two lists together in O(1) time. I tried drawing this out, but I couldn’t seem to figure out how to do it. Here is my first idea, in pseudocode:

union(Node one, Node two) {
    if one == nil or two == nil
        return

    p1 = one
    p2 = one.right
    p3 = two
    p4 = two.right

    p1.right = p4
    p4.left = p1
    p2.right = p3
    p3.left = p2
}

Each Node has left and right attributes, which store the nodes to its left and right, respectively. However, this didn’t work. Can anyone figure out how to merge together two linked lists?
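Edit: here is the same idea as a runnable Python sketch. While transcribing it I think I can see the problem: my last two pointer assignments run in the wrong direction. With them flipped (two.right = p2 and p2.left = two), a traversal does visit every node in one circle, though I’d appreciate confirmation that this is the right fix:

class Node:
    def __init__(self, value):
        self.value = value
        self.left = self    # a lone node is its own circular list
        self.right = self

def union(one, two):
    # Splice two circular doubly-linked lists in O(1), given one
    # arbitrary node from each. Afterwards traversal runs
    # ... -> one -> two.right -> ... -> two -> one.right -> ...
    if one is None or two is None:
        return
    p2 = one.right
    p4 = two.right
    one.right = p4
    p4.left = one
    two.right = p2   # flipped relative to my pseudocode above
    p2.left = two    # flipped relative to my pseudocode above

def from_values(values):
    # Helper: build a circular list by inserting each new value
    # just to the left of the head.
    head = Node(values[0])
    for v in values[1:]:
        n = Node(v)
        n.left, n.right = head.left, head
        head.left.right = n
        head.left = n
    return head

a = from_values([1, 2, 3])
b = from_values([4, 5, 6])
union(a, b)
node, seen = a, []
for _ in range(6):
    seen.append(node.value)
    node = node.right
print(seen)  # [1, 5, 6, 4, 2, 3] -- all six nodes in one circle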

How can I merge an array of arrays in the $project stage?

I am merging 2 collections into one.

mother:

{
  _id: ObjectId(),
  address: "something"
}

child:

{
  _id: ObjectId(),
  name: "something",
  contacts: [{
    "phone": "+144543823..."
  }, {
    ...
  }]
}

Now, when I merge the 2 collections into one:

{
  $lookup: {
    from: 'contacts',
    localField: 'address',
    foreignField: 'name',
    as: 'contacts'
  }
}

Now I have results similar to the below payload:

{
  _id: ...,
  address: ...,
  contacts: [{
    _id: ...,
    name: ...,
    contacts: [{
      "phone": "+144543823..."
    }]
  }]
}

Now in my $project stage I get the phones:

$project: {
  phones: "$contacts.contacts.phone"
}

The output is as follows:

phones: [["+144543823..."], ["+48..."], ["..."]]
name: ...
....

How can I get a simple, flat array of phones? Using $unwind would make the intermediate result very big, and then I would need a $group stage, if I’m not mistaken. Is there an efficient way to do this? If not, how can I do it with $unwind without losing other fields like name, address, …?
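One approach I’ve been considering is flattening the array of arrays inside the $project stage itself, using $reduce with $concatArrays. A sketch in pymongo (the connection details are placeholders, and I haven’t verified this end to end):

from pymongo import MongoClient

db = MongoClient("mongodb://localhost:27017")["mydb"]  # placeholder

pipeline = [
    {"$lookup": {
        "from": "contacts",
        "localField": "address",
        "foreignField": "name",
        "as": "contacts",
    }},
    {"$project": {
        "address": 1,
        # "$contacts.contacts.phone" resolves to an array of arrays;
        # $reduce concatenates the inner arrays into one flat list.
        "phones": {"$reduce": {
            "input": "$contacts.contacts.phone",
            "initialValue": [],
            "in": {"$concatArrays": ["$$value", "$$this"]},
        }},
    }},
]

for doc in db.mother.aggregate(pipeline):
    print(doc)  # e.g. phones: ["+144543823...", "+48...", ...]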

What happens when a druid picks up an object which is a polymorphed creature and then tries to merge it into their Wild Shape?

One of the interesting and useful characteristics of a druid’s Wild Shape is that they can merge their equipment into their beast form.

The scenario is this:

A PC from the party is turned into an object using True Polymorph, let’s say a ring or a coin. The druid in the party picks up the object and puts it into a pouch. Then, they use their Wild Shape feature with the option of merging all equipment into themselves.

Does this work – and, if it works, what happens when the duration of the True Polymorph ends after an hour, if the druid is still in beast form?

Thank you for your help with this.

Time Complexity of a Naive Solution to Merge K Sorted Arrays

There is a LeetCode question about merging k sorted arrays. I would like to be able to explain the time complexity of the following naive solution:

function mergexsSortedArrays(xs) {
  if (xs === null || xs.length === 0) {
    return [];
  }

  let l1 = xs[0];

  for (let i = 1; i < xs.length; i++) {
    let l2 = xs[i];
    l1 = merge(l1, l2);
  }

  return l1;
}

/* This is simply for completeness; the relevant code is above */
function merge(l1, l2) {
  const ans = [];

  let l1HeadIdx = 0;
  let l2HeadIdx = 0;

  while (l1HeadIdx < l1.length && l2HeadIdx < l2.length) {
    if (l1[l1HeadIdx] < l2[l2HeadIdx]) {
      ans.push(l1[l1HeadIdx]);
      l1HeadIdx++;
    } else {
      ans.push(l2[l2HeadIdx]);
      l2HeadIdx++;
    }
  }

  while (l1HeadIdx < l1.length) {
    ans.push(l1[l1HeadIdx]);
    l1HeadIdx++;
  }

  while (l2HeadIdx < l2.length) {
    ans.push(l2[l2HeadIdx]);
    l2HeadIdx++;
  }

  return ans;
}

Let’s say that k is the number of arrays in the input. To simplify the math, we will assume that each sorted array has length n.

Within the for loop, we run the merge algorithm. On the first iteration, l1 has length n and l2 has length n, so merge does 2n work. On the second iteration, l1 has length 2n and l2 has length n, so merge does 3n work. As a result, the work done across the for loop is 2n + 3n + 4n + … + kn. Padding this with an extra n term, it is at most n + 2n + 3n + … + kn, which can be re-written as n * (1 + 2 + 3 + … + k); the inner sum has the closed-form formula k(k + 1)/2, which is O(k^2), and multiplying by n gives a final time complexity of O(n(k^2)).
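Written out as a sum, my claim is:

$\sum_{i=1}^{k-1}(i+1)\,n = n\left(\frac{k(k+1)}{2} - 1\right) = O(nk^{2}).$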

Is this correct? Or have I gone off the rails?

Combining merge sort with insertion sort – Time complexity

I am learning algorithms from the CLRS book on my own, without any help. It has an exercise which combines merge sort ($O(n \log n)$) with insertion sort ($O(n^{2})$). It says that when the sub-arrays in the merge-sorting reach a certain size $k$, it is better to use insertion sort for those sub-arrays instead of merge sort. The reason given is that the constant factors in insertion sort make it fast for small n. Can someone please explain this?

It asks us to show that the $n/k$ sublists, each of length $k$, can be sorted by insertion sort in $O(nk)$ worst-case time. I found from somewhere that the solution for this is $O((n/k) \cdot k^{2}) = O(nk)$. How do we get this part, $O((n/k) \cdot k^{2})$?
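(If I understand the claim: insertion sort on one sublist of length $k$ costs $O(k^{2})$ in the worst case, and there are $n/k$ such sublists, so the total is $(n/k) \cdot O(k^{2}) = O(nk)$.) To make the setup concrete, here is the hybrid as I understand it, as an untested sketch; the helper names and cutoff handling are my own:

def hybrid_merge_sort(a, k):
    # Merge sort that switches to insertion sort once the
    # sub-array size drops to k (or we hit the trivial base case).
    if len(a) <= 1 or len(a) <= k:
        return insertion_sort(a)
    mid = len(a) // 2
    return merge(hybrid_merge_sort(a[:mid], k),
                 hybrid_merge_sort(a[mid:], k))

def insertion_sort(a):
    # O(k^2) worst case on a list of length k, but with small
    # constant factors: one tight inner loop, in place, no allocation.
    for i in range(1, len(a)):
        x = a[i]
        j = i - 1
        while j >= 0 and a[j] > x:
            a[j + 1] = a[j]
            j -= 1
        a[j + 1] = x
    return a

def merge(left, right):
    # Standard O(len(left) + len(right)) two-way merge.
    out, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            out.append(left[i])
            i += 1
        else:
            out.append(right[j])
            j += 1
    out.extend(left[i:])
    out.extend(right[j:])
    return out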

Thanks!

Time complexity of a hybrid merge and selection sort algorithm

I’m trying to analyse the time and space complexity of the following algorithm, which is essentially a hybrid of a merge and selection sort. The algorithm is defined as follows:

def hybrid_merge_selection(L, k = 0):
    N = len(L)
    if N == 1:
        return L
    elif N <= k:
        # selection_sort is assumed to be a standard O(N^2) selection sort
        return selection_sort(L)
    else:
        # pass k down so the cutoff applies at every level of the recursion
        left_sublist = hybrid_merge_selection(L[:N // 2], k)
        right_sublist = hybrid_merge_selection(L[N // 2:], k)
        # merge is assumed to be the standard O(N) two-way merge
        return merge(left_sublist, right_sublist)

My thinking is that the worst-case scenario occurs when $k$ is extremely large, which means that the selection sort algorithm is always applied, resulting in a time complexity of $O(n^{2})$, where $n$ is the length of the list; and the best-case scenario occurs when $k == 0$, so only the merge sort logic is applied, resulting in a time complexity of $O(n\log_{2}n)$. However, could somebody give me a more detailed and mathematical explanation of the time complexity, for all scenarios, namely worst, best, and average?
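To make the question concrete, here is the accounting I have so far for a general cutoff $k$, treating selection sort as $\Theta(k^{2})$ on a sublist of length at most $k$: the recursion bottoms out in roughly $n/k$ sublists, costing $(n/k) \cdot \Theta(k^{2}) = \Theta(nk)$ in total, plus about $\log_{2}(n/k)$ levels of merging at $\Theta(n)$ each, giving

$T(n) = \Theta\left(nk + n\log_{2}\frac{n}{k}\right),$

which reduces to $\Theta(n\log_{2}n)$ for constant $k$ and to $\Theta(n^{2})$ when $k = n$. Is this the right way to frame it?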