Rewrite existing procedure without merge as Oracle 12.1 doesn’t support it

I have a procedure that pushes data into a table using MERGE logic. The procedure takes a string passed at runtime and splits it into rows.

I need to replace the MERGE part with some other method, perhaps a cursor FOR loop inside the procedure, that takes the same input string and inserts/updates the destination table accordingly, because our Oracle 12.1 installation does not support the MERGE and we cannot get it upgraded due to constraints.

So everything in the proc would stay the same, except it would be written without MERGE.

The details are inside the fiddle with table and sample data.

MySQL: transform multiple rows into a single row in the same table (reduce by merge / group by)

Hi, I want to reduce my table in place (group and sum some columns, and delete the merged rows).

Source table "table_test" :

+----+-----+-------+----------------+
| id | qty | user  | isNeedGrouping |
+----+-----+-------+----------------+
|  1 |   2 | userA |              1 | <- row to group + user A
|  2 |   3 | userB |              0 |
|  3 |   5 | userA |              0 |
|  4 |  30 | userA |              1 | <- row to group + user A
|  5 |   8 | userA |              1 | <- row to group + user A
|  6 |   6 | userA |              0 |
+----+-----+-------+----------------+

Wanted table (obtained by):

DROP TABLE table_test_grouped;
SET @increment = 2;
CREATE TABLE table_test_grouped
SELECT id, SUM(qty) AS qty, user, isNeedGrouping
FROM table_test
GROUP BY user, IF(isNeedGrouping = 1, isNeedGrouping, @increment := @increment + 1);
SELECT * FROM table_test_grouped;

+----+------+-------+----------------+
| id | qty  | user  | isNeedGrouping |
+----+------+-------+----------------+
|  1 |   40 | userA |              1 | <- rows grouped + user A
|  3 |    5 | userA |              0 |
|  6 |    6 | userA |              0 |
|  2 |    3 | userB |              0 |
+----+------+-------+----------------+

Problem: I can use another (temporary) table, but I want to update the initial table in place, in order to:

  • group by user and sum qty
  • replace/merge the rows of each group into a single row

The result must be a reduced version of the initial table, grouped by user, with qty summed.

This is a minimal example; I don’t want to fully replace the initial table from table_test_grouped, because in my real case I have another column (isNeedGrouping) that decides whether a row gets grouped or not.

For rows flagged with isNeedGrouping = 1, I need grouping. For this example, one way to do it sequentially is:

CREATE TABLE table_test_grouped
SELECT id, SUM(qty) AS qty, user, isNeedGrouping
FROM table_test
WHERE isNeedGrouping = 1
GROUP BY user;

DELETE FROM table_test WHERE isNeedGrouping = 1;

INSERT INTO table_test SELECT * FROM table_test_grouped;

Any suggestion for a simpler way?

Dynamically merge different arrays in javascript

I want to combine two arrays (ranking and matches) that have common properties:

var ranking = [{
    def: "0.58",
    league: "Scottish Premiership",
    name: "Celtic",
    off: "3.33",
    grank: "3",
    tform: "96.33",
}, {
    def: "2.52",
    league: "Scottish Premiership",
    name: "Dundee",
    off: "1.28",
    grank: "302",
    tform: "27.51",
}];

var matches = [{
    date: "2010-04-22",
    league: "Scottish Premiership",
    home: "0.0676",
    away: "0.8",
    draw: "0.1324",
    goals1: "3",
    goals2: "1",
    tform1: "96.33",
    tform2: "27.51",
    team1: "Celtic",
    team2: "Dundee",
}];

Expected output looks like this:

[{
    date: "2010-04-22",
    league: "Scottish Premiership",
    home: "0.0676",
    away: "0.8",
    draw: "0.1324",
    goals1: "3",
    goals2: "1",
    tform1: "96.33",
    tform2: "27.51",
    def1: "0.58",
    def2: "2.52",
    off1: "3.33",
    off2: "1.28",
    grank1: "3",
    grank2: "302",
    team1: "Celtic",
    team2: "Dundee",
}]

To merge the arrays, I used Lodash’s _.merge function:

var result = _.merge(ranking, matches); 

The output merged some of the objects but omitted others, instead of pairing each match with the corresponding team rankings.

I’d appreciate help and insight in achieving this task; any client-side JavaScript solution is fine.
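To illustrate the pairing I’m after, here is a plain-JavaScript sketch (the helper name `byName` is mine, and I’m assuming `team1`/`team2` always match a `name` in `ranking`) that indexes the rankings by team name and copies the `def`/`off`/`grank` fields onto each match:

```javascript
var ranking = [
  { def: "0.58", league: "Scottish Premiership", name: "Celtic", off: "3.33", grank: "3", tform: "96.33" },
  { def: "2.52", league: "Scottish Premiership", name: "Dundee", off: "1.28", grank: "302", tform: "27.51" }
];

var matches = [
  { date: "2010-04-22", league: "Scottish Premiership", home: "0.0676", away: "0.8",
    draw: "0.1324", goals1: "3", goals2: "1", tform1: "96.33", tform2: "27.51",
    team1: "Celtic", team2: "Dundee" }
];

// Index the rankings by team name for O(1) lookup.
var byName = {};
ranking.forEach(function (r) { byName[r.name] = r; });

// For each match, copy the ranking fields of team1/team2 into suffixed keys.
var result = matches.map(function (m) {
  var r1 = byName[m.team1] || {};
  var r2 = byName[m.team2] || {};
  return Object.assign({}, m, {
    def1: r1.def, def2: r2.def,
    off1: r1.off, off2: r2.off,
    grank1: r1.grank, grank2: r2.grank
  });
});

console.log(result[0].def1);   // "0.58"
console.log(result[0].grank2); // "302"
```

The key difference from `_.merge(ranking, matches)` is that `_.merge` pairs elements by array index, while this pairs them by team name.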

How do we sort the chunk in the first pass of external merge sort?

Referring to the 9th page of a slide: when we use multi-pass multi-way external merge sort on a file with $N$ pages using $B$ buffer pages, in "pass 0" we read a chunk of $B$ pages into all the buffers, sort the chunk, and write it back to disk, repeating this to produce $\lceil N/B \rceil$ sorted chunks. In later passes, we use $(B-1)$ buffers for input and the last buffer for output, merging $(B-1)$ sorted chunks at a time in each pass.

What the slide does not mention at all: how can the whole chunk (in pass 0) be sorted when all buffers are being used for input?
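If I had to guess, the chunk is sorted in place inside the very buffers that hold it, so pass 0 needs no separate output buffer. A toy model of that guess (page size, buffer count, and data are all made up):

```javascript
// Toy model of pass 0: the "disk" holds N pages of PAGE_SIZE values each,
// and we have B buffer pages. Each chunk of B pages is read into the
// buffers, sorted in place (the sort happens inside the same buffers that
// held the input), and written back as one sorted run.
var PAGE_SIZE = 4;
var B = 3; // number of buffer pages

var disk = [9, 1, 7, 3, 8, 2, 6, 5, 4, 0, 11, 10, 15, 13, 12, 14]; // N = 4 pages

var runs = [];
for (var i = 0; i < disk.length; i += B * PAGE_SIZE) {
  var chunk = disk.slice(i, i + B * PAGE_SIZE);  // read up to B pages into the buffers
  chunk.sort(function (a, b) { return a - b; }); // in-place sort within the buffers
  runs.push(chunk);                              // write the sorted run back to disk
}

console.log(runs.length); // ceil(N/B) sorted runs
```

But the slide never says this explicitly, hence my question.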

Error in pivot selection algorithm for merge phase [Sorting]

In the paper Comparison Based Sorting for Systems with Multiple GPUs, the authors describe the selection of a pivot element with respect to the partition on the first GPU (and its mirrored counterpart on the other GPU-partition). That pivot element is crucial for being able to merge the two partitions, given that we have already sorted them on each GPU locally.

However, the pseudo-code for that pivot selection, as shown in the paper, doesn’t seem to tell the whole story: when implementing it 1:1, the selected pivot element is off by a few elements in some cases, depending on the input, i.e. the number of elements to sort and therefore the number of elements per partition (the chunk of data that each GPU gets).

To be more specific, the problem as I understand it is that the while loop exits too early because the stride shrinks to zero before the correct pivot element has been found. In general the approach is binary-search-like: the range in which the pivot can fall is halved in each iteration.

Can anyone spot what needs to be done here?

Here is a C++ implementation of the pivot selection:

size_t SelectPivot(const std::vector<int> &a, const std::vector<int> &b) {
  size_t pivot = a.size() / 2;
  size_t stride = pivot / 2;
  while (stride > 0) {
    if (a[a.size() - pivot - 1] < b[pivot]) {
      if (a[a.size() - pivot - 2] < b[pivot + 1] &&
          a[a.size() - pivot] > b[pivot - 1]) {
        return pivot;
      } else {
        pivot = pivot - stride;
      }
    } else {
      pivot = pivot + stride;
    }
    stride = stride / 2;
  }
  return pivot;
}

P.S.: I tried ceiling the stride so as not to skip iterations when the stride is odd, but that moved the indices out of the bounds of the arrays, and even after handling those cases by clipping to the array bounds, the pivot was not always correct.

Attempt to merge 2 small fresh projects leads to freeze of GSA SER

Hello Sven,
I have tried several times to make all projects Inactive; GSA SER is Stopped and no threads are shown on the status bar. I have also reset the Submitted records, so the projects keep only their Options and Verified lists (105 and 130), to make the projects to be merged smaller. No matter what, every time I try the merge, GSA SER freezes. At the moment of trying, the only other things running are GSA SEO Indexer, CapMonster, GSA Proxy Scraper and the Dropbox application that feeds GSA SER with fresh lists.
No other apps are running beyond those mentioned. I have set the thread count in all GSA apps to 20, but even that had no positive impact.

Is there anything else I can do to stop GSA SER from freezing every time? Each freeze forces me to kill it in Task Manager and start it again.

My HW config is following: Intel i3-7130U @2,70GHz, 12GB RAM DDR4, 1TB M2 NvMe, 500Gbps WAN
System resources are following: 12%CPU, 35%RAM, 0%HDD, 0%LAN. 
My OS is MS Windows 10 PRO 64-bit. The machine is completely dedicated just for purpose of link-building.
Thanks for your answer.


Can a non-deterministic machine merge its branches?

Does an NDTM have the power to combine computational branches, i.e. can a result from branch A be used in the next step of the computation along branch B? Can branches use each other’s results, diagrammatically ‘merging’?


Branch i arrives at the number b after n steps; branch j arrives at the number c after 2n steps. After both have arrived at their respective values (we have waited 2n steps), I want the machine to multiply b*c (say 3*5, the results from the different branches). Can I do this?

Merge Multisites with Shared Network Media Library

So I have a multisite setup which no longer needs to be a multisite, but I’m left with a bit of a mess, since I used the Network Media Library plugin to host images for all sites on the network. I’ll try to break it down:

  • started out with WP multisite
  • created two sites on the network
  • installed Network Media Library
  • site #1 hosted the media library
  • both sites hosted posts
  • (about a year and a lot of posting goes by)
  • pulled site #1 out of multisite to be hosted independently
  • left with multisite running site #2 but still pulling its media from site #1

What I want to do now is combine site #2 which contains all my posts with site #1 which contains only media. My concerns are:

  • if I merge tables there will be ID conflicts (some posts will have same ID as attachments)
  • if I use import function to bring images into posts site then images will be given new IDs and post thumbnail relations will all break
  • if I use import function to bring posts into images site then post IDs would change which can’t happen because we use the ID in the post URL

The best idea I have so far is to somehow…

  • use the WordPress import function to import all the attachments into the posts site
  • log old and new IDs into a new table in the DB as the process works
  • then iterate over all the posts switching old for new IDs in the post_meta _thumbnail_id fields
  • ideally then be left with one site which contains all the posts and attachments so I can reduce the install down to regular non-multisite.

There are tens of thousands of posts across these combined sites, so performing these operations is no small feat, and I’m really not sure where to start. I wonder if anyone has experience with a process like this, or ideas for alternative solutions.

Thanks for reading.

Merge $k$ sorted arrays without heaps/AVL trees in $O(n\log k)$?

Given $k$ arrays, each sorted in ascending order, is it possible to merge all $k$ arrays into a single sorted array in $O(n\log k)$ time, where $n$ denotes the total number of elements combined?

The question is clearly aiming towards a min-heap/AVL tree solution, which can in fact achieve $O(n\log k)$ time complexity.

However, I’m wondering whether there exists a different approach, like a merge variant, which can achieve the same result.

The closest I’ve seen is to concatenate all the arrays into one array, disregarding their given ascending order, and then run a comparison-based sort, which takes $O(n\log n)$, not quite $O(n\log k)$.

Is there an algorithm variant which can achieve this result? Or a different data-structure?
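One idea I had (function names are mine) is to repeatedly merge pairs of arrays: the number of arrays halves each round, so there are about $\log_2 k$ rounds, and each round does $O(n)$ work merging, which would give $O(n\log k)$ overall if I count correctly. A sketch:

```javascript
// Merge two sorted arrays in linear time (standard two-pointer merge).
function mergeTwo(a, b) {
  var out = [], i = 0, j = 0;
  while (i < a.length && j < b.length) {
    out.push(a[i] <= b[j] ? a[i++] : b[j++]);
  }
  while (i < a.length) out.push(a[i++]);
  while (j < b.length) out.push(b[j++]);
  return out;
}

// Repeatedly merge pairs: k arrays -> ceil(k/2) arrays -> ... -> 1 array.
// Each round processes all n elements, and there are ceil(log2(k)) rounds,
// so the total work is O(n log k).
function mergeK(arrays) {
  if (arrays.length === 0) return [];
  while (arrays.length > 1) {
    var next = [];
    for (var i = 0; i + 1 < arrays.length; i += 2) {
      next.push(mergeTwo(arrays[i], arrays[i + 1]));
    }
    if (arrays.length % 2 === 1) next.push(arrays[arrays.length - 1]);
    arrays = next;
  }
  return arrays[0];
}

console.log(mergeK([[1, 4, 7], [2, 5], [3, 6, 8], [0, 9]]));
// [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9 ]
```

If that counting holds, this pairwise scheme avoids heaps and AVL trees entirely, but I’m not sure it is the intended kind of answer.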