Error in pivot selection algorithm for merge phase [Sorting]

In the paper Comparison Based Sorting for Systems with Multiple GPUs, the authors describe the selection of a pivot element with respect to the partition on the first GPU (and its mirrored counterpart on the other GPU-partition). That pivot element is crucial for being able to merge the two partitions, given that we have already sorted them on each GPU locally.

However, the pseudo-code for that pivot selection, as shown in the paper, doesn't seem to tell the whole story: when implementing it 1:1, the selected pivot element is off by a few elements in some cases, depending on the input, i.e. the number of elements to sort and therefore the number of elements per partition (the chunk of data that each GPU gets).

To be more specific, the problem is, to my understanding, that the while loop exits too early because the stride is reduced to zero before the correct pivot element has been found. In general, the approach is binary-search-like: the range in which the pivot can fall is halved on each iteration.

Can anyone spot what needs to be done here?

Here is a C++ implementation of the pivot selection:

#include <cstddef>
#include <vector>

size_t SelectPivot(const std::vector<int> &a, const std::vector<int> &b) {
  size_t pivot = a.size() / 2;
  size_t stride = pivot / 2;
  while (stride > 0) {
    if (a[a.size() - pivot - 1] < b[pivot]) {
      // The last `pivot` elements of a could be exchanged with the first
      // `pivot` elements of b; check whether the boundary is already exact.
      if (a[a.size() - pivot - 2] < b[pivot + 1] &&
          a[a.size() - pivot] > b[pivot - 1]) {
        return pivot;
      } else {
        pivot = pivot - stride;  // candidate is feasible, try a smaller pivot
      }
    } else {
      pivot = pivot + stride;    // candidate is too small, move the pivot up
    }
    stride = stride / 2;         // halve the stride, binary-search style
  }
  return pivot;
}

P.S.: I tried rounding the stride up when it is odd, so that iterations are not skipped, but this introduced out-of-bounds accesses, and even after handling those cases by clamping to the array bounds, the pivot was not always correct.
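For reference, this is a bound-based variant I am experimenting with instead of the halving stride. It is only a sketch, and it assumes the pivot is defined as the smallest count p for which a[n - p - 1] <= b[p] (i.e. the number of elements to exchange between the two partitions), which may not match the paper's definition exactly:

#include <cstddef>
#include <vector>

// Sketch: pivot selection via an explicit [lo, hi) binary search instead of a
// halving stride. Assumes a and b are sorted ascending and have the same size
// n, and that the pivot is the smallest p in [0, n] with a[n - p - 1] <= b[p]
// (p == n meaning "exchange everything").
size_t SelectPivotBounds(const std::vector<int> &a, const std::vector<int> &b) {
  size_t lo = 0;
  size_t hi = a.size();
  while (lo < hi) {
    size_t p = lo + (hi - lo) / 2;   // candidate pivot, always < a.size()
    if (a[a.size() - p - 1] <= b[p])
      hi = p;                        // feasible: a smaller pivot may also work
    else
      lo = p + 1;                    // infeasible: the pivot must be larger
  }
  return lo;
}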

Optimal Selection of Non-Overlapping Jobs

I'm trying to find out what family of problems the following belongs to, as well as an approach for it:

I have a set of tasks T = [t1, …, tn], each of which has a corresponding reward ri. Each task takes place during a fixed interval, e.g. task 1 runs from time 1 to 4, task 2 from 2 to 5, and task 3 from 9 to 15. This means I would have to pick either task 1 or task 2, depending on which is more valuable, and then task 3, which conflicts with neither of the previous two.

I'd like this to scale to n tasks, and also to m "CPUs", where more than one task can be executed in parallel. This reminds me of the knapsack problem, but maybe an interval graph would provide a better approach?
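For the single-machine case (m = 1), this is roughly what I have in mind so far: a standard weighted-interval-scheduling dynamic program (only a sketch; the names and the Task struct are mine):

#include <algorithm>
#include <vector>

struct Task { int start, end, reward; };   // the task occupies [start, end)

// Sketch of the m = 1 case: classic weighted interval scheduling.
// dp[i] = best total reward using only the first i tasks, sorted by end time.
int MaxReward(std::vector<Task> tasks) {
  std::sort(tasks.begin(), tasks.end(),
            [](const Task &x, const Task &y) { return x.end < y.end; });
  const int n = static_cast<int>(tasks.size());
  std::vector<int> dp(n + 1, 0);
  for (int i = 1; i <= n; ++i) {
    // Latest task (1-based index j) that ends no later than task i starts;
    // a binary search here would bring the whole thing down to O(n log n).
    int j = i - 1;
    while (j > 0 && tasks[j - 1].end > tasks[i - 1].start) --j;
    dp[i] = std::max(dp[i - 1],                     // skip task i
                     dp[j] + tasks[i - 1].reward);  // take task i
  }
  return dp[n];
}

The m > 1 generalisation is the part I am unsure about.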

Any suggestions on how to approach this problem, or any relevant references?

Is dismantling a heap (taking down a heap) also O(n), like building a heap? If so, can the selection problem be solved by this method in O(n) time?

If we can build a heap in O(n) time, can we also take a heap down in O(n) time (by repeatedly applying delete-max)?

Intuitively, it may feel like we can, because taking a heap down is like the reverse of building it up.

If building a heap is O(n) in the worst case, including the case where the numbers are all added in ascending order, then taking the heap down is exactly the "reverse in time" operation of that build, and that reversed operation is O(n); but this may not be the worst case of taking a heap down.

If taking down a heap really is O(n), can't the selection problem be solved by building a heap and then taking it down (k - 1) times, to find the kth largest number?
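Concretely, the selection method I have in mind looks like this (just a sketch using the standard library heap functions, assuming 1 <= k <= values.size()):

#include <algorithm>
#include <cstddef>
#include <vector>

// Sketch: build a max-heap in O(n), then delete-max (k - 1) times so that the
// k-th largest element ends up at the root. Each delete-max is O(log n).
int KthLargest(std::vector<int> values, std::size_t k) {
  std::make_heap(values.begin(), values.end());       // O(n) build
  for (std::size_t i = 0; i + 1 < k; ++i)
    std::pop_heap(values.begin(), values.end() - i);  // delete-max, heap shrinks by one
  return values.front();                              // root of the remaining heap
}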

Wi-Fi standard selection algorithm for wireless communication

How do the endpoints select the Wi-Fi standard?

Assume an 802.11 access point supporting 2.4 GHz b/g/n and a client compatible with these same three technologies (2.4 GHz b/g/n).

How does the client select the standard for the communication?

Is the standard ever upgraded or downgraded during a session depending on conditions (signal deterioration, environmental perturbations, etc.)?

Finally, on a Linux or Windows host, is there a way to find out which standard is currently being used by the NIC?

Time complexity of a hybrid merge and selection sort algorithm

I’m trying to analyse the time and space complexity of the following algorithm, which is essentially a hybrid of a merge and selection sort. The algorithm is defined as follows:

def hybrid_merge_selection(L, k = 0):
    N = len(L)
    if N == 1:
        return L
    elif N <= k:
        return selection_sort(L)   # small sublists are handled by selection sort
    else:
        # pass the cutoff k down so the recursive calls keep using it
        left_sublist = hybrid_merge_selection(L[:N // 2], k)
        right_sublist = hybrid_merge_selection(L[N // 2:], k)
        return merge(left_sublist, right_sublist)

My thinking is that the worst-case scenario occurs when $k$ is extremely large, which means that the selection sort algorithm is always applied, resulting in a time complexity of $O(n^{2})$, where $n$ is the length of the list, and the best-case scenario occurs when $k = 0$, so that only the merge sort algorithm is applied, resulting in a time complexity of $O(n\log_{2}n)$. However, could somebody give me a more detailed and mathematical explanation of the time complexity for all scenarios, namely worst, best, and average?
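For what it's worth, the recurrence I have written down so far, with the cutoff $k$ passed down the recursion as in the code above, is

$$
T(n) =
\begin{cases}
\Theta(n^{2}) & \text{if } n \le k,\\
2\,T(n/2) + \Theta(n) & \text{if } n > k,
\end{cases}
$$

and my (possibly non-rigorous) reading of it is that the recursion bottoms out at roughly $n/k$ subproblems of size about $k$, each costing $\Theta(k^{2})$, with about $\log_{2}(n/k)$ merge levels of cost $\Theta(n)$ above them, giving $T(n) = \Theta(nk + n\log(n/k))$; that would match $\Theta(n\log n)$ for constant $k$ and $\Theta(n^{2})$ for $k = \Theta(n)$.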

Selection Sort vs Merge Sort

I recently took a written test for the recruitment of Scientists/Engineers at ISRO (Indian Space Research Organisation) a few days back, and the following question appeared in the test.
Of the following algorithms, which has execution time that is least dependent on initial ordering of the input?

  1. Insertion Sort
  2. Quick Sort
  3. Merge Sort
  4. Selection Sort

Well, if the array is sorted, then in every 2-way merge the left subarray will always be exhausted first and we simply have to copy in the rest of the right subarray. So the number of comparisons will equal the length of the left subarray every time. In selection sort, if the array is already sorted, the number of comparisons will be the same as in the worst case; however, the index of the minimum element will change only after one full pass.

In the worst case, the number of comparisons in a 2-way merge will be (length of left subarray + length of right subarray - 1). In selection sort's worst case, the minimum element's index will keep changing after every comparison.
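To put numbers on it, my understanding is that selection sort performs the same number of comparisons regardless of the input order, while the cost of a single 2-way merge varies between the two extremes described above:

$$
C_{\text{selection}}(n) = \sum_{i=1}^{n-1}(n - i) = \frac{n(n-1)}{2},
\qquad
\min(|L|,\, |R|) \;\le\; C_{\text{merge}}(L, R) \;\le\; |L| + |R| - 1.
$$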

Only one option can be correct. So, what's the best answer?

Does input type="file" selection support URLs (upload from the web)? [html]

I wonder whether I can use the input type="file" option in HTML to upload from a URL (web/FTP, etc.)? Is there an option for that? To explain further, I want to select a zip file by URL and upload it to a website. Which operating systems support this? How can I do it on Linux, Mac, and Windows 10? I am talking about the option described at this link:

<input type="file">: How to Use This HTML Value

Genetic algorithm selection pressure using only selection

Suppose you have a population of N individuals with fitness values 1, 2, …, N (i.e., every individual has a unique fitness value). Suppose you repeatedly apply tournament selection without replacement, with tournament size s = 2, to this population, without doing crossover, mutation, or replacement. In other words, you run a genetic algorithm with selection alone.
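To make the setup concrete, here is a small simulation sketch of exactly this process; my interpretation of "without replacement" is that the population is shuffled and paired off twice per generation, so every individual enters exactly two tournaments and the population size stays at N (assumed even here):

#include <algorithm>
#include <iostream>
#include <numeric>
#include <random>
#include <vector>

// Sketch: selection-only GA, tournament size s = 2, no crossover, mutation or
// replacement. Counts generations until the population is N copies of one value.
int TakeoverGenerations(int n, std::mt19937 &rng) {   // n assumed even
  std::vector<int> population(n);
  std::iota(population.begin(), population.end(), 1); // fitness 1..N, all distinct
  int generations = 0;
  while (*std::min_element(population.begin(), population.end()) !=
         *std::max_element(population.begin(), population.end())) {
    std::vector<int> next;
    next.reserve(n);
    for (int pass = 0; pass < 2; ++pass) {            // two passes: two tournaments each
      std::shuffle(population.begin(), population.end(), rng);
      for (int i = 0; i < n; i += 2)
        next.push_back(std::max(population[i], population[i + 1]));
    }
    population.swap(next);
    ++generations;
  }
  return generations;
}

int main() {
  std::mt19937 rng(42);
  const int runs = 20;
  int total = 0;
  for (int r = 0; r < runs; ++r) total += TakeoverGenerations(1000, rng);
  std::cout << "average generations: " << static_cast<double>(total) / runs << "\n";
}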

After a certain number of generations you will end up with a population consisting of N copies of the same individual. Can you give an estimate of the number of generations needed to achieve that?