How to select and delete all Output cells within current Selection only?

A similar question was already asked here: How to select and delete all Output cells?. But I have a more specific one: I want to delete all Output cells within the currently selected cells only.

For this purpose I want to add an additional menu item Cell>Delete All Output within Selection.

Probably one would have to add something to


Has anybody already made such an improvement, and could they please post it here?

(The version in this case is 12.1, and the path probably has to reflect the version in question. This path is for a Linux machine. On a Windows machine the path begins at a different root, but the part from Wolfram to TextResources will probably look the same, except for the forward slashes being replaced by backward ones; /X/ might be specific to Linux.)

What is the process of index selection?

I have used several databases (relational + NoSQL) as a developer for 3+ years, but I have only a basic idea about core database processes and database administration tasks. My question is about the index selection problem. What I understood from reading several articles is that in some databases the query optimizer can choose the most relevant index(es), while in others a database administrator has the authority to select the index(es) from a list of indexes suggested by the optimizer. But my idea of the index selection process is still vague. Can you give me a descriptive answer on how index selection happens, or recommend a book or an article that covers the process of index selection from A to Z? The key areas I need information on are:

  1. What are the criteria used to decide that an index is the most appropriate one for a query?
  2. Is there a difference between index selection in relational databases and index selection in NoSQL databases?
  3. What role does the query optimizer play in index selection?
  4. If you were to automate the index selection process, what would you most consider when designing solutions or taking new approaches?
  5. Are there any practical problems when it comes to index selection and the performance of the database?
  6. Do I have the freedom to choose different index structures (B-tree, B+-tree, hashing, …) when creating indexes initially, or do I need to stick to one type of index structure?
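As a small, concrete illustration of points 1 and 3, here is a sketch using SQLite from Python (the table, column, and index names are made up for the example): the optimizer picks an index when the WHERE clause filters on an indexed column, and falls back to a full scan otherwise.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE users (id INTEGER, email TEXT, age INTEGER)")
con.execute("CREATE INDEX idx_email ON users (email)")

# Ask the optimizer how it would execute an equality query on the
# indexed column; the plan's detail text names the chosen index.
plan = con.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM users WHERE email = ?",
    ("alice@example.com",),
).fetchall()
print(plan[0][-1])   # e.g. "SEARCH users USING INDEX idx_email (email=?)"

# A predicate on an unindexed column falls back to a full table scan.
plan = con.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM users WHERE age > ?", (30,)
).fetchall()
print(plan[0][-1])   # e.g. "SCAN users"
```

The same idea (inspecting the optimizer's plan with EXPLAIN) applies to most relational databases, though the plan output format differs.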

Error in pivot selection algorithm for merge phase [Sorting]

In the paper Comparison Based Sorting for Systems with Multiple GPUs, the authors describe the selection of a pivot element with respect to the partition on the first GPU (and its mirrored counterpart on the other GPU-partition). That pivot element is crucial for being able to merge the two partitions, given that we have already sorted them on each GPU locally.

However, the pseudo-code for that pivot selection, as shown in the paper, doesn't seem to tell the whole story: when implementing it 1:1, the selected pivot element is off by a few elements in some cases, depending on the input, i.e. the number of elements to sort and therefore the number of elements per partition (the chunk of data that each GPU gets).

To be more specific, the problem is, to my understanding, that the while loop exits too early because the stride is reduced to zero before the correct pivot element has been found. In general, the approach is binary-search-like: the range in which the pivot can fall is halved in each iteration.

Can anyone spot what needs to be done here?

Here is a C++ implementation of the pivot selection:

    size_t SelectPivot(const std::vector<int> &a, const std::vector<int> &b) {
      size_t pivot = a.size() / 2;
      size_t stride = pivot / 2;
      while (stride > 0) {
        if (a[a.size() - pivot - 1] < b[pivot]) {
          if (a[a.size() - pivot - 2] < b[pivot + 1] &&
              a[a.size() - pivot] > b[pivot - 1]) {
            return pivot;
          } else {
            pivot = pivot - stride;
          }
        } else {
          pivot = pivot + stride;
        }
        stride = stride / 2;
      }
      return pivot;
    }

P.S.: I tried ceiling the stride in order to not skip iterations when the stride is odd, but this introduced the issue of moving out of bounds of the array and even after handling those cases by clipping to the array bounds, the pivot was not always correct.
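For debugging, it may help to compare against a brute-force oracle. Assuming the pivot is meant to be the number of elements each GPU has to exchange, i.e. the count of b's elements that belong among the n smallest of the combined input (this interpretation is my assumption, not taken from the paper), a reference in Python could look like this:

```python
def reference_pivot(a, b):
    """Brute-force pivot: how many elements of b belong among the
    len(a) smallest elements of a + b (ties resolved in favour of a)."""
    n = len(a)
    tagged = [(x, 0) for x in a] + [(x, 1) for x in b]
    tagged.sort(key=lambda t: t[0])          # stable sort: a wins ties
    return sum(src for _, src in tagged[:n])

# Sanity checks on sorted inputs:
print(reference_pivot([1, 2, 3, 4], [5, 6, 7, 8]))  # 0: nothing to exchange
print(reference_pivot([5, 6, 7, 8], [1, 2, 3, 4]))  # 4: exchange everything
print(reference_pivot([1, 3, 5, 7], [2, 4, 6, 8]))  # 2
```

Running the binary-search version against this oracle over many random inputs should pinpoint exactly which sizes make the stride hit zero too early.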

Optimal Selection of Non-Overlapping Jobs

I’m trying to find what the family of problem is – as well as an approach – for the following:

I have a set of tasks T = [t1, …, tn] to do, each of which has a corresponding reward ri. Each task takes place during a fixed interval, e.g. task 1 runs from times 1-4, task 2 from 2-5, and task 3 from 9-15. This means that I would have to pick either task 1 or task 2, depending on which is more valuable, and then task 3, which does not conflict with either of the previous two.

I'd like this to scale to n tasks, and also to m "CPUs", where more than one task can be executed in parallel. This reminds me of the knapsack problem, but maybe an interval graph would provide a better approach?

Any suggestions on how to approach this problem, or any relevant references?
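For a single machine this is the classic weighted interval scheduling problem, solvable in O(n log n) by dynamic programming over tasks sorted by finish time plus binary search; the m-machine version is a generalization (often handled via min-cost flow or ILP). A sketch of the single-machine DP, treating intervals as half-open [start, end):

```python
import bisect

def max_reward(tasks):
    """tasks: list of (start, end, reward) with half-open intervals.
    Returns the maximum total reward of pairwise non-overlapping tasks."""
    tasks = sorted(tasks, key=lambda t: t[1])    # sort by finish time
    ends = [t[1] for t in tasks]
    dp = [0] * (len(tasks) + 1)                  # dp[i]: best using first i tasks
    for i, (s, e, r) in enumerate(tasks, 1):
        # Rightmost earlier task finishing no later than this one starts.
        j = bisect.bisect_right(ends, s, 0, i - 1)
        dp[i] = max(dp[i - 1], dp[j] + r)        # skip task i, or take it
    return dp[-1]

# The example from the question, with made-up rewards 5, 6, 3:
print(max_reward([(1, 4, 5), (2, 5, 6), (9, 15, 3)]))  # 9 (task 2 + task 3)
```

The rewards in the example are invented for illustration; the DP itself is the standard textbook recurrence.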

Is destructuring a heap (taking down a heap) also O(n) like building a heap? If so, can the selection problem be solved by this method in O(n) time?

If we can build up a heap in O(n) time, can we also take a heap down in O(n) (by calling delete-max repeatedly)?

Intuitively, it may feel like we can, because it is like the reverse of building it up.

If building a heap is O(n) in the worst case, including when the numbers are all added in ascending order, then taking the heap down is exactly the "reverse in time" operation, so it would also be O(n); but this may not be the worst case of taking it down.

If taking down a heap really is O(n), couldn't the selection problem be solved by building a heap and then taking it down (k − 1) times, to find the kth largest number?
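For context: n delete-max operations cost O(n log n) in total, not O(n), since each deletion sifts down up to log n levels, and a full teardown would yield a sorted sequence, which no comparison-based method can do in o(n log n). Selection, however, needs only k − 1 deletions, for O(n + k log n) overall. A sketch of that approach with Python's heapq (a min-heap, so values are negated to simulate a max-heap):

```python
import heapq

def kth_largest(xs, k):
    # Build a max-heap in O(n) by negating values (heapq is a min-heap).
    heap = [-x for x in xs]
    heapq.heapify(heap)
    # Remove the maximum k-1 times: O(k log n) in total.
    for _ in range(k - 1):
        heapq.heappop(heap)
    return -heap[0]    # the kth largest is now at the root

print(kth_largest([3, 1, 4, 1, 5, 9, 2, 6], 2))  # 6
```

For small k this is effectively linear; only when k grows to Θ(n) does the log factor dominate.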

Wi-Fi standard selection algorithm for wireless communication

How do the endpoints select the Wi-Fi standard?

Assume an 802.11 access point supporting 2.4 GHz b/g/n and a client compatible with all three technologies, 2.4 GHz b/g/n.

How does the client select the standard for the communication?

Is there any upgrading/downgrading of the standard during a session depending on conditions (signal deterioration, environmental perturbations, etc.)?

Finally, on a Linux or Windows host, is there a way to find out which standard is currently being used by the NIC?

Time complexity of a hybrid merge and selection sort algorithm

I’m trying to analyse the time and space complexity of the following algorithm, which is essentially a hybrid of a merge and selection sort. The algorithm is defined as follows:

    def hybrid_merge_selection(L, k = 0):
        N = len(L)
        if N == 1:
            return L
        elif N <= k:
            return selection_sort(L)
        else:
            left_sublist = hybrid_merge_selection(L[:N // 2], k)
            right_sublist = hybrid_merge_selection(L[N // 2:], k)
            return merge(left_sublist, right_sublist)
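The snippet assumes `selection_sort` and `merge` helpers; for anyone who wants to run it, minimal versions might look like this (these are my own fill-ins, not part of the original algorithm):

```python
def selection_sort(L):
    # Standard selection sort on a copy: O(n^2) comparisons.
    L = list(L)
    for i in range(len(L)):
        m = min(range(i, len(L)), key=L.__getitem__)
        L[i], L[m] = L[m], L[i]
    return L

def merge(left, right):
    # Standard two-way merge of sorted lists: O(len(left) + len(right)).
    out, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            out.append(left[i]); i += 1
        else:
            out.append(right[j]); j += 1
    out += left[i:] + right[j:]
    return out
```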

My thinking is that the worst-case scenario occurs when $k$ is extremely large, which means that the selection sort algorithm is always applied, resulting in a time complexity of $O(n^{2})$, where $n$ is the length of the list; and the best-case scenario occurs when $k = 0$, so the merge sort algorithm alone is applied, resulting in a time complexity of $O(n\log_{2}n)$. However, could somebody give me a more detailed and mathematical explanation of the time complexity, for all scenarios, namely worst, best, and average?
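As a starting point for a precise analysis (a sketch, assuming the recursion always splits evenly): the recursion stops at subarrays of size at most $k$, so there are roughly $n/k$ base calls costing $\Theta(k^2)$ each, plus $\Theta(\log(n/k))$ levels of merging at $\Theta(n)$ per level:

```latex
T(n) =
\begin{cases}
  \Theta(k^2) & n \le k \\
  2\,T(n/2) + \Theta(n) & n > k
\end{cases}
\quad\Longrightarrow\quad
T(n) = \Theta\!\left(nk + n \log \tfrac{n}{k}\right)
```

Setting $k \ge n$ collapses this to $\Theta(n^2)$ (pure selection sort), and $k \le 1$ to $\Theta(n \log n)$ (pure merge sort). Since selection sort performs the same number of comparisons in best, worst, and average case, the hybrid's average case matches its worst case for any fixed $k$.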