Faster merge with less auxiliary space?

In Algorithms fourth edition by Robert Sedgewick and Kevin Wayne, the exercise 2.2.10 states that:

Implement a version of merge() that copies the second half of a[] to aux[] in decreasing order and then does the merge back to a[]. This change allows you to remove the code to test that each of the halves has been exhausted from the inner loop.

This is my code in Python:

    from collections import deque

    def merge(a, lo, mi, hi):
        # Hold the second half a[mi:hi] in a deque; its right end is its largest element.
        aux_hi = deque(a[mi:hi])
        # Fill a[lo:hi] from the back with the largest remaining element of either half.
        for i in reversed(range(lo, hi)):
            if not aux_hi:
                break  # second half exhausted; the remaining first-half elements are already in place
            last_a = i - len(aux_hi)  # index of the largest unmerged element of the first half
            if last_a < lo:
                a[i] = aux_hi.pop()  # first half exhausted
            else:
                a[i] = aux_hi.pop() if aux_hi[-1] > a[last_a] else a[last_a]
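A quick sanity check (my own example, not part of the exercise):

    a = [1, 4, 7, 9, 2, 3, 8, 10]  # two sorted halves: a[0:4] and a[4:8]
    merge(a, 0, 4, 8)
    print(a)  # [1, 2, 3, 4, 7, 8, 9, 10]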

Have I implemented it as required? Or can it be improved further?

Free space created for installation of Ubuntu ‘unusable’

Ubuntu 18.10, Boot Mode: Legacy. The bootable USB was created using Rufus. I freed up 200 GB for the installation. When I select 'Something else', the freed space shows up as unusable. What should I do?

EDIT:

During the installation, the Ubuntu installer says that there is no other OS on the machine, but Windows 10 is installed.

Difference between used space in Volume overview and Storagepool?

There is a difference between the free space shown in the Volume overview (correct numbers) and in the Storage Pool overview (wrong; disks #1, #3, #4, #5, and #6 are not full).

(Screenshots: Volume overview and Storage Pool overview)

How can I force the storage pool in my Synology DS1813Plus to refresh the numbers of used space and available/free space?

And the other wrong number: the capacity of Volume 2 is shown as 3.58 TB. That was the old disk, which has been replaced with a bigger one of 9.09 TB (as correctly shown in the Storage Pool overview).

How can I force the Volume manager to refresh its capacity numbers?

How can I merge the primary APFS partition with the other empty/free space?

I am using macOS Mojave. Below is the diskutil output.

(screenshot of the diskutil output)

There is 150 GB of free space. After executing the resize command, I see error -69519 (shown below). How do I merge the free space with disk0s2?

MacBook-Pro:~ mac$ sudo diskutil apfs resizeContainer disk0s2 0
Started APFS operation
Error: -69519: The target disk is too small for this operation, or a gap is required in your partition map which is missing or too small, which is often caused by an attempt to grow a partition beyond the beginning of another partition or beyond the end of partition map usable space

I ran the following steps (disk0s4 is not seen in the image, as it has been deleted and is now free space):

Step 1: Delete the container

sudo diskutil apfs deleteContainer disk0s4

Step 2: Erase the volume

sudo diskutil eraseVolume "Free Space" %noformat% /dev/disk0s4

Step 3: Resize the container

sudo diskutil apfs resizeContainer disk0s2 0

Solution list from “Solve” too large for my RAM space

I was trying to evaluate a sum of the form
$$\sum_{\{x_1,x_2,\ldots,x_n\}} f(x_1,x_2,\ldots,x_n),$$
where $\{x_1,x_2,\ldots,x_n\}$ ranges over the solutions of a system of linear equations and inequalities of the form, say,
$$x_1+x_2+\cdots+x_{15}=4,\quad x_7+x_9+x_{19}+x_{20}=4,\quad \ldots,\quad 0\leq x_i\leq 4.$$
I used Solve to generate the list of all solutions of the linear equations and then substituted them into the sum. This works up to about $n=29$, but for larger $n$ the RAM runs out just storing the solution list. Now I need to solve the problem for $n=31$. Is there a way to circumvent the RAM issue? For example, is there a way to generate the solutions on the fly and use them in the sum, discarding each solution from memory immediately after it has been used?
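That "generate, use, discard" idea can be sketched in plain Python with a recursive generator, so that only one solution is in memory at a time. This is an illustration of the idea only, not Mathematica code; f is a placeholder for the actual summand, and the single equation shown stands in for the real system (extra equations can be checked in a filter before a term is accumulated):

    def bounded_solutions(n, total, bound):
        # Yield every tuple (x_1, ..., x_n) with 0 <= x_i <= bound and sum == total,
        # one at a time, so the full solution list is never materialized.
        if n == 1:
            if 0 <= total <= bound:
                yield (total,)
            return
        for x in range(min(bound, total) + 1):
            for rest in bounded_solutions(n - 1, total - x, bound):
                yield (x,) + rest

    def f(*xs):
        return 1  # placeholder for the real summand

    # Accumulate term by term; each solution is garbage-collected after use.
    result = sum(f(*sol) for sol in bounded_solutions(15, 4, 4))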

Efficiently shuffling items in $N$ buckets using $O(N)$ space

I’ve run into a challenging algorithm puzzle while trying to generate a large amount of test data. The problem is as follows:

  • We have $N$ buckets, $B_1$ through $B_N$. Each bucket $B_i$ maps to a unique item $a_i$ and a count $k_i$. Altogether, the collection holds $T=\sum_{i=1}^{N} k_i$ items. This is a more compact representation of a vector of $T$ items where each $a_i$ is repeated $k_i$ times.

  • We want to output a shuffled list of the $T$ items, all permutations equally probable, using only $O(N)$ space and minimal time complexity. (Assume a perfect RNG.)

  • $N$ is fairly large and $T$ is much larger; 5,000 and 5,000,000, respectively, in the problem that led me to this investigation.

Now clearly the time complexity is at least $O(T)$, since we have to output that many items. But how closely can we approach that lower bound? Some algorithms:

  • Algorithm 1: Expand the buckets into a vector of $T$ items and use Fisher–Yates. This takes $O(T)$ time, but also $O(T)$ space, which we want to avoid.

  • Algorithm 2: For each step, choose a random number $R$ from $[0,T-1]$. Traverse the buckets, subtracting $k_i$ from $R$ each time, until $R<0$; then output $a_i$ and decrement $k_i$ and $T$. This seems correct and uses no extra space. However, it takes $O(NT)$ time, which is quite slow when $N$ is large.

  • Algorithm 3: Convert the vector of buckets into a balanced binary tree with the buckets at the leaf nodes; the depth should be close to $\log_2 N$. Each node stores the total count of all the buckets under it. To shuffle, choose a random number $R$ from $[0,T-1]$, then descend into the tree accordingly, decrementing each node count as we go; when descending to the right, reduce $R$ by the left child's count. When we reach a leaf node, output its item. This uses $O(N)$ space and $O(T\log N)$ time.

  • Algorithm 3a: Same as Algorithm 3, but with a Huffman tree; this should be faster if the $k_i$ values vary widely, since the most often visited nodes will be closer to the root. The performance is more difficult to assess, but it looks like it would vary from $O(T)$ to $O(T\log N)$ depending on the distribution of the $k_i$.

Algorithm 3 is the best I've come up with. Here are some illustrations to clarify it, followed by a code sketch:

(Illustrations of Algorithm 3)
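A minimal Python sketch of Algorithm 3 (my own rendering; it stores the described tree implicitly in an array, padding the buckets out to a power of two, and yields the shuffled items one at a time):

    import random

    def shuffled_stream(items, counts):
        # Leaves hold the bucket counts; each internal node holds the total
        # count of all buckets below it, as described in Algorithm 3.
        n = len(counts)
        size = 1
        while size < n:
            size *= 2
        tree = [0] * (2 * size)
        tree[size:size + n] = counts
        for node in range(size - 1, 0, -1):
            tree[node] = tree[2 * node] + tree[2 * node + 1]
        for _ in range(tree[1]):              # tree[1] is T, the total count
            r = random.randrange(tree[1])     # R uniform over the remaining items
            node = 1
            while node < size:                # descend, decrementing counts as we go
                tree[node] -= 1
                if r < tree[2 * node]:
                    node = 2 * node           # go left
                else:
                    r -= tree[2 * node]       # skip the left subtree's count
                    node = 2 * node + 1       # go right
            tree[node] -= 1                   # decrement the chosen bucket
            yield items[node - size]

For example, list(shuffled_stream(['a', 'b', 'c'], [1, 2, 3])) returns the six items in a uniformly random order, using $O(N)$ space and $O(\log N)$ work per item.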

Does anyone know of a more efficient algorithm? I tried searching with various terms but could not find any discussion of this particular task.

Back-propagation for learning the parameters of multilayer, feed-forward neural nets: what is its search space? What search method does it employ?

My question is about artificial intelligence, specifically neural networks. I am wondering what search method back-propagation uses and what its search space is. I can't find resources that state what it is.

Embedding of $\mathbb{C}P^2/\mathbb{C}P^1$ into Euclidean space

It is a standard exercise in embedding theory to show that the map $S^2 \to \mathbb{R}^4$ given by $(x,y,z) \mapsto (x^2-y^2,xy,xz,yz)$ induces an embedding $\mathbb{R}P^2 \to \mathbb{R}^4$. Since $\mathbb{R}P^2/\,\mathbb{R}P^1 \cong \mathbb{R}P^2$, the previous map gives an embedding of $\mathbb{R}P^2/\,\mathbb{R}P^1$ into $\mathbb{R}^4$.
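To spell out why the map descends to $\mathbb{R}P^2$: it is invariant under the antipodal map, since

$$(-x,-y,-z) \mapsto \big((-x)^2-(-y)^2,\ (-x)(-y),\ (-x)(-z),\ (-y)(-z)\big) = (x^2-y^2,\ xy,\ xz,\ yz).$$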

Is there a nice embedding of $\mathbb{C}P^2/\,\mathbb{C}P^1$ into $\mathbb{R}^8$?

Choice free method to define immersions into Projective Space

Let $X$ be a variety and $\mathcal{L}$ a very ample line bundle on $X$. Suppose $H^0(X,\mathcal{L}) = \langle s_0,\ldots,s_n \rangle$. Then there is an immersion into projective space:

$$X \rightarrow \mathbb{P}\left(H^0(X,\mathcal{L})^{\vee}\right)$$

given by evaluation on closed points: $x \mapsto \{s\in H^0(X,\mathcal{L}) \mid s(x)=0\}$.
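(For comparison, after choosing the basis $s_0,\ldots,s_n$, this is the familiar map $x \mapsto [s_0(x):\cdots:s_n(x)]$.)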

My first question is this: how can I extend the definition of this morphism to non-closed points? I know there is an alternative approach, as in Hartshorne, but I was looking for a choice-free method to define such a morphism.

My second question is somewhat related: it is often said that $\mathcal{L}$ gives rise to a morphism

$$X \longrightarrow \mathrm{Proj}\left(\bigoplus_{k\in\mathbb{N}} H^0(X,\mathcal{L}^{\otimes k})\right) \longrightarrow \mathbb{P}^n_k = \mathrm{Proj}\left(k[s_0,s_1,\ldots,s_n]\right)$$

where I believe the first map is an isomorphism (but I am not sure) and the second map is an immersion into projective space (again, I am not sure you would want to define projective space in this way). How can you see this fact? I cannot find any references, so feel free to just tell me where to look it up.

Thank you!