Algorithm for optimal spacing in intervals

Is there an algorithm to optimally space points within multiple intervals? Optimal here means maximizing the smallest distance between any two points, so that every pair of points is at least some distance X apart. For example, in the intervals (1,3) and (5,7) you can space out three points with a distance of at least 2 (at 1, 5, and 7), but you can’t space out three points with a distance of at least 3. Is there an easy way to do this with a program?
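A common approach (a sketch, assuming the intervals are given sorted, disjoint, and with inclusive endpoints, and that the number of points n is part of the input): binary-search on the spacing X, with a greedy feasibility check that always places the next point as far left as allowed — a standard exchange argument shows leftmost placement is optimal for this check.

```python
def can_place(intervals, n, x):
    """Greedy check: can n points be placed in sorted, disjoint intervals
    with every pair of points at least x apart?"""
    placed, prev = 0, float("-inf")
    for lo, hi in intervals:
        pos = max(lo, prev + x)       # leftmost legal position in this interval
        while pos <= hi:
            placed, prev = placed + 1, pos
            if placed == n:
                return True
            pos = prev + x
    return False

def max_spacing(intervals, n, eps=1e-9):
    """Binary search for the largest feasible spacing x."""
    lo, hi = 0.0, intervals[-1][1] - intervals[0][0]
    while hi - lo > eps:
        mid = (lo + hi) / 2
        if can_place(intervals, n, mid):
            lo = mid
        else:
            hi = mid
    return lo
```

On the example above, `max_spacing([(1, 3), (5, 7)], 3)` converges to 2, and `can_place([(1, 3), (5, 7)], 3, 3)` is False, matching the observation in the question.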

Combining two Monte Carlo algorithms to get a Las Vegas algorithm that solves the same problem

I came across a problem that I have no clue how to solve.

Consider two Monte Carlo algorithms, called A and B, that both solve the same problem. A is true-biased and t-correct, while B is false-biased and z-correct. Show that you can combine A and B to obtain a Las Vegas algorithm for the same problem.

Also, how would I find the best value of R, the probability that the Las Vegas algorithm finds the right answer? For this second part, how would I find this value of R with no concrete example or data set? This question seems completely out of left field.

Thank you kindly for your time 🙂

An algorithm for detecting whether noisy univariate data is constant or a sum of step functions

In an algorithm I’m writing, there is a stage where I need to determine whether some noisy univariate data is constant or a sum of step functions.

For example, defining foo as the algorithm I’m after (writing in Python):

 assert foo([0]*20) == False
 assert foo([0]*20 + [1]*20) == True
 assert foo([0]*20 + [5]*30 + [1]*70) == True

  • The data in the examples is not noisy, but assume the real data is (just a bit; one can still pinpoint where a step might take place by looking at a plot of the data).
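One simple idea for foo (a rough sketch, not a tuned method): estimate the noise scale from successive differences, then ask whether any split point separates the data into two sides whose means differ by far more than that noise would explain. The threshold z=5.0 is an arbitrary choice of mine:

```python
import statistics

def foo(xs, z=5.0):
    """Return True if xs looks like a sum of steps rather than a constant.
    The noise scale is estimated from successive differences, then each
    split point is scored by how many noise standard errors separate the
    two sides' means."""
    n = len(xs)
    diffs = [b - a for a, b in zip(xs, xs[1:])]
    # std of first differences; for iid noise this is sigma * sqrt(2)
    sigma = statistics.pstdev(diffs) / 2 ** 0.5 or 1e-12
    for i in range(1, n):
        left = sum(xs[:i]) / i
        right = sum(xs[i:]) / (n - i)
        # z-score of the gap between the two means at this split point
        if abs(left - right) / (sigma * (1 / i + 1 / (n - i)) ** 0.5) > z:
            return True
    return False
```

This passes the three asserts above and tolerates mild noise; for serious use, proper change-point detection methods (e.g. binary segmentation or PELT, as implemented in the ruptures library) are the standard tools.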

I’d be happy to hear any ideas, thank you.

Develop an algorithm

I participated in a programming competition at my university and solved every question except this one. I am now practicing it to improve my skills, but I can’t figure out the algorithm. If an algorithm for this exists, please point me to it; if a similar algorithm exists, tell me and I will adapt it to this question.

This is what I want to do.

  • The first line of input is the distance between the two points.
  • Each subsequent line contains a pair of numbers: the length of a cable and the quantity of cables of that length available. These cables are used to join the two points.
  • Input is terminated by the pair 0 0.


  • The output should contain a single integer: the minimum number of joints needed to build the requested length of cableway. If no solution is possible, print “No solution”.

Sample Input

 444
 16 2
 3 2
 2 2
 30 3
 50 10
 45 12
 8 12
 0 0

Sample Output


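This looks like a bounded knapsack: pick a multiset of cables (respecting the quantities) whose lengths sum exactly to the distance while minimizing the number of cables used. A Python sketch, assuming — hypothetically, since the statement doesn’t pin it down — that the number of joints is the number of cables minus one:

```python
def min_cables(target, cables):
    """Fewest cables whose lengths sum exactly to target, respecting each
    type's quantity (bounded knapsack).  Returns the joint count under the
    hypothetical rule joints = cables - 1, or None if impossible."""
    INF = float("inf")
    best = [INF] * (target + 1)   # best[t] = fewest cables summing to t
    best[0] = 0
    for length, qty in cables:
        # binary splitting: a quantity q becomes O(log q) 0/1 "bundles"
        k = 1
        while qty > 0:
            take = min(k, qty)
            qty -= take
            chunk, cost = length * take, take
            for t in range(target, chunk - 1, -1):
                if best[t - chunk] + cost < best[t]:
                    best[t] = best[t - chunk] + cost
            k *= 2
    return None if best[target] == INF else best[target] - 1
```

On the sample input this would be called as `min_cables(444, [(16, 2), (3, 2), (2, 2), (30, 3), (50, 10), (45, 12), (8, 12)])`, printing “No solution” when None comes back. The binary-splitting trick keeps the bounded quantities from blowing up the running time.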
Defining an Algorithm for finding all alignments between two sequences

Let S and T be two sequences of length n and m, respectively. When filling in the dynamic programming table for an optimal global alignment of S and T, we can keep back-pointers and then recover the optimal alignments by following them from cell (n, m) back to cell (0,0). Each such path represents a different optimal alignment of the two sequences.

To illustrate, here is an example table for the sequences ACGTTA and AACTA. By following the arrows from cell (6,5) to (0,0) we get one possible optimal alignment. There are many ways to get to (0,0), and each of those ways is a distinct optimal alignment.

[figure: dynamic programming table for ACGTTA vs AACTA, with back-pointer arrows]

The challenge is to find a dynamic programming algorithm that gives the number of possible optimal paths in this table, presumably by using the pointers.

Because dynamic programming involves a table of its own, I’m not sure what that table should look like. What would be the appropriate policy that determines which cells to look at to find the result for the next cell?

I notice that each possible alignment is a path from (n,m) to (0,0). This suggests that different paths diverge from, and rejoin, common sub-paths.

[figure: two optimal paths through the table, one red and one blue, sharing several segments]

You will notice that the red path and the blue path have some parts in common. This seems to justify a dynamic programming solution because we have certain solutions in common that we can use to find the next solution.

My whole problem is how to formalize this thinking into a recurrence of the usual form: initial conditions on one side, and a rule that computes the solution for each next cell from previously computed cells. (That form is not specific to this problem; it is just the shape of the policy I’m looking for.)
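A recurrence of exactly that shape does the job: C(0,0) = 1, C(i,0) = C(0,j) = 1 (a single all-gap path reaches each boundary cell), and otherwise C(i,j) is the sum of C over the predecessor cells whose pointer attains the optimum at (i,j); the answer is C(n,m). A Python sketch with illustrative scores (match = 1, mismatch = gap = −1 is my assumption, not from the question):

```python
def count_optimal_alignments(S, T, match=1, mismatch=-1, gap=-1):
    """Fill the usual global-alignment (Needleman-Wunsch) table D, then a
    second table C where C[i][j] counts the optimal paths from (0,0) to
    (i,j); the answer is C[n][m]."""
    n, m = len(S), len(T)
    D = [[0] * (m + 1) for _ in range(n + 1)]
    C = [[0] * (m + 1) for _ in range(n + 1)]
    C[0][0] = 1
    for i in range(1, n + 1):
        D[i][0], C[i][0] = D[i - 1][0] + gap, 1
    for j in range(1, m + 1):
        D[0][j], C[0][j] = D[0][j - 1] + gap, 1
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            diag = D[i - 1][j - 1] + (match if S[i - 1] == T[j - 1] else mismatch)
            up, left = D[i - 1][j] + gap, D[i][j - 1] + gap
            D[i][j] = max(diag, up, left)
            # sum the counts of every predecessor that attains the optimum
            C[i][j] = ((diag == D[i][j]) * C[i - 1][j - 1]
                       + (up == D[i][j]) * C[i - 1][j]
                       + (left == D[i][j]) * C[i][j - 1])
    return C[n][m]
```

For example, `count_optimal_alignments("AA", "A")` returns 2: the single A can align against either position, and both alignments score the same.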

What do you call a greedy algorithm that solves a combinatorial problem by optimizing the best k>1 choices altogether?

Suppose you have a problem whose goal is to find the permutation of a set $S$ given as input that minimizes an objective function $f$ (for example the Traveling Salesman problem).

A trivial algorithm $E(S)$ that finds the exact solution enumerates all the permutations and outputs the one that minimizes $f$. Its time complexity is $O(n!)$, where $n$ is the size of $S$.

A trivial greedy algorithm $G(S)$ that finds an approximation of the solution is:

 out[0] = select a good starting item from S according to some heuristic h_1
 S = S - {out[0]}
 for i = 1 to n-1 do:
     out[i] = select the next best element using some heuristic h_2
     S = S - {out[i]}
 return out

Where $h_1$ and $h_2$ are two heuristics. Its time complexity is $O(n^2)$, assuming that $h_2$ runs in constant time.

Sometimes I mix the two techniques (enumeration and greedy) by selecting at each step the best $k$ items (instead of the best one) and enumerating all their permutations to find the one that locally minimizes $f$. Then I choose the best $k$ items among the remaining $n-k$ items, and so on.

Here is the pseudocode (assuming $n$ is a multiple of $k$):

 for i = 0 to n/k - 1 do:
     X = select the best k items of S according to some heuristic h
     S = S - X
     out[i*k ... (i+1)*k-1] = E(X)
 return out

Where $E(X)$ is the exact algorithm applied to a subset $X \subset S$ rather than to the whole $S$. This last algorithm finds an approximate solution and has time complexity $O(\frac{n}{k}(n \log k + k!))$, assuming that $h$ can be computed in constant time. This is comparable to $O(n^2)$ when $k$ is small, although in my experience the results can be much better than with the plain greedy approach.
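For concreteness, here is a sketch of the scheme instantiated for a toy Euclidean TSP path. The heuristic $h$ — take the $k$ remaining points nearest the current endpoint — and the fixed starting point are my own arbitrary choices, and the block loop runs while items remain, so $n$ need not be a multiple of $k$:

```python
from itertools import permutations
from math import dist

def block_greedy_tour(pts, k):
    """Approximate a shortest path visiting all points: repeatedly take the
    k nearest remaining points (heuristic h), and append them in the order
    that exhaustive enumeration E(X) finds cheapest."""
    remaining = set(range(1, len(pts)))
    tour = [0]                      # arbitrary fixed starting point
    while remaining:
        cur = tour[-1]
        # h: the k remaining points nearest to the current tour endpoint
        block = sorted(remaining, key=lambda i: dist(pts[cur], pts[i]))[:k]
        # E(X): try all orderings of the block, keep the cheapest extension
        best = min(permutations(block),
                   key=lambda p: sum(dist(pts[a], pts[b])
                                     for a, b in zip((cur,) + p, p)))
        tour.extend(best)
        remaining -= set(block)
    return tour
```

With $k = 1$ this degenerates to nearest-neighbour greedy; larger $k$ trades $k!$ work per block for locally optimal block orderings.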

I don’t think I invented this kind of optimization technique: do you know its name? Can you please include some theoretical references?

I know for sure it is not beam search, because beam search never mixes the best $k$ solutions found at each step.

Thank you.

Faster Algorithm vs. Faster Machine

(a) Suppose that a particular algorithm has time complexity T(n) = 5n log(n), and that executing an implementation of it on a particular machine takes T seconds for n inputs. Now suppose that we are presented with a machine that is 64 times faster. How many inputs could we process on the new machine in T seconds?

(b) If the running time is T(n) = 2n^3, how many inputs could we process on the new machine in T seconds?
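Setting up part (b), where the algebra is clean: in the same T seconds the new machine performs 64 times as many steps, so the new input size n' satisfies

```latex
2 n'^3 = 64 \cdot 2 n^3
\quad\Longrightarrow\quad
n'^3 = 64\, n^3
\quad\Longrightarrow\quad
n' = 4n.
```

Part (a) sets up the same way, giving $n' \log n' = 64\, n \log n$, which has no elementary closed form; since $64n \log(64n) > 64\, n \log n$, the answer satisfies $n' < 64n$, approaching $64n$ for large $n$.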