Which recursive T(n) forms typically let us conclude that an algorithm is O(n^2), O(n log n), O(n), or O(log n)?

Is it true that some common forms of recursive T(n) can give the following conclusions?

When

T(n) = T(n/c) + b    where c is a constant > 1, b is any constant 

then the algorithm is O(log n).
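Binary search is the textbook instance of this recurrence: each call does constant work and then recurses on half the input. A minimal Python sketch:

```python
def binary_search(lst, target, lo=0, hi=None):
    # T(n) = T(n/2) + b: constant work per call, one recursive call on half.
    if hi is None:
        hi = len(lst) - 1
    if lo > hi:
        return -1  # not found
    mid = (lo + hi) // 2
    if lst[mid] == target:
        return mid
    if lst[mid] < target:
        return binary_search(lst, target, mid + 1, hi)
    return binary_search(lst, target, lo, mid - 1)
```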

When

T(n) = T(n/c) + T(n/d) + bn   where c and d are constants > 1, b is any constant 

then the algorithm is O(n log n).
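Merge sort is the canonical example, with c = d = 2: two half-size subproblems plus a linear-time merge. A minimal Python sketch:

```python
def merge_sort(lst):
    # T(n) = T(n/2) + T(n/2) + bn  (c = d = 2), giving O(n log n).
    if len(lst) <= 1:
        return lst
    mid = len(lst) // 2
    left = merge_sort(lst[:mid])
    right = merge_sort(lst[mid:])
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):  # the bn merge step
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    return merged + left[i:] + right[j:]
```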

When

T(n) = T(n - c) + bn   where c, b are constants > 1 

then the algorithm is O(n^2). It seems that many useful algorithms don’t have this pattern, and O(n^2) is not often seen in classical algorithms.
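A recursive selection sort fits this shape (here with c = 1; the same O(n^2) bound holds for any constant c ≥ 1): each call does a linear scan, then recurses on one fewer element. A minimal Python sketch:

```python
def selection_sort(lst, start=0):
    # T(n) = T(n - 1) + bn: a b*n scan, then recursion on n - 1 elements.
    if start >= len(lst) - 1:
        return lst
    m = min(range(start, len(lst)), key=lst.__getitem__)  # O(n) scan
    lst[start], lst[m] = lst[m], lst[start]
    return selection_sort(lst, start + 1)
```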

I have seen the form:

T(n) = T(n/c) + T(n/d) + O(n)   where c and d are constants > 1 

for the selection/median algorithm, and it is concluded that the algorithm is O(n). But isn’t this T(n) the same as formula 2 above, which gave O(n log n)?
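For context, that recurrence comes from median-of-medians selection, and the crucial difference from formula 2 is that its subproblem fractions sum to less than one: T(n) = T(n/5) + T(7n/10) + O(n), with 1/5 + 7/10 = 9/10 < 1, which drops the solution to O(n); O(n log n) arises when the fractions sum to exactly 1 (e.g. c = d = 2). A hedged Python sketch of the algorithm:

```python
def select(lst, k):
    """k-th smallest element (0-based) via median of medians.
    T(n) = T(n/5) + T(7n/10) + O(n); since 1/5 + 7/10 < 1, this is O(n)."""
    if len(lst) <= 5:
        return sorted(lst)[k]
    # medians of groups of five -> the T(n/5) recursive call
    medians = [sorted(lst[i:i + 5])[len(lst[i:i + 5]) // 2]
               for i in range(0, len(lst), 5)]
    pivot = select(medians, len(medians) // 2)
    lo = [x for x in lst if x < pivot]
    hi = [x for x in lst if x > pivot]
    eq = len(lst) - len(lo) - len(hi)
    if k < len(lo):
        return select(lo, k)               # at most 7n/10 elements
    if k < len(lo) + eq:
        return pivot
    return select(hi, k - len(lo) - eq)    # at most 7n/10 elements
```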

Modifying relaxation for the Bellman-Ford algorithm

I’m using the Bellman-Ford algorithm to find the best path in my graph. However, instead of choosing the path with the lowest value, I want to choose the path with the highest value. And instead of using the sum of all edge weights as the length of a path, I want to use the formula lengthOfPath = sqrt(a^2 - b^2) for each edge, where a is the weight of the edge we are coming from and b is the weight of the edge we are arriving at.

Basically, the formula represents the number of units left in our army after conquering a city, so logically, we want to choose the path with the least casualties.

My idea was to use the Bellman-Ford algorithm and rewrite the relaxation so it works according to these rules. I’m using a custom graph library. I have already created a graph g, added all cities as vertices, and set the weight of the edge from a to b to the size of our military and the weight of the edge from b to a to the size of the enemy military guarding the city, where a and b are vertices (cities).

I used the following code (g is our graph; getVertices() and getEdges() return the sets of all vertices and edges, getSource() and getTarget() return the source or target vertex of an edge, etc.):

public Map<Vertex, Double> bellmanFord(Vertex s) {
    Map<Vertex, Double> d = g.createVertexMap(Double.POSITIVE_INFINITY);
    d.put(s, 0d);
    for (int i = 0; i < g.getVertices().size(); i++)
        for (Edge e : g.getEdges())
            relax(e, d);
    return d;
}

public void relax(Edge e, Map<Vertex, Double> d) {
    Vertex u = e.getSource();
    Vertex v = e.getTarget();
    if (d.get(u) + e.getWeight() < d.get(v))
        d.put(v, d.get(u) + e.getWeight());
}

And below is my modified code for the relaxation:

public void relax(Edge e, Map<Vertex, Double> d) {
    Vertex u = e.getSource();
    Vertex v = e.getTarget();
    if (d.get(u) - formula(g.getEdge(u, v).getWeight(), g.getEdge(v, u).getWeight()) > d.get(v))
        d.put(v, d.get(u) - formula(g.getEdge(u, v).getWeight(), g.getEdge(v, u).getWeight()));
}

public double formula(double ourCity, double enemyCity) {
    double a = Math.pow(ourCity, 2);
    double b = Math.pow(enemyCity, 2);
    double result = a - b;
    return Math.sqrt(result);
}

However, with my modified relaxation, Bellman-Ford is not working correctly: it always returns a map with all values set to infinity. Could you help me fix the problem?
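One likely culprit: every distance starts at Double.POSITIVE_INFINITY, so the maximizing test d.get(u) - formula(...) > d.get(v) can never succeed against infinity; a maximizing relaxation has to start from negative infinity instead (with the source at 0). A minimal Python sketch of that idea, mirroring the update rule above (function names and the graph representation are mine, not the library’s):

```python
import math

def bellman_ford_max(vertices, edges, weight, s):
    """Maximizing Bellman-Ford variant (assumed semantics): every vertex
    starts at -infinity, the source at 0, and relax keeps the LARGER value.
    weight[(u, v)] is our army size, weight[(v, u)] the defenders' size."""
    d = {v: -math.inf for v in vertices}
    d[s] = 0.0
    for _ in range(len(vertices) - 1):
        for (u, v) in edges:
            if d[u] == -math.inf:
                continue  # u not reached yet; nothing to relax
            cand = d[u] - formula(weight[(u, v)], weight[(v, u)])
            if cand > d[v]:          # flipped comparison: we maximize
                d[v] = cand
    return d

def formula(our_city, enemy_city):
    # sqrt(a^2 - b^2); a negative radicand would need separate handling
    return math.sqrt(our_city ** 2 - enemy_city ** 2)
```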

Thanks

Which algorithm would be better to integrate step-like functions?

I have a function f[wr] whose graph is step-like. First of all, NIntegrate does not handle this function well, so I decided to implement the integration algorithm by hand.

[Plot of f[wr], shown without Joined]

I have tried to integrate f[wr] using a Riemann sum

data[mu_] :=
  Module[{Te = 300., sum = 0., sum1 = 0., kb = 8.61*10^-5, a},
   Do[
    sum = sum + (f[wr] 0.01);
    sum1 = sum1 + (f[wr] (wr - mu) 0.01);,
    {wr, mu - 30 kb Te, mu + 30 kb Te, 0.01}];
   sum1/(Te*sum)]

and then, evaluate it with

Table[data[mu], {mu, -1, 1}]

I think the results could be better if I tried another algorithm. Could you suggest one?
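For step-like integrands, a common step up from the plain Riemann sum is the composite trapezoidal rule, ideally with subdivision points aligned to the jumps. A language-agnostic sketch in Python, using a stand-in step function in place of the real f[wr] (which lives in the pastebin):

```python
def trapezoid(f, a, b, n):
    """Composite trapezoidal rule on [a, b] with n panels."""
    h = (b - a) / n
    total = 0.5 * (f(a) + f(b))
    for i in range(1, n):
        total += f(a + i * h)
    return total * h

# stand-in for the step-like f[wr]; the real function is in the pastebin
def f(x):
    return 1.0 if x >= 0 else 0.0

# the exact integral of this f over [-1, 1] is 1
approx = trapezoid(f, -1.0, 1.0, 1000)
```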

edit: f[wr] algorithm https://pastebin.com/tfbGmhDa

Is there a way to modify Kadane’s Algorithm such that we know the resulting subarray?

Kadane’s Algorithm is an algorithm that solves the maximum subarray problem by clever dynamic programming. Is there a way to further modify the algorithm so that we would get to know the resulting subarray that produces the corresponding maximum sum?

PS: I don’t know whether I should post this here, or Stack Overflow, or both.
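Yes: the standard modification remembers where the current run began and records the best (start, end) pair whenever a new maximum is found. A short Python sketch (the function name is mine):

```python
def max_subarray_with_indices(lst):
    """Kadane's algorithm, extended to report the subarray itself: track a
    tentative start whenever the running sum restarts, and record the
    (start, end) pair whenever a new best sum appears."""
    best_sum = cur_sum = lst[0]
    best_start = best_end = cur_start = 0
    for i in range(1, len(lst)):
        if cur_sum < 0:          # restarting at i beats extending
            cur_sum = lst[i]
            cur_start = i
        else:
            cur_sum += lst[i]
        if cur_sum > best_sum:
            best_sum = cur_sum
            best_start, best_end = cur_start, i
    return best_sum, lst[best_start:best_end + 1]
```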

Design a greedy algorithm for merging sequences two at a time

I am solving problems for a test, and the following one I still can’t solve. There are n sorted sequences, $S_1$ to $S_n$. We are asked to merge them into a single sequence while doing the minimum possible work. Sequences are merged two at a time: merging sequences $A$ and $B$ yields a sequence of length $|A| + |B|$, and the work done is proportional to the length of the resulting sequence.

1 – Create an algorithm to perform this task, choosing the pair of sequences to be merged in each step.

2 – What is the total cost of your algorithm, in terms of the lengths of the original sequences?

3 – Show that your algorithm delivers an optimal sequence.
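For what it’s worth, this looks like the classic optimal merge pattern (the same greedy exchange argument as Huffman coding): always merge the two shortest remaining sequences, kept in a min-heap. A hedged Python sketch of the cost computation (names are mine):

```python
import heapq

def min_merge_cost(lengths):
    """Greedy optimal-merge-pattern sketch: repeatedly merge the two
    shortest sequences; merging lengths x and y costs x + y."""
    heap = list(lengths)
    heapq.heapify(heap)
    total = 0
    while len(heap) > 1:
        x = heapq.heappop(heap)
        y = heapq.heappop(heap)
        total += x + y               # work proportional to result length
        heapq.heappush(heap, x + y)  # merged sequence goes back in
    return total
```

With n sequences this performs n - 1 merges and O(n log n) heap operations, and each original length $|S_i|$ contributes once per merge its sequence participates in.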

I would really appreciate your help. Thanks in advance.

What is the time complexity of a binary multiplication using Karatsuba Algorithm?

My apologies if the question sounds naive, but I’m trying to wrap my head around the idea of time complexity.

In general, Karatsuba multiplication is said to have a time complexity of O(n^1.585), i.e. O(n^(log2 3)). The algorithm assumes that addition and subtraction take about O(1) each. However, for binary addition and subtraction, I don’t think it will be O(1). If I’m not mistaken, a typical addition or subtraction of two binary numbers takes O(n) time.

What, then, will be the total time complexity of the following program, which multiplies two binary numbers using the Karatsuba algorithm and in turn performs binary addition and subtraction?

long multKaratsuba(long num1, long num2) {
    if ((num1 >= 0 && num1 <= 1) && (num2 >= 0 && num2 <= 1)) {
        return num1 * num2;
    }

    int length1 = String.valueOf(num1).length(); // takes O(n)? Not sure
    int length2 = String.valueOf(num2).length(); // takes O(n)? Not sure

    int max = length1 > length2 ? length1 : length2;
    int halfMax = max / 2;

    // x = xHigh + xLow
    long num1High = findHigh(num1, halfMax); // takes O(1)
    long num1Low = findLow(num1, halfMax); // takes O(1)

    // y = yHigh + yLow
    long num2High = findHigh(num2, halfMax); // takes O(1)
    long num2Low = findLow(num2, halfMax); // takes O(1)

    // a = (xHigh*yHigh)
    long a = multKaratsuba(num1High, num2High);

    // b = (xLow*yLow)
    long b = multKaratsuba(num1Low, num2Low);

    // c = (xHigh + xLow)*(yHigh + yLow) - (a + b)
    long cX = add(num1High, num1Low); // this ideally takes O(n) time
    long cY = add(num2High, num2Low); // this ideally takes O(n) time
    long cXY = multKaratsuba(cX, cY);
    long cAB = add(a, b); // this ideally takes O(n) time
    long c = subtract(cXY, cAB); // this ideally takes O(n) time

    // res = a*(10^(2*m)) + c*(10^m) + b
    long resA = a * (long) Math.pow(10, (2 * halfMax)); // takes O(1)
    long resC = c * (long) Math.pow(10, halfMax); // takes O(1)
    long resAC = add(resA, resC); // takes O(n)
    long res = add(resAC, b); // takes O(n)

    return res;
}
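For intuition, the linear-time additions do not change the exponent: each call makes three half-size recursive multiplications plus O(n)-digit additions and subtractions, so T(n) = 3T(n/2) + cn, which the master theorem solves to Θ(n^(log2 3)) ≈ Θ(n^1.585). A compact Python sketch of the same scheme for nonnegative integers:

```python
def karatsuba(x, y):
    """Karatsuba multiplication of nonnegative integers.
    T(n) = 3T(n/2) + cn  ->  Theta(n^(log2 3))."""
    if x < 10 or y < 10:           # base case: a single-digit operand
        return x * y
    n = max(len(str(x)), len(str(y)))
    m = n // 2
    xh, xl = divmod(x, 10 ** m)    # split x into high and low halves
    yh, yl = divmod(y, 10 ** m)
    a = karatsuba(xh, yh)                    # T(n/2)
    b = karatsuba(xl, yl)                    # T(n/2)
    c = karatsuba(xh + xl, yh + yl) - a - b  # T(n/2) plus O(n) add/subtract
    return a * 10 ** (2 * m) + c * 10 ** m + b
```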

Proof of the average case of the Heap Sort algorithm

Consider the following python implementation of the Heap Sort algorithm:

def heapsort(lst):
    length = len(lst) - 1
    leastParent = length // 2
    for i in range(leastParent, -1, -1):
        moveDown(lst, i, length)

    for i in range(length, 0, -1):
        if lst[0] > lst[i]:
            swap(lst, 0, i)
            moveDown(lst, 0, i - 1)


def moveDown(lst, first, last):
    largest = 2 * first + 1
    while largest <= last:
        # right child is larger than left
        if (largest < last) and (lst[largest] < lst[largest + 1]):
            largest += 1

        # right child is larger than parent
        if lst[largest] > lst[first]:
            swap(lst, largest, first)
            # move down to largest child
            first = largest
            largest = 2 * first + 1
        else:
            return  # exit


def swap(lst, i, j):
    tmp = lst[i]
    lst[i] = lst[j]
    lst[j] = tmp

I have been able to formally prove that the worst case is in $\Theta(n \log(n))$ and that the best case is in $\Theta(n)$ (some might argue that the best case is in $\Theta(n \log(n))$ as well, since that’s what most internet searches return, but consider what happens with an input list in which all of the elements are the same number).
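As an empirical sanity check of that best-case claim, one can count how many iterations of the moveDown loop a full run performs: on an all-equal list only the build phase does any work, so the count grows linearly, while distinct keys force roughly n log n sift-down steps. An instrumented copy (the counter and function names are mine):

```python
def count_movedown_steps(lst):
    """Instrumented copy of the heapsort above: counts iterations of the
    moveDown while-loop across the whole run (sorts lst in place)."""
    steps = 0

    def move_down(first, last):
        nonlocal steps
        largest = 2 * first + 1
        while largest <= last:
            steps += 1
            if largest < last and lst[largest] < lst[largest + 1]:
                largest += 1
            if lst[largest] > lst[first]:
                lst[largest], lst[first] = lst[first], lst[largest]
                first = largest
                largest = 2 * first + 1
            else:
                return

    length = len(lst) - 1
    for i in range(length // 2, -1, -1):
        move_down(i, length)
    for i in range(length, 0, -1):
        if lst[0] > lst[i]:
            lst[0], lst[i] = lst[i], lst[0]
            move_down(0, i - 1)
    return steps

# all-equal input: only the build phase runs, so the count grows like n
equal_steps = count_movedown_steps([7] * 1024)
# distinct input: the sort phase sifts down repeatedly, roughly n log n steps
distinct_steps = count_movedown_steps(list(range(1024)))
```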

I have shown both upper and lower bounds for the worst and best cases by observing that the route taken by the moveDown function depends on the height of the heap/tree and on whether the elements in the list are distinct or all equal.

I have not been able to prove the average case of this algorithm, which I know is also in $\Theta(n \log(n))$. I do know, however, that I am supposed to consider an input family of lists of every length $n$, and I am allowed to make assumptions such as that all of the elements in the list are distinct. I confess that I am not good at average-case analysis and would really appreciate it if someone could give a complete and thorough proof (including the exact expressions, especially the number of inputs), as it would help me understand the concept a great deal.

Design of Evolutionary Algorithm

I was going through some study material and a practice question popped up asking the reader to design an evolutionary algorithm to solve the following task:

“A candy bar consists of sugar, chocolate, milk, and a mixture of additives (flavors). The task for the EA is to determine a mixture of additives that produces a good candy bar. The mixture of additives can consist of up to A = 17 ingredients out of M = 42 possible ingredients; none of the additives shall exceed 13% of the flavor mixture.

Assume further, that you have access to a large pool of students, that are willing to test, and judge the quality of your creation.

Describe all essential steps that you propose for the EA with respect to the given task, and propose a setting of the relevant EA parameters whenever appropriate.”

I have got absolutely no idea how to approach the task. Any suggestions with respect to the genome, selection, inheritance and mutation operators and the fitness function would be highly appreciated.
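Not a full answer, but one concrete way to encode the problem, where every name, operator, and parameter value below is an illustrative assumption rather than a known solution: a real-valued genome of M = 42 mixture shares, a repair operator enforcing the constraints, fitness scores collected from the student taste panel, plus standard tournament selection and Gaussian mutation. A Python sketch of the genome and operators:

```python
import random

M = 42      # candidate ingredients
A = 17      # at most 17 may appear in the mixture
CAP = 0.13  # no additive may exceed 13% of the flavor mixture

def random_genome():
    """Genome: one share of the flavor mixture per ingredient (0 = unused).
    At least 8 ingredients are needed, since ceil(1 / 0.13) = 8."""
    genome = [0.0] * M
    for i in random.sample(range(M), random.randint(8, A)):
        genome[i] = random.random()
    return repair(genome)

def repair(genome):
    """Enforce feasibility: at most A nonzero shares, each at most CAP,
    all shares summing to 1 (clip-and-renormalize until stable)."""
    keep = set(sorted(range(M), key=lambda i: genome[i], reverse=True)[:A])
    g = [genome[i] if i in keep else 0.0 for i in range(M)]
    for _ in range(100):
        total = sum(g)
        g = [x / total for x in g]
        if max(g) <= CAP + 1e-9:
            break
        g = [min(x, CAP) for x in g]
    return g

def fitness(genome):
    """Placeholder: in the real EA this would be the averaged taste score
    the pool of student testers assigns to a bar with this mixture."""
    raise NotImplementedError

def mutate(genome, sigma=0.02):
    """Gaussian perturbation of the used shares; repair restores feasibility."""
    g = [max(x + random.gauss(0.0, sigma), 0.0) if x > 0 else 0.0
         for x in genome]
    if sum(1 for x in g if x > 0) < 8:
        return repair(genome)  # too few survivors to satisfy the cap; retry
    return repair(g)
```

From here, a generational loop with tournament selection, arithmetic (blend) crossover between share vectors, a modest population (taste tests are expensive, so perhaps 20-30 candidates per generation), and repair after every variation step would complete the design.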

Number of Times to Run an Evolutionary (Genetic) Algorithm

Evolutionary algorithms such as genetic algorithms (GAs) are typically run multiple times, and the results are averaged across the successive runs.

However, in the case of long-runtime algorithms (e.g., due to large population sizes or algorithm complexity), is there any justification for running GAs (and the like) only once? Clearly, statistical evaluation of variability is not possible in such a case.

I’ve not been able to find anything in the literature on the subject, so am asking here to gain some insight from the CS community.