How to Reconcile Apparent Discrepancy in this Algorithm’s Runtime?

I’m currently working through Algorithms by Dr. Jeff Erickson. The following is an algorithm presented in the book:

```
NDaysOfChristmas(gifts[2 .. n]):
    for i ← 1 to n
        Sing “On the ith day of Christmas, my true love gave to me”
        for j ← i down to 2
            Sing “j gifts[j],”
        if i > 1
            Sing “and”
        Sing “a partridge in a pear tree.”
```

Here’s the runtime analysis of the algorithm presented by Dr. Erickson:

The input to NDaysOfChristmas is a list of $n-1$ gifts, represented here as an array. It’s quite easy to show that the singing time is $\Theta(n^2)$; in particular, the singer mentions the name of a gift $\sum_{i=1}^{n} i = \frac{n(n+1)}{2}$ times (counting the partridge in the pear tree). It’s also easy to see that during the first $n$ days of Christmas, my true love gave to me exactly $\sum_{i=1}^{n}\sum_{j=1}^{i} j = \frac{n(n+1)(n+2)}{6} = \Theta(n^3)$ gifts.

I can’t seem to grasp how it is possible that your “true love” gave you $\Theta(n^3)$ gifts, while a computer scientist looking at this algorithm would say its runtime complexity is $\Theta(n^2)$.

Dr. Erickson also says the name of a gift is mentioned $ \frac{n(n+1)}{2}$ times, which is in $ \Theta(n^2)$ .
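A quick sanity check that both counts coexist: the verse for day $i$ *mentions* gift $j$ once but *delivers* $j$ copies of it. A minimal C++ sketch (my own counting loop, not from the book) that tallies both quantities:

```cpp
#include <cstdio>

// For the n-days song, count (a) how many times a gift name is sung and
// (b) how many gifts are actually given. One mention of "j gifts[j]"
// contributes 1 to the mentions but j to the gifts, which is why the
// song length is Theta(n^2) while the gift total is Theta(n^3).
int main() {
    const int n = 12;
    long long mentions = 0, gifts = 0;
    for (int i = 1; i <= n; ++i) {      // verse i
        for (int j = i; j >= 2; --j) {  // "j gifts[j],"
            mentions += 1;              // gift j is named once...
            gifts += j;                 // ...but j gifts are received
        }
        mentions += 1;                  // "a partridge in a pear tree"
        gifts += 1;
    }
    // For n = 12: n(n+1)/2 = 78 mentions, n(n+1)(n+2)/6 = 364 gifts.
    std::printf("%lld mentions, %lld gifts\n", mentions, gifts);
}
```

The running time tracks the mentions (the work the singer does), not the gifts (a quantity the song merely describes), which is one way to see the discrepancy the question asks about.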

Is it possible for the runtime and input size in an algorithm to be inversely related?

Just as a fun mental exercise, I’m wondering whether there can be algorithms whose runtime decreases monotonically as the input size grows. If not, is it possible to disprove the claim? I haven’t been able to come up with an example or a counterexample so far, and this sounds like an interesting problem.

P.S. Something like $ O(\frac{1}{n})$ , I guess (if it exists)

Bubble Sort: Runtime complexity analysis in the style of Cormen

I’m trying to analyze Bubble Sort’s runtime in a manner similar to Cormen’s analysis of Insertion Sort in "Introduction to Algorithms 3rd Ed" (shown below). I haven’t found a line-by-line analysis like Cormen’s for this algorithm online, only multiplied summations over the outer and inner loops.

For each line of bubblesort(A), I have worked out the following run counts. I’d appreciate any guidance on whether this analysis is correct or incorrect; if incorrect, how should it be analyzed? Also, I do not see the best case where $T(n)$ is linear, as it appears the inner loop always runs completely. Maybe that applies only to the "optimized" bubble sort, which is not shown here? (A sketch of that variant appears after the derivation below.)

Times for each line, with a constant cost $c_k$ for line $k$:

Line 1: $ c_1 n$

Line 2: $ c_2 \sum_{j=2}^n j $

Line 3: $c_3 \sum_{j=2}^n (j - 1)$

Line 4: $c_4 \sum_{j=2}^n (j - 1)$ (worst case)

$T(n) = c_1 n + c_2 (n(n+1)/2 - 1) + c_3 (n(n-1)/2) + c_4 (n(n-1)/2)$

$T(n) = c_1 n + c_2 (n^2/2) + c_2 (n/2) - c_2 + c_3 (n^2/2) - c_3 (n/2) + c_4 (n^2/2) - c_4 (n/2)$

$T(n) = (c_2/2 + c_3/2 + c_4/2) n^2 + (c_1 + c_2/2 - c_3/2 - c_4/2) n - c_2$

$T(n) = an^2 + bn - c$
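Regarding the missing best case: the linear best case indeed belongs to the "optimized" variant that stops as soon as a full pass performs no swaps. A minimal C++ sketch of that variant (my own code, not from CLRS):

```cpp
#include <vector>
#include <utility>

// Bubble sort with an early-exit flag: if an entire inner pass performs
// no swap, the array is already sorted and we stop. On sorted input the
// first pass does n-1 comparisons and no swaps, so T(n) = Theta(n).
// The unoptimized version always runs both loops to completion, so even
// its best case is Theta(n^2), matching the analysis above.
void bubbleSortOptimized(std::vector<int>& a) {
    const int n = static_cast<int>(a.size());
    for (int i = 0; i < n - 1; ++i) {
        bool swapped = false;
        for (int j = n - 1; j > i; --j) {
            if (a[j] < a[j - 1]) {
                std::swap(a[j], a[j - 1]);
                swapped = true;
            }
        }
        if (!swapped) return;   // no inversions left: best case exits here
    }
}
```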

Bubble Sort from Cormen
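(The figure itself did not survive; reconstructed for reference from CLRS, Problem 2-2. These are the line numbers the cost analysis above refers to.)

```
BUBBLESORT(A)
1  for i = 1 to A.length - 1
2      for j = A.length downto i + 1
3          if A[j] < A[j - 1]
4              exchange A[j] with A[j - 1]
```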

Insertion Sort from Cormen
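(Likewise reconstructed for reference: the CLRS insertion-sort pseudocode whose line-by-line analysis the question imitates.)

```
INSERTION-SORT(A)
    for j = 2 to A.length
        key = A[j]
        // Insert A[j] into the sorted sequence A[1 .. j - 1].
        i = j - 1
        while i > 0 and A[i] > key
            A[i + 1] = A[i]
            i = i - 1
        A[i + 1] = key
```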

Runtime for Search in Unordered_map C++ [closed]

I have come across a lot of articles and questions suggesting that unordered_map is a lookup table that offers O(1) search time complexity, and I wonder how this is possible. They say lookup is amortized O(1) with an O(n) worst case. Even after an extensive search, I haven’t found out when the lookup time actually hits O(n), or how unordered_map is implemented under the hood.
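A minimal C++ sketch of the degenerate case (the BadHash functor is my own illustration): in practice, standard-library unordered_map is a bucket array with separate chaining, so if every key hashes to the same bucket, find() has to scan one long chain and lookup becomes O(n):

```cpp
#include <unordered_map>
#include <iostream>

// Deliberately terrible hash: every key lands in bucket 0, so the
// separate-chaining table degenerates into one long linked list and
// each find() becomes a linear scan -- the O(n) worst case.
struct BadHash {
    std::size_t operator()(int) const { return 0; }
};

int main() {
    std::unordered_map<int, int, BadHash> m;
    for (int i = 0; i < 20000; ++i) m[i] = i;
    // All 20000 keys share one bucket; lookups walk the whole chain.
    std::cout << "bucket of key 0 holds " << m.bucket_size(m.bucket(0))
              << " of " << m.size() << " keys\n";
    std::cout << m.find(19999)->second << "\n";  // slow despite "O(1)" lookup
}
```

With a well-distributed hash the average chain length stays constant, which is where the amortized O(1) claim comes from.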

Runtime error: How do I avoid it for a large test case?

I have been solving the CSES problem set, and I am stuck on the following problem: CSES-Labyrinth

Here is my solution :

```cpp
#include <bits/stdc++.h>
using namespace std;

int main() {
    int n, m, distance = 0, x = 0, y = 0;
    string str1 = "NO", str2 = "";
    cin >> n >> m;
    char grid[n + 1][m + 1];
    int vis[n + 1][m + 1];
    int dis[n + 1][m + 1];
    string path[n + 1][m + 1];
    int dx[] = {0, 0, 1, -1};
    int dy[] = {1, -1, 0, 0};
    char dz[] = {'R', 'L', 'D', 'U'};
    queue<pair<int, int>> s;

    for (int i = 0; i < n; i++)
        for (int j = 0; j < m; j++) {
            cin >> grid[i][j];
            if (grid[i][j] == 'A') {
                x = i; y = j;
            }
            vis[i][j] = 0;
            dis[i][j] = 0;
            path[i][j] = "";
        }

    s.push({x, y});
    while (!s.empty()) {
        pair<int, int> a = s.front();
        s.pop();
        if (grid[a.first][a.second] == 'B') {
            distance = dis[a.first][a.second];
            str1 = "YES";
            x = a.first; y = a.second;
            break;
        }
        if (vis[a.first][a.second] == 1)
            continue;
        else {
            vis[a.first][a.second] = 1;
            for (int i = 0; i < 4; i++) {
                if (a.first + dx[i] < n && a.first + dx[i] >= 0 &&
                    a.second + dy[i] < m && a.second + dy[i] >= 0 &&
                    (grid[a.first + dx[i]][a.second + dy[i]] == '.' ||
                     grid[a.first + dx[i]][a.second + dy[i]] == 'B')) {
                    s.push({a.first + dx[i], a.second + dy[i]});
                    dis[a.first + dx[i]][a.second + dy[i]] = dis[a.first][a.second] + 1;
                    path[a.first + dx[i]][a.second + dy[i]] = path[a.first][a.second] + dz[i];
                }
            }
        }
    }
    if (str1 == "YES") {
        cout << str1 << endl << distance << endl << path[x][y];
    } else
        cout << str1;
}
```

I am getting a Runtime error on 3 of the 15 test cases (the other 12 are accepted), and this is the best result I could reach. How do I avoid runtime errors? What is wrong with my solution?

Improve Prim’s algorithm runtime

Assume we run Prim’s algorithm knowing that all the weights are integers in the range $\{1, \ldots, W\}$, where $W$ is logarithmic in $|V|$. Can you improve Prim’s running time?

By ‘improving’ I mean reaching at least $O(|E|)$.

My question is: without using a priority queue, is this even possible? Currently, we learned that Prim’s runtime is $O(|E| \log |E|)$.

I proved that I can get to $O(|E|)$ when the weights come from $\{1, \ldots, W\}$ for constant $W$, but when $W$ is logarithmic in $|V|$, I can’t manage to prove or disprove it.

Thanks
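A sketch of one standard idea for small integer weights (my own code, not a known-optimal answer to the exercise): since every key Prim maintains lies in $\{1, \ldots, W\}$, a bucket queue (an array of $W+1$ lists, one per key value) can replace the binary heap. This reaches $O(|E| + |V| \cdot W)$, i.e. $O(|E| + |V| \log |V|)$ for $W = O(\log |V|)$, which beats $O(|E| \log |E|)$ but is still not the $O(|E|)$ the exercise asks for.

```cpp
#include <vector>
#include <limits>

// Prim with a bucket queue instead of a heap. Each extraction scans at
// most W+1 buckets, and each edge causes at most one lazy insertion, so
// the total running time is O(|E| + |V| * W).
struct Edge { int to, w; };
const int INF = std::numeric_limits<int>::max();

long long primBucket(int n, int W, const std::vector<std::vector<Edge>>& adj) {
    std::vector<int> key(n, INF);        // cheapest known edge into the tree
    std::vector<bool> inTree(n, false);
    std::vector<std::vector<int>> bucket(W + 1);  // bucket[k]: vertices with key k
    key[0] = 0;
    bucket[0].push_back(0);
    long long total = 0;
    for (int added = 0; added < n; ++added) {
        int u = -1;
        for (int k = 0; k <= W && u == -1; ++k) {      // find min non-stale entry
            while (!bucket[k].empty()) {
                int v = bucket[k].back();
                bucket[k].pop_back();
                if (!inTree[v] && key[v] == k) { u = v; break; }  // else stale
            }
        }
        if (u == -1) break;              // remaining vertices are unreachable
        inTree[u] = true;
        total += key[u];
        for (const Edge& e : adj[u]) {
            if (!inTree[e.to] && e.w < key[e.to]) {
                key[e.to] = e.w;
                bucket[e.w].push_back(e.to);  // lazy insert; old entry goes stale
            }
        }
    }
    return total;                        // weight of the MST (if connected)
}
```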

Is it correct or incorrect to say that an input, say $C$, causes an average run-time of an algorithm?

I was going through the text Introduction to Algorithms by Cormen et al., where I came across an excerpt which I felt required a bit of clarification.

As far as I have learned, while the best-case and worst-case time complexities of an algorithm arise for specific concrete inputs (say, an input $A$ causes the worst-case run time of an algorithm, or an input $B$ causes the best-case run time, asymptotically), there is no such concrete input which causes the average-case runtime, since the average-case run time of an algorithm is, by definition, the runtime of the algorithm averaged over all possible inputs. It is something which, I hope, exists only mathematically.

On the other hand, inputs which are neither best-case nor worst-case inputs are supposed to lie somewhere between the two extremes, and the performance of our algorithm on them is measured by none other than the average-case time complexity, since the average-case complexity lies between the worst-case and best-case complexities just as our input lies between the two extremes.

Is it correct or incorrect to say that an input, say $C$, causes an average run-time of an algorithm?

The excerpt from the text which made me ask such a question is as follows:

In the context of the analysis of quicksort,

In the average case, PARTITION produces a mix of “good” and “bad” splits. In a recursion tree for an average-case execution of PARTITION, the good and bad splits are distributed randomly throughout the tree. Suppose, for the sake of intuition, that the good and bad splits alternate levels in the tree, and that the good splits are best-case splits and the bad splits are worst-case splits. Figure (a) shows the splits at two consecutive levels in the recursion tree. At the root of the tree, the cost is $n$ for partitioning, and the subarrays produced have sizes $n-1$ and $0$: the worst case. At the next level, the subarray of size $n-1$ undergoes best-case partitioning into subarrays of size $(n-1)/2 - 1$ and $(n-1)/2$. Let’s assume that the boundary-condition cost is $1$ for the subarray of size $0$.

The combination of the bad split followed by the good split produces three subarrays of sizes $0$, $(n-1)/2 - 1$, and $(n-1)/2$ at a combined partitioning cost of $\Theta(n) + \Theta(n-1) = \Theta(n)$. Certainly, this situation is no worse than that in Figure (b), namely a single level of partitioning that produces two subarrays of size $(n-1)/2$, at a cost of $\Theta(n)$. Yet this latter situation is balanced!
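To make the excerpt’s intuition quantitative: under the alternating-splits assumption, every two levels of the tree cost $\Theta(n)$ and leave two subproblems of size roughly $(n-1)/2$, so the running time of this pattern obeys (in notation of my own, not the book’s)

$$T(n) = 2\,T\!\left(\frac{n-1}{2}\right) + \Theta(n) = O(n \log n),$$

the same bound as pure best-case splitting, only with a larger constant hidden inside the $\Theta(n)$ term.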

Longest palindrome substring in logarithmic runtime complexity

In a string of size $N$, the number of candidates for the longest palindromic substring is $O(N^2)$ (one per pair of endpoints). Therefore, the information-theoretic lower bound (IBT) should be $\lg(N^2) = 2 \lg N$, which is equivalent to a runtime complexity of $\lg N$.

By IBT I mean: if we use a comparison-based algorithm and think of the decision tree for it, the leaves of that tree are all of the possibilities ($N^2$ leaves), so the height of the tree is $\lg(N^2)$. However, I was not able to find any algorithm that solves this question in that runtime complexity; the best I have found is Manacher’s algorithm, which solves the question in linear time.
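For reference, a minimal C++ sketch of Manacher’s linear-time algorithm (a standard formulation, not taken from any particular source): separators are inserted so odd- and even-length palindromes are handled uniformly, and previously computed radii let the total expansion work stay linear.

```cpp
#include <string>
#include <vector>
#include <iostream>
#include <algorithm>

// Manacher's algorithm. After interleaving '#' separators, p[i] is the
// palindrome radius around position i of the transformed string t, and
// that radius equals the length of the corresponding palindrome in s.
std::string longestPalindrome(const std::string& s) {
    std::string t = "#";
    for (char c : s) { t += c; t += '#'; }
    const int n = static_cast<int>(t.size());
    std::vector<int> p(n, 0);
    int center = 0, right = 0;          // rightmost palindrome found so far
    int bestLen = 0, bestCenter = 0;
    for (int i = 0; i < n; ++i) {
        if (i < right)                  // reuse the mirror position's radius
            p[i] = std::min(right - i, p[2 * center - i]);
        while (i - p[i] - 1 >= 0 && i + p[i] + 1 < n &&
               t[i - p[i] - 1] == t[i + p[i] + 1])
            ++p[i];                     // expand past the known region
        if (i + p[i] > right) { center = i; right = i + p[i]; }
        if (p[i] > bestLen) { bestLen = p[i]; bestCenter = i; }
    }
    // Map back to s: start index is (center - radius) / 2.
    return s.substr((bestCenter - bestLen) / 2, bestLen);
}

int main() {
    std::cout << longestPalindrome("forgeeksskeegfor") << "\n";  // geeksskeeg
}
```

Note that any algorithm must at least read the input, so $\Omega(N)$ time is unavoidable regardless of what the decision-tree counting suggests.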

FindInstance runtime is too long

```
FindInstance[
 0 < x1 + x2 + x3 + x4 < 2 && 0 < x4 < x3 < x2 < x1 < 1 &&
  1/x1 + 1/(x1 - x2) + 1/(x1 - x3) + 1/(x1 - x4) < 1/(1 - x1) &&
  1/x2 - 1/(x1 - x2) + 1/(x2 - x3) + 1/(x2 - x4) < 1/(1 - x2) &&
  1/x3 - 1/(x1 - x3) - 1/(x2 - x3) + 1/(x3 - x4) < 1/(1 - x3) &&
  1/x4 - 1/(x1 - x4) - 1/(x2 - x4) - 1/(x3 - x4) < 1/(1 - x4),
 {x1, x2, x3, x4}, Reals]

FindInstance[
 2 < x1 + x2 + x3 + x4 < 2.1 && 0 < x4 < x3 < x2 < x1 < 1 &&
  1/x1 + 1/(x1 - x2) + 1/(x1 - x3) + 1/(x1 - x4) < 1/(1 - x1) &&
  1/x2 - 1/(x1 - x2) + 1/(x2 - x3) + 1/(x2 - x4) < 1/(1 - x2) &&
  1/x3 - 1/(x1 - x3) - 1/(x2 - x3) + 1/(x3 - x4) < 1/(1 - x3) &&
  1/x4 - 1/(x1 - x4) - 1/(x2 - x4) - 1/(x3 - x4) < 1/(1 - x4),
 {x1, x2, x3, x4}, Reals]
```

I’m trying to run the above commands. I suspect that the first has no solution while the second does; however, FindInstance does not return an answer, or at least running it for an hour didn’t produce anything. How can I speed up FindInstance, at least in this case?

I should note that I was able to quickly compute (<1 second) the 3-dimensional analog of the above.

What is the runtime of this pseudocode?

I need help figuring out the runtime of the following pseudocode. I believe it is O(|E| + |V|), but I’m not totally sure.

```
graphFunction(graph):
    q = Queue()
    for vertex in graph:
        q.enqueue(vertex)
    while q is not empty():
        v = q.dequeue()
        for every outgoing edge of v:
            edge.weight++
        for every incoming edge of v:
            edge.weight++
```
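A concrete C++ rendering of the pseudocode (the names and the adjacency-list layout are mine) makes the count explicit: every vertex is enqueued and dequeued once, and every edge is touched exactly twice (once from its tail’s outgoing list, once from its head’s incoming list), so the total work is indeed O(|V| + |E|).

```cpp
#include <vector>
#include <queue>
#include <cstdio>

struct Edge { int weight = 0; };

struct Graph {
    // out[v] / in[v] hold indices into `edges`.
    std::vector<std::vector<int>> out, in;
    std::vector<Edge> edges;
};

void graphFunction(Graph& g) {
    std::queue<int> q;
    for (int v = 0; v < static_cast<int>(g.out.size()); ++v)  // O(|V|)
        q.push(v);
    while (!q.empty()) {                      // loop body runs |V| times
        int v = q.front(); q.pop();
        for (int e : g.out[v]) g.edges[e].weight++;  // summed over v: |E|
        for (int e : g.in[v])  g.edges[e].weight++;  // summed over v: |E|
    }
}

int main() {
    Graph g;
    g.out.assign(2, {});
    g.in.assign(2, {});
    g.edges.emplace_back();        // one directed edge: 0 -> 1
    g.out[0].push_back(0);
    g.in[1].push_back(0);
    graphFunction(g);
    std::printf("edge weight = %d\n", g.edges[0].weight);  // 2: touched twice
}
```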