Runtime error: How do I avoid it for a large test case?

I have been solving the CSES problem set and I am stuck on the following problem: CSES-Labyrinth

Here is my solution:

#include <bits/stdc++.h>
using namespace std;

int main() {
    int n, m, distance = 0, x = 0, y = 0;
    string str1 = "NO", str2 = "";
    cin >> n >> m;
    char grid[n+1][m+1];
    int vis[n+1][m+1];
    int dis[n+1][m+1];
    string path[n+1][m+1];
    int dx[] = {0, 0, 1, -1};
    int dy[] = {1, -1, 0, 0};
    char dz[] = {'R', 'L', 'D', 'U'};
    queue<pair<int,int>> s;

    for (int i = 0; i < n; i++)
        for (int j = 0; j < m; j++) {
            cin >> grid[i][j];
            if (grid[i][j] == 'A') {
                x = i; y = j;
            }
            vis[i][j] = 0;
            dis[i][j] = 0;
            path[i][j] = "";
        }

    s.push({x, y});
    while (!s.empty()) {
        pair<int,int> a = s.front();
        s.pop();
        if (grid[a.first][a.second] == 'B') {
            distance = dis[a.first][a.second];
            str1 = "YES";
            x = a.first; y = a.second;
            break;
        }
        if (vis[a.first][a.second] == 1)
            continue;
        else {
            vis[a.first][a.second] = 1;
            for (int i = 0; i < 4; i++) {
                if (a.first + dx[i] < n && a.first + dx[i] >= 0 &&
                    a.second + dy[i] < m && a.second + dy[i] >= 0 &&
                    (grid[a.first + dx[i]][a.second + dy[i]] == '.' ||
                     grid[a.first + dx[i]][a.second + dy[i]] == 'B')) {
                    s.push({a.first + dx[i], a.second + dy[i]});
                    dis[a.first + dx[i]][a.second + dy[i]] = dis[a.first][a.second] + 1;
                    path[a.first + dx[i]][a.second + dy[i]] = path[a.first][a.second] + dz[i];
                }
            }
        }
    }
    if (str1 == "YES") {
        cout << str1 << endl << distance << endl << path[x][y];
    }
    else
        cout << str1;
}

I am getting a runtime error on 3 of the 15 test cases (the other 12 are accepted), and this is the best result I have been able to reach. How do I avoid runtime errors? What is wrong with my solution?
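I have not verified this against the failing tests, but two things in the code look expensive: path stores a full path string for every cell, and cells are only marked visited when popped, so the same cell can be pushed (and its path string copied) many times; on a large grid that can exhaust memory. A leaner sketch that stores only the move that reached each cell and rebuilds the path by walking back from B (assuming the usual CSES format with '.' free, '#' wall, 'A' start, 'B' target) might look like this:

#include <bits/stdc++.h>
using namespace std;

int main() {
    int n, m;
    cin >> n >> m;
    vector<string> grid(n);
    for (auto &row : grid) cin >> row;

    int sx = 0, sy = 0, ex = 0, ey = 0;
    for (int i = 0; i < n; i++)
        for (int j = 0; j < m; j++) {
            if (grid[i][j] == 'A') { sx = i; sy = j; }
            if (grid[i][j] == 'B') { ex = i; ey = j; }
        }

    const int dx[] = {0, 0, 1, -1};
    const int dy[] = {1, -1, 0, 0};
    const char dz[] = {'R', 'L', 'D', 'U'};

    // from[i][j] remembers the move that reached (i, j); '\0' means unvisited.
    vector<vector<char>> from(n, vector<char>(m, '\0'));
    queue<pair<int,int>> q;
    from[sx][sy] = 'S';                       // mark the start as visited
    q.push({sx, sy});

    while (!q.empty()) {
        auto [x, y] = q.front(); q.pop();
        if (x == ex && y == ey) break;
        for (int d = 0; d < 4; d++) {
            int nx = x + dx[d], ny = y + dy[d];
            if (nx < 0 || nx >= n || ny < 0 || ny >= m) continue;
            if (grid[nx][ny] == '#' || from[nx][ny] != '\0') continue;
            from[nx][ny] = dz[d];             // mark visited at push time, so each cell is queued once
            q.push({nx, ny});
        }
    }

    if (from[ex][ey] == '\0') { cout << "NO\n"; return 0; }

    // Walk back from B to A, collecting the moves, then reverse them.
    string path;
    for (int x = ex, y = ey; x != sx || y != sy; ) {
        char c = from[x][y];
        path += c;
        if (c == 'R') y--; else if (c == 'L') y++;   // undo the move that led here
        else if (c == 'D') x--; else x++;
    }
    reverse(path.begin(), path.end());
    cout << "YES\n" << path.size() << "\n" << path << "\n";
}

Here the bookkeeping is one char per cell and each cell is enqueued at most once, so memory stays proportional to the grid size.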

Improve Prim’s algorithm runtime

Assume we run Prim’s algorithm when we know all the weights are integers in the range {1, …, W}, where W is logarithmic in |V|. Can you improve Prim’s running time?

By ‘improving’, I mean reaching at least $O(|E|)$.

My question is: without using a priority queue, is it even possible? So far we have learned that Prim’s runtime is $O(|E|\log|E|)$.

I proved that I can get to $O(|E|)$ when the weights are from {1, …, W} and W is constant, but when W is logarithmic in |V| I cannot manage to prove or disprove it.
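For what it's worth, one standard way to exploit the bounded weight range (this may or may not be the approach intended in class; the names below are my own) is to replace the binary heap with an array of W+1 buckets indexed by key: decrease-key becomes O(1), and each extract-min scans at most W+1 buckets, so the total work is O(|E| + |V|·W). For constant W on a connected graph that is O(|E|), but for W = Θ(log |V|) it only gives O(|E| + |V| log |V|), which is exactly where I am stuck.

#include <bits/stdc++.h>
using namespace std;

// Prim's algorithm with a bucket "priority queue" for integer weights in {1, ..., W}.
// adj[u] is a list of (neighbour, weight) pairs; returns the MST weight of the
// component containing vertex 0.
long long primWithBuckets(int n, int W, const vector<vector<pair<int,int>>>& adj) {
    const int INF = W + 1;                       // sentinel key, larger than any weight
    vector<int> key(n, INF);                     // cheapest edge connecting each vertex to the tree
    vector<bool> inTree(n, false);
    vector<list<int>> bucket(W + 1);             // bucket[k] holds vertices whose key is k
    vector<list<int>::iterator> where(n);        // each vertex's position inside its bucket

    key[0] = 0;                                  // start from vertex 0 (arbitrary)
    bucket[0].push_back(0);
    where[0] = prev(bucket[0].end());

    long long total = 0;
    for (int picked = 0; picked < n; picked++) {
        int k = 0;                               // extract-min: scan at most W+1 buckets
        while (k <= W && bucket[k].empty()) k++;
        if (k > W) break;                        // no reachable vertex left
        int u = bucket[k].front();
        bucket[k].pop_front();
        inTree[u] = true;
        total += k;

        for (auto [v, w] : adj[u]) {
            if (!inTree[v] && w < key[v]) {
                if (key[v] != INF) bucket[key[v]].erase(where[v]);  // O(1) decrease-key
                key[v] = w;
                bucket[w].push_back(v);
                where[v] = prev(bucket[w].end());
            }
        }
    }
    return total;
}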

Thanks

Is it correct or incorrect to say that an input, say $C$, causes the average run-time of an algorithm?

I was going through the text Introduction to Algorithms by Cormen et al. when I came across an excerpt which I felt required a bit of clarification.

As far as I have learned, the best-case and worst-case time complexities of an algorithm arise for specific concrete inputs (say an input $A$ causes the worst-case run time of an algorithm, or an input $B$ causes the best-case run time, asymptotically), but there is no such concrete input which causes the average-case run time, since the average-case run time of an algorithm is, by definition, the run time averaged over all possible inputs. It is something which, I hope, exists only mathematically.

But on the other hand, an input which is neither a best-case nor a worst-case input is supposed to lie somewhere between the two extremes, and the algorithm's performance on it is measured by none other than the average-case time complexity, since the average-case complexity lies between the worst-case and best-case complexities just as such an input lies between the two extremes.
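As a toy illustration of this tension (my own example, not from the text), consider linear search for a key that is equally likely to sit at any of the $n$ positions of an array:

$$T_{\text{best}} = 1, \qquad T_{\text{worst}} = n, \qquad T_{\text{avg}} = \frac{1}{n}\sum_{i=1}^{n} i = \frac{n+1}{2}.$$

For odd $n$, an input with the key at position $(n+1)/2$ happens to take exactly the average number of comparisons, but the average itself is defined by the whole distribution of inputs rather than being caused by that one input.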

Is it correct or incorrect to say that an input, say $C$, causes the average run-time of an algorithm?

The excerpt from the text which made me ask such a question is as follows:

In the context of the analysis of quicksort:

In the average case, PARTITION produces a mix of “good” and “bad” splits. In a recursion tree for an average-case execution of PARTITION, the good and bad splits are distributed randomly throughout the tree. Suppose, for the sake of intuition, that the good and bad splits alternate levels in the tree, and that the good splits are best-case splits and the bad splits are worst-case splits. Figure (a) shows the splits at two consecutive levels in the recursion tree. At the root of the tree, the cost is $n$ for partitioning, and the subarrays produced have sizes $n-1$ and $0$: the worst case. At the next level, the subarray of size $n-1$ undergoes best-case partitioning into subarrays of size $(n-1)/2 - 1$ and $(n-1)/2$. Let’s assume that the boundary-condition cost is $1$ for the subarray of size $0$.

The combination of the bad split followed by the good split produces three subarrays of sizes $0$, $(n-1)/2 - 1$, and $(n-1)/2$ at a combined partitioning cost of $\Theta(n)+\Theta(n-1)=\Theta(n)$. Certainly, this situation is no worse than that in Figure (b), namely a single level of partitioning that produces two subarrays of size $(n-1)/2$, at a cost of $\Theta(n)$. Yet this latter situation is balanced!

Longest palindrome substring in logarithmic runtime complexity

In a string of size N, the number of candidates for the longest palindromic substring is N^2. Therefore, the information theoretic lower bound (IBT) should be lg(N^2), which corresponds to a runtime complexity of O(lg N).

By IBT I mean that if we use a comparison-based algorithm and think of the decision tree it induces, the leaves of that tree are all of the possible answers (N^2 leaves), so the height of the tree is lg(N^2). However, I was not able to find any algorithm that solves this question with that runtime complexity; the best I have found is Manacher’s algorithm, which solves the question in linear time.
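For reference, here is a sketch of Manacher's linear-time algorithm mentioned above (the '#'-padding scheme and the variable names are one common presentation, not the only one):

#include <bits/stdc++.h>
using namespace std;

// Returns the longest palindromic substring of s in O(N) time.
// The string is padded with '#' so that odd- and even-length palindromes are
// handled uniformly; p[i] is the palindrome radius around position i of the
// padded string and equals that palindrome's length in s.
string longestPalindromicSubstring(const string& s) {
    string t = "#";
    for (char c : s) { t += c; t += '#'; }            // "aba" -> "#a#b#a#"
    int n = (int)t.size();
    vector<int> p(n, 0);
    int center = 0, right = 0;                        // rightmost palindrome found so far
    int bestLen = 0, bestCenter = 0;
    for (int i = 0; i < n; i++) {
        if (i < right) p[i] = min(right - i, p[2 * center - i]);   // reuse the mirrored radius
        while (i - p[i] - 1 >= 0 && i + p[i] + 1 < n &&
               t[i - p[i] - 1] == t[i + p[i] + 1])
            p[i]++;                                   // expand around i
        if (i + p[i] > right) { center = i; right = i + p[i]; }
        if (p[i] > bestLen) { bestLen = p[i]; bestCenter = i; }
    }
    int start = (bestCenter - bestLen) / 2;           // map back to an index in s
    return s.substr(start, bestLen);
}

int main() {
    cout << longestPalindromicSubstring("forgeeksskeegfor") << "\n";   // prints "geeksskeeg"
}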

FindInstance runtime is too long

FindInstance[
 0 < x1 + x2 + x3 + x4 < 2 && 0 < x4 < x3 < x2 < x1 < 1 &&
  1/x1 + 1/(x1 - x2) + 1/(x1 - x3) + 1/(x1 - x4) < 1/(1 - x1) &&
  1/x2 - 1/(x1 - x2) + 1/(x2 - x3) + 1/(x2 - x4) < 1/(1 - x2) &&
  1/x3 - 1/(x1 - x3) - 1/(x2 - x3) + 1/(x3 - x4) < 1/(1 - x3) &&
  1/x4 - 1/(x1 - x4) - 1/(x2 - x4) - 1/(x3 - x4) < 1/(1 - x4),
 {x1, x2, x3, x4}, Reals]

FindInstance[
 2 < x1 + x2 + x3 + x4 < 2.1 && 0 < x4 < x3 < x2 < x1 < 1 &&
  1/x1 + 1/(x1 - x2) + 1/(x1 - x3) + 1/(x1 - x4) < 1/(1 - x1) &&
  1/x2 - 1/(x1 - x2) + 1/(x2 - x3) + 1/(x2 - x4) < 1/(1 - x2) &&
  1/x3 - 1/(x1 - x3) - 1/(x2 - x3) + 1/(x3 - x4) < 1/(1 - x3) &&
  1/x4 - 1/(x1 - x4) - 1/(x2 - x4) - 1/(x3 - x4) < 1/(1 - x4),
 {x1, x2, x3, x4}, Reals]

I’m trying to run the above commands. I suspect that the first has no solution while the second does; however, FindInstance does not return an answer, or at least running it for an hour didn’t produce anything. How can I speed up FindInstance, at least in this case?

I should note that I was able to quickly compute (<1 second) the 3-dimensional analog of the above.

What is the runtime of this pseudocode?

I need help figuring out the runtime of the following pseudocode. I believe it is O(|E| + |V|), but I’m not totally sure…

graphFunction(graph):
    q = Queue()
    for vertex in graph:
        q.enqueue(vertex)
    while q is not empty():
        v = q.dequeue()
        for every outgoing edge of v:
            edge.weight++
        for every incoming edge of v:
            edge.weight++
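To make the counting concrete, here is one possible C++ rendering of the pseudocode (the Edge struct and the adjacency-list layout are my own assumptions; building the lists from an edge list is itself O(|V| + |E|)). Every vertex is enqueued and dequeued exactly once, and every edge's weight is incremented exactly twice, once from the out-list of its tail and once from the in-list of its head, which is where the O(|V| + |E|) estimate comes from.

#include <bits/stdc++.h>
using namespace std;

struct Edge { int from, to, weight; };

void graphFunction(int n, vector<Edge>& edges) {
    vector<vector<int>> out(n), in(n);               // adjacency lists of edge indices
    for (int i = 0; i < (int)edges.size(); i++) {
        out[edges[i].from].push_back(i);
        in[edges[i].to].push_back(i);
    }

    queue<int> q;
    for (int v = 0; v < n; v++) q.push(v);           // |V| enqueues

    while (!q.empty()) {
        int v = q.front(); q.pop();
        for (int e : out[v]) edges[e].weight++;      // each edge touched once here...
        for (int e : in[v])  edges[e].weight++;      // ...and once here: 2|E| increments total
    }
}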

Recurrence and Runtime of Divide and Conquer Flavored Bogo Sort

Here we propose a way to reduce Bogo Sort’s runtime from factorial to exponential using a divide and conquer approach. This is something we have likely all pondered on extensively.

https://en.wikipedia.org/wiki/Bogosort.
Input: An unsorted array A[1…n].
Output: A sorted array.

Let’s remind ourselves of why normal Bogo Sort’s runtime is O(n!).

Let’s say we were to randomly guess the smallest element of an array. What are our odds of guessing right? Our odds would be 1/n, of course. Let’s say we guessed right! Now let’s try to randomly guess the second smallest element of the array, after guessing the first one right. What are our odds of guessing right? Our odds would be 1/(n-1), of course.

$$\frac{1}{n}\cdot\frac{1}{n-1}\cdot\frac{1}{n-2}\cdots\frac{1}{n-n+1}$$

The expectation is $n!$.

Let’s use a divide and conquer strategy on Bogo Sort to improve the runtime. We will use a modified Merge Sort to achieve this. Recall that merge sort merges two sorted arrays recursively. Can Bogo Sort take advantage of the fact that we are merging two already sorted arrays? Say we randomly guess the smallest element of two sorted arrays. We know that the smallest element overall can only ever be the front element of one of the two arrays. If we randomly guess between just those two front elements, what are our odds of guessing right? Our odds would be 1/2. This sounds better than 1/n, but let’s look at an example recurrence tree.
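For concreteness, here is one possible reading of that merge step in code (a sketch with my own function names, not taken from anywhere): build a candidate merge by repeatedly picking one of the two front elements at random, and restart the whole merge if the result is not sorted. Whenever both arrays are non-empty the pick is correct with probability 1/2 (and it is forced once one side is exhausted), which is the 1/2-per-guess figure used below.

#include <bits/stdc++.h>
using namespace std;

static mt19937 rng(random_device{}());

// Randomly interleave two sorted arrays and retry until the result is sorted.
vector<int> bogoMerge(const vector<int>& a, const vector<int>& b) {
    while (true) {
        vector<int> out;
        size_t i = 0, j = 0;
        while (i < a.size() || j < b.size()) {
            bool takeA;
            if (i == a.size())      takeA = false;      // only b has elements left
            else if (j == b.size()) takeA = true;       // only a has elements left
            else                    takeA = rng() & 1;  // guess between the two fronts
            out.push_back(takeA ? a[i++] : b[j++]);
        }
        if (is_sorted(out.begin(), out.end())) return out;   // otherwise start the merge over
    }
}

vector<int> bogoMergeSort(const vector<int>& v) {
    if (v.size() <= 1) return v;                        // base case: already sorted
    size_t mid = v.size() / 2;
    vector<int> left(v.begin(), v.begin() + mid);
    vector<int> right(v.begin() + mid, v.end());
    return bogoMerge(bogoMergeSort(left), bogoMergeSort(right));
}

With that picture in mind, the recurrence tree below counts the expected number of merge attempts at each level.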

Example recurrence tree:
n = 4; brackets [ ] represent a merge being performed.

$$\left[\frac{1}{2}\cdot\frac{1}{2}\cdot\frac{1}{2}\cdot\frac{1}{2}\right]$$
$$\left[\frac{1}{2}\cdot\frac{1}{2}\right] + \left[\frac{1}{2}\cdot\frac{1}{2}\right]$$
$$\left[\frac{1}{2}\right] + \left[\frac{1}{2}\right] + \left[\frac{1}{2}\right] + \left[\frac{1}{2}\right]$$

[Depth 0] Expectation is 16
[Depth 1] Expectation is 4 + 4 = 8
[Depth 2] Expectation is 1 + 1 + 1 + 1 = 4

Note that in our base case we only need to sort two values (a single guess between two values), a total of n times. From the above, we see the expected runtime of D&C Bogo Sort:
$$O\left(2^n + 2^{\frac{n}{2}} + 2^{\frac{n}{4}} + \cdots\right)$$

I need help confirming the runtime. My original assumption was that the runtime is simply O(2^n). However, I believe I need a log( ) in there somewhere. I am unsure how to apply the Master Theorem to the O(2^n) term in my hypothesized recurrence:

$$T(n) = 2T\left(\frac{n}{2}\right) + O(2^n)$$
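For what it's worth, unrolling this recurrence level by level (my own back-of-the-envelope, treating a merge of m elements as costing $2^m$ expected attempts and ignoring the exact constant at the leaves) gives

$$2^n + 2 \cdot 2^{\frac{n}{2}} + 4 \cdot 2^{\frac{n}{4}} + \cdots + n \cdot 2 = O(2^n),$$

since already the second term, $2 \cdot 2^{n/2}$, is $o(2^n)$ and the later terms shrink even faster. Under that reading no extra log factor appears, but I may be mis-modelling the levels.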

We have successfully reduced Bogo Sort from factorial runtime to exponential runtime using divide and conquer. Let me know how I can improve my analysis.

Why is the output a runtime error?


#include <stdio.h>

struct node {
    int data;
    struct node *next;
};

void make_node(struct node *nd, int val) {
    nd->data = val;
    nd->next = NULL;
    printf("%d", nd->data);
}

void print_node(struct node *nd) {
    printf("%d", nd->data);
}

int main() {
    int val = 5;
    struct node *nd;
    make_node(nd, val);
    print_node(nd);
    return 0;
}

Whereas, if I run the code by declaring struct node nd in main and passing &nd as the argument, it works fine. I understand why that works, but why is the code above incorrect? Please ignore any of my mistakes, as I am a beginner in C and this is my first question on Stack Exchange.

What can cause low cost and high runtime in EXPLAIN ANALYZE?

I have a database that pretty consistently runs queries at a ratio of roughly cost/10 milliseconds. There are a couple of queries where EXPLAIN ANALYZE reports a cost of 2000 (which I’d expect to land somewhere in the ballpark of 200 ms), but the runs take multiple minutes.

My first thought is that some other activity is bogging down Postgres and causing this (either other processes on the machine or concurrent database activity). Is there anything else I should be looking into? Am I mistaken to expect similar cost-to-time ratios across different queries?