Time complexity of combinations of n pairs of parentheses

I have the following code snippet for combinations of n pairs of parentheses.

    def parens(n):
        if n <= 0:
            return ['']
        else:
            combinations = []
            helper('', n, n, combinations)
            return combinations

    def helper(string, left, right, combinations):
        if left <= 0 and right <= 0:
            combinations.append(string)
        else:
            if left > 0:
                helper(string + '(', left - 1, right, combinations)
            if right > left and right > 0:
                helper(string + ')', left, right - 1, combinations)

What is a reasonable estimate of its time complexity?

My trial:

  1. $ (2n)!/(n!\,n!)$ , since the strings are permutations of $ 2n$ symbols with $ n$ repeats of each, subject to extra restrictions (an upper bound)
  2. Resolve the recurrence $ T(n) = 2T(n-1) \Rightarrow O(2^n)$
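
To sanity-check these bounds empirically (my own instrumentation of the snippet above, not a proof), one can count the recursive calls and the strings produced, and compare them with the $ (2n)!/(n!\,n!) = \binom{2n}{n}$ bound and with the Catalan number $ \binom{2n}{n}/(n+1)$ , which is the number of valid strings:

    from math import comb

    def count_calls(n):
        """Instrumented version of parens()/helper(): counts recursive calls
        and the number of strings produced, for comparison with the bounds."""
        calls = 0
        results = []

        def helper(string, left, right):
            nonlocal calls
            calls += 1
            if left == 0 and right == 0:
                results.append(string)
            else:
                if left > 0:
                    helper(string + '(', left - 1, right)
                if right > left:
                    helper(string + ')', left, right - 1)

        helper('', n, n)
        return calls, len(results)

    for n in range(1, 11):
        calls, produced = count_calls(n)
        catalan = comb(2 * n, n) // (n + 1)   # number of valid strings
        central = comb(2 * n, n)              # the (2n)!/(n! n!) upper bound
        print(n, calls, produced, catalan, central)

Note that each call also copies the string, which costs up to O(n), so the total running time is O(n) times the number of calls.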

What is the run time of this algorithm, written in pseudocode?

    count = 0
    for i = 1 to n:
        for j = 1 to i:
            count += 1

So from my understanding, we can express this as two nested summations, with the $ j$ loop nested inside the $ i$ loop, as follows:

$ \sum\limits_{i=1}^{n} \sum\limits_{j=1}^{i} 1$

since incrementing count is one $ O(1)$ operation.

Then, we can manipulate the above summation to:

= $ \sum\limits_{i=1}^{n} (i - 1 + 1)$ , using the summation property $ \sum\limits_{j=a}^{b} 1 = b - a + 1$

= $ \sum\limits_{i=1}^{n} i$

= $ \frac{n(n+1)}{2}$ = $ O(n^2)$
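
As a quick check of the closed form (a throwaway script, not part of the question's pseudocode), one can run the loops literally and compare the count against $ \frac{n(n+1)}{2}$ :

    def count_ops(n):
        """Run the nested loops literally and count how many times `count += 1` executes."""
        count = 0
        for i in range(1, n + 1):
            for j in range(1, i + 1):
                count += 1
        return count

    for n in (1, 5, 10, 100):
        assert count_ops(n) == n * (n + 1) // 2   # matches the closed form
        print(n, count_ops(n))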

Is this the correct approach?

Is the Clique Problem polynomial time reducible to the graph-Homomorphism Problem and if so what does the reduction look like?

Is the k-Clique problem (given a graph G and a natural number k, does G contain a clique of size k?)
polynomial-time reducible to the graph-Homomorphism problem (given two graphs G and H, is there a homomorphism from G to H?)

And if so what would the reduction look like?

Since I am a little confused by the subject, is the following correct?

A polynomial-time reduction from Clique to graph-Homomorphism is a function that can be computed in polynomial time and that maps every yes-instance of Clique to a yes-instance of graph-Homomorphism, and likewise every no-instance to a no-instance.
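
To make that concrete, here is a minimal sketch (in Python, with a made-up graph representation) of what such a reduction function could look like. The construction is the one I believe is standard, assuming simple, loop-free graphs: map the Clique instance (G, k) to the Homomorphism instance (K_k, G). Since K_k is complete and has no loops, any homomorphism from K_k into G must send its k vertices to k distinct, pairwise adjacent vertices of G, i.e. a k-clique.

    from itertools import combinations

    def clique_to_homomorphism(G, k):
        """Map a Clique instance (G, k) to a graph-Homomorphism instance (H1, H2).
        G is (vertices, edges) with undirected edges given as frozensets.
        Candidate reduction: output (K_k, G); a homomorphism K_k -> G exists
        exactly when G contains a k-clique (assuming G is simple and loop-free)."""
        k_vertices = list(range(k))
        k_edges = {frozenset(pair) for pair in combinations(k_vertices, 2)}
        K_k = (k_vertices, k_edges)
        return K_k, G   # computable in time polynomial in |G| and k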

Time complexity of a 2-heap question

The problem statement is pretty straightforward: given an array of integers and a window size, return an array of doubles containing the median of each window.

arr = 1, 3, 5, 10, 6, 9, 2

k = 3

would yield a result of 3, 5, 6, 9, 6

Using std::priority_queue (in C++) for the heap implementation, there's a minHeap and a maxHeap. In a single iteration, insert the value entering our window into the correct heap, rebalance as necessary, add the median to the result (if the window is big enough), then remove the value which is leaving the window from whichever heap it's in. This removal could require moving all but one element of that heap to the other heap and then moving them back.

The lesson I saw this in actually inherits from priority_queue and implements remove functionality: a linear search [O(k)], then removal of the item [O(log k)]. It claims O(n * k) complexity, as at each iteration the insertion is O(log k) and the search to remove is O(k). I assume that in an interview, extending a heap beyond its traditional form is not only unnecessary but probably frowned upon.

I'm curious about the complexity at which the version without the direct removal would run. The O(n) part is obvious, but the sub-operations are not as clear to me. A heap will generally have about k/2 items in it. In the worst case you delete [O(log k)] and then insert [O(1)] each one. My mind is telling me O(n * k log k), but I wouldn't bet my house on it.

For the record: Not looking for an optimal solution – just the runtime of this one.

    #include <queue>
    #include <vector>

    class SlidingWindowMedian {
    public:
        virtual std::vector<double> findSlidingWindowMedian(const std::vector<int> &nums, int k) {
            std::vector<double> result{};

            int left{0};
            for (int right = 0; right < nums.size(); right++) {
                insert(nums[right]);

                if (right >= k - 1) {
                    result.push_back(getMedian());
                    remove(nums[left++]);
                }
            }

            return result;
        }

        void insert(int num)
        {
            if (maxHeap.empty() || maxHeap.top() >= num) {
                maxHeap.push(num);
            } else {
                minHeap.push(num);
            }

            if (maxHeap.size() > minHeap.size() + 1) {
                minHeap.push(maxHeap.top()); maxHeap.pop();
            } else if (minHeap.size() > maxHeap.size()) {
                maxHeap.push(minHeap.top()); minHeap.pop();
            }
        }

        double getMedian()
        {
            if (maxHeap.size() == minHeap.size()) {
                return (maxHeap.top() + minHeap.top()) / 2.0;
            } else if (maxHeap.size() > minHeap.size()) {
                return maxHeap.top();
            } else {
                return minHeap.top();
            }
        }

        // Faster would be to extend priority queue to support remove
        void remove(int num)
        {
            if (maxHeap.top() >= num) {
                while (maxHeap.top() != num) {
                    minHeap.push(maxHeap.top()); maxHeap.pop();
                }

                if (maxHeap.top() == num) maxHeap.pop();

                while (minHeap.size() > maxHeap.size()) {
                    maxHeap.push(minHeap.top()); minHeap.pop();
                }
            } else {
                while (!minHeap.empty() && minHeap.top() != num) {
                    maxHeap.push(minHeap.top()); minHeap.pop();
                }

                if (minHeap.top() == num) minHeap.pop();

                while (maxHeap.size() > minHeap.size() + 1) {
                    minHeap.push(maxHeap.top()); maxHeap.pop();
                }
            }
        }

        std::priority_queue<int, std::vector<int>, std::greater<int>> minHeap{};
        std::priority_queue<int, std::vector<int>, std::less<int>> maxHeap{};
    };

How to maximize f while minimizing g at the same time?

Lately, I have been dealing with a problem that I did not know how to name, which made it hard to solve properly.

The problem is as follows: let's assume that we have a set of elements A, and we have two functions f and g, where for any subset $ B \subseteq A$ with $ |B| < k$ (k is a constraint):

  1. f(B): estimates the gain obtained by the set B.
  2. g(B): estimates the loss obtained by the set B.

In our problem we have two strategies S1 and S2, which are used depending on the circumstances of the environment:

  1. S1: selects a set B that maximizes the gain
  2. S2: selects a set B that minimizes the loss
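
Written out in symbols (my notation, just restating the two strategies above):

$ S_1:\ B_1 = \arg\max_{B \subseteq A,\ |B| < k} f(B) \qquad\qquad S_2:\ B_2 = \arg\min_{B \subseteq A,\ |B| < k} g(B)$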

My strategy is a hybrid strategy, which selects two sets B1 and B2 whose combined size |B1| + |B2| respects the constraint k.

NB: given that there are several circumstances, sometimes S1 works more efficiently, and in some cases S2 works better.

Does anyone know what type of problem this is? Is there any documentation about it? Since it is an NP-hard problem, is there a way to find an approximation within a guaranteed factor of the optimal solution?

Algorithm for finding an irreducible kernel of a DAG in O(V*e) time, where e is number of edges in output

An irreducible kernel is the term used in Handbook of Theoretical Computer Science (HTCS), Volume A “Algorithms and Complexity” in the chapter on graph algorithms. Given a directed graph G=(V,E), an irreducible kernel is a graph G’=(V,E’) where E’ is a subset of E, and both G and G’ have the same reachability (i.e. their transitive closures are the same), and removing any edge from E’ would not satisfy this condition, i.e. E’ is minimal (although not necessarily the minimum size possible).

A minimum equivalent graph is similar, except it also has the fewest number of edges among all such graphs. Both of these concepts are similar to a transitive reduction, but not the same because a transitive reduction is allowed to have edges that are not in E.
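
For intuition only (this is certainly not the O(V*e) algorithm asked about below), here is a naive Python sketch that builds some irreducible kernel straight from the definition: try each edge and drop it if its head stays reachable from its tail without it. A kept edge can never become droppable later, because removals only shrink reachability, so the result is minimal. This baseline runs in roughly O(E*(V+E)) time.

    from collections import defaultdict

    def naive_irreducible_kernel(edges):
        """Return a minimal subset E' of `edges` (directed pairs) with the same
        reachability as `edges`, following the definition directly.
        Roughly O(E * (V + E)) -- a baseline, not the O(V*e) algorithm in question."""
        kept = set(edges)
        for (u, v) in list(edges):
            kept.discard((u, v))
            if not _reachable(kept, u, v):
                kept.add((u, v))   # no alternate path from u to v: the edge must stay
        return kept

    def _reachable(edges, source, target):
        """Iterative DFS over the directed edge set: is `target` reachable from `source`?"""
        adj = defaultdict(list)
        for (a, b) in edges:
            adj[a].append(b)
        stack, seen = [source], {source}
        while stack:
            x = stack.pop()
            if x == target:
                return True
            for y in adj[x]:
                if y not in seen:
                    seen.add(y)
                    stack.append(y)
        return False

For example, naive_irreducible_kernel([(1, 2), (2, 3), (1, 3)]) drops (1, 3) and keeps the other two edges.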

HTCS says that there is an algorithm to calculate an irreducible kernel of a directed acyclic graph in time O(V*e), where V is the number of vertices, and e is the number of edges in the irreducible kernel, i.e. the output of the algorithm. The reference given for this is the following paper, which I have not been able to find an online source for yet (links or other sources welcome — I can ask at a research library soon if nothing turns up).

Noltemeier, H., “Reduction of directed graphs to irreducible kernels”, Discussion paper 7505, Lehrstuhl Mathematische Verfahrenforschung (Operations Research) und Datenverarbeitung, Univ. Gottingen, Gottingen, 1975.

Does anyone know what this algorithm is? It surprises me a little that it includes the number of edges in the output graph, since that would mean it should run in O(n^2) time given an input graph with O(n^2) edges that represents a total order, e.g. all nodes are assigned integers from 1 up to n, and there is an edge from node i to j if i < j. That doesn’t seem impossible, mind you, simply surprising.

Improving time complexity from O(log n/loglog n) to O((log ((nloglog n)/log n))/loglog ((nloglog n)/log n))

Suppose I have an algorithm whose running time is $ O(f(n))$ where $ f(n) = O\left(\frac{\log n}{\log\log n}\right)$

And suppose I can change this running time in $ O(1)$ steps into $ O\left(f\left(\frac{n}{f(n)}\right)\right)$ , i.e. I can get an algorithm whose running time is $ O(g(n)) = O\left(\frac{\log\frac{n}{\frac{\log n}{\log\log n}}} {\log\log\frac{n}{\frac{\log n}{\log\log n}}}\right) = O\left(\frac{\log\frac{n\log\log n}{\log n}} {\log\log\frac{n\log\log n}{\log n}}\right)$ .

I'm pretty sure that $ g(n) < f(n)$ for big enough $ n$ (based on plots in Wolfram Alpha), but I wasn't able to prove it.
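
Expanding the numerator and denominator of $ g(n)$ (this is as far as I got; I may be off in the lower-order terms):

$ \log\frac{n\log\log n}{\log n} = \log n - \log\log n + \log\log\log n$

$ \log\log\frac{n\log\log n}{\log n} = \log\left(\log n - \log\log n + \log\log\log n\right) = \log\log n + \log\left(1 - \frac{\log\log n - \log\log\log n}{\log n}\right) = \log\log n - O\left(\frac{\log\log n}{\log n}\right)$

so, if the algebra is right,

$ g(n) = \frac{\log n - \log\log n + \log\log\log n}{\log\log n - O\left(\frac{\log\log n}{\log n}\right)} = \frac{\log n}{\log\log n} - 1 + o(1) = f(n) - 1 + o(1)$ ,

which would suggest that $ g(n) < f(n)$ eventually, but also that $ g(n) = \Theta(f(n))$ rather than $ o(f(n))$ .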

My questions are:

  1. Is $ g(n) < f(n)$ in fact true (starting from some n)?

  2. Is $ g(n)$ asymptotically better than $ f(n)$ , i.e. is $ g(n) = o(f(n))$ ?

  3. Assuming this is asymptotically better, I can do this step again and further improve the running time of the algorithm. Meaning that in one more step I can make my algorithm run in time $ O\left(f\left(\frac{n}{f\left(\frac{n}{f(n)}\right)}\right)\right)$ , and I can repeat this process as many times as I want. How many times should the process be repeated to get the best asymptotic running time, and what will it be? Obviously, repeating it $ O(f(n))$ times will already cost $ O(f(n))$ for the repetition of this process alone and will not improve the overall algorithm complexity.

Time complexity analysis of shortest path algorithm

Below is Dijkstra’s algorithm from CLRS:

[figure: Dijkstra's algorithm pseudocode from CLRS]

In the time complexity analysis of Dijkstra, CLRS says that RELAX() contains a call to DECREASE-KEY(), which essentially decreases the shortest-path estimate (the key) associated with a node stored in the priority queue, implemented as a binary min-heap.

[figure from CLRS]

Now in the DAG-SHORTEST-PATHS() algorithm below, the book says the inner loop takes O(1) per iteration. But I think we would need to run a topological sort at the time of every RELAX(). So the inner loop won't be $ \Theta(1)$ and the overall complexity will be $ \Theta(E^2)$ or something else, but definitely not $ \Theta(V+E)$ as stated.

[figure: DAG-SHORTEST-PATHS pseudocode from CLRS]
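
To make my concern concrete, here is how I read the structure of DAG-SHORTEST-PATHS (a rough Python sketch of the pseudocode, not the book's text), where the topological sort appears once, up front, rather than inside RELAX():

    def dag_shortest_paths(graph, weight, source):
        """Sketch of DAG-SHORTEST-PATHS: `graph` maps each vertex to its successor list,
        `weight` maps (u, v) to the edge weight.  The topological sort runs once,
        Theta(V + E); the loop body then relaxes each outgoing edge in O(1)."""
        order = topological_sort(graph)
        dist = {v: float('inf') for v in graph}
        dist[source] = 0
        for u in order:                      # each vertex, in topological order
            for v in graph[u]:               # each edge (u, v) is relaxed exactly once
                if dist[u] + weight[(u, v)] < dist[v]:
                    dist[v] = dist[u] + weight[(u, v)]   # RELAX, no priority queue involved
        return dist

    def topological_sort(graph):
        """Iterative DFS topological sort of a DAG given as {vertex: [successors]}."""
        seen, postorder = set(), []
        for root in graph:
            if root in seen:
                continue
            seen.add(root)
            stack = [(root, iter(graph[root]))]
            while stack:
                node, successors = stack[-1]
                for nxt in successors:
                    if nxt not in seen:
                        seen.add(nxt)
                        stack.append((nxt, iter(graph[nxt])))
                        break
                else:
                    postorder.append(node)   # all successors finished
                    stack.pop()
        return list(reversed(postorder))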

Am I correct with this?

[ Psychology ] Open Question : Why is it that life sucks, so bad, so much of the time?

Most people are lying.  Most people will break the rules as much as they can. Most people will cheat on their taxes if they can. Most people do live lives contrary to their religion. Most students are cheating. Most spouses do not love their spouse. Most politicians become criminals. Most people use some form of drug, in excess, to escape. Most people are angry at other people, and want others to not exist. And if you vehemently disagree with this, please keep to yourself, as you are committing #1 on this list.  I just want to know why.