## Show that the best case time complexity of Quicksort is $\Omega(n \log n)$

I am trying to show that the best case time complexity of Quicksort is $$\Omega(n \log n)$$.

The following recurrence describes the best-case time complexity of Quicksort:

$$T(n) = \min_{0 \le q \le n-1} \left(T(q) + T(n-q-1) \right) + \Theta(n).$$

But I have difficulty proving that $$T(n) = \Omega(n \log n)$$ using this recurrence.

So how to solve this recurrence?
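One standard route is the substitution method (a sketch, assuming $$T(n) = \Theta(1)$$ for constant-size inputs and writing the $$\Theta(n)$$ term as at least $$dn$$ for some constant $$d > 0$$): guess $$T(n) \ge c\,n\log n$$ and use convexity to evaluate the minimum.

```latex
% Substitute the inductive hypothesis T(m) >= c m log m into the recurrence:
T(n) \;\ge\; \min_{0 \le q \le n-1} \big( c\,q\log q + c\,(n-q-1)\log(n-q-1) \big) + dn.
% The function x \mapsto x \log x is convex, so the bracketed sum is
% minimized at the midpoint q = (n-1)/2:
T(n) \;\ge\; c\,(n-1)\log\frac{n-1}{2} + dn
      \;=\; c\,(n-1)\log(n-1) - c\,(n-1) + dn
      \;\ge\; c\,n\log n
% for a sufficiently small constant c > 0 and large enough n, which
% closes the induction and gives T(n) = \Omega(n log n).
```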


## What is the Big theta of $(\log n)^2+2n+4n+\log n + 50$?

$$f(n)=(\log n)^2+2n+4n+\log n + 50$$

I am trying to mathematically prove that $$f(n)$$ falls under the time complexity $$\Theta((\log n)^2)$$.

I need to reach the conclusion $$f(n)\leq C(\log n)^2$$, for some positive constant $$C$$ and all $$n\geq k$$.

What I have tried is:

$$(\log n)^2+\log n \leq 2(\log n)^2$$

I want to add $$6n + 50$$ to both sides, but I cannot find, by algebra, a constant $$c$$ such that the result is bounded by $$c(\log n)^2$$.

I have tried plugging in arbitrary values of $$c$$ to make $$c(\log n)^2 \geq 6n$$ true, and $$c=100$$ works for $$n$$ greater than some value. But is there a mathematical way of finding this value of $$c$$, and therefore of finding the big theta of this function?
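One quick numerical sanity check (a Python sketch; $$f$$ is taken straight from the question, using the natural log, which only changes constant factors):

```python
import math

# f as defined in the question
def f(n):
    return math.log(n) ** 2 + 2 * n + 4 * n + math.log(n) + 50

# The ratio f(n)/n settles near 6, while f(n)/(log n)^2 keeps growing,
# which shows the linear terms dominate: no single constant c can make
# f(n) <= c (log n)^2 hold for all large n.
for n in [10, 10**3, 10**6, 10**9]:
    print(n, f(n) / n, f(n) / math.log(n) ** 2)
```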


## Show that: $0.01n \log n - 2000n+6 = O(n \log n)$

Show that $$0.01n \log n - 2000n+6 = O(n \log n)$$.

Starting from the definition: $$O(g(n))=\{f:\mathbb{N}^* \to \mathbb{R}^*_{+} \mid \exists c \in \mathbb{R}^*_{+},\ n_0\in\mathbb{N}^* \text{ s.t. } f(n) \leq cg(n),\ \forall n\geq n_0 \}$$

For $$f(n) = 0.01n \log n - 2000n+6$$ and $$g(n) = n \log n$$,

Let $$c = 0.01\implies 0.01n \log n – 2000n + 6 \leq 0.01 n \log n$$

Subtract $$0.01 n \log n$$ from both sides: $$-2000n + 6 \leq 0$$. Add $$2000n$$ to both sides: $$2000n \geq 6$$. Divide by $$2000$$: $$n \geq 6/2000$$.

If $$n \in \mathbb{N}^*\implies n \geq 0\implies n_0 = 0$$

Thus,

$$0.01n \log n - 2000n+6 = O(n \log n)$$

I’m not sure whether what I did is completely correct, and whether $$n_0 = 0$$ is actually a good answer. If it’s correct, can it be done in an easier way? And if it’s not correct, where did I go wrong?
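As a numerical spot-check on the algebra above (a sketch, using the natural log and the same $$c = 0.01$$; the loop starts at $$n = 1$$ because $$\mathbb{N}^*$$ excludes $$0$$):

```python
import math

# Check the inequality used above: for n >= 1,
# 0.01*n*log(n) - 2000*n + 6 <= 0.01*n*log(n),
# which reduces to -2000*n + 6 <= 0, i.e. n >= 6/2000.
for n in [1, 2, 10, 10**3, 10**6]:
    lhs = 0.01 * n * math.log(n) - 2000 * n + 6
    rhs = 0.01 * n * math.log(n)
    assert lhs <= rhs
```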

## log files with .log extension is getting downloaded instead of opening in the browser with nginx server

I have created a web server using nginx and am using it to store log files. When I access any of the .log files, the browser downloads the file instead of displaying it.
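A minimal sketch of one common fix (assuming an existing `server` block that already serves the log directory; the `location` pattern is an assumption about the setup). By default nginx has no `mime.types` entry for `.log`, so such files typically fall back to `application/octet-stream`, which browsers download rather than display:

```nginx
location ~ \.log$ {
    # ignore the extension-based MIME lookup inside this location
    types { }
    # serve the files as plain text so the browser renders them
    default_type text/plain;
}
```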


## How do I convert *.log file to csv format or .mat?

I have a .log file which is unreadable, and I need to convert it to either CSV or a MATLAB .mat file.

I use OpenLogger from Diligentinc, but frustratingly the company decided not to make the file readable by third-party tools.
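If the file turns out to be plain delimited text rather than a proprietary binary format (a big assumption; a truly binary log would need the vendor's own export path), a small Python sketch can rewrite it as CSV. `log_to_csv` is a hypothetical helper name:

```python
import csv

# Assumption: each line of the .log file is one record, with fields
# separated by whitespace (or a given delimiter). This will NOT work
# on a binary file; it only handles delimited text.
def log_to_csv(log_path, csv_path, delimiter=None):
    with open(log_path, "r", errors="replace") as src, \
         open(csv_path, "w", newline="") as dst:
        writer = csv.writer(dst)
        for line in src:
            fields = line.split(delimiter)  # None splits on any whitespace
            if fields:
                writer.writerow(fields)
```

The resulting CSV can then be loaded into MATLAB with `readtable` or `csvread`.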

## Derandomize MAX-CUT problem using $\log n$ bits

Consider the MAX-CUT problem. We can flip $$n$$ coins to generate a random cut, and by linearity of expectation we get that with "good probability" our cut will be bigger than $$\frac{n}{2}$$.

Using pseudorandom generators (XOR, for example) we can generate $$n$$ pairwise independent bits from $$\log n$$ truly random bits. Using that approach, we can derandomize the MAX-CUT problem in polynomial time.

With that algorithm we are only checking $$n$$ possible cuts, out of a total of $$2^n$$. Is it guaranteed that a "good" cut is among these $$n$$ cuts? Why?
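The construction can be sketched as follows (a Python sketch, assuming an undirected graph given as an edge list over vertices $$0, \dots, n-1$$; `derandomized_cut` is a hypothetical name). Vertex $$i$$ gets the XOR of the seed bits selected by the nonzero mask $$i+1$$; over a uniform seed these bits are pairwise independent, so the expected cut size is $$|E|/2$$, and therefore at least one of the $$2^k = O(n)$$ seeds attains a cut of that size:

```python
def derandomized_cut(n, edges):
    # k = ceil(log2(n+1)) seed bits give every vertex a distinct
    # nonzero mask (mask i+1 for vertex i)
    k = n.bit_length()
    best_side, best_size = None, -1
    for seed in range(2 ** k):
        # side[i] = parity of <seed, mask>, i.e. the XOR of the seed
        # bits selected by the binary representation of i + 1
        side = [bin(seed & (i + 1)).count("1") % 2 for i in range(n)]
        size = sum(side[u] != side[v] for u, v in edges)
        if size > best_size:
            best_side, best_size = side, size
    return best_side, best_size
```

Since the expectation over all seeds is $$|E|/2$$, the maximum over the seeds cannot be below it, which is exactly why a "good" cut is promised to appear among the cuts checked.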


## Arched coloring of graph: $A(D) \ge \log \chi(D)$

Let $$D$$ be a directed graph. An arched coloring of $$D$$ is a coloring of its edges such that $$(a,b)$$ and $$(b,c)$$ receive different colors for all $$(a,b), (b,c) \in E(D)$$. $$A(D)$$ is the smallest number of colors with which we can arch-color $$D$$. Prove that $$A(D) \ge \log \chi(D)$$.
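One standard approach (a sketch, not necessarily the intended one) is to turn an arched coloring into a proper vertex coloring with exponentially many colors:

```latex
% Given an arched coloring with k = A(D) colors, assign to each vertex v
% the set of colors appearing on its outgoing arcs:
S(v) = \{\, c \;:\; \text{some arc } (v,w) \in E(D) \text{ has color } c \,\}.
% If (u,v) \in E(D) has color c, then c \in S(u). If also S(u) = S(v),
% then some arc (v,w) has color c, so the consecutive arcs (u,v), (v,w)
% share a color, contradicting the arched-coloring condition. Hence
% S(u) \neq S(v) for every arc (u,v), so S is a proper coloring using
% at most 2^k sets as colors:
\chi(D) \le 2^{A(D)} \quad\Longrightarrow\quad A(D) \ge \log_2 \chi(D).
```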

## Word factorization in $O(n^2 \log n)$ time

Given two strings $$S_1, S_2$$, we write $$S_1S_2$$ for their concatenation. Given a string $$S$$ and integer $$k\geq 1$$, we write $$(S)^k = SS\cdots S$$ for the concatenation of $$k$$ copies of $$S$$. Now given a string, we can use this notation to ‘compress’ it, i.e. $$AABAAB$$ may be written as $$((A)^2 B)^2$$. Let’s call the weight of a compression the number of characters appearing in it, so the weight of $$((A)^2 B)^2$$ is two, and the weight of $$(AB)^2 A$$ (a compression of $$ABABA$$) is three (separate $$A$$s are counted separately).

Now consider the problem of computing the ‘lightest’ compression of a given string $$S$$ with $$|S|=n$$. After some thinking there is an obvious dynamic programming approach which runs in $$O(n^3 \log n)$$ or $$O(n^3)$$ depending on the exact approach.

However, I have been told this problem can be solved in $$O(n^2 \log n)$$ time, though I cannot find any sources on how to do this. Specifically, this problem was given in a recent programming contest (problem K here, last two pages). During the analysis an $$O(n^3 \log n)$$ algorithm was presented, and at the end the pseudo-quadratic bound was mentioned (here at the four-minute mark). Sadly the presenter only referred to ‘a complicated word combinatorics lemma’, so now I have come here to ask for the solution 🙂
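For reference, the ‘obvious’ dynamic program can be sketched as follows (a Python sketch of the cubic approach, not the $$O(n^2 \log n)$$ one; `lightest_weight` is a hypothetical name). A compression of a substring is either a single character, a concatenation split at some point, or a power of a period, whose weight equals the weight of one period:

```python
from functools import lru_cache

def lightest_weight(s):
    n = len(s)

    @lru_cache(maxsize=None)
    def w(i, j):
        # minimal weight of a compression of s[i:j]
        if j - i == 1:
            return 1
        # option 1: split into two compressed parts
        best = min(w(i, k) + w(k, j) for k in range(i + 1, j))
        # option 2: write s[i:j] as (s[i:i+d])^{(j-i)/d}; the exponent
        # costs nothing, so the weight is that of a single period.
        # s[i:j] has period d exactly when s[i:j-d] == s[i+d:j].
        L = j - i
        for d in range(1, L):
            if L % d == 0 and s[i:j - d] == s[i + d:j]:
                best = min(best, w(i, i + d))
        return best

    return w(0, n)
```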


## Which rule gives the bound $\log n = O(n^{0.000001})$? [duplicate]

$$\log n = O(n^{0.000001})$$
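One standard justification (a sketch): for every fixed $$\varepsilon > 0$$, the logarithm grows more slowly than any positive power of $$n$$.

```latex
\lim_{n\to\infty} \frac{\log n}{n^{\varepsilon}}
  \;\overset{\text{L'H\^opital}}{=}\;
  \lim_{n\to\infty} \frac{1/n}{\varepsilon\, n^{\varepsilon-1}}
  \;=\; \lim_{n\to\infty} \frac{1}{\varepsilon\, n^{\varepsilon}} \;=\; 0,
% so \log n = o(n^{\varepsilon}) \subseteq O(n^{\varepsilon});
% taking \varepsilon = 0.000001 gives \log n = O(n^{0.000001}).
```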