Time complexity of finding predecessor for a dictionary implemented as a sorted array

I’m currently reading “The Algorithm Design Manual” by Steven Skiena. On page 73, he discusses the time complexity of implementing $ Predecessor(D, k) $ and $ Successor(D, k) $ and suggests that it takes O(1) time.

If the data structure looks something like

[(k0, x0), (k1, x1), ..., (kn, xn)] 

where the keys k0 to kn are sorted, then given k, I thought successor(D, k) would first have to binary-search the sorted array for k ($ O(\log n) $) and then read off the next element ($ O(1) $, since the array is sorted), so the overall time complexity should be $ O(\log n) $ rather than the $ O(1) $ stated in the book. The same reasoning applies to predecessor(D, k).
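A minimal Java sketch of this reasoning (my own illustration, not from the book; it assumes the dictionary keeps its keys in a sorted `int[]`, and `successor` returns the next key or throws if none exists):

```java
import java.util.Arrays;

public class SortedDict {
    // Successor of key k in a sorted key array: a binary search (O(log n))
    // to locate k, then an O(1) step to the next index. The total is
    // O(log n) because the position of k is not known in advance.
    static int successor(int[] keys, int k) {
        int i = Arrays.binarySearch(keys, k); // O(log n)
        if (i < 0 || i + 1 >= keys.length) {
            throw new IllegalArgumentException("no successor for " + k);
        }
        return keys[i + 1]; // O(1) once the position is known
    }

    public static void main(String[] args) {
        int[] keys = {2, 5, 8, 13, 21};
        System.out.println(successor(keys, 8)); // prints 13
    }
}
```

If the position of k were already known (e.g. handed to us as an index), the lookup itself would indeed be O(1), which may be the convention the book is using.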

For an unsorted array, the time complexity of both predecessor and successor remains $ O(n) $, since searching the unsorted array already takes $ O(n) $.

Did I misunderstand something?

What’s the proof complexity of E-KRHyper (E-hyper tableau calculus)?

Before the question, let me explain what E-KRHyper is:

E-KRHyper is a theorem proving and model generation system for first-order logic with equality. It is an implementation of the E-hyper tableau calculus, which integrates a superposition-based handling of equality into the hyper tableau calculus (source: System Description: E-KRHyper).

I am interested in the complexity of the E-KRHyper system because it is used in the question answering system LogAnswer (LogAnswer – A Deduction-Based Question Answering System (System Description)).

I have found a partial answer:

our calculus is a non-trivial decision procedure for this fragment (with equality), which captures the complexity class NEXPTIME (source: Hyper Tableaux with Equality).

I don’t understand much of complexity theory so my question is:

What is the complexity of proving a theorem, in terms of the number of axioms in the database and in terms of some parameter of the question to be answered?

What is the time complexity of a binary multiplication using Karatsuba Algorithm?

My apologies if the question sounds naive, but I’m trying to wrap my head around the idea of time complexity.

In general, Karatsuba multiplication is said to have a time complexity of $ O(n^{\log_2 3}) \approx O(n^{1.585}) $. The analysis seems to assume that each addition and subtraction takes about $ O(1) $. However, for binary addition and subtraction, I don’t think it will be $ O(1) $. If I’m not mistaken, a typical addition or subtraction of two $ n $-bit binary numbers takes $ O(n) $ time.

What, then, will be the total time complexity of the following program, which multiplies two binary numbers using the Karatsuba algorithm and in turn performs binary addition and subtraction?

    long multKaratsuba(long num1, long num2) {
        if ((num1 >= 0 && num1 <= 1) && (num2 >= 0 && num2 <= 1)) {
            return num1 * num2;
        }

        int length1 = String.valueOf(num1).length(); // takes O(n)? Not sure
        int length2 = String.valueOf(num2).length(); // takes O(n)? Not sure

        int max = length1 > length2 ? length1 : length2;
        int halfMax = max / 2;

        // x = xHigh + xLow
        long num1High = findHigh(num1, halfMax); // takes O(1)
        long num1Low = findLow(num1, halfMax);   // takes O(1)

        // y = yHigh + yLow
        long num2High = findHigh(num2, halfMax); // takes O(1)
        long num2Low = findLow(num2, halfMax);   // takes O(1)

        // a = (xHigh * yHigh)
        long a = multKaratsuba(num1High, num2High);

        // b = (xLow * yLow)
        long b = multKaratsuba(num1Low, num2Low);

        // c = (xHigh + xLow) * (yHigh + yLow) - (a + b)
        long cX = add(num1High, num1Low); // this ideally takes O(n) time
        long cY = add(num2High, num2Low); // this ideally takes O(n) time
        long cXY = multKaratsuba(cX, cY);
        long cAB = add(a, b);             // this ideally takes O(n) time
        long c = subtract(cXY, cAB);      // this ideally takes O(n) time

        // res = a*(10^(2*m)) + c*(10^m) + b
        long resA = a * (long) Math.pow(10, 2 * halfMax); // takes O(1)
        long resC = c * (long) Math.pow(10, halfMax);     // takes O(1)
        long resAC = add(resA, resC); // takes O(n)
        long res = add(resAC, b);     // takes O(n)

        return res;
    }
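For what it’s worth, my understanding is that $ \Theta(n) $-time additions and subtractions do not change the exponent: the program above makes three recursive calls on half-size inputs plus a linear amount of extra work per level, so (a sketch, via the master theorem):

$$ T(n) = 3\,T(n/2) + \Theta(n) \implies T(n) = \Theta\!\left(n^{\log_2 3}\right) \approx \Theta(n^{1.585}). $$

So the $ O(n) $ additions are already accounted for in the usual bound; an $ O(1) $-addition assumption would not be needed.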

Do relativized relations between complexity classes tell us anything about the nonrelativized relation?

The existence of relativized relations between complexity classes seems to often be treated as “circumstantial” evidence about the “true” or “real-world” (i.e. nonrelativized) relation between the classes. However, as far as I understand (please correct me if I am wrong), for complexity classes $ A$ and $ B$ and an oracle for a language $ L$ , all four of these cases are logically possible:

  1. $ A = B$ and $ A^L = B^L$
  2. $ A = B$ and $ A^L \neq B^L$
  3. $ A \neq B$ and $ A^L = B^L$
  4. $ A \neq B$ and $ A^L \neq B^L$

So presuming that at the end of the day the unrelativized result is what we really care about, what are relativized results “good for”?

I can see one application: if we happen to be able to find oracles for two languages $ L$ and $ L'$ such that $ A^L = B^L$ and $ A^{L'} \neq B^{L'}$ , then that tells us that any proof that either $ A = B$ or $ A \neq B$ cannot relativize, and this fact saves us a lot of time by allowing us to immediately skip many potential proofs.

But do oracle results give us any evidence about the actual relation? In particular, why are oracle separations treated as “evidence” (though not a proof) that the complexity classes are unequal? How strong does the complexity-theory community consider such evidence to be? (I know that last question is subjective and hard to answer precisely.)

Algorithm and Time Complexity for k-Sum problems

In fact, there are three different k-Sum problems:

Problem1: Given an unsorted integer array $ \{a_1, a_2, \dots, a_n\}$ and a target number $ T$ , determine whether there exists at least one $ k$ -tuple $ \{a_{i_1}, \cdots, a_{i_k}\}$ such that $ \sum_{j = 1}^{k} a_{i_j} = T$ .

For Problem1, we only need to return one solution. For example, for 3-Sum with $ \{1,1,1,1,1,0,0,2,2\}$ and $ T=3$ , we can return the solution $ \{1,1,1\}$ .

Problem2: Given an unsorted integer array $ \{a_1, a_2, \dots, a_n\}$ and a target number $ T$ , find all distinct $ k$ -tuples of values whose sum is $ T$ .

For example, for 3-Sum with $ \{1,1,1,1,1,0,0,2,2\}$ and $ T=3$ , we need to find both solutions: $ \{1,1,1\}$ and $ \{0,1,2\}$ .

Problem3: Given an unsorted integer array $ \{a_1, a_2, \dots, a_n\}$ and a target number $ T$ , find all distinct $ k$ -tuples of indices such that the total sum is $ T$ .

For example, for 3-Sum with $ \{1,1,1,1\}$ and $ T=3$ , we need to return $ 4$ solutions as index triples: $ \{0,1,2\}$ , $ \{0,1,3\}$ , $ \{0,2,3\}$ , $ \{1,2,3\}$ .

My questions:

  1. For the above three k-Sum problems, what’s the best time complexity known so far? And what’s the algorithm?

I guess Problem1 is $ O(n^{\lceil k/2 \rceil})$ , Problem2 is $ O(n^{k-1})$ and Problem3 is $ O(n^k)$ . Is this correct? How can it be proved?
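For the $ k = 3 $ case of Problem1, the classic sort-plus-two-pointers routine runs in $ O(n^2) $, matching the $ O(n^{\lceil k/2 \rceil}) $ guess. A sketch in Java (my own illustration):

```java
import java.util.Arrays;

public class ThreeSum {
    // Decision version of 3-Sum (Problem1, k = 3): returns true iff some
    // triple of elements sums to target. Sort once (O(n log n)), then for
    // each fixed s[i] scan with two pointers (O(n)), giving O(n^2) total.
    static boolean hasTriple(int[] a, int target) {
        int[] s = a.clone();
        Arrays.sort(s);
        for (int i = 0; i < s.length - 2; i++) {
            int lo = i + 1, hi = s.length - 1;
            while (lo < hi) {
                int sum = s[i] + s[lo] + s[hi];
                if (sum == target) return true;
                if (sum < target) lo++; else hi--;
            }
        }
        return false;
    }

    public static void main(String[] args) {
        int[] a = {1, 1, 1, 1, 1, 0, 0, 2, 2};
        System.out.println(hasTriple(a, 3)); // prints true
    }
}
```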

Could someone explain the time complexity of the solution in this tutorial (linked in body)?

Could someone explain the time complexity of the solution in this tutorial?

I’m having a hard time figuring out how the asymptotic bound for the first solution is $ O(3^k \cdot k)$ .

What I have figured out so far: computing $ f(Y)$ requires $ O(\lvert Y\rvert)$ steps; for a fixed set $ X$ (with $ \lvert X\rvert = k$ ) there are $ 2^{n-k}$ sets $ Y$ that contain $ X$ ; and there are $ {n \choose k}$ such sets $ X$ .

So the total number of steps should be $$ {n \choose k} \sum_{r=0}^{n-k} {n-k \choose r} (k+r) \leq {n \choose k} \cdot n \cdot 2^{n-k}. $$

But is this value tightly bounded by $ 3^k \cdot k$ ?
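(One identity that may be relevant here: summing the bound above over all subset sizes, the binomial theorem gives

$$ \sum_{k=0}^{n} {n \choose k}\, 2^{n-k} = (1+2)^n = 3^n, $$

so the total over all choices of $ \lvert X\rvert$ comes to $ O(3^n \cdot n)$ . Perhaps the tutorial’s bound uses $ k$ where this calculation uses $ n$ , but I am not sure.)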

Thanks in advance.

Time complexity of quicksort for arrays in increasing or decreasing order

Two arrays of size $ n$ are given: $ n_1$ is in decreasing order and $ n_2$ is in increasing order.

Let $ c_1$ be the time complexity of quicksort on $ n_1$ , and $ c_2$ the time complexity of quicksort on $ n_2$ .

I think $ c_1 = c_2$ and $ c_1 = O(n^2)$ . Is this correct?

I am using the last element as a pivot for each partition.
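My reasoning, for what it’s worth (assuming a Lomuto-style partition that scans the whole subarray): on an already-sorted input, in either direction, a last-element pivot makes every partition maximally unbalanced, so the work satisfies

$$ T(n) = T(n-1) + \Theta(n) = \Theta(n^2). $$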