Regarding space of linear speed-up theorem

I was reading the proof of the speed-up lemma from this slide (pages 10 to 13), but I could not understand why the plus-two term appears in the new space bound. Could anybody elaborate?

Furthermore, for a Turing machine that uses a linear amount of space, isn’t it possible to reduce the amount of space used by a constant factor without the additive constant overhead? (i.e. to have only the εf(n) part as the new space)

Theorem: Suppose TM M decides language L in space f(n). Then for any ε > 0, there exists TM M’ that decides L in space εf(n) + 2.

O(m+n) Algorithm for Linear Interpolation


Given data consisting of $n$ coordinates $\left((x_1, y_1), (x_2, y_2), \ldots, (x_n, y_n)\right)$ sorted by their $x$-values, and $m$ sorted query points $(q_1, q_2, \ldots, q_m)$, find the linearly interpolated values of the query points according to the data. We assume $q_i \in (\min_j x_j, \max_j x_j)$.

I heard off-hand that this problem could be solved in $O(m+n)$ time, but I can only think of an $O(m \log n)$ algorithm. I can’t seem to find this particular problem in any of the algorithm textbooks.

Linearithmic Algorithm

import bisect

interpolated = []
for q in qs:
    # find i such that x[i] <= q <= x[i+1] with binary search
    i = bisect.bisect_right(x, q) - 1
    t = (q - x[i]) / (x[i+1] - x[i])
    interpolated.append(y[i] * (1 - t) + y[i+1] * t)

This gives us a runtime of $O(m \log n)$; it’s unclear to me how to get this down to $O(m + n)$, as the search for $x_i$ must be done for every query point.
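For reference, the merge-style idea usually behind an $O(m+n)$ bound here exploits that the queries are sorted too, so the data index only ever moves forward and is never reset per query. A minimal sketch (the function name and boundary handling are mine):

```python
def interpolate_sorted(x, y, qs):
    """Two-pointer scan: O(m + n) overall, because i only ever advances
    across all queries instead of restarting a binary search each time."""
    out = []
    i = 0
    for q in qs:  # qs must be sorted ascending
        # advance i until x[i] <= q <= x[i+1]
        while x[i + 1] < q:
            i += 1
        t = (q - x[i]) / (x[i + 1] - x[i])
        out.append(y[i] * (1 - t) + y[i + 1] * t)
    return out
```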

Integer Linear Program as a feasibility test

I am a beginner with Integer Linear Programs and I have a question about a problem that I am dealing with. The problem tracks a configuration of a graph through unitary transformations on the graph, and I want to minimize the number of transformations needed to reach another configuration. As I allow exactly one transformation per step, minimizing the number of transformations is the same as minimizing the number of steps.

But I run into the following problem: there is no internal property that can be tracked so that I can check whether one state is closer to or farther from the wanted configuration. That means I can only check whether a specific sequence of transformations is correct at a step fixed before execution, say $T$. So what I am thinking of doing is testing a range of values for $T$ in increasing order, as there is a polynomial upper bound for this value. Then I recover the answer of the first $T$ that yields any answer, as I know it will be an optimal answer.

My questions are:

  • This is sort of a feasibility test for a fixed $T$: if the polytope is non-empty, any feasible answer is an optimal answer, since all of them use the same number of steps $T$. Is this approach possible, in the sense that it can be computed given enough time? I am not sure what the behavior of an ILP solver is when there is no feasible answer (i.e. the polytope is empty).
  • If yes, is there an existing technique to deal with/optimize this type of situation without finding such a property?
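For what it’s worth, the increasing-$T$ loop can be sketched as below. `is_feasible` and `T_max` are hypothetical names standing in for one ILP solver call and the polynomial upper bound; standard ILP solvers do terminate with an explicit “infeasible” status when the polytope is empty, so each iteration is well-defined:

```python
def first_feasible_T(is_feasible, T_max):
    """Iterative deepening over the horizon: try T = 1, 2, ... and
    return the first feasible one.  Any solution at that T is optimal,
    since every solution with horizon T uses exactly T steps."""
    for T in range(1, T_max + 1):
        if is_feasible(T):  # e.g. solver status == feasible
            return T
    return None  # no feasible T up to the polynomial bound

# toy stand-in for the solver: feasible iff T >= 4
print(first_feasible_T(lambda T: T >= 4, 10))  # -> 4
```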

Linear Partition problem (Dynamic Programming) in a circular array

I am practicing algorithms for the last couple of days and I came across this question, the gist of which is:

Given apples and oranges arranged in a circle, indexed from 1 to n where n is odd (we know which ones are apples and which are oranges), divide them into k contiguous groups, each having an odd number of fruits, such that as many groups as possible have more apples than oranges. A group may wrap around the circle, e.g. contain the fruits with indices (n-3, n-2, n-1, n, 1, 2, 3).

This appears like the linear partition problem to me, but the circular arrangement confuses me. I was thinking of masking apples as 1 and oranges as -1 so that it’s easy to tell which one is higher in a group (if the sum is positive, apples outnumber oranges; otherwise oranges do). Also, I observed that k must be odd: n is odd, each of the k groups has an odd number of fruits, and a sum of k odd numbers has the same parity as k.
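The ±1 masking can be made concrete like this (a sketch; the function name is mine), with modulo indexing handling the circular wrap-around:

```python
def group_sign(fruits, start, length):
    """fruits: list of +1 (apple) / -1 (orange) around the circle.
    A group may wrap past the end, hence the modulo indexing."""
    n = len(fruits)
    s = sum(fruits[(start + j) % n] for j in range(length))
    return s  # > 0 means the group has more apples than oranges
```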

We have to maximize the sum of each of the groups in this case, right?

It would be great if someone can help out.

Thanks a lot!!!

How to wiggle sort an array in linear time complexity?

The wiggle sort condition is nums[0] < nums[1] > nums[2] < nums[3] > nums[4] < …

For an input: nums = [1, 5, 1, 1, 6, 4], the expected output is [1, 4, 1, 5, 1, 6] and there can be many other possible outputs satisfying the aforementioned criteria.

I realised that the problem has a pattern: nums[1] will be greatest among nums[0:3], nums[3] will be greatest among nums[3:6],…

So, I targeted getting the next greatest element. This made me use a heap:

from typing import List
import heapq

class Solution:
    def wiggleSort(self, nums: List[int]) -> None:
        """
        Do not return anything, modify nums in-place instead.
        """
        nums_heap = []
        for num in nums:
            heapq.heappush(nums_heap, -1 * num)
        # odd positions get the largest elements first...
        i = 1
        while i < len(nums):
            nums[i] = -1 * heapq.heappop(nums_heap)
            i += 2
        # ...then even positions get the rest
        i = 0
        while i < len(nums):
            nums[i] = -1 * heapq.heappop(nums_heap)
            i += 2

However, the time complexity of my solution is O(n log n), along with O(n) space complexity. I want to solve this in O(1) space complexity, and that would require me to not use a heap (extra space).

How to do that?
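Not an answer to the O(1)-space goal, but for context, a common heap-free baseline is to sort once, split at the median, and write each half back in reverse (function name is mine); the genuinely O(n)-time, O(1)-space version replaces the sort with median selection plus a three-way partition over a virtual index:

```python
def wiggle_sort_simple(nums):
    """Baseline: O(n log n) time, O(n) extra space for the sorted copy.
    Writing each half back reversed keeps equal elements as far apart
    as possible, which avoids violations with duplicates."""
    n = len(nums)
    s = sorted(nums)
    half = (n + 1) // 2
    nums[::2] = s[:half][::-1]   # even positions: smaller half, reversed
    nums[1::2] = s[half:][::-1]  # odd positions: larger half, reversed
```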

The complete question is also posted here.

How to convert this if-clause condition to a linear programming optimization problem

Let’s consider the following conditional constraint. w is the input data; x, y(1:3), d(1:3) are the decision variables.

x = 3.9        % optimization variable (one can iterate over it, 0 < x < 5)
for i = 1:3
    if w[i] - x <= 0
        d[i] = x - w[i];
        y[i] = w[i]
    else
        d[i] = 0
        y[i] = w[i]
    end
end

The linear expression of the above constraint can be expressed as follows:

Max $40x + 15y_1 - 30d_1 + 15y_2 - 30d_2 + 15y_3 - 30d_3$

$x - y_i \le d_i, \quad i = 1{:}3$

$y_i = w_i, \quad i = 1{:}3$

$0 \le x \le 5$

$y_i, d_i \ge 0, \quad i = 1{:}3$

However, I want to change the conditional constraint as follows:

for i = 1:3
    if w[i] - x <= 0
        d[i] = x - w[i];
        y[i] = w[i]
    else
        d[i] = 0
        y[i] = x    % The change
    end
end

How can I modify the linear optimization problem to satisfy the new condition?
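In the new version the else branch sets $y_i = x$, so overall $y_i = \min(w_i, x)$. Assuming the objective keeps the $+15y_i$ terms (maximization pushes each $y_i$ up against its bounds), one standard linearization replaces $y_i = w_i$ with two upper bounds:

```latex
% new condition: y_i = min(w_i, x)
% tight at the optimum because the objective maximizes +15 y_i
y_i \le w_i, \qquad y_i \le x, \qquad i = 1{:}3
```

This is a sketch rather than a verified model: if the $y_i$ coefficients were not strictly positive, the bounds would no longer be tight at the optimum and a big-M formulation would be needed instead.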

Math behind Multi-class linear discriminant analysis (LDA)

I have a question about Linear Discriminant Analysis (LDA) for the purpose of Dimensionality Reduction.

So I understand that, for the algorithm to calculate the $k$ projection vector(s), you need to determine the eigenvector(s) corresponding to the top $k$ eigenvalue(s). But can anyone explain what you do with those eigenvectors, once you have calculated them, to get the final output?

My guess is to multiply all of the eigenvectors (projection vectors) together and then multiply that with each point, $ x$ , in the original dataset to produce a new corresponding point $ y$ . Does this sound right?
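A sketch of the projection step as I understand it (the array values are mine; whether this matches the intended algorithm is part of the question): stack the top-$k$ eigenvectors as the columns of a $d \times k$ matrix $W$, then project all points with a single matrix product $Y = XW$, rather than multiplying the eigenvectors together.

```python
import numpy as np

# toy numbers: d = 3 original dimensions, k = 2 projection vectors
W = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [0.0, 0.0]])      # columns = top-k eigenvectors (d x k)
X = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 6.0]])  # rows = data points (n x d)

Y = X @ W  # projected data, shape (n, k); each row is one point's image
```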

Linear codes and syndrome

Assume a (4,2) linear code where we want to encode 2-bit data into 4-bit codewords. The generator matrix G is

1 0 0 0
0 1 1 0

Now, if we want to encode 00, we get

[0 0] * [1 0 0 0] = [0 0 0 0]
        [0 1 1 0]

Also, the parity check matrix H is

0 1 1 0
0 0 0 1

and assume the received data is 0100, where a single-bit error occurred in the second bit (counting from the left).

Multiplying H · c_received, we get

[0 1 1 0]   [0]   [1]
[0 0 0 1] * [1] = [0]
            [0]
            [0]

So the syndrome is nonzero, which means there is an error in the received data. BUT, the syndrome value 10 matches both the second and the third column of the H matrix.
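A quick sketch to check the arithmetic over GF(2) (function and variable names are mine):

```python
def syndrome(H, r):
    """Syndrome over GF(2): s = H . r (mod 2)."""
    return [sum(h * b for h, b in zip(row, r)) % 2 for row in H]

H = [[0, 1, 1, 0],
     [0, 0, 0, 1]]
r = [0, 1, 0, 0]           # received word, error in bit 2

s = syndrome(H, r)         # [1, 0], i.e. "10"
# 1-based positions whose column of H equals the syndrome
cols = [j + 1 for j in range(4) if [H[0][j], H[1][j]] == s]
# columns 2 and 3 of this H are identical, which is exactly why the
# syndrome alone cannot tell an error in bit 2 from one in bit 3
```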

So, how do we find out that it is exactly the second bit that is faulty?