Concatenate multiple integer arrays such that the number of inversions in the resulting array is minimal

You are given N arrays of variable length.

Find a way to concatenate the arrays such that the number of inversions in the resulting array is minimal.

An inversion in an array Arr can be defined as a pair of indices (i, j) such that i < j and Arr[i] > Arr[j].

I tried concatenating the arrays by sum, so that the one with the minimal sum goes first and the one with the maximal sum goes last.

It didn’t work out though.


```
[14, 18, 18, 20, 16, 6, 11]                            SUM: 103
[2, 4, 11, 40, 20, 14, 19]                             SUM: 110
[14, 18, 18, 20, 16, 6, 11, 2, 4, 11, 40, 20, 14, 19]  Inversions: 42
[2, 4, 11, 40, 20, 14, 19, 14, 18, 18, 20, 16, 6, 11]  Inversions: 40
```
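The counts in the example can be reproduced with a short sketch; the names `count_inversions` and `best_concatenation` are illustrative, and the exhaustive search is only a baseline that is feasible for small N:

```python
from itertools import permutations

def count_inversions(arr):
    # quadratic inversion count; fine for examples this small
    return sum(1 for i in range(len(arr))
                 for j in range(i + 1, len(arr)) if arr[i] > arr[j])

def best_concatenation(arrays):
    # exhaustive baseline: try every ordering of the arrays
    def flat(perm):
        return [x for a in perm for x in a]
    return min((flat(p) for p in permutations(arrays)),
               key=count_inversions)

a = [14, 18, 18, 20, 16, 6, 11]
b = [2, 4, 11, 40, 20, 14, 19]
print(count_inversions(a + b))   # 42, as in the example
print(count_inversions(b + a))   # 40
```

A natural heuristic to try instead of sums is a pairwise comparator: put array `a` before `b` whenever `a`-then-`b` produces fewer cross inversions than `b`-then-`a` (analogous to the classic largest-number concatenation trick); the brute force above is a cheap way to sanity-check any candidate ordering.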

Smallest integer i stored as a float such that i+1=i

So I had an assignment that asked me to find the smallest integer $i$ which, when represented as a float, satisfies $i+1=i$.

My approach: with a simple C++ program, we get $i=16777216$, i.e. $i=2^{24}$. But if we want to derive that theoretically, I am unable to arrive at this number.

So a float is a 32-bit variable: the first bit represents the sign of the mantissa and the next 23 bits represent the mantissa itself. The next bit represents the sign of the exponent and the remaining 7 bits represent the value of the exponent.

Now consider $2^{23}$. In binary, it is represented as $1000,0000,0000,0000,0000,0000$ (the commas are just to make things readable). If we add $1$ to it, it becomes $1000,0000,0000,0000,0000,0001$.

Now we store these numbers as floats in C++. Both of the numbers $2^{23}$ and $2^{23}+1$ are stored as $0100,0000,0000,0000,0000,0000,00010111$ (the 1 at the end has to be scrapped in order to fit the number in 24 bits). So both of them are essentially the same for the computer.

But why does the computer give me the answer $2^{24}$?

The code that I used:

```cpp
#include <iostream>
using namespace std;

int main() {
    float i = 1;
    while (true) {
        if (i + 1 == i) {
            cout << fixed << i << endl;
            break;
        }
        i = i + 1;
    }
}
```
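For what it’s worth, IEEE 754 single precision is laid out a little differently from the description above: 1 sign bit, 8 exponent bits stored with a bias of 127 (there is no separate exponent sign bit), and 23 fraction bits with an implicit leading 1, i.e. 24 significant bits in total. That is why every integer up to $2^{24}$ is exact and $2^{24}+1$ is the first that is not. A quick check (a sketch, round-tripping values through a 32-bit float):

```python
import struct

def to_f32(x):
    # round-trip a value through IEEE 754 single precision
    return struct.unpack('f', struct.pack('f', float(x)))[0]

# 23 stored fraction bits + 1 implicit leading bit = 24 significant bits
assert to_f32(2**23 + 1) == 2**23 + 1   # still exactly representable
assert to_f32(2**24) == 2**24
assert to_f32(2**24 + 1) == 2**24       # first integer that rounds away
```

So the loop in the C++ program gets stuck at $i=2^{24}$, because that is where `i + 1` first rounds back to `i`.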

What is the biggest integer path in a 2D array?

You are given a board of N rows and M columns. Each field of the board contains a single digit (0-9).

You want to find a path consisting of four fields; two fields are neighboring if they share a common side. Also, the fields in your path should be distinct.

The four digits of your path, in the order in which you visit them, create an integer. What is the biggest integer that you can achieve this way?

```go
func solve(A [][]int) int
```

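Since a path has only four fields, a brute-force DFS from every cell is fast enough; this is an illustrative sketch (the name `biggest_path_number` is made up, and the board is assumed to be a list of digit strings):

```python
def biggest_path_number(board):
    # board: list of equal-length digit strings; returns the largest
    # integer readable along a path of 4 distinct, side-adjacent fields
    n, m = len(board), len(board[0])
    best = -1

    def dfs(r, c, visited, value, depth):
        nonlocal best
        value = value * 10 + int(board[r][c])  # append this field's digit
        if depth == 4:
            best = max(best, value)
            return
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < n and 0 <= nc < m and (nr, nc) not in visited:
                dfs(nr, nc, visited | {(nr, nc)}, value, depth + 1)

    for r in range(n):
        for c in range(m):
            dfs(r, c, {(r, c)}, 0, 1)
    return best
```

Each cell starts at most 3 × 3 × 3 paths of length four, so the whole search is O(N·M) with a small constant.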

Data structure & algorithms for super-interval queries on intervals with small integer ends

I would like to have an online data structure that supports inserting an interval and, given a query interval $I_q=[l_q,h_q]$, answering whether some interval of the data structure is contained in $I_q$, i.e. whether $I_q$ is a super-interval of some stored interval (the answer to the query is just a boolean, so there is no need to output all such intervals), at the fastest possible time complexity.

I have searched for such a combination and found that an Interval Tree would probably be appropriate for my situation, with $O(\log n)$ interval insertion and overlapping-intervals queries (not exactly my desired query, but I think it could be adapted to it; also, I can avoid the dependence on the output size, since the desired output is boolean, so on the first match I would know the answer is true).

Furthermore, here it is stated that:

If the endpoints of intervals are within a small integer range (e.g., in the range [1,…,O(n)]), faster data structures exist with preprocessing time O(n) and query time O(1+m) for reporting m intervals containing a given query point.

Since I can also guarantee that both interval ends are going to be small integers (i.e. not floats, but natural numbers up to $\approx 10^6$), what would be the best data-structure/algorithmic way (considering the time complexity of the above two operations) to implement the two operations I would like to have?

If the fastest approach is an Interval Tree, then how can I modify the overlapping-intervals query to support my query in $O(\log n)$ time, i.e. not $O(k\cdot\log n)$ where $k$ is $I_q$’s range? However, I am quite interested in the passage quoted above, and in how I could possibly (with another data structure, maybe?) achieve such a fast complexity; in that case Interval Trees wouldn’t matter.

Note: In my attempt to test such an algorithm and its speed on an Interval Tree, I have found the following library: where a similar query seems to be implemented as the envelop query, with a time complexity of $O(m+k\cdot\log n)$, where “$n$ = size of the tree, $m$ = number of matches, $k$ = size of the search range”; this, however, is not as fast as I would like (especially considering the multiplying factor $k$).
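One possible way to exploit the small endpoint universe (a sketch under the assumption that endpoints are integers in $[0, U)$ with $U \approx 10^6$, not a claim about the optimal structure): keep, for every possible left endpoint $l$, the smallest right endpoint `best[l]` among inserted intervals starting at $l$, in a min segment tree over $[0, U)$. Then $[l_q, h_q]$ is a super-interval of some stored interval iff $\min_{l_q \le l \le h_q} \mathrm{best}[l] \le h_q$, which gives $O(\log U)$ insertion and query with no dependence on $k$ or $m$:

```python
class SuperIntervalQuery:
    # iterative min segment tree over the endpoint universe [0, universe);
    # leaf l holds the minimum high endpoint among intervals starting at l
    def __init__(self, universe):
        self.n = universe
        self.tree = [float('inf')] * (2 * universe)

    def insert(self, low, high):
        i = low + self.n
        if high < self.tree[i]:
            self.tree[i] = high
            i //= 2
            while i:  # recompute ancestors bottom-up
                self.tree[i] = min(self.tree[2 * i], self.tree[2 * i + 1])
                i //= 2

    def contains_subinterval(self, low, high):
        # True iff some stored [l, h] satisfies low <= l and h <= high
        res = float('inf')
        lo, hi = low + self.n, high + self.n + 1
        while lo < hi:  # standard bottom-up range-min query
            if lo & 1:
                res = min(res, self.tree[lo]); lo += 1
            if hi & 1:
                hi -= 1; res = min(res, self.tree[hi])
            lo //= 2; hi //= 2
        return res <= high
```

Memory is $O(U)$, which seems acceptable for $U \approx 10^6$; the class and method names here are made up for illustration.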

Why do many programming languages and applications use integers instead of floating point to represent time?

In my experience, the value of time is mostly represented as an integer, both by programming languages (C/Java/Golang/JavaScript(*)…) and by databases (MySQL, Postgres…); the exceptions are maybe some GUI/game engines, and I always thought this was normal. But when I thought more about the topic, I found that maybe it makes more sense to represent time as floating point instead.

Some of my points:

  • The value of time is most frequently used in terms of a continual sequence of events (e.g.: has this contract expired yet, has this football match ended yet…). I don’t really remember a time when I needed to compare two time values for equality. So the exact precision of an integer is not really useful in this case.
  • Secondly, when a precise time is needed (let’s say we need to represent an exact time like 01/01/2020 UTC), a 64-bit floating point number can represent it just fine. Like in JavaScript, where the value is represented by a number – a 64-bit floating point count of milliseconds from the epoch.
  • Using a 64-bit floating point number to store time is actually more space efficient when you need to represent a fraction of a unit. In Golang and C, you need two integers (both 64-bit): one for the number of seconds from the epoch and one for the number of nanoseconds.
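The precision trade-off behind these points can be made concrete: a 64-bit float carries 53 significand bits, so millisecond timestamps around 2020 are exact, but at nanosecond resolution adjacent instants already collapse into the same value (a small illustration):

```python
# 64-bit floats carry 53 significand bits, so integer timestamps stay
# exact only up to 2**53 (about 9e15).
ms = 1_577_836_800_000              # 2020-01-01T00:00:00Z in milliseconds
assert float(ms) == ms              # well below 2**53: exact

ns = 1_577_836_800_000_000_000      # the same instant in nanoseconds
assert float(ns + 1) == float(ns)   # ~1.6e18 > 2**53: neighbors collapse
```

This is one reason the two-integer (seconds + nanoseconds) layout survives: it keeps every representable instant exact at full resolution.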

I can think of some reasons:

  • That’s how the hardware (CPU?) works?

So what do you think is the reason(s)? Thanks.

(*) JavaScript doesn’t have a native integer type (until recently, with BigInt). So a date value is actually a number, which is a 64-bit float, but the value is always an integer.

Prove that the greedy algorithm to remove k digits from an n-digit positive integer is optimal

Given a positive n-digit integer, such as 1214532 (n=7), remove k digits (for example k=4) such that the resulting integer is the smallest one.

A greedy algorithm for this would keep removing digits such that the resulting integer is the smallest. For the above example:

```
Step 1: Remove 2 => 114532
Step 2: Remove 5 => 11432
Step 3: Remove 4 => 1132
Step 4: Remove 3 => 112
```

Can you prove that this algorithm is optimal (i.e. the final integer is the smallest possible)? Or if it is not, show a counterexample?
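For experimenting with the claim, here is a sketch of the greedy next to an exhaustive baseline (both function names are illustrative); the brute force makes it easy to hunt for counterexamples on small inputs before attempting a proof:

```python
from itertools import combinations

def remove_k_digits_greedy(num, k):
    # the greedy from the question: k times, delete the single digit whose
    # removal yields the smallest intermediate number (same-length digit
    # strings compare lexicographically exactly like numbers)
    s = str(num)
    for _ in range(k):
        s = min(s[:i] + s[i + 1:] for i in range(len(s)))
    return int(s)

def remove_k_digits_brute(num, k):
    # exhaustive baseline over all C(n, k) ways to drop k digits
    s = str(num)
    return min(int(''.join(c for j, c in enumerate(s) if j not in drop))
               for drop in combinations(range(len(s)), k))
```

Running the two side by side over many random small inputs is a cheap confidence check; a formal argument would likely be an exchange argument on where the first removed digit can sit.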


Limitless Integer in C++

C++ has several int data types with different sizes. Even though I’m not yet familiar with Python, I found out from the internet that it has just one integer data type with no limits (except the memory of the computer). I’ve “made” something similar in C++, but it’s inefficient. On the internet, I found that Python was written in C.

So, how would one “make” this limitless integer data type in C++, efficiently?
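The usual idea behind such “limitless” integers (in CPython and in C++ bignum libraries alike) is to store the number as a vector of fixed-size limbs and do schoolbook arithmetic on them; in C++ that would be, e.g., a `std::vector<uint32_t>` of limbs. A minimal sketch of just addition, written here in Python to keep it short (the function names are illustrative):

```python
BASE = 10**9  # one limb holds 9 decimal digits; C++ would use uint32_t limbs

def big_add(a, b):
    # schoolbook addition on little-endian base-10**9 limb vectors
    out, carry = [], 0
    for i in range(max(len(a), len(b))):
        carry += (a[i] if i < len(a) else 0) + (b[i] if i < len(b) else 0)
        out.append(carry % BASE)
        carry //= BASE
    if carry:
        out.append(carry)
    return out

def to_limbs(n):
    # split a non-negative int into limbs, least significant first
    limbs = [n % BASE]
    n //= BASE
    while n:
        limbs.append(n % BASE)
        n //= BASE
    return limbs

def from_limbs(limbs):
    return sum(limb * BASE**i for i, limb in enumerate(limbs))
```

Efficiency comes from operating on whole machine words per step instead of single digits; production libraries (e.g. GMP) add faster multiplication algorithms on top of the same limb representation.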

Why does integer overflow not cycle?

Imagine we have an 8-bit integer, so we can store integers from -128 to 127. If we add 2 to 127, it causes “Arithmetic operation ‘127 + 2’ (on type ‘Int8’) results in an overflow”. Since the leftmost bit is the sign bit, the result should be the changed sign bit plus the rest of the value, and should be -1 in Int8 (correct me if I’m wrong). But almost every time I try to hack the memory and add an overflowing value to a number, it becomes some random number, and I don’t know why.

The question is: why is it not cycled to the reversed number, and what is that random number (where did it come from)?
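Two notes that may clear this up. First, when overflow does wrap (two's-complement arithmetic), `127 + 2` on an 8-bit type gives `-127`, not `-1`: `0111'1111 + 2 = 1000'0001`. Second, the quoted error message looks like Swift, which traps on overflow by default rather than wrapping (wrapping requires the `&+` family of operators); poking at memory directly reads whatever bytes happen to be there, which is why those results look random. A small model of the wrap-around (the helper name is made up):

```python
def wrap_int8(x):
    # map any integer into the two's-complement signed 8-bit range [-128, 127]
    return (x + 128) % 256 - 128

print(wrap_int8(127 + 2))   # -127: 0b0111_1111 + 2 wraps to 0b1000_0001
print(wrap_int8(-128 - 1))  # 127: wraps around the other way
```

So the value does cycle, just through the whole range of 256 values rather than reflecting around zero.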