## Sums of $2^{-l}$ that add to 1

Consider the following problem:

You are given a finite list of positive integers $$(l_k)_{k\in \{ 1, \ldots, n \}}$$ such that $$\sum_{k=1}^n2^{-l_k}<1$$. Describe an algorithm to find a list $$(l'_k)_{k\in \{ 1, \ldots, n \}}$$ such that $$\forall k \in \{ 1, \ldots, n \}:l'_k\le l_k$$ and $$\sum_{k=1}^n2^{-l'_k}=1$$.

(For what it’s worth, this problem arises in information theory, where the Kraft–McMillan theorem implies that the resulting lengths $$(l'_k)$$ yield a more efficient binary code than one with codeword lengths $$(l_k)$$.)

Here are my initial thoughts. We can regard $$\sum_{k=1}^n2^{-l_k}$$ as a binary number, e.g. $$0.11010011$$, and then we need to decrease some $$l_k$$ whose digit position is preceded by a $$0$$. For instance, with the $$0$$ in the $$\frac{1}{8}$$ position of the example number above, we want to decrease some $$l_i=4$$ to $$l'_i=3$$, which adds $$\frac{1}{8}$$ to the sum and subtracts $$\frac{1}{16}$$ from it. We then have $$0.11100011$$, so we’ve moved the problematic $$0$$ along one digit. When we get to the end we presumably have something like $$0.11111110$$, and then we need to reduce the length of the longest codeword by 1 so that the sum overflows to $$1.00000000$$.

However, I encounter two problems. First, there may not be such an $$l_i=4$$ — for instance, if the $$1$$ in the $$\frac{1}{16}$$ digit place arises as the sum of three $$l_i=5$$ terms. Second, if we have multiple $$0$$ digits in a row, then we presumably need to scan ahead to the next $$1$$ and then decrement a corresponding $$l_i$$ several times, but it’s conceivable that we would “run out” of codewords long enough to manipulate in this way.

Can anyone describe an algorithm with a simple proof of correctness?

A follow-up problem: how do we generalise this algorithm to bases other than $$2$$?
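Edit: to make the target concrete, here is the kind of greedy I have in mind for the base-2 case (a sketch of my own, not a proven solution — proving it correct is exactly what I’m asking for). The idea is that the deficit $$1-\sum_k 2^{-l_k}$$ is always a positive multiple of $$2^{-\max_k l_k}$$, so decrementing a longest codeword never overshoots:

```python
from fractions import Fraction

def complete_kraft(lengths):
    """Candidate greedy: while the Kraft sum is below 1, decrement a
    longest codeword, which adds 2^-l to the sum (2^-(l-1) - 2^-l = 2^-l)."""
    ls = list(lengths)
    total = sum(Fraction(1, 2 ** l) for l in ls)
    assert total <= 1
    while total < 1:
        i = max(range(len(ls)), key=lambda j: ls[j])  # index of a longest codeword
        total += Fraction(1, 2 ** ls[i])
        ls[i] -= 1
    return ls
```

For example, `complete_kraft([1, 3, 3])` returns `[1, 2, 2]`, whose Kraft sum is exactly 1.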

## Comparing growth of two sums of functions

Does $$n+n^4$$ grow faster than $$n^2+n^3$$? If so, why?
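Here is a quick numerical check I ran: the ratio of the two expressions appears to grow without bound, presumably because the $$n^4$$ term eventually dominates every lower-order term, but I would like a rigorous reason.

```python
# Ratio (n + n^4) / (n^2 + n^3) for growing n; if it tends to infinity,
# the first expression grows strictly faster.
ratio = lambda n: (n + n ** 4) / (n ** 2 + n ** 3)
for n in (10, 100, 1000):
    print(n, ratio(n))
```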

## Local variables in sums and tables – best practices?

I stumbled on “Local variables when defining a function in Mathematica” on math.SE and decided to ask it here. Apologies if it is a duplicate — the only really relevant question with a detailed answer I could find here is “How to avoid nested With[]?”, but I find it somewhat too technical, and not really the same in essence.

Briefly, definitions like `f[n_] := Sum[Binomial[n, k], {k, 0, n}]` are very dangerous, since you never know when you will use a symbolic `k`: say, `f[k - 1]` evaluates to 0. This was actually a big surprise to me: for some reason I thought that summation variables, and the dummy variables in constructs like `Table`, were localized automatically!

As discussed in the answers there, it is not entirely clear what to use here: `Module` is completely OK but would share variables across stack frames, and `Block` does not solve the problem. There were also suggestions to use `Unique` or formal symbols.

What is the optimal solution? Is there an option to automatically localize dummy variables somehow?

## Number partitioning targeting ratio of subset sums and equal size

I’ve seen a number of questions and answers related to the partition problem — dividing a set into two subsets of equal size and sum — that use greedy or dynamic-programming solutions to get approximate answers. However, I am looking to split a set into two subsets with the minimum difference in size but a target ratio of sums.

I have tried variations of greedy algorithms that minimize the larger of the two deviations at each step of the calculation, but this just splits the difference between the two objectives. I’m happy with an approximate solution, since it needs to run in a reasonable amount of time.
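For reference, here is the exact fallback I can use on small inputs (all names below are invented for illustration): a dynamic program over reachable (subset size, subset sum) pairs, preferring sizes closest to half and breaking ties by distance from the target ratio. It is exact but exponential in the number of distinct sums, so it is only a baseline to compare approximations against:

```python
def best_split(values, ratio):
    """Return (size, sum) of one side of the best split: minimize the
    size difference first, then the distance of the sum from the target
    ratio. Exact, but only practical for small inputs."""
    total = sum(values)
    target = total * ratio / (1 + ratio)  # desired sum of one side
    reachable = {0: {0}}                  # size -> set of achievable sums
    for v in values:
        for c in sorted(reachable, reverse=True):  # 0/1-knapsack order
            for s in list(reachable[c]):
                reachable.setdefault(c + 1, set()).add(s + v)
    n = len(values)
    _, _, c, s = min((abs(2 * c - n), abs(s - target), c, s)
                     for c, ss in reachable.items() for s in ss)
    return c, s
```

For example, `best_split([1, 2, 3, 4], 1)` gives `(2, 5)`: two elements on each side, sums in ratio 1:1.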

## Enumerate all valid orders of subset sums

Given a positive integer $$n$$, we define an order of subset sums to be a sequence listing all subsets of $$\{1,\ldots,n\}$$. For example, when $$n=2$$, the sequence $$\emptyset,\{1\},\{2\},\{1,2\}$$ is an order of subset sums.

We call an order of subset sums $$S_1,\ldots,S_{2^n}$$ valid if there exist positive real numbers $$x_1,\ldots,x_n$$ such that $$\sum_{i\in S_1}x_i<\cdots<\sum_{i\in S_{2^n}}x_i$$. For example, when $$n=2$$, the sequence $$\emptyset,\{1\},\{2\},\{1,2\}$$ is a valid order of subset sums, but the sequence $$\emptyset,\{1\},\{1,2\},\{2\}$$ is not a valid order of subset sums because we cannot make $$x_1+x_2<x_2$$.

The question is, given $$n$$, how to enumerate all possible valid orders of subset sums. I know this problem cannot be solved in time polynomial in $$n$$, because there may be exponentially many valid orders of subset sums, so an algorithm with exponential time is welcome.

A trivial algorithm would be to iterate over all possible orders of subset sums and check each one for validity. But I cannot even find an (efficient) way to check whether an order of subset sums is valid.
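The best I have so far goes in the other direction: sample random positive weight vectors and record the orders they induce. Every order produced this way is valid by construction (the sampled weights are a witness), but there is no completeness guarantee, so this is only a sketch, not an answer to the enumeration question:

```python
import itertools
import random

def orders_by_sampling(n, trials=5000, seed=0):
    """Collect valid orders witnessed by random positive weights.
    Sound (every output is valid) but not guaranteed complete."""
    rng = random.Random(seed)
    subsets = [frozenset(c)
               for r in range(n + 1)
               for c in itertools.combinations(range(1, n + 1), r)]
    seen = set()
    for _ in range(trials):
        x = {i: rng.random() for i in range(1, n + 1)}  # positive weights
        seen.add(tuple(sorted(subsets, key=lambda S: sum(x[i] for i in S))))
    return seen
```

For $$n=2$$ this finds exactly the two valid orders ($$\{1\}$$ and $$\{2\}$$ may swap, the ends are forced).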

## Array sums for 3 distinct values

We have an array of $$N$$ integers ($$N \le 5000$$) with values in $$[-10^6,10^6]$$. We also have $$Q$$ queries ($$Q \le 10^5$$), each giving a range in the array.

For each query, we want to count the triples of indices in the range whose array values sum to zero. More formally, we want the number of ways to choose distinct indices $$i<j<k$$ within the given query range such that $$a[i]+a[j]+a[k] = 0$$.

I’m thinking of an $$O(N^2)$$ precomputation plus $$O(\log N)$$ time per query, but I am unable to come up with a concrete working idea. Any help would be appreciated.

Edit: queries can be processed offline, as the array does not need to be updated.
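For what it’s worth, a per-query $$O(m^2)$$ baseline (where $$m$$ is the range length) is easy to write, and is what I test faster ideas against. It sweeps the range once, maintaining a count of pair sums seen so far:

```python
from collections import Counter

def zero_triplets(a, lo, hi):
    """Count lo <= i < j < k <= hi with a[i] + a[j] + a[k] == 0.
    O(m^2) time, O(m^2) distinct pair sums in the counter."""
    pairs = Counter()  # sums a[i] + a[j] over lo <= i < j < current k
    ans = 0
    for k in range(lo, hi + 1):
        ans += pairs[-a[k]]          # pairs that complete a zero triple
        for i in range(lo, k):       # add pairs ending at k for later steps
            pairs[a[i] + a[k]] += 1
    return ans
```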

## In the lambda calculus with products and sums, is $f : [n] \to [n]$ $\beta\eta$-equivalent to $f^{n!+1}$?

$$\eta$$-reduction is often described as arising from the desire for functions that are point-wise equal to be syntactically equal. In a simply typed calculus with products this is sufficient, but when sums are involved I fail to see how to reduce point-wise equal functions to a common term.

For example, it is easy to verify that any function $$f: (1+1) \to (1+1)$$ is point-wise equal to $$\lambda x.f(f(f\,x))$$, and more generally one expects $$f$$ to be point-wise equal to $$f^{n!+1}$$ when $$f: A \to A$$ and $$A$$ has exactly $$n$$ inhabitants. Is it possible to reduce $$f^{n!+1}$$ to $$f$$? If not, is there an extension of the simply typed calculus which allows this reduction?
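The $$(1+1)$$ case is at least machine-checkable set-theoretically; a quick brute force over all four functions on a two-element set confirms the point-wise claim (this of course says nothing about $$\beta\eta$$-convertibility, which is the actual question):

```python
from itertools import product

def compose(f, g):
    """Compose two functions represented as dicts."""
    return {x: f[g[x]] for x in g}

dom = (0, 1)
funcs = [dict(zip(dom, imgs)) for imgs in product(dom, repeat=2)]
# f^3 agrees with f point-wise for every f on a 2-element set
assert all(compose(f, compose(f, f)) == f for f in funcs)
print("all", len(funcs), "functions on 2 elements satisfy f^3 = f point-wise")
```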

## Quickly obtaining sums of sets of numbers

We are given a set of $$n$$ bits, call them $$a_1$$, $$a_2$$,…,$$a_n$$. We are also given $$m$$ sums $$s_1$$, $$s_2$$,…,$$s_m$$, each of which is a sum of some of the bits. For example:

$$s_k = a_3 + a_5 + a_{17} + a_{22} + a_{35}$$

There is more structure to the sums, however. The sums are split into groups of $$\alpha$$ sums each, with every sum in exactly one group. For example, and to make things easier, sums $$s_1$$, $$s_2$$,…,$$s_{\alpha}$$ are in group 1, sums $$s_{\alpha + 1}$$,…,$$s_{2\alpha}$$ are in group 2, and so on. We also know that each bit will occur at most once in each group.

So, for example, the bit $$a_1$$ will appear at most once in each group, the bit $$a_2$$ will appear at most once in each group, and so on.

QUESTION

How fast can we calculate all of the sums?

MY IDEAS

If we assume that there are $$\alpha$$ sums in each group, then there are at most $$2^\alpha$$ combinations of bits. For example, if there are two sums, we know that there are four combinations of bits:

(0) Bits that are not in either sum

(1) Bits that are in sum 1 ($$s_1$$), but not in sum 2 ($$s_2$$)

(2) Bits that are not in sum 1 ($$s_1$$), but are in sum 2 ($$s_2$$)

(3) Bits that are in both sums.

Thus we need at most $$n$$ additions to calculate the sums of each group. So our total time is at most $$n(m/\alpha)$$ additions, since there are $$m/\alpha$$ groups.

However, I believe that we can do better! I’m guessing that we can also use subtraction and sums from different groups to arrive at a much better algorithm.
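As a concrete baseline realizing the $$n(m/\alpha)$$ bound, assuming each bit really does occur in at most one sum per group (the data layout below is invented for illustration):

```python
def all_sums(bits, groups):
    """groups[g][j] = indices of the bits appearing in sum j of group g.
    If the index lists within one group are disjoint, each bit is added
    at most once per group, so each group costs at most n additions."""
    return [[sum(bits[i] for i in idx) for idx in group] for group in groups]

bits = [1, 0, 1, 1]
groups = [
    [[0, 2], [1, 3]],    # group 1: s1 = a1 + a3, s2 = a2 + a4
    [[0, 1, 2, 3]],      # group 2: s3 = a1 + a2 + a3 + a4
]
print(all_sums(bits, groups))
```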

## Sum of n sums, Permutations of the indices, how to write them in Mathematica?

I was wondering how to write a function $$F (r, q, n, f)$$ in Mathematica, defined in this way:

$$F(r,q,n,f):=\sum_{i_0=1}^q f(i_0) \Biggl(\sum_{i_1=i_0+1}^{q+1} f(i_1)\biggl(\sum_{i_2=i_1+1}^{q+2} f(i_2)\Bigl(\ldots\Bigl(\sum_{i_n=i_{n-1}+1}^{q+n} f(i_n)\Bigr)\ldots\Bigr)\biggr)\Biggr)$$ e.g. $$\sum_{i_0=1}^2 f(i_0) \Biggl(\sum_{i_1=i_0+1}^{3} f(i_1)\biggl(\sum_{i_2=i_1+1}^{4} f(i_2)\biggr)\Biggr)=f(1)f(2)f(3)+f(1)f(2)f(4)+f(1)f(3)f(4)+f(2)f(3)f(4)$$

Does an operator already exist in Mathematica that can be used in this way?

Trying to write this function in Mathematica, I realized that the nesting depth is variable, and I don’t know how to program it in this case.

Thank you.


A further example:

$$\sum_{i_0=1}^1 f(i_0) \Biggl(\sum_{i_1=i_0+1}^{2} f(i_1)\biggl(\sum_{i_2=i_1+1}^{3} f(i_2)\Bigl(\sum_{i_3=i_2+1}^{4} f(i_3)\Bigr)\biggr)\Biggr)=f(1)f(2)f(3)f(4)$$
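While I look for a Mathematica idiom, here is the specification in Python as a reference (recursing on the variable nesting depth; the unused parameter $$r$$ is omitted). If I am reading the bounds correctly, the value also equals a sum of $$f$$-products over all $$(n+1)$$-element subsets of $$\{1,\ldots,q+n\}$$, which the second function checks against the first for small cases:

```python
from itertools import combinations
from math import prod

def F(q, n, f):
    """Direct transcription: level j sums i_j from i_{j-1}+1 up to q+j."""
    def level(j, lo):
        if j > n:
            return 1
        return sum(f(i) * level(j + 1, i + 1) for i in range(lo, q + j + 1))
    return level(0, 1)

def F_subsets(q, n, f):
    """Conjectured closed form: products over (n+1)-subsets of {1,...,q+n}."""
    return sum(prod(f(i) for i in S)
               for S in combinations(range(1, q + n + 1), n + 1))
```

With $$f$$ the identity, both give 50 for $$q=2, n=2$$ (that is $$1{\cdot}2{\cdot}3+1{\cdot}2{\cdot}4+1{\cdot}3{\cdot}4+2{\cdot}3{\cdot}4$$) and 24 for $$q=1, n=3$$.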

## Minimize cost of recursive pairwise sums: how to prove the greedy solution works?

The problem is in this other question.

Why does this always work? It’s not clear to me how one would use induction.

For $$n = 3$$, a quick calculation shows it works, and I believe it generalizes, but I don’t see how to prove it.
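Assuming the linked problem is the classic one — repeatedly replace two numbers by their sum, paying that sum, and minimize the total cost — here is the greedy I mean, for concreteness (this heap-based sketch is mine, not from the linked question): at each step it merges the two smallest numbers, exactly as in Huffman’s algorithm.

```python
import heapq

def min_merge_cost(nums):
    """Greedy: always merge the two smallest numbers, paying their sum.
    A sketch of the greedy under discussion, assuming the classic
    pairwise-merge cost problem."""
    h = list(nums)
    heapq.heapify(h)
    cost = 0
    while len(h) > 1:
        a, b = heapq.heappop(h), heapq.heappop(h)
        cost += a + b
        heapq.heappush(h, a + b)
    return cost
```

For `[1, 2, 3]` the greedy merges 1+2 (cost 3), then 3+3 (cost 6), for a total of 9, which a quick check of the other two merge orders (10 and 11) confirms is optimal.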