## Calculate the number of combinations of a sequence of numbers in a particular order

I am having trouble with a coding challenge in which I have to calculate the number of combinations of the numbers 0 to 9, of length n, subject to 2 rules –

The first number cannot be 0

Every subsequent number must be 0 or divisible by the previous number (the number 1 cannot be used as a divisor), for example [5,0], [1,0], [2,8], [4,8], [3,6]

For example, if the length n were 2, the number of combinations would be 23 – [1,0]…[9,0] (9) + [2,4], [2,6], [2,8], [3,6], [3,9], [4,8] (6) + [2,2]…[9,9] (8)

The answer can be code in any programming language or a formula for computing the count.
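One possible approach is a dynamic program over the last digit. The sketch below is my own illustration (not from the question), under the reading that after a 0 only another 0 may follow, since no digit is divisible by 0:

```python
# Sketch: count length-n digit sequences where the first digit is 1-9 and every
# later digit is either 0 or divisible by the previous digit; 1 is not allowed
# as a divisor, so after a 1 only 0 may follow (and after a 0, only 0).
def count_sequences(n):
    ways = [0] + [1] * 9              # length 1: one sequence per digit 1..9
    for _ in range(n - 1):
        new = [0] * 10
        for prev, w in enumerate(ways):
            if w == 0:
                continue
            for nxt in range(10):
                if nxt == 0 or (prev > 1 and nxt % prev == 0):
                    new[nxt] += w     # extend every sequence ending in prev
        ways = new                    # ways[d] = sequences of this length ending in d
    return sum(ways)
```

Since the transition table between digits is fixed, the same count could also be obtained by raising the 10×10 transition matrix to the (n−1)-th power, which gives an O(log n) formula-style answer.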

## Sequence of insert and delete operations in a (2,3)-tree

I need help understanding a theorem and its proof from my lecture notes. It says: “There is a sequence of $$n$$ insert and delete operations in a (2,3)-tree that requires $$\Omega(n \log n)$$ many split and merge operations.”

I actually have no idea what such a sequence looks like. I have tried performing some insert and delete operations on a (2,3)-tree and got different numbers of split and merge operations, but I think there is a “special” sequence that maximizes the number of split and merge operations.

I would be very grateful if someone could help me with this, so I can see how the proof of the theorem works.

Greetings

## Generating number sequence

I am very new to Mathematica. I am trying to generate lists of number sequences. I want to make 6 sequences: 1–10, 10–100, 100–1 000, 1 000–10 000, 10 000–100 000, all reversed. Is there an elegant way to approach this? I have tried to figure it out from the documentation, but I can’t. Thanks
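One possible idiom is a `Table` over the exponent, reversing each decade range; this is a sketch, and the iterator bound would need adjusting to however many decades are actually wanted:

```mathematica
(* One decade range per k, each reversed; {k, 0, 4} produces the
   five ranges listed above with integer steps of 1. *)
Table[Reverse[Range[10^k, 10^(k + 1)]], {k, 0, 4}]
```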

## Minimum increment/decrement to change an array into non-decreasing sequence

I was trying to solve a Codeforces problem: https://codeforces.com/contest/713/problem/C.

The solution I thought of was naive (I am a beginner in competitive programming), so I read the editorial. I understood the dynamic programming solution, but then I found a blog post – https://codeforces.com/blog/entry/47821 – that describes an O(n log n) solution to the problem. I tried hard to understand the details, but I do not understand why they are considering slopes, especially when they already have a recurrence relation at the start.

After looking at the implementation, what I think they did was keep track of the largest element found so far and treat it as the key point: any later element smaller than that number is raised to it, rather than lowering the larger number, because this reduces the chance of further modifications to the array. Is there something I am missing that was the point of the whole mathematical slope analysis? I would appreciate it if anybody could explain in simple terms what the blog is saying, or why this approach is correct. Here is the implementation:

```cpp
#include <stdio.h>
#include <queue>

int main() {
    int n, t;
    long long ans = 0;
    std::priority_queue<int> Q;     // max-heap of adjusted values seen so far
    scanf("%d%d", &n, &t);          // n and the first element
    Q.push(t);
    for (int i = 1; i < n; i++) {
        scanf("%d", &t);
        t -= i;                     // subtracting the index turns "strictly
                                    // increasing" into "non-decreasing"
        Q.push(t);
        if (Q.top() > t) {          // current value lies below the running max
            ans += Q.top() - t;     // pay the gap to fix the prefix
            Q.pop();
            Q.push(t);              // the old max is replaced by this value
        }
    }
    printf("%lld", ans);
    return 0;
}
```

## Is there an efficient algorithm for searching for a subsequence in a non-contiguous sequence?

Restriction: you cannot copy the arrays into a single contiguous array for searching.
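Under one reading of the question – the data is split across several chunks and “subsequence” means a not-necessarily-contiguous subsequence – a single greedy pass over the chunks suffices, with no copying. This is my own sketch of that interpretation:

```python
# Sketch: test whether `pattern` is a subsequence of the logical concatenation
# of `chunks`, scanning the chunks in place rather than copying them.
from itertools import chain

def is_subsequence(chunks, pattern):
    pos = 0  # index of the next pattern element still to be matched
    for x in chain.from_iterable(chunks):
        if pos < len(pattern) and x == pattern[pos]:
            pos += 1
            if pos == len(pattern):
                return True
    return len(pattern) == 0
```

The greedy scan is O(N + m) for total length N and pattern length m, and it is optimal for a single query; repeated queries against the same data would call for preprocessing (e.g. per-symbol position lists with binary search).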

## Forbidden Sequence Dynamic Programming

Given a finite set $$\Omega$$, I have the following problem. There is a list of forbidden subsequences $$F \subset \Omega \cup \Omega^2 \cup \Omega^3 \cup \dots$$, where $$f \subseteq S$$ denotes that $$f$$ is a subsequence of $$S$$. While we do not know the contents of the list beforehand, we can query any sequence $$S \in \Omega^i$$ to see whether $$\exists f \in F, f \subseteq S$$. I want to construct a sequence $$S \in \Omega^n$$ such that $$f \not\subseteq S, \forall f \in F$$.

I want to construct all the sequences $$S \in \Omega^n$$ such that $$f \not\subseteq S, \forall f \in F$$.

The approach I thought would be best is dynamic programming. We iteratively construct the valid sets $$V_k := \{S \in \Omega^k : f \not\subseteq S, \forall f \in F, |f| < k\}$$ by requiring $$s \in V_1 \cup \dots \cup V_{k-1}$$ for every proper subsequence $$s \subsetneq S$$, and then removing all $$S \in F$$ with queries. My question is: what is the most efficient way to construct $$V_k$$? One simple way would be to take $$V_{k-1}$$ and try appending each element of $$\Omega$$, followed by some extra queries, but is there a better way?

Additionally, are there elegant ways to use incomplete valid sets $$I_k \subseteq V_k$$, where if $$I_{k+1} := \{S \in \Omega^{k+1} \setminus F : s \in I_1 \cup \dots \cup I_k, \forall s \subsetneq S\}$$ is empty, we can retroactively expand everything without mostly starting from scratch?
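The “append each element of $$\Omega$$” extension step admits an apriori-style pruning: a candidate of length $$k$$ need only be queried if all of its length-$$(k-1)$$ subsequences are already known valid. A sketch, assuming a black-box `query(S)` oracle (my own naming) that returns True iff some forbidden $$f$$ is a subsequence of $$S$$:

```python
# Sketch: build V_k from V_prev (valid sequences of length k-1, as tuples)
# over alphabet OMEGA, spending queries only on pruned candidates.
def extend_valid(V_prev, OMEGA, query):
    V_k = set()
    for S in V_prev:
        for a in OMEGA:
            cand = S + (a,)
            # Every maximal proper subsequence of cand must itself be valid.
            # Deleting the last position gives S (already valid), so it
            # suffices to check cand with one earlier position removed.
            if all(cand[:i] + cand[i + 1:] in V_prev for i in range(len(S))):
                if not query(cand):      # final check against F itself
                    V_k.add(cand)
    return V_k
```

This mirrors candidate generation in frequent-itemset mining: the subsequence checks are free (set lookups), and queries are reserved for candidates that survive pruning.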

## Maximizing sum of numbers within a sequence

Write an algorithm that, given a sequence seq of n numbers, where 3 <= n <= 1000 and each number k in seq satisfies 1 <= k <= 200, finds the maximum sum obtainable by repeatedly removing one number from seq (never the first or last) and adding to the running sum the removed number plus its two neighbours. The algorithm ends when only two numbers are left.

For example:
[2, 1, 5, 3, 4], sum = 0
remove 1: sum = 2 + 1 + 5 = 8 → [2, 5, 3, 4]
remove 3: sum = 8 + 5 + 3 + 4 = 20 → [2, 5, 4]
remove 5: sum = 20 + 2 + 5 + 4 = 31 → [2, 4]
only two numbers left, so the algorithm ends

So far I’ve written a brute-force algorithm checking all possible removal orders, but it’s not well suited for large sequences.

My question is: is there a more efficient algorithm for this problem?
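This looks like a textbook interval DP (in the style of matrix-chain / burst-balloons problems): if $$k$$ is the *last* element removed strictly between positions $$i$$ and $$j$$, its neighbours at that moment are exactly seq[i] and seq[j]. A sketch of that recurrence, my own illustration rather than a known reference solution:

```python
# Sketch: dp(i, j) = best total for removing everything strictly between
# indices i and j; the answer is dp(0, n-1). O(n^3) time, O(n^2) memory.
from functools import lru_cache

def max_sum(seq):
    @lru_cache(maxsize=None)
    def dp(i, j):
        if j - i < 2:                 # nothing left between i and j
            return 0
        # k = last element removed inside (i, j); at that point its
        # neighbours are seq[i] and seq[j]
        return max(dp(i, k) + dp(k, j) + seq[i] + seq[k] + seq[j]
                   for k in range(i + 1, j))
    return dp(0, len(seq) - 1)
```

O(n³) is about 10⁹ basic steps at n = 1000, so this Python sketch is best treated as a specification; a compiled bottom-up implementation of the same recurrence should be fast enough.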

## How to generate a list (or even a sequence or a string) containing x[1]==0&&x[2]==0&&…&&x[10]==0?

I need to solve a bunch of systems of n equations in n unknowns, where n varies. I can do them one by one, but I wish to automate the process.

The first equation equals 1 and the rest all equal zero. I currently type in each set of equations manually as follows. I have the n unknowns in a list: list1={r,s,…,t}. I have the expressions for the left-hand sides in a second list: list2={expr1, …, exprn}. I write sol=Solve[list2[[1]]==1&&list2[[2]]==0 … &&list2[[n]]==0,list1]

I would like to just type Solve[expr,list1], where expr is something like list2[[1]]==1&&list2[[2]]==0 … &&list2[[n]]==0. I experimented with Table, RowBox, BoxData, and even CellPrint, but I don’t know enough to write an expression involving && and == that will work inside the Solve function.
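One way around building the `&&` expression by hand: `Solve` also accepts a *list* of equations, so the equations can be generated programmatically. A sketch, assuming `list1` and `list2` are already defined as described:

```mathematica
(* Pair each left-hand side with its right-hand side: 1 for the first
   equation, 0 for the rest. *)
rhs  = Join[{1}, ConstantArray[0, Length[list2] - 1]];
eqns = MapThread[Equal, {list2, rhs}];
sol  = Solve[eqns, list1]
```

If an explicit conjunction is ever needed, `And @@ eqns` folds the list into `eq1 && eq2 && …`.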

## Form input – Follow expected sequence or offer field validation?

Validation versus user expectations

My team has proposed changing the usual field order of an address form. They want to put the country before the other address fields so they can validate the subsequent fields depending on the selected country. The downside, however, is that this deviates from users’ mental model, will slow down data entry, and might be slightly annoying.

Some users will enter addresses a few times a day, others not so often.

Which solution is best?

## Is there any scenario whereby randomly shuffling a sequence improves its compressibility?

I’m performing some correlation assessment à la NIST Recommendation for the Entropy Sources Used for Random Bit Generation, § 5.1.

You take a test sequence and compress it with a standard compression algorithm. You then shuffle that sequence randomly using a PRNG and re-compress. We expect the randomly shuffled sequence to be harder to compress, as any and all redundancy and correlations will have been destroyed. Its entropy will have increased.

So if there is any autocorrelation, $$\frac{\text{size compressed shuffled}}{\text{size compressed original}} > 1$$.

This works with NIST’s recommended bz2 algorithm: on my data samples, the ratio is ~1.03, indicating a slight correlation within the data. When I switch to LZMA, however, the ratio is ~0.99, which is < 1. And this holds over hundreds of runs, so it’s not just a stochastic fluke.

What would cause the LZMA algorithm to consistently compress a randomly shuffled sequence (slightly) better than the non-shuffled one?
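For readers who want to reproduce the setup, Python’s standard-library `bz2` and `lzma` modules suffice. This sketch uses a synthetic correlated sample (my own stand-in for the real data), on which both ratios should exceed 1:

```python
# Sketch of the shuffle-and-compress test: compress the original, shuffle a
# copy with a seeded PRNG, compress again, and compare sizes.
import bz2, lzma, random

def compression_ratio(data: bytes, compress) -> float:
    shuffled = bytearray(data)
    random.Random(0).shuffle(shuffled)   # PRNG shuffle, fixed seed
    return len(compress(bytes(shuffled))) / len(compress(data))

# Synthetic sample with obvious local correlation: runs of repeated bytes.
sample = bytes(b for b in range(256) for _ in range(32))

r_bz2  = compression_ratio(sample, bz2.compress)
r_lzma = compression_ratio(sample, lzma.compress)
```

Note that both compressors add fixed header/overhead bytes, so for ratios as close to 1 as the 0.99 reported above, comparing raw `len()` values conflates that overhead with genuine compressibility differences.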