Linear Partition problem (Dynamic Programming) in a circular array

I have been practicing algorithms for the last couple of days and came across this question, the gist of which is:

Given apples and oranges arranged in a circle, indexed from 1 to n, where n is an odd number (we know which positions hold apples and which hold oranges), divide them into k contiguous groups, each having an odd number of fruits, such that as many groups as possible have more apples than oranges. Because the arrangement is circular, a group may wrap around; for example, a group can contain the fruits at indices (n-3, n-2, n-1, n, 1, 2, 3).

This appears to be the linear partition problem to me, but the circular arrangement confuses me. I was thinking of marking apples as +1 and oranges as -1 so that it is easy to tell which one dominates a group (if the sum is positive, apples outnumber oranges; otherwise oranges do). I also observed that k must be odd: n is odd and equals the sum of the k odd group sizes, and a sum of k odd numbers is odd only when k itself is odd.
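The +1/-1 trick and the circular wrap-around can be made concrete with a small sketch (Python; the array-doubling trick for handling the circle is my own suggestion, not part of the problem statement):

```python
def group_sums(fruits, groups):
    """fruits: e.g. ['A', 'O', 'A', ...] arranged in a circle (0-based here).
    groups: (start, length) pairs; a group may wrap past the end."""
    vals = [1 if f == 'A' else -1 for f in fruits]   # apple -> +1, orange -> -1
    doubled = vals + vals                            # unroll the circle
    return [sum(doubled[s:s + l]) for s, l in groups]

# A group has more apples than oranges iff its sum is positive.
# The first group below wraps around: indices 4, 0, 1.
print(group_sums(['A', 'O', 'A', 'A', 'O'], [(4, 3), (0, 3)]))  # [-1, 1]
```

With prefix sums over the doubled array, each group's sum can be read off in O(1), which is the usual starting point for a DP over the circle.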

In this case, do we have to maximize the number of groups with a positive sum?

It would be great if someone can help out.

Thanks a lot!!!

The Partition Problem: Reducing to prove NP-Completeness

I am struggling with the problem below and am curious to hear any guidance.

The Partition Problem is the following:

$\textbf{Instance:}$ A (multi-)set of numbers $S = \{a_1, a_2, \ldots, a_n\}$.

$\textbf{Question:}$ Can $S$ be partitioned into two (multi-)sets $A$ and $B$ such that the sum of the numbers in $A$ equals the sum of the numbers in $B$?

Prove that the Partition Problem is NP-complete.

Problems I know how to reduce from are 3-SAT, Vertex Cover, Subset Sum, Independent Set, and Clique. My assumption is that I should be reducing from Subset Sum, but I struggle with that problem as well.
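One piece of intuition that may help: Partition is only weakly NP-complete, because the standard subset-sum dynamic program decides it in pseudo-polynomial time (polynomial in the magnitude of the numbers, not their bit length). A minimal Python sketch of that DP, just to make the decision problem concrete:

```python
def can_partition(nums):
    """Decide whether nums splits into two multisets with equal sums."""
    total = sum(nums)
    if total % 2:               # an odd total can never split evenly
        return False
    reachable = {0}             # subset sums achievable so far
    for a in nums:
        reachable |= {s + a for s in reachable}
    return total // 2 in reachable

print(can_partition([1, 5, 11, 5]))  # True: {1, 5, 5} vs {11}
print(can_partition([1, 2, 5]))      # False
```

The usual reduction in the other direction, from Subset Sum with target $t$ to Partition, pads the instance with extra elements so that the two halves are forced to separate the target subset from the rest.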

Anyone able to help shed some light on this? Also, any explanation in plain English (maybe with some notation) would be greatly appreciated, as I am struggling with the concepts here. Mathematical notation alone might make it more difficult for me to understand at the moment.

Thank you in advance!

Computing the partition function

Are there any relatively efficient algorithms to compute the partition function $P(n)$ for a given $n$? What is the runtime complexity, or complexity class, of this problem as a function of $n$?

Perhaps I’m missing something trivial here, but other than listing some recursive definitions of this function and some of their asymptotics, I don’t see this precise information (complexity class, and known algorithms with their runtime/space complexity) in either of these articles:

  • Partition (number theory)
  • Partition function (number theory)
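For what it's worth, the classic dynamic program computes $P(n)$ with $O(n^2)$ big-integer additions; Euler's pentagonal-number recurrence cuts this to $O(n^{1.5})$ terms, and the fastest known implementations are based on the Hardy–Ramanujan–Rademacher series. A short sketch of the simple $O(n^2)$ DP (Python):

```python
def partition_count(n):
    """Number of partitions of n, via the 'coin change' style DP:
    ways[m] counts partitions of m using only the parts seen so far."""
    ways = [1] + [0] * n
    for part in range(1, n + 1):
        for m in range(part, n + 1):
            ways[m] += ways[m - part]
    return ways[n]

print(partition_count(10))   # 42
print(partition_count(100))  # 190569292
```

Note that since $P(n)$ grows like $e^{c\sqrt{n}}$, the numbers themselves have $O(\sqrt{n})$ digits, so the bit complexity is a factor beyond the count of additions.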

Using DD to get the hash of a non-system partition encrypted by VeraCrypt

I am trying to use DD for Windows to obtain the hash of a non-system partition that was encrypted via VeraCrypt, but I have run into a bit of a problem.

The command I used to get the hash of the encrypted partition looks like this:

dd if=\\?\Device\HarddiskVolume11 of=hash_output.txt bs=512 count=1 

And this command should, in theory, create a file called hash_output.txt that contains the encrypted header data, which should, for example, look something similar to this:


However, the output I am getting when issuing the DD command above looks more like this:

fb55 d397 2879 2f55 7653 24a3 c250 14d3 3711 7109 e563 617f ab73 f11a 3469 33bb 

This is obviously not the hash I was expecting, so I am hoping someone might be able to help me figure out what I am doing wrong.

Some things to note:

  • I am 100% positive that the drive I am selecting in the DD command is the right drive.
  • There is only 1 encrypted partition on the drive that spans the entire size of the drive.
  • There is no physical / functional damage to the drive which would cause this issue.
  • This is on an external 1 TB drive connected via USB 3.0 (I have tried other cables and ports).
  • The same DD command worked fine for a test drive that I encrypted using the same parameters that were set for this drive.

Why do B*-trees partition 2 nodes into 3?

I’m writing a B*-tree library in Rust. I’m thinking it might be better to make the purposeful decision to implement only half of the B* optimizations (and not because it is easier, although it is).

According to my reading, B* consists of 2 optimizations (for insertion, anyway):

  1. If a node is full but a sibling has free space, move an element over to the sibling before inserting; then no recursive split is necessary. With circular buffers this is really fast.
  2. If a node is full and has a full sibling, split the two nodes into 3, distributing the existing elements 2/3, 2/3, 2/3 among them.

Without the rule 2 optimization, when a node is full it is simply split into 2 nodes with a 1/2 1/2 distribution between them.
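As a toy model of rule 1 plus the plain split (Python rather than Rust; the flat list-of-sorted-lists stands in for the leaf level, and the capacity, names, and left-sibling-only spill are illustrative assumptions, not taken from any real B*-tree implementation):

```python
CAP = 4  # maximum elements per leaf (toy value)

def insert_sorted(leaves, value):
    """Insert value into a globally sorted list of sorted leaves, in place."""
    # Find the leaf this value belongs to.
    i = 0
    while i + 1 < len(leaves) and leaves[i + 1][0] <= value:
        i += 1
    leaf = leaves[i]
    if len(leaf) < CAP:                      # room: plain insert
        leaf.append(value)
        leaf.sort()
        return
    # Rule 1: spill the smallest element to the left sibling if it has room.
    if i > 0 and len(leaves[i - 1]) < CAP:
        leaf.append(value)
        leaf.sort()
        leaves[i - 1].append(leaf.pop(0))
        return
    # No rule 2 here: plain 1/2 1/2 split.
    leaf.append(value)
    leaf.sort()
    mid = len(leaf) // 2
    leaves[i] = leaf[:mid]
    leaves.insert(i + 1, leaf[mid:])

leaves = [[1, 2, 3, 4], [6, 7, 8, 9]]
insert_sorted(leaves, 5)    # no sibling has room -> split
insert_sorted(leaves, 10)   # left sibling now has room -> rule 1 spill
print(leaves)  # [[1, 2], [3, 4, 5, 6], [7, 8, 9, 10]]
```

This also illustrates the in-order insertion behavior described below: after a split, the new rightmost leaves keep refilling via rule 1 until they are full again.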

I see how rule 2 is theoretically interesting: it guarantees that every node in the tree is always at least 2/3 full (except for the root and for very small trees).

But consider the downside in the use case where you usually add elements in order (for example, inserting 1, 2, 3, 4, 5… mostly in order). With rule 2, the tree is almost always exactly 2/3 full and no better, because once a sibling gets its allotted 2/3 it is never looked at again. Without rule 2, the tree (at least the leaf level) stays nearly 100% full: the full last node is split into 2 nodes that each take 1/2 of the elements, the previous sibling remains full and untouched, and on subsequent insertions rule 1 fills the last 2 nodes back up to 100% before the next split.

As for random-order insertions: with rule 1 and without rule 2, the worst-case scenario (which I think is really rare and difficult to construct) is a 50% fill rate, and the best case is still a 100% fill rate. I would guess the average is the same as with rule 2, since the tree is still usually between 2/3 and 100% full.

For a tree with large nodes (approaching page size), a possible problem would be an increased number of page loads due to checking siblings. But it seems to me that although rule 2 reduces the variance of the tree's density, it does not do much to increase its absolute density; that is mostly accomplished by rule 1. So I would expect the average number of page accesses to be about the same, or probably lower.

Can someone give me a good reason to keep rule 2 when inserting into a B* Tree?

Prove Product Partition is NP-complete in the strong sense

I am trying to understand how to prove that the Product Partition problem is NP-complete in the strong sense. The problem is similar to the normal Partition problem, except that the product of the numbers in each subset is taken instead of the sum.

I have only managed to find one paper discussing the issue: paper

This proof is very complicated and tough to understand. Could someone provide a simpler explanation of the process?
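For concreteness, the problem itself is easy to state and brute-force (exponential-time Python sketch, only to pin down the definition). Note also that taking logarithms turns products into sums, but the resulting Subset Sum instance involves irrational numbers (or rationals of exponential precision), which is roughly why strong NP-completeness needs a more careful proof than that naive reduction:

```python
from math import prod

def product_partition(nums):
    """Brute force over subsets: can nums be split into two multisets
    with equal products?"""
    n = len(nums)
    total = prod(nums)
    # Fix element n-1's side to avoid checking each split twice.
    for mask in range(2 ** (n - 1)):
        p = prod(nums[i] for i in range(n) if mask >> i & 1)
        if p * p == total:
            return True
    return False

print(product_partition([2, 3, 6]))  # True: {6} vs {2, 3}
print(product_partition([2, 3, 5]))  # False
```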

How to perform PARTITION BY in SQLAlchemy

I am trying to use SQLAlchemy and wanted to see how to create the PARTITION BY on a column, but I couldn't find anything for SQLAlchemy with respect to the declarative extension. For example, given the following SQL syntax:

CREATE TABLE employees (
    id INT NOT NULL,
    fname VARCHAR(30),
    lname VARCHAR(30),
    job_code INT NOT NULL,
    store_id INT NOT NULL
)
PARTITION BY RANGE (store_id) (
    PARTITION p0 VALUES LESS THAN (6),
    PARTITION p1 VALUES LESS THAN (11),
    PARTITION p2 VALUES LESS THAN (16),
    PARTITION p3 VALUES LESS THAN MAXVALUE
);

And I created a table via sqlalchemy as follows:

class Employee(Base):
    __tablename__ = "employee"
    __table_args__ = {'mysql_engine': 'InnoDB'}

    emp_id = Column(Integer, nullable=False)
    fname = Column(String, nullable=False)
    lname = Column(String, nullable=False)
    job_code = Column(Integer, nullable=False)
    store_id = Column(Integer, nullable=False)

How can I implement partitioning in the above SQLAlchemy code?
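One possible avenue, sketched under the assumption that your SQLAlchemy version's MySQL dialect supports the `mysql_partition_by` / `mysql_partitions` pass-through table options (check the dialect docs for your version before relying on this), is to declare the partitioning in `__table_args__`. Explicit `PARTITION ... VALUES LESS THAN` clauses for RANGE partitioning are awkward to express this way, so HASH partitioning is shown for illustration:

```python
from sqlalchemy import Column, Integer, String
from sqlalchemy.orm import declarative_base

Base = declarative_base()

class Employee(Base):
    __tablename__ = "employee"
    # Assumption: 'mysql_partition_by'/'mysql_partitions' are accepted as
    # MySQL-dialect table options in your SQLAlchemy version.
    __table_args__ = {
        "mysql_engine": "InnoDB",
        "mysql_partition_by": "HASH(store_id)",
        "mysql_partitions": "4",
    }

    # MySQL requires the partitioning column to be part of every unique key,
    # so the primary key here is composite.
    emp_id = Column(Integer, primary_key=True)
    store_id = Column(Integer, primary_key=True)
    fname = Column(String(30))
    lname = Column(String(30))
    job_code = Column(Integer, nullable=False)
```

For the full RANGE syntax with explicit partition boundaries, a fallback is to emit the clause yourself, e.g. by attaching a `sqlalchemy.DDL` statement to the table's `after_create` event.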

NP-hard or not: partition with irrational input

Original Problem

Given a set $N=\{a_1,\ldots,a_n\}$ of $n$ positive numbers with $\sum_i a_i=1$, find a subset whose sum is $x_*$, where $x_*$ is a known irrational number and $x_*\approx 0.52$.

I proved its hardness with the following argument.


Given a set $N=\{a_1,\ldots,a_{n+2}\}$ with $n+2$ numbers, where

  • $a_1,\ldots,a_n$ are positive and rational
  • $\sum_{i=1}^n a_i = 0.02$
  • $a_{n+1}=x_*-0.01$
  • $a_{n+2}=0.99-x_*$

determine whether we can find a subset of $N$ such that the sum of the subset is $x_*$.


  • Since $x_*$ is irrational, the desired subset cannot contain both of the last two numbers: $a_{n+1}+a_{n+2}=0.98$, so such a subset would have a rational sum.

  • Since the sum of any subset not containing the $(n+1)$-st element is smaller than $x_*$, the desired subset must contain the $(n+1)$-st element.

  • The remaining question is whether some subset of the first $n$ numbers sums to $0.01$, which is exactly a Subset Sum instance.
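Writing out the arithmetic behind those bullets (all values come from the instance above):

```latex
\begin{align*}
\sum_{i=1}^{n+2} a_i &= 0.02 + (x_* - 0.01) + (0.99 - x_*) = 1,\\
\sum_{i=1}^{n} a_i + a_{n+2} &= 0.02 + (0.99 - x_*) = 1.01 - x_* \approx 0.49 < x_*,\\
x_* - a_{n+1} &= x_* - (x_* - 0.01) = 0.01.
\end{align*}
```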

So the original problem is NP-complete.

My problem

Someone argued that, since $x_*$ is irrational, it cannot be stored exactly on a machine, so my proof is not correct. How can I address this objection?

Possible to restore LUKS header from partition that uses the exact same password and keyfile?

I was in Windows 10 and it told me I needed to Initialize Disk, so I clicked it, and now my LUKS header has been overwritten. I created my encrypted partitions in Linux with the cryptsetup command. However, I have another drive that uses the same password and keyfile as the one with the overwritten header, and I was wondering whether I could back up the good partition's header and restore it onto the overwritten one, since both LUKS encrypted partitions were created at the same time with the exact same password/keyfile.

If anyone has experience with my situation, I would be immensely grateful if you could point me in the right direction, or just tell me whether I am screwed.

Thanks so much…