Using DD to get the hash of a non-system partition encrypted by VeraCrypt

I am trying to use DD for Windows to obtain the hash of a non-system partition that was encrypted via VeraCrypt, but I have run into a bit of a problem.

The command I used to get the hash of the encrypted partition looks like this:

dd if=\\?\Device\HarddiskVolume11 of=hash_output.txt bs=512 count=1

And this command should (in theory) create a file called hash_output.txt that contains the encrypted hash, which should look something like this:

(Šö÷…o¢–n[¨hìùlŒ‡¬»J`<Q›þIšê1ªCúÍbÔcN„ÐŒ3+d.dWr€-¡tä66¶ˆÎ  

However, the output I am getting when issuing the DD command above looks more like this:

fb55 d397 2879 2f55 7653 24a3 c250 14d3 3711 7109 e563 617f ab73 f11a 3469 33bb 

This is obviously not the hash I was expecting, so I am hoping someone can help me figure out what I am doing wrong.
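As a sanity check I could run on the dump file (a minimal sketch; it only assumes the output filename from the dd command above), to tell whether the file holds raw bytes or an ASCII hex rendering of them:

# Sketch: inspect the file that dd wrote (filename taken from the command above)
# and report whether it contains raw bytes or ASCII hex text.
import binascii

with open("hash_output.txt", "rb") as f:
    data = f.read()

print("file size:", len(data), "bytes")  # a raw 512-byte dump should be exactly 512
print("first 32 bytes as hex:", binascii.hexlify(data[:32]).decode())

# If every byte is a hex digit or whitespace, the file is a hex rendering
# of the sector rather than the raw sector itself.
if data and all(chr(b) in "0123456789abcdefABCDEF \r\n" for b in data):
    print("looks like ASCII hex text, not raw bytes")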

Some things to note:

  • I am 100% positive that the drive I am selecting in the DD command is the right drive.
  • There is only one encrypted partition on the drive, and it spans the entire drive.
  • There is no physical / functional damage to the drive which would cause this issue.
  • This is an external 1 TB drive connected via USB 3.0 (I have tried other cables and ports).
  • The same DD command worked fine for a test drive that I encrypted using the same parameters that were set for this drive.

Why do B*-trees partition 2 nodes into 3?

I’m writing a B*-tree library in Rust. I’m thinking it might be better to make the purposeful decision to only implement half of the B* optimizations. (And not because it is easier, although it is.)

According to my reading, B* amounts to 2 optimizations (for insertion, anyway):

  1. If a node is full but a sibling has free space, move something over to the sibling before inserting; then no recursion is necessary. With circular buffers this is really fast.
  2. If a node is full and also has a full sibling, split the two of them into 3 nodes and distribute the existing elements 2/3, 2/3, 2/3 among them.

Without the rule 2 optimization, when a node is full, it is just split into 2 nodes with a 1/2 1/2 distribution between them.
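To make the two strategies concrete, here is a rough standalone sketch (plain Python lists rather than my Rust nodes, leaf level only, ignoring parent separator keys; it only shows how the elements get distributed):

# Illustrative only: how elements get distributed by the two split strategies.
CAPACITY = 6

def split_half(node):
    # Plain B-tree style: one full node becomes two roughly half-full nodes.
    mid = len(node) // 2
    return [node[:mid], node[mid:]]

def split_two_thirds(node, sibling):
    # B*-tree rule 2: a full node plus a full sibling become three nodes,
    # each about 2/3 full.
    merged = sorted(node + sibling)
    third = (len(merged) + 2) // 3
    return [merged[:third], merged[third:2 * third], merged[2 * third:]]

full_a = list(range(CAPACITY))                 # a full node
full_b = list(range(CAPACITY, 2 * CAPACITY))   # its full right sibling
print(split_half(full_a))                      # two nodes at ~1/2 fill
print(split_two_thirds(full_a, full_b))        # three nodes at ~2/3 fill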

I see why rule 2 is theoretically interesting: it guarantees that every node in the tree is always at least 2/3 full (except for the root and for very small trees).

But consider this downside: the use case where you usually add the elements in order (for example, adding 1, 2, 3, 4, 5… mostly in order). If you keep rule 2, the tree is almost always exactly 2/3 full and no better, because after a sibling gets its allotted 2/3 it is never looked at again. But without rule 2, the tree (at least the leaves, anyway) is nearly 100% full all the time: the last node, once full, gets split into 2 nodes that each get 1/2 of the elements, while the previous sibling remains full and untouched. On subsequent insertions, due to rule 1, the last 2 nodes fill up to 100% again before splitting.

As for random-order insertions, with rule 1 but without rule 2, the worst-case scenario (which I think is really rare and hard to construct) is a 50% fill rate, and the best case is still a 100% fill rate. The average, I would guess, is about the same as with rule 2, since the tree is still usually between 2/3 and 100% full.

For a tree with large nodes (approaching page size), a possible problem would be an increased number of page loads because of the need to check siblings. But it seems to me that even though rule 2 reduces the variance in the density of the tree, it doesn’t do much to actually increase the absolute density of the tree; that is accomplished more by rule 1. So I would expect the average number of page accesses to be about the same, or probably lower.

Can someone give me a good reason to keep rule 2 when inserting into a B*-tree?

Prove Product Partition is NP-complete in the strong sense

I am trying to understand how to prove that the Product Partition problem is NP-complete in the strong sense. The problem is similar to the normal Partition problem, except that in this case the product of the elements in each subset is taken instead of their sum.
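To pin down exactly what I mean (this is just a brute-force sketch of the decision problem, not related to the proof):

# Brute-force check: can the numbers be split into two parts with equal products?
from itertools import combinations
from math import prod

def product_partition(nums):
    n = len(nums)
    total = prod(nums)
    for r in range(n + 1):
        for left in combinations(range(n), r):
            p = prod(nums[i] for i in left)
            if p * p == total:   # then the complement's product equals p as well
                return True
    return False

print(product_partition([2, 8, 4]))   # True: {8} and {2, 4}
print(product_partition([2, 3, 5]))   # False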

I have only managed to find one paper discussing the issue: paper

This proof is very complicated and tough to understand. Could someone provide a simpler explanation of the process?

How to perform PARTITION BY in SQLAlchemy

I am trying to use SQLAlchemy and wanted to see how to create the PARTITION BY on a column, but I couldn’t find anything for SQLAlchemy with respect to the declarative extension. For example, say I have the following SQL syntax:

CREATE TABLE employees (
    id INT NOT NULL,
    fname VARCHAR(30),
    lname VARCHAR(30),
    job_code INT NOT NULL,
    store_id INT NOT NULL
)
PARTITION BY RANGE (store_id) (
    PARTITION p0 VALUES LESS THAN (6),
    PARTITION p1 VALUES LESS THAN (11),
    PARTITION p2 VALUES LESS THAN (16),
    PARTITION p3 VALUES LESS THAN MAXVALUE
);

And I created a table via SQLAlchemy as follows:

class Employee(Base):
    __tablename__ = "employee"
    __table_args__ = {'mysql_engine': 'InnoDB'}

    emp_id = Column(Integer, nullable=False)
    fname = Column(String, nullable=False)
    lname = Column(String, nullable=False)
    job_code = Column(Integer, nullable=False)
    store_id = Column(Integer, nullable=False)

How can I implement the partitioning in the above SQLAlchemy code?
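One direction I have been considering (an unverified sketch, not necessarily the canonical way): keep the declarative model and attach the PARTITION BY clause as DDL that runs right after CREATE TABLE. The column lengths and the composite primary key below are my own additions, since MySQL requires the partitioning column to be part of every primary/unique key and the ORM needs a primary key to map the class:

from sqlalchemy import Column, Integer, String, DDL, event
from sqlalchemy.orm import declarative_base   # SQLAlchemy 1.4+ import path

Base = declarative_base()

class Employee(Base):
    __tablename__ = "employee"
    __table_args__ = {"mysql_engine": "InnoDB"}

    emp_id = Column(Integer, primary_key=True)
    store_id = Column(Integer, primary_key=True)   # partition column must be in the PK
    fname = Column(String(30), nullable=False)
    lname = Column(String(30), nullable=False)
    job_code = Column(Integer, nullable=False)

# Emit the partitioning clause right after the table is created.
event.listen(
    Employee.__table__,
    "after_create",
    DDL(
        "ALTER TABLE employee PARTITION BY RANGE (store_id) ("
        " PARTITION p0 VALUES LESS THAN (6),"
        " PARTITION p1 VALUES LESS THAN (11),"
        " PARTITION p2 VALUES LESS THAN (16),"
        " PARTITION p3 VALUES LESS THAN MAXVALUE)"
    ),
)

With this, Base.metadata.create_all(engine) creates the table and then emits the ALTER statement.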

NP-hard or not: partition with irrational input


Original Problem

Given a set $N=\{a_1,\dots,a_n\}$ with $n$ positive numbers and $\sum_i a_i=1$, find a subset whose sum is $x_*$, where $x_*$ is a known irrational number and $x_*\approx 0.52$.

I proved its hardness by the following arguments.

Instance

Given a set $N=\{a_1,\dots,a_{n+2}\}$ with $n+2$ numbers where

  • $a_1,\dots,a_n$ are positive and rational
  • $\sum_{i=1}^n a_i = 0.02$
  • $a_{n+1}=x_*-0.01$
  • $a_{n+2}=0.99-x_*$

determine whether we can find a subset of $N$ such that the sum of the subset is $x_*$.
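(Note that this is a legitimate instance of the original problem: all $n+2$ numbers are positive, and $\sum_{i=1}^{n+2} a_i = 0.02 + (x_*-0.01) + (0.99-x_*) = 1$.)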

NP-complete

  • Since $x_*$ is irrational, the desired subset cannot contain both of the last two numbers: all the other numbers are rational and $a_{n+1}+a_{n+2}=0.98$, so a subset containing both would have a rational sum.

  • Since the sum of any subset not containing the $(n+1)$-st element is at most $0.02+(0.99-x_*)=1.01-x_* < x_*$ (because $x_*\approx 0.52$), the desired subset must contain the $(n+1)$-st element.

  • The remaining question is whether there is a subset of the first $n$ numbers whose sum is $x_*-a_{n+1}=0.01$; since $a_1,\dots,a_n$ are arbitrary positive rationals summing to $0.02$, this is the Partition problem after rescaling.

So the original problem is NP-complete.

My problem

Someone argued that since $x_*$ is irrational, I can’t store irrational numbers in a machine properly, so my proof is not correct. How can I address this?

Possible to restore LUKS header from partition that uses the exact same password and keyfile?

I was in Windows 10 and it told me I needed to Initialize Disk, so I clicked it, and now my LUKS header has been overwritten. I created my encrypted partitions in Linux with the cryptsetup command. However, I have another drive that uses the same password and keyfile as the one with the overwritten partition header, … and I was wondering if it would be possible to back up the good partition header and use it for the one that has been overwritten, since both LUKS encrypted partitions were created at the same time and with the exact same password/keyfile?

If anyone has experience in my position, I would be immensely grateful if you could point me in the right direction, or just tell me if I am screwed.

Thanks so much…

Find the max partition of unique elements where each element corresponds to the set pool containing that element

Given a list of sets:

a b c -> _
c d   -> d
b d   -> b
a c   -> a
a c   -> c

The objective is to pick as many distinct elements as possible and assign each one to a different set that contains it (in the example above, every set except the first one ends up with its own element).

I was thinking about ordering the elements in O(n log n) by their number of occurrences across the sets, then iteratively starting with the lowest and reordering the list each time by subtracting the occurrences of the other elements that appear in the sets containing the removed element. We are able to do so because each unique element keeps a set of pointers to the lists that contain it.

I can store the unique elements with their occurrence counts in a min-heap, where each unique element holds a handle to its node within the min-heap; thus we can remove the minimum, and also decrease the key by one for the other elements in the same list as the removed element, in O(log n), given that we have their handles.
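A rough Python sketch of the idea (heapq has no decrease-key, so this version uses lazy updates instead, re-pushing an element with its current count whenever a stale entry surfaces; all names are placeholders):

import heapq
from collections import defaultdict

def greedy_assignment(sets):
    # sets: list of iterables of hashable, mutually comparable elements.
    # Returns {element: index of the set it was assigned to}.
    containing = defaultdict(set)          # element -> indices of sets that hold it
    for idx, s in enumerate(sets):
        for elem in set(s):
            containing[elem].add(idx)

    heap = [(len(ids), elem) for elem, ids in containing.items()]
    heapq.heapify(heap)

    assigned = {}      # element -> chosen set index
    used_sets = set()  # sets already spent on some element

    while heap:
        count, elem = heapq.heappop(heap)
        if elem in assigned:
            continue
        live = containing[elem] - used_sets    # sets still available for elem
        if not live:
            continue                           # element can no longer be represented
        if len(live) != count:                 # stale entry: re-push with current count
            heapq.heappush(heap, (len(live), elem))
            continue
        chosen = min(live)                     # pick any remaining set containing elem
        assigned[elem] = chosen
        used_sets.add(chosen)
    return assigned

print(greedy_assignment([{"a", "b", "c"}, {"c", "d"}, {"b", "d"}, {"a", "c"}, {"a", "c"}]))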

Is the approach feasible at all? If not, what approach can I make use of?

Partition into paths in a Directed Acyclic Graph

I have a directed acyclic graph $G=(V,A)$, and I want to cover the vertices of $G$ with a minimum number of paths such that each vertex $v_i$ is covered by $b_i$ different paths.

When $b_i=1$ for all vertices, the problem can be solved in polynomial time. But I am searching for the complexity of the problem when $b_i>1$ for at least one vertex $v_i$; do you know of any results that may help me?
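For reference, the $b_i=1$ case I mentioned is the classical minimum vertex-disjoint path cover, which equals $|V|$ minus a maximum bipartite matching; here is a small self-contained sketch of that construction (vertices are 0..n-1, nothing specific to my $b_i>1$ variant):

def min_path_cover(n, edges):
    # Minimum number of vertex-disjoint paths covering a DAG on vertices 0..n-1:
    # build a bipartite graph with a left and a right copy of every vertex and
    # one edge (u_left, v_right) per arc (u, v); the answer is n - max matching.
    adj = [[] for _ in range(n)]
    for u, v in edges:
        adj[u].append(v)

    match_right = [-1] * n  # match_right[v] = left vertex matched to the right copy of v

    def try_augment(u, seen):
        for v in adj[u]:
            if not seen[v]:
                seen[v] = True
                if match_right[v] == -1 or try_augment(match_right[v], seen):
                    match_right[v] = u
                    return True
        return False

    matching = sum(try_augment(u, [False] * n) for u in range(n))
    return n - matching

print(min_path_cover(4, [(0, 1), (1, 2), (0, 3)]))  # 2, e.g. the paths 0->1->2 and 3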

Subset-Sum problem reduction to the Partition problem

Given an instance (S, k) of the Subset-Sum problem, where S is a set of integers and k is another integer, we transform it into an instance S’ = S ∪ { x, y } of the Partition problem, where x = sum(S) + k, y = 2·sum(S) − k, and sum(S) = Σ_{s∈S} s. Prove that S’ can be constructed from S in polynomial time, and that there exists a subset W ⊆ S’ with sum(W) = k iff S’ can be partitioned into X and Y such that sum(X) = sum(Y), where S’ = X ∪ Y and X ∩ Y = ∅.
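As a starting point for the proof, it may help to write out the total weight (assuming, as is usual for this exercise, that the elements of S are positive): sum(S’) = sum(S) + (sum(S) + k) + (2·sum(S) − k) = 4·sum(S), so each side of a perfect partition must sum to exactly 2·sum(S). Since x + y = 3·sum(S) > 2·sum(S), the two new elements must land on opposite sides, and the side containing y = 2·sum(S) − k needs exactly k more from S, i.e., a subset W with sum(W) = k.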

Deleted Partition and backing up data on Windows

In order to make a partition to install Ubuntu, I deleted my whole partition on the hard disk and thought all my data was lost. Then I came across the notion that my data should still be there even though I deleted the partition by mistake. So I installed third-party software, recovered the data, and transferred it to another hard drive. Now I can see that the data is backed up, but the size of the files is 0 KB. The directory structure is the same as previously stored on the drive. Any insights on this, and suggestions for retrieving the data again, would be helpful.