A polynomial-time reduction and the size of the problem (exact cover)

The exact cover problem is one of the NP-complete problems.

Given a family $\mathbb{I}$ of subsets of the set $[n]=\{1,\dotsc,n\}$, decide whether there exists a subfamily $\mathbb{I}'\subseteq \mathbb{I}$ such that the sets in $\mathbb{I}'$ are pairwise disjoint and $\cup\mathbb{I}' = \cup\mathbb{I} = [n]$.

For example, if $\mathbb{I}=\{\{1,2,3\},\{3,4\},\{2,4\},\{4,5\}\}$ and $n=5$, then $\mathbb{I}'=\{\{1,2,3\},\{4,5\}\}$ is an exact cover.
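For concreteness, here is a minimal brute-force sketch (my own illustration, not part of any reduction, and exponential in $|\mathbb{I}|$) that finds such a subfamily for the example above:

```python
from itertools import combinations

def exact_cover(family, n):
    """Try every subfamily and return one whose sets are disjoint
    and cover {1, ..., n}; return None if no exact cover exists."""
    universe = set(range(1, n + 1))
    for r in range(len(family) + 1):
        for sub in combinations(family, r):
            union = set().union(*sub)
            # Disjointness + full coverage <=> union is [n] and the sizes add up to n.
            if union == universe and sum(len(s) for s in sub) == n:
                return list(sub)
    return None

print(exact_cover([{1, 2, 3}, {3, 4}, {2, 4}, {4, 5}], 5))
# [{1, 2, 3}, {4, 5}]
```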

I am trying to prove that a problem I am working on is NP-complete by giving a polynomial-time reduction from an arbitrary exact cover instance to my problem. What bothers me is that, without constraining the size of $\mathbb{I}$, it seems impossible(!) to perform the reduction in time $O(\mathrm{poly}(n))$.

It seems that I am missing something about polynomial-time reductions. Should the bound be more like $O(\mathrm{poly}(n + |\mathbb{I}|))$ or $O(\mathrm{poly}(n + n|\mathbb{I}|))$, since each set in $\mathbb{I}$ can have up to $n$ elements?
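For intuition, here is my own back-of-the-envelope estimate (not from the original question) of the instance size when each set is written out explicitly, with each element taking $O(\log n)$ bits:

$$|\langle \mathbb{I}, n\rangle| \;=\; \Theta\Big(\log n + \sum_{S\in\mathbb{I}} |S|\log n\Big) \;=\; O\big(n\,|\mathbb{I}|\log n\big),$$

so under this encoding assumption "polynomial in the input size" is closer to $\mathrm{poly}(n + |\mathbb{I}|)$ than to $\mathrm{poly}(n)$ alone.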

Mount size calculations

In our latest session a party member purchased a baby triceratops. According to D&D sources (specifically ToA), a baby triceratops is a Medium-sized creature, and my party member is also Medium size.

The player asked whether, while mounted on the triceratops, they are considered a Large creature or still Medium. Are there any official rules regarding the sizes of mounted beasts?

Thank you in advance.

Time complexity of an algorithm inversely proportional to the size of the subproblem?

Let’s say I have an algorithm with time complexity $T_n = T_{\frac{n-1}{2}} + 1$, where $T_0 = 0$ and $T_1 = 1$.

Assume (induction hypothesis) that $T_n = C\log_2(n+1)$ for some constant $C$; the base case $T_1 = 1$ imposes $C \geq 1$.

Therefore, from the induction hypothesis (plugging the formula in at $\frac{n-1}{2}$):

$$T_n = C\log_2\left(\tfrac{n-1}{2}+1\right) + 1$$
$$T_n = C\log_2\left(\tfrac{n+1}{2}\right) + 1$$
$$T_n = C\log_2(n+1) - C + 1$$

Setting $C = 1$ gives

$$T_n = C\log_2(n+1)$$

And so by induction

$$T_n = \log_2(n+1)$$
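As a quick numerical sanity check (my own script, assuming the subproblem size $\frac{n-1}{2}$ is taken with integer floor division), the closed form matches exactly at $n = 2^k - 1$:

```python
import math

def T(n):
    # Recurrence as written, with (n - 1) // 2 as the integer subproblem size.
    if n <= 1:
        return n
    return T((n - 1) // 2) + 1

# log2(n + 1) should match exactly when n + 1 is a power of two.
for k in range(1, 8):
    n = 2 ** k - 1
    print(n, T(n), math.log2(n + 1))
```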

This result makes very little sense to me. How come reducing the size of the subproblem from $n/2$ to $(n-1)/2$ increases the running time? (From the Master theorem, $T_n = T_{n/2} + 1 \Rightarrow T_n = \log_2(n)$; similarly, increasing the subproblem to $(n+1)/2$ gives $T_n = \log_2(n-1)$.) What am I failing to see here?

Is there an error in the universal monster rules for table:natural attacks by size?

I am working on a druid character and just getting around to wild shape's natural attacks, and I spotted what appears to be an error in the damage progression under the universal monster rules, which conflicts with the newer damage dice progression rules.

Official Damage Dice Progression Chart
$$\begin{array}{l|l} \text{Dice Level} & \text{Dice}\\ \hline 0 & 0 \\ 1 & 1d1 \\ 2 & 1d2 \\ 3 & 1d3 \\ 4 & 1d4 \\ 5 & 1d6 \\ 6 & 1d8 \\ 7 & 1d10 \\ 8 & 2d6 \\ 9 & 2d8 \\ 10 & 3d6 \\ 11 & 3d8 \\ 12 & 4d6 \\ 13 & 4d8 \\ 14 & 6d6 \\ 15 & 6d8 \\ 16 & 8d6 \\ 17 & 8d8 \\ 18 & 12d6 \\ 19 & 12d8 \\ 20 & 16d6 \end{array}$$

Here is the chart for a bite attack:

$$\begin{array}{l|l|l} \text{Size} & \text{Dice Level} & \text{Dice}\\ \hline \text{Fine} & 1 & 1d1 \\ \text{Diminutive} & 2 & 1d2 \\ \text{Tiny} & 3 & 1d3 \\ \text{Small} & 4 & 1d4 \\ \text{Medium} & 5 & 1d6 \\ \text{Large} & 6 & 1d8 \\ \text{Huge} & 8 & 2d6 \\ \text{Gargantuan} & 9 & 2d8 \\ \text{Colossal} & 12 & 4d6 \end{array}$$

Here we can clearly see that the step from Huge to Gargantuan is only 1 dice level and not 2, and from Gargantuan to Colossal it is 3 instead of 2. If Gargantuan were changed from 9 to 10, the chart would be correct; it's the only value that's off. So is this just a mistake, or is this actually what it's supposed to be?

Pathfinder Natural Weapon Damage by Size Inconsistency

Source: https://www.d20pfsrd.com/bestiary/rules-for-monsters/universal-monster-rules/#Natural_Attacks

Take, for example, a bite attack. According to the table, a Small bite does 1d4 damage, Medium 1d6, Large 1d8, Huge 2d6, Gargantuan 2d8, and Colossal 4d6. Presumably it would continue 4d8, 8d6, etc.

However, if you look at the FAQ entry right below it, it says that as you increase the damage, the progression would be 1d4, 1d6, 1d8, 2d6, 3d6, 4d6, 6d6, 8d6, 12d6, etc.

So what is this supposed to mean? Does it mean, for example, that a Gargantuan creature would do 2d8 damage with a bite attack, but a Huge one enlarged to Gargantuan would do 3d6?

This seems like it might be a duplicate of "Is there an error in the universal monster rules for table: natural attacks by size?" That question is two years old; have we gotten any new information since then? Also, the newer chart is listed as a FAQ instead of errata, which implies that the old chart might not be a mistake?

Maximum number of similar groups of a given size that can be made from a given array

I am given an array of numbers, not necessarily unique, and the size of a group. Let the array be denoted by $ B$ and the size of the group be $ A$ .

I need to find the maximum number of groups with the exact same contents and of size $ A$ that can be made from the elements of $ B$ . Once an element is used in a group it is exhausted. A group can have more than one element with the same value as long as the element is still available in $ B$ .

Example:

  1. If the input array is $ \{1, 2, 3, 1, 2, 3, 4\}$ , and the group size is $ 3$ the maximum number of groups that can be made is $ 2$ , which are $ \{1, 2, 3\}$ and $ \{1, 2, 3\}$ .
  2. If the input array is $ \{1, 3, 3, 3, 6, 3, 10\}$ , and the group size is $ 4$ the maximum number of groups that can be made is $ 1$ , which is $ \{1, 3, 3, 3\}$ .

What I have tried so far is to frame some equations (given below), but after that I am struggling to come up with an algorithm to solve them.

Let $F_1$ be the frequency of the element $B_1$, $F_2$ the frequency of $B_2$, and so on up to $B_i$, where $B_1, \dots, B_i$ are the distinct elements of the array $B$.

Now I need to choose $ n_1, n_2, \dots n_i$ such that

  1. $ n_1 + n_2 + \dots + n_i = A$
  2. $ k\cdot n_1 \leq F_1\text{ , } k\cdot n_2 \leq F_2\text{ , }\dots \text{ , }k\cdot n_i \leq F_i$
  3. $ k$ is the number of groups and we need to maximize it.

The length of $B$ can be as large as $10^5$, and $A$ can also be as large as $10^5$.

Please help me find a greedy or dynamic-programming approach to the problem.
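For what it's worth, here is a sketch of one possible approach (my own suggestion, not verified against all constraints): binary search on the number of groups $k$, using the observation that $k$ identical groups are feasible exactly when $\sum_j \lfloor F_j / k \rfloor \geq A$:

```python
from collections import Counter

def max_groups(B, A):
    """Maximum number k of identical groups of size A that can be
    formed from the multiset B (each element used at most once)."""
    freq = list(Counter(B).values())

    def feasible(k):
        # If all k groups are identical, element j can appear at most
        # floor(F_j / k) times per group; we need A slots in each group.
        return sum(f // k for f in freq) >= A

    lo, hi = 0, len(B) // A  # can never make more than len(B) // A groups
    while lo < hi:
        mid = (lo + hi + 1) // 2
        if feasible(mid):
            lo = mid
        else:
            hi = mid - 1
    return lo

print(max_groups([1, 2, 3, 1, 2, 3, 4], 3))   # 2
print(max_groups([1, 3, 3, 3, 6, 3, 10], 4))  # 1
```

Feasibility is monotone in $k$, so the binary search is valid, and the whole thing runs in roughly $O(|B| \log |B|)$ time, which should comfortably handle $|B|, A \le 10^5$.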

P2P torrents: downloaded data greater than torrent size?

How can the downloaded data be greater than the torrent size? In this case I had already downloaded everything (100%), but after a few days 0.01% of the data was reported missing.

I tried to find an explanation on the internet; some users talk about bad sectors on the HDD, but this has happened to me a few times, and I checked my (external) HDD and it passed the error tests (GSmartControl).

Is it possible that someone is editing the downloaded files, for example an mp4 file, in order to install some monitoring tool on my PC? Or to discover my real IP (even though I used a VPN)?

[Image: screenshot from the BitTorrent client]

Does the pigeonhole principle rule out the possibility of losslessly simulating a universe the size of our own?

Say you had a very powerful computer and wanted to run a completely lossless simulation of a universe approximately the same size as our own: $ 10^{80}$ particles.

Each particle in the simulation has properties like velocity, mass, charge, etc. Assuming that your program didn’t use any tricks (like compressing this simulated universe by storing groups of 1000 particles as if they were one), does the pigeonhole principle mean that you would need a computer made out of at least $ n$ particles to losslessly simulate a universe of $ n$ particles?

I say this because I don’t see how it’s possible to store all of the physical properties of a particle on a piece of hardware without using at least one actual, physical particle.
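A rough storage estimate (my own arithmetic, assuming, say, 10 properties per particle stored at 64 bits each) illustrates the scale involved:

$$10^{80} \text{ particles} \times 10 \times 64 \text{ bits} \approx 6.4\times 10^{82} \text{ bits},$$

which, under that assumption, is far more bits than there are particles available to build the computer from, unless each particle of hardware can somehow encode many bits.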

Am I right about this? Does it mean we could never hope to create a high-resolution, lossless simulation with a number of particles comparable to the actual number of particles in our universe?