Does a troll die if its maximum hit points are zero?

The Troll has the Regeneration feature which states:

The troll regains 10 hit points at the start of its turn. If the troll takes acid or fire damage, this trait doesn’t function at the start of the troll’s next turn. The troll dies only if it starts its turn with 0 hit points and doesn’t regenerate.

I’m wondering what happens when even a troll’s maximum hit points have been reduced to zero. I don’t know if this method works to kill a troll, because I’m unsure whether a troll at 0 maximum hit points has actually regenerated or not. Does the troll die?

How to balance an encounter with Challenge Rating zero creatures against PCs?

The Homunculus is described as a Challenge Rating (CR) 0 creature with five hit points and a poison attack that causes unconsciousness for 1d10 minutes if a PC fails a DC 10 Constitution saving throw by 5 or more.

According to the MM (p.9):

A monster’s challenge rating tells you how great a threat the monster is. An appropriately equipped and well-rested party of four adventurers should be able to defeat a monster that has a challenge rating equal to its level without suffering any deaths. For example, a party of four 3rd-level characters should find a monster with a challenge rating of 3 to be a worthy challenge, but not a deadly one.

As we understand it, a CR 1 creature is considered an even match for four level 1 PCs, and eight CR 1/8 creatures would be needed to make an even match for the same party.

That math doesn’t seem to work out for CR 0 creatures, since it implies an infinite number of them would be required to create an even match.

How does one calculate an even match with CR 0 creatures? Given the Homunculus’s rather powerful poison attack, how many Homunculi would be an even match for a party of level 1 PCs?
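For reference, here is my attempt using the DMG’s XP-budget method (DMG ch. 3) instead of dividing CRs, which sidesteps the division-by-zero problem. I’m assuming the DMG’s values: 10 XP for a CR 0 creature that has an attack, a 200 XP medium budget for four 1st-level PCs (50 XP each), and the standard encounter multipliers by monster count. I’m not sure this is the intended calculation:

```python
# Sketch of the DMG XP-budget method applied to CR 0 creatures.
# Assumed DMG values: CR 0 with an attack = 10 XP; encounter
# multipliers by monster count; medium budget for four 1st-level
# PCs = 4 * 50 = 200 XP.

MULTIPLIERS = [(15, 4.0), (11, 3.0), (7, 2.5), (3, 2.0), (2, 1.5), (1, 1.0)]

def adjusted_xp(count: int, xp_each: int) -> float:
    """Total XP scaled by the multiplier for this many monsters."""
    for min_count, mult in MULTIPLIERS:
        if count >= min_count:
            return count * xp_each * mult
    return 0.0

def max_monsters(budget: int, xp_each: int = 10) -> int:
    """Largest monster count whose adjusted XP stays within the budget."""
    n = 0
    while adjusted_xp(n + 1, xp_each) <= budget:
        n += 1
    return n

print(max_monsters(200))  # 8 homunculi fit a medium 200 XP budget
```

By this reading the answer comes out finite (eight), but it ignores the unusually strong rider on the poison, which is exactly what I’m unsure how to price.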

SharePoint 2013 Pages library “Most popular items” returning zero count (except homepage)

I’m new to SharePoint and have been unable to find an answer to this problem.

When selecting “Most Popular Items” on the Pages library the homepage is returning a count for “Recent” and “Ever”, but all other pages are returning a zero count. The homepage has a different content type and page layout from the other pages.

Videos library is returning zero count for all videos.

Documents library is recording as expected.

I’m not sure if this is related, but sorting by Recent/Ever is broken on all libraries.

Zero server knowledge encryption for media server

I’m making a little media server, just for fun, but I want to know if what I’m thinking makes sense.

The idea is that the files will be stored on the server, encrypted. The file encryption key (henceforth FEK) will also be stored on the server, but it too will be encrypted using a 2nd key, derived from the user’s password (henceforth PDK).

It will work like this:

  1. There will be a 2-stage login. First, the user enters their username.
  2. If the user exists, the server will send back two fixed, per-user salts and a # of iterations.
  3. The client will use the first salt to hash their password with 1 iteration of sha256. They will send this to the server.
  4. The server takes this hashed password and rehashes with a 3rd salt and many iterations (PBKDF2) and uses this to authenticate the user.
  5. Meanwhile, the client uses the 2nd salt and # of iterations that the server previously sent, also with PBKDF2, to create the PDK.
  6. If the user is successfully authenticated, the server sends back the encrypted key (FEK).
  7. The client uses the PDK to decrypt the FEK.
  8. The decrypted FEK is used to encrypt and decrypt all files client-side.
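A minimal sketch of steps 3–6 in Python (stdlib only; the AES decryption of the FEK in step 7 is omitted, the iteration count is illustrative, and `salt3` is the server-side salt from step 4):

```python
import hashlib, hmac, os

def client_login_material(password: str, salt1: bytes, salt2: bytes, iters: int):
    # Step 3: one SHA-256 iteration over salt1 + password; this is what
    # the server sees instead of the plain-text password.
    auth_token = hashlib.sha256(salt1 + password.encode()).digest()
    # Step 5: the PDK, derived with PBKDF2; it never leaves the client.
    pdk = hashlib.pbkdf2_hmac("sha256", password.encode(), salt2, iters)
    return auth_token, pdk

def server_verifier(auth_token: bytes, salt3: bytes, iters: int) -> bytes:
    # Step 4: the server rehashes the received token with its own salt.
    return hashlib.pbkdf2_hmac("sha256", auth_token, salt3, iters)

# Registration: the server stores only the verifier.
salt1, salt2, salt3 = os.urandom(16), os.urandom(16), os.urandom(16)
iters = 50_000  # illustrative; calibrate to ~0.5 s in practice
token, pdk = client_login_material("correct horse", salt1, salt2, iters)
stored = server_verifier(token, salt3, iters)

# Login: the same password reproduces both the verifier and the PDK,
# so the encrypted FEK returned in step 6 can be decrypted client-side.
token2, pdk2 = client_login_material("correct horse", salt1, salt2, iters)
assert hmac.compare_digest(server_verifier(token2, salt3, iters), stored)
assert pdk2 == pdk
```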

Rationale: The password will be sent over HTTPS, but supposing my server is completely compromised (attacker has SSH access), then they could just wait for someone to login and get the plain-text password. One iteration of sha256 doesn’t provide a lot of protection, but this is a sort of worst-case scenario, and I don’t want to double the amount of time we’re waiting for PBKDF2 (once on client, again on server).

It still has to be rehashed on the server in case the attacker manages to download the database, but maybe doesn’t have full/persistent access. This would prevent them from cracking passwords too quickly.

I send the 2nd salt and # of iterations to the client before they authenticate so that (4) and (5) can happen in parallel. The salt and # of iterations don’t really need to be secret, do they?

Since the server never sees the plain-text password, it can never decrypt the FEK, which is the most important property.

We use different salts to basically get 2 different keys out of 1 password. One is the hashed password for authentication, and the other is the PDK.

I also want to do file-deduplication using hashes. I realize this also exposes some information, so I plan on using the FEK for the file hashes too. Then I can only do per-user file deduplication, but that’s good enough.
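The per-user deduplication idea, sketched with HMAC-SHA256 keyed by the FEK (the keys here are placeholders; real FEKs would be random 32-byte values):

```python
import hmac, hashlib

def file_fingerprint(fek: bytes, data: bytes) -> str:
    # Keyed hash: identical plaintexts under the same FEK produce the
    # same tag, enabling dedup for that user, while the server (which
    # never holds the FEK) cannot match tags against known files.
    return hmac.new(fek, data, hashlib.sha256).hexdigest()

fek_alice, fek_bob = b"A" * 32, b"B" * 32
t1 = file_fingerprint(fek_alice, b"holiday.mp4 contents")
t2 = file_fingerprint(fek_alice, b"holiday.mp4 contents")
t3 = file_fingerprint(fek_bob, b"holiday.mp4 contents")
assert t1 == t2  # same user, same file: dedup hit
assert t1 != t3  # different user: no cross-user correlation
```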

If the user changes their password, I simply re-encrypt the FEK and send it back to the server for safe-keeping. No need to re-encrypt the files.

Does this all make sense? Any problems in my plan?

Oh, also I’m thinking aes-256-cbc with per-file IVs for the encryption, HMAC-SHA256 for the file hashes, PBKDF2 w/ sha256 and half a second worth of iterations for both the password hash and the PDK.
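One way to turn “half a second worth of iterations” into a concrete count is a quick calibration on the target hardware (a rough sketch; the chosen count would then be pinned and stored per user, not recomputed):

```python
import hashlib, time

def calibrate_iterations(target_seconds: float = 0.5, probe: int = 20_000) -> int:
    # Time a probe run of PBKDF2-HMAC-SHA256, then scale linearly,
    # since PBKDF2 cost grows linearly with the iteration count.
    start = time.perf_counter()
    hashlib.pbkdf2_hmac("sha256", b"password", b"salt", probe)
    elapsed = time.perf_counter() - start
    return max(1, int(probe * target_seconds / elapsed))
```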

On the growth rate of Leela Zero compared to AlphaGoZero

There are not many sources online, but one reference from January says of Leela Zero (LZ) that:

The strength depends on the hardware and on thinking time, but from the thread “LeelaZero adventures on Fox”, and from petgo3’s rank on KGS, I guess that on medium hardware, and with relatively fast time settings, LZ is about professional strength but not top pro

At the time this was published LZ’s best network had had ~11.75 million training games.

In contrast, AlphaGo Zero (AGZ) had a total of ~29 million training games. Assuming that the number of training games were spaced out evenly, the number of LZ’s training games by January would correspond to about day 14 of AGZ training.

But by day 14 of training AGZ was very close to AlphaGo Master’s level, which is way above top pro level.

This doesn’t add up: LZ was made to mimic AGZ, so the growth rate should be similar, yet it appears considerably slower.

What factors could be causing this? Is it the hardware (like the Tensor Processing Units) used for the runs, or the depth with which the games are analyzed? Is it something to do with the implementation of the neural network itself? Or maybe LZ just has very bad luck (averaged over millions of games this seems very unlikely, but it’s a possibility)?

In what ways can a creature have zero Hit Points, be conscious, and be unstable?

The section on “Stabilizing a Creature” states:

You can use your action to administer first aid to an unconscious creature and attempt to stabilize it, which requires a successful DC 10 Wisdom (Medicine) check. A stable creature doesn’t make Death Saving Throws, even though it has 0 Hit Points, but it does remain Unconscious. 

This method of stabilizing seems to require that the creature be unconscious. Perhaps this is not the case, but is there ever a time when it matters: a time when a creature has 0 Hit Points, is conscious, yet unstable? I would like answers to use “official” content; no Unearthed Arcana, homebrew, or Twitter/stream material should be considered.

I have only found one case where this matters: a Path of the Zealot barbarian using Rage Beyond Death. In this case you cannot stabilize the barbarian using a Medicine check.

Why is my swap area set to zero in Ubuntu Server 18.04.2?

Checking the swap size on my Ubuntu server, I observed that it is set to zero. Since the server was installed with a very basic configuration from an AWS EC2 image, I’m not sure whether I needed to take additional steps to set up a swap area.

I ran the following commands and got the results below:

    # grep Swap /proc/meminfo
    SwapCached:            0 kB
    SwapTotal:             0 kB
    SwapFree:              0 kB

    # swapon -s

    # free -m
                  total        used        free      shared  buff/cache   available
    Mem:           7975         187        7059           0         728        7549
    Swap:             0           0           0

Is it normal to have the swap area set to zero? If it is not, what should I do to fix it?

Thanks!

Isn’t a ZKP a reduction to a hard problem, rather than true zero knowledge?

Take for example “Hamiltonian cycle for a large graph”. The proof works by starting with a graph G that contains a Hamiltonian cycle, constructing an isomorphic graph H, and then either revealing the mapping between the graphs G and H or revealing the cycle in H.
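My understanding of a single round, as a Python sketch (the commitment step is elided; in the real protocol the prover first commits to H’s adjacency matrix and only opens the parts the verifier asks for):

```python
import random

def permute_graph(adj, perm):
    # H[i][j] = G[perm[i]][perm[j]]: vertex i of H is vertex perm[i] of G.
    n = len(adj)
    return [[adj[perm[i]][perm[j]] for j in range(n)] for i in range(n)]

# G: a 4-cycle 0-1-2-3-0; its Hamiltonian cycle is the whole cycle.
G = [[0, 1, 0, 1],
     [1, 0, 1, 0],
     [0, 1, 0, 1],
     [1, 0, 1, 0]]
cycle_G = [0, 1, 2, 3]

def run_round(challenge: str) -> bool:
    perm = list(range(len(G)))
    random.shuffle(perm)
    H = permute_graph(G, perm)
    if challenge == "isomorphism":
        # Prover reveals perm; verifier checks that H really is perm(G).
        return H == permute_graph(G, perm)
    else:
        # Prover reveals only the cycle mapped into H; verifier checks
        # that consecutive cycle vertices are adjacent in H.
        inv = [perm.index(v) for v in range(len(G))]  # G-vertex -> H-vertex
        cycle_H = [inv[v] for v in cycle_G]
        n = len(cycle_H)
        return all(H[cycle_H[i]][cycle_H[(i + 1) % n]] == 1 for i in range(n))

assert all(run_round("isomorphism") for _ in range(5))
assert all(run_round("cycle") for _ in range(5))
```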

It is said that we prove that we know a Hamiltonian cycle in G without revealing it.

But this assumes the verifier does not have unlimited computational power. If he did, he could ask the prover to reveal the cycle in H and use his unlimited computational power to work out the isomorphism. I understand that a verifier with unlimited power could find the cycle in G directly, but that’s not my point. What I find strange is that we are relying on “hard problems” in the proof itself.

Are there ZKP protocols that do not rely on hard problems? Hard problems are only hard according to the state of the art; it has not been proven that P ≠ NP, so in my mind this sounds like security through obscurity in some sense.