How much health will a Life Domain cleric regain from the Blessed Healer feature using the Mass Healing Word spell?

This question could loosely be considered a follow-up to "Can Goodberries heal a Life cleric when consumed by another."

How much life would a level 6 Life Cleric with four party members gain from the Blessed Healer class feature with a casting of Mass Healing Word?

Blessed Healer

Beginning at 6th level, the healing spells you cast on others heal you as well. When you cast a spell of 1st level or higher that restores hit points to a creature other than you, you regain hit points equal to 2 + the spell’s level.

Mass Healing Word

As you call out words of restoration, up to six creatures of your choice that you can see within range regain hit points equal to 1d4 + your spellcasting ability modifier. This spell has no effect on undead or constructs.

If the Cleric casts Mass Healing Word, targeting each of the 4 party members, how many times would the Cleric regain HP equal to 2 + the spell's level?

Taken further, would the Blessed Healer effect be prevented if the Cleric targeted himself with Mass Healing Word, in addition to the other 4 members of his party?

Is there an alternative word to be used in place of Humanoid?

I find the term Humanoid to be (obviously) human-centric, and am looking for a more generic term to use in its place to describe all the intelligent creatures that exist in the standard societies of these fantasy worlds. A few examples of what I’m looking for:

  • A small hamlet consisting mostly of gnomes, halflings, and dwarves, plus a small smattering of humans, likely wouldn’t refer to its residents as "humanoids," so what would they call themselves?
  • A human player character is new in town and walks up to an elven resident; the elf would find it quite rude to be asked, "What humanoids make up the general population here?" I suppose ‘races’ or ‘species’ might work here, but I think those would also be taken as offensive.
  • A Beholder looks down on the intelligent residents of the realm and laughs at "those pitiful humanoids!" But what if the beholder had never met a human, only the rarer races; where would it have gotten the term ‘humanoid’ then?

My campaign is D&D 5e set in Eberron, but a term from any setting, TTRPG, or otherwise would work.

Would a two-word domain name with a shared letter between the words be treated the same for SEO as the fuller name?

I found a domain name (two words) that is not available. Then I found almost the same domain name with a slight difference: the last letter of the first word is the same as the first letter of the second word.

Ex: jamessecret.com (not available), jamesecret.com (available)

In terms of SEO, would Google and other search engines interpret the two domains as the same?

Why is this language *not* pumpable? (language = arbitrary word followed by the exact same arbitrary word) (pumping lemma for context-free languages)

The language consists of an arbitrary word followed by the exact same word: $L = \{\, uu \mid u \in \{0,1\}^+ \,\}$ (with $u$ ranging over the non-empty words over the alphabet $\{0, 1\}$).

The purpose is to prove that the given language is not a context-free language (using the pumping lemma).
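
For reference, here is the lemma in the variable naming used below (this renaming of the usual statement, with $x$ and $z$ as the pumped parts and $u$, $y$, $v$ static, is my reading of the question's convention): if $L$ is context-free, then there is a $p \ge 1$ such that every $w \in L$ with $|w| \ge p$ can be written as

$$w = u\,x\,y\,z\,v \quad \text{with} \quad |xz| \ge 1,\ |xyz| \le p, \quad \text{and} \quad u\,x^i\,y\,z^i\,v \in L \ \text{for all } i \ge 0.$$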

So I thought that this language should definitely be pumpable, since you can choose x and z as corresponding parts of the two u-"halves". So if the word to be pumped is "0101", then you choose e.g. x = 1 (the second character of 0101) and z = 1 (the fourth character), and the remaining parts that stay static are assigned u = 0 (the first character), y = 0 (the third character), and v = ε (empty).
Pumped zero times (i=0), the word reads: 00
i=1: 0101
i=2: 011011
i=3: 01110111
(and so on)
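
As a quick sanity check, here is a minimal Python sketch (the membership test `in_L`, which checks whether a string is some word followed by itself, is my own naming) showing that these pumped words all stay in the language:

    # membership test for L = { uu : u a non-empty word over {0,1} }
    def in_L(w):
        half = len(w) // 2
        return len(w) % 2 == 0 and half > 0 and w[:half] == w[half:]

    u, x, y, z, v = "0", "1", "0", "1", ""   # the decomposition of "0101" above
    for i in range(4):
        pumped = u + x * i + y + z * i + v
        print(i, pumped, in_L(pumped))       # True for every i listed above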

(For more complicated words you would need to use all three of u, y, and v, but the basic principle would still be that x and z are assigned corresponding parts of the word and u, y, v are assigned the remainder. This doesn’t work for the extremely short words 00, 01, 10, 11, which would become empty when zero-pumped, but as long as the exceptions are finitely many this isn’t a problem, AFAIU.)
Clearly this is wrong, but can anybody explain how/why?

Is the problem of deciding whether the word *member* $\in L(M)$ decidable or not?

Given a Turing machine M over the alphabet {m, e, b, r}, we’re asked to determine whether *member* $\in L(M)$. Note that M is not one specific machine; it can be any Turing machine over the same alphabet. My goal is to determine whether this problem is decidable or not.

My idea was to use mapping reducibility. The goal was to see whether we can translate instances of $A_{TM}$, which is known to be undecidable, into our current problem; this would make our problem undecidable as well. However, I’m struggling to do so because I’m not sure whether it’s possible. $A_{TM}$ is the set of pairs $\langle M, w \rangle$ such that the Turing machine M accepts the word w.
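
Spelled out (the name $\mathrm{MEMBER}_{TM}$ for the problem at hand is my own shorthand):

$$A_{TM} = \{\langle M, w \rangle \mid M \text{ is a TM and } M \text{ accepts } w\}, \qquad \mathrm{MEMBER}_{TM} = \{\langle M \rangle \mid \texttt{member} \in L(M)\},$$

and a mapping reduction would be a computable function $f$ with $\langle M, w \rangle \in A_{TM} \iff f(\langle M, w \rangle) \in \mathrm{MEMBER}_{TM}$.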

Any help to get unstuck would be appreciated.

What is the optimal algorithm for playing the hangman word game?

Suppose we are playing the game hangman. My opponent picks a word from the dictionary, which we both have access to during the game. Once my opponent has picked a word, I am given only the length of the word. I can guess a letter which I think will be in the word, and if it is in the word my opponent identifies all of the positions of that letter in the word. If I guess a letter which isn’t in the word, that’s counted as one mistake. If I can guess the word before making too many mistakes, I win.

My opponent’s goal should be to pick a word which maximizes the number of mistakes (incorrect guesses) I make before I can guess the word. My goal is to minimize them. (Traditionally, if the number of mistakes is > some number then I lose the game, but we can relax that constraint.)

Consider three algorithms for letter guessing. These are the main ones I’ve thought of, but there are probably more, and I welcome alternate ideas. In all three algorithms, the first step will be to gather the list of words which are the same length as the secret word. Then, for each letter in the alphabet, count the number of words in the dictionary which contain that letter. For example, maybe 5000 contain "a", 300 contain "b", and so on. Here it is in Python:

    alphabet = list('abcdefghijklmnopqrstuvwxyz')

    # dictionary: the list of candidate words of the right length.
    # This tally runs once per guessing turn.
    probabilities = dict.fromkeys(alphabet, 0)
    for word in dictionary:
        # set(word) so each word counts at most once per letter,
        # matching "number of words which contain that letter"
        for letter in set(word):
            if letter in probabilities:
                probabilities[letter] += 1
    # now we have the letter frequencies

After that is where the three algorithms diverge.

  1. In the first algorithm, we guess the letter contained in the greatest number of remaining words. So if 5000 remaining words contain "a" and no other letter appears in that many words, we will pick "a" every time. If "a" is correct, this gives us positional information with which we can filter the list further. For example, we might filter the list to all words that match ".a..". (Dots are unknown letters.) If "a" is incorrect, we filter out all words which contain "a". In the case of a tie, where two letters are found in an equal number of words, letters are chosen alphabetically. So if we know the word matches ".ays", we’ll just guess words in alphabetical order. (A sketch of all three algorithms appears after this list.)

  2. This is only slightly different from the first algorithm. Instead of always using alphabetical ordering, in the case of a tie we choose among the tied letters randomly. This has the benefit that our opponent doesn’t know what we will pick. Under the first strategy, "rays" is always a better pick for the opponent than "days" (since ties resolve alphabetically, "days" gets guessed first); this avoids that issue.

  3. In this case, we pick letters with probability proportional to the number of words which contain them. At the beginning, when we tallied the number of words containing "a", the number containing "b", and so on, "a" happened to be found in the greatest number of words, so in strategies 1 and 2 we picked "a" 100% of the time. Here we will still choose "a" a plurality of the time, but occasionally we’ll pick "z", even though "a" might be found in 10x more words. I have my doubts about this strategy being optimal, but it was used in research in 2010.
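
To make the three selection rules concrete, here is a minimal sketch (the helper names `filter_candidates` and `pick_letter` are mine; the tie-breaking and weighting follow the descriptions above):

    import random
    import re

    def filter_candidates(words, pattern, wrong_letters):
        # keep words matching the dot-pattern and containing no known-wrong letters
        rx = re.compile('^' + pattern.replace('.', '[a-z]') + '$')
        return [w for w in words
                if rx.match(w) and not set(w) & wrong_letters]

    def pick_letter(words, guessed, strategy):
        counts = dict.fromkeys('abcdefghijklmnopqrstuvwxyz', 0)
        for word in words:
            for letter in set(word):
                if letter in counts and letter not in guessed:
                    counts[letter] += 1
        best = max(counts.values())
        tied = sorted(l for l, c in counts.items() if c == best)
        if strategy == 1:
            return tied[0]              # algorithm 1: alphabetical tie-break
        if strategy == 2:
            return random.choice(tied)  # algorithm 2: random tie-break
        # algorithm 3: sample letters proportionally to their word counts
        letters = [l for l, c in counts.items() if c > 0]
        return random.choices(letters, weights=[counts[l] for l in letters])[0]

Algorithms 1 and 2 differ only in the tie-break; algorithm 3 replaces the argmax with weighted sampling.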

So I have two questions:

  1. What is the optimal letter-guessing strategy which I should use assuming that my opponent knows I will use this strategy?
  2. For a given word, say "pays", what is the average number of mistakes M I should expect to make? Is there a closed-form way to calculate M, as opposed to running a simulation (like the sketch below) many times and averaging the results?
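
For reference, "running a simulation" could look like the following minimal Monte Carlo sketch (it reuses the hypothetical `pick_letter`/`filter_candidates` helpers above and, as a simplification, only counts wrong letter guesses, playing until the word is fully revealed):

    def mistakes_for(secret, dictionary, strategy, runs=1000):
        # estimate the average number of wrong letter guesses for `secret`
        total = 0
        for _ in range(runs):
            words = [w for w in dictionary if len(w) == len(secret)]
            guessed, wrong, mistakes = set(), set(), 0
            pattern = '.' * len(secret)
            while '.' in pattern:
                letter = pick_letter(words, guessed, strategy)
                guessed.add(letter)
                if letter in secret:
                    pattern = ''.join(c if c in guessed else '.' for c in secret)
                else:
                    wrong.add(letter)
                    mistakes += 1
                words = filter_candidates(words, pattern, wrong)
            total += mistakes
        return total / runs

For the deterministic algorithm 1, a single run already gives M exactly, e.g. `mistakes_for('pays', dictionary, strategy=1, runs=1)`.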

Clarifications

  • Any English dictionary can be used. For example, this English dictionary with 84k words. Subsets of dictionaries carefully chosen for ambiguity could also be interesting, but they are outside the scope of this question.
  • There is no constraint on the word size as long as the word is in the dictionary. The guesser will know the word size before he begins guessing.
  • Only the total number of mistakes matters, which is independent of the word size. You don’t get more chances for longer words, or fewer chances for shorter ones.