What kind of smoothing was applied to these bigram probabilities?

A certain program computes bigram probabilities, applying a smoothing factor of K=1, given the corpus 12 1 13 12 15 234 2526. It does the following operations: first it computes "unnormalized bigrams":

{'12': {'1': 2.0, '15': 2.0}, '1': {'13': 2.0}, '13': {'12': 2.0}, '15': {'234': 2.0}, '234': {'2526': 2.0}}. All of the 2.0 values come from computing k + 1.

Then it shows the "normalized bigrams":

{'12': {'1': 0.2, '15': 0.2}, '1': {'13': 0.25}, '13': {'12': 0.25}, '15': {'234': 0.25}, '234': {'2526': 0.25}}.
The operations are:

P(1|12)=2/(2+2+6)=0.2
P(15|12)=2/(2+2+6)=0.2
P(13|1)=2/(2+6)=0.25
P(12|13)=2/(2+6)=0.25
P(234|15)=2/(2+6)=0.25
P(2526|234)=2/(2+6)=0.25

I don't know the logic behind these operations. Laplace smoothing would be, for example: given P(1|12)=1/2, the smoothed value is (1+1)/(2+6)=0.25. So shouldn't it be 0.25 instead of 0.2?
This is the stripped-down code from the original:

from __future__ import print_function
from __future__ import division
import re

class LanguageModel:
    "unigram/bigram LM, add-k smoothing"
    def __init__(self, corpus):
        words=re.findall('[0123456789]+', corpus)
        uniqueWords=list(set(words)) # make unique
        self.numWords=len(words)
        self.numUniqueWords=len(uniqueWords)
        self.addK=1.0

        # create unigrams
        self.unigrams={}
        for w in words:
            w=w.lower()
            if w not in self.unigrams:
                self.unigrams[w]=0
            self.unigrams[w]+=1/self.numWords

        # create unnormalized bigrams
        bigrams={}
        for i in range(len(words)-1):
            w1=words[i].lower()
            w2=words[i+1].lower()
            if w1 not in bigrams:
                bigrams[w1]={}
            if w2 not in bigrams[w1]:
                bigrams[w1][w2]=self.addK # add-K
            bigrams[w1][w2]+=1

        # normalize bigrams
        for w1 in bigrams.keys():
            # sum up
            probSum=self.numUniqueWords*self.addK # add-K smoothing
            for w2 in bigrams[w1].keys():
                probSum+=bigrams[w1][w2]
            # and divide
            for w2 in bigrams[w1].keys():
                bigrams[w1][w2]/=probSum
        self.bigrams=bigrams

        print('Unigrams : ')
        print(self.unigrams)
        print('Bigrams : ')
        print(self.bigrams)


if __name__=='__main__':
    LanguageModel('12 1 13 12 15 234 2526')
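For comparison, here is a minimal sketch of textbook add-k smoothing (my own helper, not part of the original program), where P(w2|w1) = (count(w1,w2) + k) / (count(w1) + k·V) with V the vocabulary size. It reproduces the 0.25 the question expects for P(1|12):

```python
import re

def addk_bigram_probs(corpus, k=1.0):
    """Textbook add-k smoothing: P(w2|w1) = (count(w1,w2)+k) / (count(w1)+k*V)."""
    words = re.findall(r'[0-9]+', corpus)
    vocab = set(words)
    V = len(vocab)
    counts, context = {}, {}
    for w1, w2 in zip(words, words[1:]):
        counts[(w1, w2)] = counts.get((w1, w2), 0) + 1
        context[w1] = context.get(w1, 0) + 1
    # every (context, word) pair gets a smoothed probability, even unseen ones
    return {(w1, w2): (counts.get((w1, w2), 0) + k) / (context[w1] + k * V)
            for w1 in context for w2 in vocab}

probs = addk_bigram_probs('12 1 13 12 15 234 2526')
print(round(probs[('12', '1')], 4))  # 0.25, i.e. (1+1)/(2+1*6)
```

The original program instead seeds each *observed* bigram with k, adds its count, and divides by (sum of those seeded counts + k·V), which is why '12' (two distinct successors) gets denominator 2+2+6=10 and everything else gets 2+6=8.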

What are the practical uses of Linear Programming, and when and where can it be applied?

I am a student of Federal Polytechnic, Ilaro, studying Computer Science. I took statistics as a borrowed course, and in it we were taught Linear Programming topics such as transportation problems, the simplex method, dual simplex, duality principles, etc.

I would like to know exactly where this theory can be applied in real-life scenarios.
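As one classic illustration, the transportation problem mentioned above can be written directly as a linear program and handed to an off-the-shelf solver. A minimal sketch using SciPy's `linprog` (the supplies, demands, and costs here are made up for the example):

```python
# Toy transportation problem solved as a linear program.
# Two warehouses (supplies 20 and 30) ship to two stores (demands 25 each).
from scipy.optimize import linprog

# decision variables: x11, x12, x21, x22 (warehouse i -> store j)
costs = [2, 3, 4, 1]           # objective: minimize total shipping cost
A_ub = [[1, 1, 0, 0],          # warehouse 1 ships at most 20 units
        [0, 0, 1, 1]]          # warehouse 2 ships at most 30 units
b_ub = [20, 30]
A_eq = [[1, 0, 1, 0],          # store 1 receives exactly 25 units
        [0, 1, 0, 1]]          # store 2 receives exactly 25 units
b_eq = [25, 25]

res = linprog(costs, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq)
print(res.fun)  # minimum total cost: 85.0
```

The same pattern (linear objective, linear constraints) shows up in production planning, scheduling, diet/blending problems, and network flow, which is where the simplex and duality machinery gets used in practice.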

Is a Bugbear’s Long Limbed reach also applied to shoves and grapples?

In this question, I have been told the difference between a “melee attack” (which, if I understand correctly, Shoves and Grapples are) and a “melee weapon attack” (which they are not).

A Bugbear’s Long Limbed feature gives him 5 feet of extra reach for melee attacks made on his turn.

So, is a Bugbear’s Long Limbed reach also applied to special attacks (Shoves & Grapples)?

How can ideas like Lagrange Multipliers and Penalty Method be applied for solving algorithms?

I have a programming assignment which I was told is solvable with some DP algorithm. The question involves some $k$ which is essentially a constraint. In particular, the question is a variant of the LIS problem where at most $k$ exceptions (restarts) are allowed.

But I know that there is a better solution. My professor mentioned Lagrange Multipliers and giving a penalty for each restart, but after googling these terms I wasn't able to find anything related to algorithms. I read about them on Wikipedia but I can't figure out how to use them; every article relates to calculus and function optimization.

Is there a keyword that can describe better what I want to read about?
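The usual keyword is "Lagrangian relaxation"; in competitive programming the same idea is often called the "Aliens trick" (after the IOI 2016 problem "Aliens"). A minimal sketch on a toy problem of my own (pick at most k values to maximize their sum, rather than the LIS variant): replace the hard constraint with a penalty λ per pick, solve the now-unconstrained problem, and binary-search λ until the optimum respects the constraint. This is only valid when the optimum as a function of k is concave.

```python
# Lagrangian relaxation ("Aliens trick") sketch: trade the hard constraint
# "at most k picks" for a penalty lam per pick, then binary-search lam.
def solve_with_penalty(values, lam):
    """Unconstrained optimum when each pick costs lam."""
    picked = [v for v in values if v - lam > 0]
    return sum(v - lam for v in picked), len(picked)

def max_sum_at_most_k(values, k):
    lo, hi = 0.0, max(values) + 1.0
    for _ in range(60):                 # binary search on the penalty
        lam = (lo + hi) / 2
        _, count = solve_with_penalty(values, lam)
        if count > k:
            lo = lam                    # too many picks: raise the penalty
        else:
            hi = lam
    penalized, count = solve_with_penalty(values, hi)
    return penalized + hi * count       # add the paid penalties back

print(round(max_sum_at_most_k([5, 1, 9, 3], 2), 6))  # 14.0 (pick 9 and 5)
```

In your LIS variant, `solve_with_penalty` would be the DP that ignores the restart limit but charges λ per restart; the DP loses the "number of restarts used" dimension, which is where the speedup comes from.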

If you cast Blindness/Deafness on the same creature twice, what conditions are applied?

After researching into how spell effects stack, I find some ambiguity regarding certain spells that have multiple possible effects.

Notably, this answer regarding stacking spell effects contains updated information from the DMG errata:

Combining Game Effects (p. 252). This is a new subsection at the end of the “Combat” section:

Different game features can affect a target at the same time. But when two or more game features have the same name, only the effects of one of them—the most potent one—apply while the durations of the effects overlap. For example, if a target is ignited by a fire elemental’s Fire Form trait, the ongoing fire damage doesn’t increase if the burning target is subjected to that trait again. Game features include spells, class features, feats, racial traits, monster abilities, and magic items. See the related rule in the ‘Combining Magical Effects’ section of chapter 10 in the Player’s Handbook.

There is, however, no guidance on how to determine which effect is "the most potent one" when the effects are not raw numbers (such as with paladin auras).

This is also applicable to spells like contagion, which inflicts a “natural disease” (and you can be afflicted by multiple natural diseases).

If both effects of blindness/deafness cannot influence a character at the same time, how do you determine which one takes effect (assuming both casts are at the same spell level)?

If password expiration is applied, should door-locks expiration be applied too?

After reading some topics here about password expiration, and also after reading this comment, a question came to mind: if we apply password expiration for the safety of users, should our door locks' keys also expire?

By door lock, I mean any physical access restriction we might have: lock(s) on the server room door, on the company building's entries (including perhaps the emergency exit for firefighters and the like), vaults, etc.

For physical-key-based door locks, this would mean issuing a new metal key every X months/days/whatever, getting the old key back, and providing the new key to users (assuming they are still allowed to open the door). It sounds pretty heavy and complex, but it might help against copied keys and the like.

For electronic door locks, this would mean reissuing new passwords/access keys, so the RFID (or similar) card would need an upgrade with the new access key. That sounds lighter to do, even though it still requires every employee with access to perform the upgrade one way or another. Here I assume the electronic card holds a "session token" of some kind, not a never-changing user ID that the lock compares against a database of allowed users (in that case, the user ID itself on both the card and in the DB would need to be rotated).

So, is such a policy applied in some companies or standards, or is this just a dumb idea I had?

Can the Circle Of Wildfire’s Enhanced Bond be applied to attack rolls?

The Circle Of Wildfire UA has the feature Enhanced Bond, with this text:

The bond with your wildfire spirit enhances your destructive and restorative spells. Whenever you cast a spell that deals fire damage or restores hit points while your wildfire spirit is summoned, roll a d8, and you gain a bonus to one roll of the spell equal to the number rolled.

In addition, when you cast a spell with a range other than self, the spell can originate from you or your wildfire spirit.

Now, initially I read this as applying only to the damage/healing rolls of the spell, but on a closer reading, the text doesn't actually say that. It only says you can apply the bonus to spells that have such rolls, not that the roll you apply it to must be fire damage or healing. So am I correct in assuming that a Circle of Wildfire druid could cast, say, Fire Bolt and, since it is a spell dealing fire damage, apply the d8 to that spell's attack roll?

How do I call a system like a grammar, but where a rule has to be applied to all matches at once?

For example, given rules $\{a \to x,\ a \to y\}$ and input $aa$, I am usually allowed to derive the strings $\{xx, xy, yx, yy\}$. I would like to restrict this to only performing "consistent" rewrites, so that the language would be $\{xx, yy\}$. It is evidently possible to synchronize rewrites in distant parts of a sentence within the usual formal grammar setting, but I wonder if this possibility is better explored under a different name or in a different arrangement.

I notice that context-sensitive grammars pose trouble for this "consistency" condition. For example, given the ruleset $\{aa \to x\}$ and initial string $aaa$, I am not sure whether I should allow anything to be derived. Then again, it is entirely possible that only some rules, and specifically some context-free rules, may be enhanced with consistency.

I am rather sure the system I have in mind defines a language, and even that I could, with some thinking, propose a formal way to rewrite a given grammar so that selected context-free rules are made consistent. But I wonder if this is actually widely known under some other name.
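To pin down the intended semantics, here is a minimal brute-force sketch (my own construction, not a known formalism): in one rewriting step, every occurrence of a symbol must be replaced using the same alternative.

```python
from itertools import product

def consistent_rewrites(rules, s):
    """Enumerate one-step rewrites of s where each symbol uses a single
    alternative everywhere it occurs. rules maps a symbol to its list of
    right-hand sides, e.g. {'a': ['x', 'y']}."""
    symbols = [c for c in set(s) if c in rules]
    results = set()
    for choice in product(*(rules[c] for c in symbols)):
        mapping = dict(zip(symbols, choice))  # one alternative per symbol
        results.add(''.join(mapping.get(c, c) for c in s))
    return results

print(sorted(consistent_rewrites({'a': ['x', 'y']}, 'aa')))  # ['xx', 'yy']
```

This only covers single-symbol (context-free) left-hand sides, which sidesteps the overlapping-match problem the $aaa$ example raises for context-sensitive rules.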

Is Resistance/Vulnerability applied before or after Cutting Words?

College of Lore bards get the Cutting Words feature at 3rd level:

When a creature that you can see within 60 feet of you makes an attack roll, an ability check, or a damage roll, you can use your reaction to expend one of your uses of Bardic Inspiration, rolling a Bardic Inspiration die and subtracting the number rolled from the creature’s roll. You can choose to use this feature after the creature makes its roll, but before the DM determines whether the attack roll or ability check succeeds or fails, or before the creature deals its damage.

When a bard uses Cutting Words to reduce damage, is resistance/vulnerability applied before or after the damage is reduced?