Trying to Get Average of a Column in SharePoint – But One Row May Contain Multiple Counts

OK, so getting the average means dividing the sum by the count. Say I have a column with 10 rows of numbers that add up to a sum of 7; I divide 7 by 10 to get 70%. Using a combination of calculated columns and workflows, I am able to spit out the average of a column.

HOWEVER, this one column has a slight twist, and I feel there’s a mathematical formula someone can give me to figure it out, but I can’t get it.

The issue is that a single row may represent up to two counts. The way this form was set up, a user can essentially enter two forms' worth of info into one form. I have to parse out whether they've entered one form's or two forms' worth of info, and if they've entered more than one form, I have to somehow add that to the count total to get an accurate average.

Right now I can create a calculated column to parse out whether there’s one or two forms entered and spit out a number 1 or 2 respectively. However, at that point I don’t know how to adjust the count to get an accurate average. Can anyone help? Thanks so much!

To clarify further, here’s a basic example of how my column turns out:

FORM COLUMN
1
1
1
0
2
1
1
----
Average: 100%

See that my column spits out an average of 100% because it's dividing 7 by 7. However, the row item that states the number 2 is actually trying to say that it's two rows of info, so I need it to calculate the average as if the column looked like this:

FORM COLUMN ACCURATE
1
1
1
0
1 (split from the 2 items)
1 (split from the 2 items)
1
1
----
Accurate average: 87.5%

See how, when you account for the 2 as a COUNT of 2, you get the accurate average: there are 8 items, of which 7 are accurate, so we get 87.5% accuracy.

Let me know if I need more clarification but hopefully someone can help me figure this out. Thanks so much!
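One way to think about the arithmetic (a sketch of the math only, not SharePoint formula syntax): divide the column's SUM by the SUM of your calculated 1-or-2 column, instead of by the row COUNT, so a row holding two forms' worth of info counts twice in the denominator. With the example values above:

```python
# Sketch of the math only (not SharePoint syntax): divide the column's SUM
# by the SUM of the calculated 1-or-2 "form count" column instead of by
# the row COUNT, so a row representing two forms counts twice.

def accurate_average(values):
    total = sum(values)                               # 1+1+1+0+2+1+1 = 7
    forms = sum(2 if v == 2 else 1 for v in values)   # the 2 counts as two forms: 8
    return total / forms

print(accurate_average([1, 1, 1, 0, 2, 1, 1]))  # 0.875, i.e. 87.5%
```

In SharePoint terms, that would mean your workflow sums the calculated "form count" column and uses that sum as the divisor instead of the list's item count.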

Python Pandas NLTK: Adding Frequency Counts or Importance Scoring to Part of Speech Chunks on Dataframe Text Column

I did NLTK part of speech tagging followed by chunking on one column (“train_text”) inside my Pandas data frame.

Below is my code that ran successfully and sample output results.

import nltk

def process_content():
    try:
        for i in train_text:
            words = nltk.word_tokenize(i)
            tagged = nltk.pos_tag(words)
            # chunkGram = r"""Chunk: {<RB.?>*<VB.?>*<NNP>+<NN>?}"""
            chunkGram = r"""Chunk: {<VB.?><NN.?>}"""
            chunkParser = nltk.RegexpParser(chunkGram)
            chunked = chunkParser.parse(tagged)

            for subtree in chunked.subtrees(filter=lambda t: t.label() == 'Chunk'):
                print(subtree)

    except Exception as e:
        print(str(e))

process_content()

Results: “xxx” stands for a word; in the first instance it is a verb and in the second instance it is a noun

(Chunk xxx/VBN xxx/NN)
(Chunk xxx/VBN xxx/NN)
(Chunk xxx/VBN xxx/NN)
(Chunk xxx/VBN xxx/NN)
(Chunk xxx/VBN xxx/NN)

Now that I have the chunks of words, I want to find the 10 most frequently occurring or prominent Verb + Noun chunks. Is there any way I can attach a frequency or importance score to each chunk?
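One possible approach (a sketch, not tested against your data): instead of printing each subtree, build a string key from its leaves and tally the keys with `collections.Counter`, then take `most_common(10)`. Shown here with stand-in chunk strings in place of the real chunks:

```python
from collections import Counter

# Sketch: inside the subtree loop above, you would collect each chunk as a
# string key, e.g.
#     chunks.append(" ".join(word for word, tag in subtree.leaves()))
# and then tally the keys to find the most frequent chunks.

def top_chunks(chunks, n=10):
    """Return the n most frequent chunk strings with their counts."""
    return Counter(chunks).most_common(n)

# Stand-in chunk strings in place of real "verb noun" chunks:
sample = ["run race", "eat lunch", "run race", "run race", "eat lunch"]
print(top_chunks(sample, n=2))  # [('run race', 3), ('eat lunch', 2)]
```

This gives raw frequency; for an importance score you could instead weight each chunk, for example by TF-IDF over the documents it appears in.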


Does hitting a creature with a magical creature count as magical damage?

Half-Orc Barbarian Conan has been Enlarged, making him Large. During a fight against some Couatls, he managed to grapple one and beat it to death.

Now, already holding something (the body of the dead Couatl) in his hands and being a tad affected by his current rage, Conan decides to strike a second Couatl with the first one. Laughs all around the table as the DM rules that he can indeed wield the corpse as an improvised weapon (bludgeoning), given the situation.

A Couatl is immune to non-magical bludgeoning, among other things. But given that the first Couatl is a magical creature and has the Magic Weapons feature, does the damage count as magical damage?

Magic Weapons: The couatl’s weapon attacks are magical.

If yes, would any “magical creatures” work for this purpose or only ones with the Magic Weapons feature?