Cat Scoring app – looking for suggestions on improving the code

Can you please review my code and suggest whether or not it looks like professional code? I am just starting out as a programmer; I am able to write code, but I don't really know where I am right and where I am wrong.

Also, I have heard that using the MVC approach is a smart way to structure code. Please suggest what changes this code would need if it were to comply with the MVC approach.

This is the code for the cat scoring app, in case there is trouble running the snippet in Stack Exchange's script runner.

codepen link for code

const imageBasePath = "https://raw.githubusercontent.com/smartcoder2/CatClickerApp/master/images/";
const imageNameArrary = [
  "tom.jpg",
  "jack.jpeg",
  "zoe.jpeg",
  "simba.jpg",
  "george.jpeg"
];
let catScore = [0, 0, 0, 0, 0]; // this keeps the score of each cat. index of array determines the cat
let htmlUpdate;
let ddl;
const imageVar = document.getElementById("cat-image");
const textVar = document.getElementById("show-click-value");

imageVar.addEventListener("click", incrementClickVar);

function incrementClickVar() {
  ddl = document.getElementById("select-cat");
  catScore[ddl.selectedIndex]++;
  htmlUpdate =
    catScore[ddl.selectedIndex] == 0 ?
    "zero" :
    catScore[ddl.selectedIndex];
  textVar.innerHTML = htmlUpdate;
}

function validate() {
  ddl = document.getElementById("select-cat");
  htmlUpdate =
    catScore[ddl.selectedIndex] == 0 ?
    "zero" :
    catScore[ddl.selectedIndex];
  textVar.innerHTML = htmlUpdate;
  let selectedValue = ddl.options[ddl.selectedIndex].value;
  imageVar.src = imageBasePath + imageNameArrary[ddl.selectedIndex];
}
.outer-box {
  height: 100vh;
  display: grid;
  grid-template-columns: 1fr 1fr 1fr;
  grid-template-rows: 1fr 1fr 1fr;
  grid-gap: 2vw;
  align-items: center;
}

.outer-box>div, img {
  max-width: 25vw;
  min-height: 10vh;
  max-height: 44vh;
  justify-self: center;
}

.outer-box>div {
  text-align: center;
}

#show-click-value {
  font-size: 6vh;
}
<html>

<head>
  <link rel="stylesheet" href="styles\style.css" />
</head>

<body>
  <div class="outer-box">
    <div class="item1"></div>
    <div class="item2">
      <label class="cat-label" for="select-cat">Select a cat</label>
      <select id="select-cat" name="cats" onchange="validate()">
        <option value="Tom">Tom</option>
        <option value="Jack">Jack</option>
        <option value="Zoe">Zoe</option>
        <option value="Simba">Simba</option>
        <option value="George">George</option>
      </select>
      <br />
    </div>
    <div class="item3"></div>
    <div class="item4"></div>
    <img id="cat-image" src="https://raw.githubusercontent.com/smartcoder2/CatClickerApp/master/images/tom.jpg" alt="image not loaded" />
    <div class="item6"></div>
    <div class="item7"></div>
    <div id="show-click-value">zero</div>
    <div class="item9"></div>
  </div>
  <!-- srr-to-do-later: position the image and other elements properly -->

</body>

</html>

Strictly Proper Scoring Rules and f-divergences

Let $S$ be a scoring rule for probability functions. Define

$EXP_{S}(Q|P) = \sum\limits_{w} P(w)\,S(Q, w)$.

Say that $S$ is strictly proper if and only if $EXP_{S}(Q|P)$, as a function of $Q$, is uniquely minimised at $Q = P$. Define

$D_{S}(P, Q) = EXP_{S}(Q|P) - EXP_{S}(P|P)$.

If $S$ is the logarithmic scoring rule defined by $S(P, w) = -\ln(P(w))$, then $D_{S}(P, Q)$ is just the Kullback-Leibler divergence between $P$ and $Q$, or equivalently, the inverse Kullback-Leibler divergence between $Q$ and $P$. Note that the inverse Kullback-Leibler divergence is an $f$-divergence.
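
Spelling out the computation for the logarithmic rule:

$D_{S}(P, Q) = EXP_{S}(Q|P) - EXP_{S}(P|P) = -\sum\limits_{w} P(w)\ln Q(w) + \sum\limits_{w} P(w)\ln P(w) = \sum\limits_{w} P(w)\ln\dfrac{P(w)}{Q(w)} = D_{KL}(P \,\|\, Q)$.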

My question is this: is there any other strictly proper scoring rule $S$ such that $D_{S}(P, Q)$ is equal to $F(Q, P)$ for some $f$-divergence $F$?

I think that $D_{S}(P, Q)$ is always a Bregman divergence, and Amari proved that the only $f$-divergence that is also a Bregman divergence is the Kullback-Leibler divergence (on the space of probability functions). Is this enough to imply that there are no other strictly proper scoring rules with this property?
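
For what it is worth, the reason I think $D_{S}$ is always a Bregman divergence is the usual Savage-style argument (assuming enough regularity for subgradients to exist): set $G(P) = -EXP_{S}(P|P)$. Since $-EXP_{S}(Q|P)$ is affine in $P$, lies below $G(P)$ by propriety, and touches it at $P = Q$, it is a supporting hyperplane of $G$ at $Q$, so $-EXP_{S}(Q|P) = G(Q) + \langle g_{Q}, P - Q\rangle$ for some subgradient $g_{Q}$ of $G$ at $Q$. Therefore

$D_{S}(P, Q) = EXP_{S}(Q|P) - EXP_{S}(P|P) = G(P) - G(Q) - \langle g_{Q}, P - Q\rangle = B_{G}(P, Q)$,

the Bregman divergence generated by $G$.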

Python Pandas NLTK: Adding Frequency Counts or Importance Scoring to Part of Speech Chunks on Dataframe Text Column

I did NLTK part-of-speech tagging followed by chunking on one column ("train_text") inside my Pandas data frame.

Below is my code, which ran successfully, along with sample output results.

def process_content():
    try:
        for i in train_text:
            words = nltk.word_tokenize(i)
            tagged = nltk.pos_tag(words)
            # chunkGram = r"""Chunk: {<RB.?>*<VB.?>*<NNP>+<NN>?}"""
            chunkGram = r"""Chunk: {<VB.?><NN.?>}"""
            chunkParser = nltk.RegexpParser(chunkGram)
            chunked = chunkParser.parse(tagged)

            for subtree in chunked.subtrees(filter=lambda t: t.label() == 'Chunk'):
                print(subtree)

    except Exception as e:
        print(str(e))

process_content()

Results ("xxx" stands for a word; in each chunk the first word is a verb and the second is a noun):

(Chunk xxx/VBN xxx/NN)
(Chunk xxx/VBN xxx/NN)
(Chunk xxx/VBN xxx/NN)
(Chunk xxx/VBN xxx/NN)
(Chunk xxx/VBN xxx/NN)

Now that I have the chunks of words, I want to find the 10 most frequently occurring or prominent Verb + Noun chunks. Is there any way I can attach a frequency or importance score to each chunk?
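
A minimal sketch of the kind of counting I have in mind (assuming train_text is the same iterable of strings used above, and keying each chunk on its words instead of printing it) would be something like:

from collections import Counter

import nltk

def chunk_counts(texts):
    """Count how often each Verb + Noun chunk occurs across all texts."""
    counts = Counter()
    chunk_parser = nltk.RegexpParser(r"""Chunk: {<VB.?><NN.?>}""")
    for text in texts:
        tagged = nltk.pos_tag(nltk.word_tokenize(text))
        chunked = chunk_parser.parse(tagged)
        for subtree in chunked.subtrees(filter=lambda t: t.label() == 'Chunk'):
            # Key the counter on the chunk's words, e.g. ('give', 'advice')
            counts[tuple(word for word, tag in subtree.leaves())] += 1
    return counts

# train_text is the dataframe column used in the code above
# The 10 most frequent Verb + Noun chunks and their counts
print(chunk_counts(train_text).most_common(10))

(For an "importance" score rather than a raw count, I imagine the same loop could feed something like TF-IDF instead, but frequency seems like the simplest starting point.)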
