Can others understand the words a spellcaster embeds in the Message spell, or the replies of those who respond to him?

Am I right that the content of the Message spell is transmitted through the verbal component, and that the target must move its lips to reply? (In that case, could someone else understand what the spellcaster transmits, by listening to the sound or by reading his lips, and understand the target's reply by reading its lips?)

Difficulty in understanding the proof of the lemma "Matroids exhibit the optimal-substructure property"

I was going through the text "Introduction to Algorithms" by Cormen et al., where I came across a lemma in which I could not understand a vital step of the proof. Before going into the lemma, I give a brief description of the possible prerequisites for it.


Let $ M=(S,\ell)$ be a matroid where $ S$ is the ground set and $ \ell$ is the family of subsets of $ S$ called the independent subsets of $ S$ .

Let us have an algorithm that finds an optimal subset of $M$ using a greedy method:

$GREEDY(M,w):$

$1\quad A\leftarrow\emptyset$

$ 2\quad \text{sort $ S[M]$ into monotonically decreasing order by weight $ w$ }$

$ 3\quad \text{for each $ x\in S[M]$ , taken in monotonically decreasing order by weight $ w(x)$ }$

$ 4\quad\quad \text{do if $ A\cup\{x\} \in \ell[M]$ }$

$ 5\quad\quad\quad\text{then $ A\leftarrow A\cup \{x\}$ }$

$ 6\quad \text{return $ A$ }$
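To make the pseudocode concrete, here is a minimal runnable sketch of GREEDY, assuming the matroid is given as an independence oracle. The example instance (a uniform matroid of rank 2) and the weights are my own illustration, not from the text:

```python
def greedy(ground_set, is_independent, w):
    """Return an optimal independent set, given an independence oracle."""
    A = set()
    # lines 2-3 of the pseudocode: scan elements by decreasing weight
    for x in sorted(ground_set, key=w, reverse=True):
        if is_independent(A | {x}):   # line 4: the oracle test A ∪ {x} ∈ ℓ[M]
            A.add(x)                  # line 5
    return A

# Uniform matroid U(2, 4): every subset of size <= 2 is independent.
weights = {'a': 5, 'b': 3, 'c': 8, 'd': 1}
result = greedy(set(weights), lambda T: len(T) <= 2, lambda x: weights[x])
print(sorted(result))  # the two heaviest elements: ['a', 'c']
```

The oracle formulation matches the pseudocode exactly: the algorithm never needs $\ell[M]$ listed explicitly, only the test on line 4.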


I was having a problem understanding a step in the proof of the lemma below.

Lemma: (Matroids exhibit the optimal-substructure property)

Let $x$ be the first element of $S$ chosen by $GREEDY$ for the weighted matroid $M = (S, \ell)$. The remaining problem of finding a maximum-weight independent subset containing $x$ reduces to finding a maximum-weight independent subset of the weighted matroid $M' = (S', \ell')$, where

$S' = \{y\in S:\{x,y\}\in \ell\}$,

$\ell' = \{B \subseteq S - \{x\} : B \cup \{x\} \in \ell\}$,

and the weight function for $M'$ is the weight function for $M$, restricted to $S'$. (We call $M'$ the contraction of $M$ by the element $x$.)

Proof:

  1. If $A$ is any maximum-weight independent subset of $M$ containing $x$, then $A' = A - \{x\}$ is an independent subset of $M'$.

  2. Conversely, any independent subset $A'$ of $M'$ yields an independent subset $A = A'\cup\{x\}$ of $M$.

  3. In both cases we have $w(A) = w(A') + w(x)$.

  4. Since we have in both cases that $w(A) = w(A') + w(x)$, a maximum-weight solution in $M$ containing $x$ yields a maximum-weight solution in $M'$, and vice versa.


I could understand $(1)$, $(2)$, $(3)$. But I could not get how line $(4)$ of the proof follows from $(1)$, $(2)$, $(3)$, especially its concluding "and vice versa" part. Could anyone please make it clear to me?
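For what it's worth, step (4) is usually unpacked as a short contradiction argument built only from (1)–(3); a sketch, in my own phrasing rather than the book's:

```latex
\textbf{Step (4), unpacked.} Let $A'$ be a maximum-weight independent subset of $M'$.
By (2), $A = A' \cup \{x\}$ is independent in $M$, and by (3), $w(A) = w(A') + w(x)$.
If $A$ were \emph{not} maximum-weight among independent subsets of $M$ containing $x$,
some independent $B \ni x$ would satisfy $w(B) > w(A)$. By (1), $B' = B - \{x\}$
is independent in $M'$, and by (3),
\[
  w(B') = w(B) - w(x) > w(A) - w(x) = w(A'),
\]
contradicting the maximality of $A'$. The ``vice versa'' direction is symmetric.
```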

Cannot understand the relevance of $\binom{n-1}{2}$ subarrays in The Maximum Sub-array Problem

I recently came across this sentence in the book Introduction to Algorithms, section 4.1, The maximum-subarray problem:

We still need to check $ \binom{n-1}{2} = \Theta(n^2)$ subarrays for a period of $ n$ days.

Here $n$ is the number of days, taken as an example to show the changes in the price of a stock.

We can consider this to be the size of an array $A$.

We are provided with an array $A$, and we need to find where the net change from the first day to the last day is maximum.

More specifically, this means that for an array $A$ of size $n$ we need to check $\binom{n-1}{2}$ subarrays.

But I cannot understand why we need $\binom{n-1}{2}$ sub-arrays.

If we take an array of size 5, could someone please explain why we need only 6 sub-arrays? Won't the sub-arrays be:

[1...5] [1...4] [1...3] [1...2]  [2...4] [2...5]   [3...5] [4...5] 
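Counting explicitly may help here; a quick enumeration for $n = 5$ days (the framing below is my own, not the book's — note that $\binom{4}{2} = 6$ matches neither the subarrays of the 5-element price array nor all nonempty subarrays of the 4-element array of daily changes, but only the change-array subarrays of length at least two):

```python
from itertools import combinations
from math import comb

n = 5  # days, as in the question

# All contiguous subarrays A[i..j] of an n-element array, 1-indexed:
subarrays = [(i, j) for i in range(1, n + 1) for j in range(i, n + 1)]
print(len(subarrays))  # n(n+1)/2 = 15

# All (buy day, later sell day) pairs of distinct days:
pairs = list(combinations(range(1, n + 1), 2))
print(len(pairs), comb(n - 1, 2))  # 10 and 6

# Subarrays of the (n-1)-element array of daily changes:
m = n - 1
print(m * (m + 1) // 2)  # 10 nonempty subarrays; C(m, 2) = 6 have length >= 2
```

Whichever of these counts is intended, they are all $\Theta(n^2)$, which is the point the book is making.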

Please correct me if I am wrong.

References: Maximum Subarray Problem; The Maximum Sub-array problem.

Thank you.

How to understand the mapping function of a kernel?

For a kernel function we have two conditions. One is that it should be symmetric, which is easy to understand intuitively, because dot products are symmetric and our kernel should behave likewise. The other condition is given below:

There exists a map $\varphi: \mathbb{R}^d \to H$, called the kernel feature map, into some high-dimensional feature space $H$, such that $\forall x, x' \in \mathbb{R}^d: k(x, x') = \langle \varphi(x), \varphi(x')\rangle$.

I understand this to mean that there should exist a feature map that projects the data from a low dimension to some high dimension, and that the kernel function takes the dot product in that space.

For example, the squared Euclidean distance is given as

$d(x,y)=\sum_i(x_i-y_i)^2=\langle x,x\rangle+\langle y,y\rangle-2\langle x,y\rangle$

If I look at this in terms of the second condition, how do we know that there doesn't exist any feature map for the Euclidean distance? What exactly are we looking for in feature maps, mathematically?
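What one checks mathematically is positive semidefiniteness (Mercer's condition): $k$ admits a feature map iff every Gram matrix $[k(x_i, x_j)]$ is positive semidefinite. The squared Euclidean distance fails this — for two distinct points its Gram matrix is $\begin{pmatrix}0 & d\\ d & 0\end{pmatrix}$ with $d > 0$, which has a negative eigenvalue $-d$ — so no $\varphi$ can exist. For a positive example, here is a hedged, self-contained sketch (my own illustration, not from the text) of an explicit feature map for the quadratic kernel $k(x, x') = \langle x, x'\rangle^2$ on $\mathbb{R}^2$:

```python
import math

# For k(x, x') = <x, x'>^2 on R^2, an explicit feature map into R^3 is
#   phi(x) = (x1^2, sqrt(2)*x1*x2, x2^2),
# because <phi(x), phi(x')> = (x1*x1' + x2*x2')^2 = k(x, x').

def k(x, y):
    return (x[0] * y[0] + x[1] * y[1]) ** 2

def phi(x):
    return (x[0] ** 2, math.sqrt(2) * x[0] * x[1], x[1] ** 2)

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

x, y = (1.0, 2.0), (3.0, -1.0)
print(k(x, y), dot(phi(x), phi(y)))  # agree up to floating-point rounding
```

The existence of such a $\varphi$ is exactly what the second condition demands, and PSD-ness of all Gram matrices is the testable criterion equivalent to it.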

Can Dragonbait read or write Common, since he is able to understand it?

In D&D 5e, Tomb of Annihilation includes Dragonbait, a saurial. His stat block claims that he can understand Common but not speak it, due to the strange way in which saurials communicate. From Tomb of Annihilation, p. 218:

Languages understands Common but can’t speak

Is there any evidence, either in 5e (which I assume is just what's presented in Tomb of Annihilation) or in previous editions of D&D, suggesting that Dragonbait specifically, or saurials generally, can read or write Common, given that he can apparently understand it?

How to understand the definition of $\Pi_k$ in the arithmetical hierarchy

I am reading a text about computability theory, and according to the text, at each level $k$ of the arithmetical hierarchy we have two classes, $\Sigma_k$ and $\Pi_k$, where $\Pi_k$ is defined as:

$$\Pi_k = \text{co-}\Sigma_k$$

So for $k=0$ we have the class of decidable sets and $\Sigma_0=\Pi_0$, and for $k=1$ we have $\Sigma_1$ as the class of computably enumerable (c.e.) sets and $\Pi_1$ as the class of not computably enumerable (not c.e.) sets…

Let $L(M_e)$ denote the language recognized by the Turing machine $M_e$ with Gödel number $e$. I came across the following language $E$, where:

$$E=\{e \mid L(M_e)=\Sigma^*\}$$

i.e. $E$ is the set of all Turing machine codes $e$ such that $M_e$ accepts every string. By a diagonalization argument, it can be shown that $E$ is not c.e. This implies that:

$$E \in \Pi_1$$

However, if $E \in \Pi_1$, it means that $E = \text{co-}A$ for some $A \in \Sigma_1$, using the definition in the above statement… However, the complement of $E$ is:

$$\overline{E}=\overline{\{e \mid L(M_e)=\Sigma^*\}}$$

which (I guess) means that $\overline{E}$ is the language of all Turing machine codes $e$ such that $M_e$ diverges on some input… However, it has been shown that $\overline{E} \equiv_m K^{2}$, i.e. $\overline{E} \equiv_m K^K$ (where, given two sets $A$ and $B$, we have $A \equiv_m B$ iff $A \leq_m B$ and $B \leq_m A$, and $\leq_m$ denotes many-one reduction), so that:

$$\overline{E} \equiv_m K^K \in \Sigma_2$$

Given that $\Sigma_2 \neq \Sigma_1$, it looks like $\overline{E}$ is not computably enumerable… But doesn't this contradict the definition of $\Pi_1$, which states that the complement of a not-c.e. set is c.e.?

I think I am missing something in my understanding of the definitions…
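For reference, the standard reading of the definition quoted above is "complement of a $\Sigma_k$ set", which at level 1 means co-c.e. rather than "not c.e." (my restatement, consistent with the displayed definition):

```latex
% The standard definition:
\[
  \Pi_k \;=\; \text{co-}\Sigma_k \;=\; \{\, A \;:\; \overline{A} \in \Sigma_k \,\}.
\]
% So A \in \Pi_1 asserts the positive condition that \overline{A} is c.e.
% (A is "co-c.e."); it is not the mere negation "A is not c.e.".
% A set can be neither c.e. nor co-c.e., so "E is not c.e." by itself
% does not place E in \Pi_1.
```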

How can I make Google understand that these are not self-serving reviews?

I run a blog where I write reviews of restaurants and pubs, and users can also leave a rating (aggregateRating).

For years, my reviewRating was indexed fine.

Some months ago, in the SERPs, reviewRating was replaced in favor of aggregateRating, I think because of this Google rule:

Ratings must be sourced directly from users. [*]

Now aggregateRating has been removed as well, I suppose because of this:

Pages using LocalBusiness or any other type of Organization structured data are ineligible for the star review feature if the entity being reviewed controls the reviews about itself. For example, a review about entity A is placed on the website of entity A, either directly in their structured data or through an embedded third-party widget. [*]

My blog has a lot of reviews of many different places. How can I make Google understand that these are not self-serving reviews?

This is an example of my page markup:

{
  "@context": "http://schema.org/",
  "@type": "LocalBusiness",
  "name": "Restaurant Name",
  "description": "Lorem ipsum dolor sit amet, consectetur adipiscing elit. Nunc eu eros sed eros gravida fermentum non sed...",
  "image": {
    "@type": "ImageObject",
    "url": "https://i.picsum.photos/id/310/700/525.jpg",
    "width": 700,
    "height": 525
  },
  "review": {
    "@type": "Review",
    "name": "Restaurant Name",
    "reviewBody": "Lorem ipsum dolor sit amet, consectetur adipiscing elit. Nunc eu eros sed eros gravida fermentum non sed ante. Maecenas malesuada orci sapien, vitae hendrerit mauris eleifend in. Integer facilisis dignissim scelerisque. Nam quis dictum metus.",
    "author": {
      "@type": "Person",
      "name": "John Doe"
    },
    "datePublished": "2013-11-08T14:41:19+01:00",
    "dateModified": "2020-06-02T21:24:19+02:00",
    "reviewRating": {
      "@type": "Rating",
      "ratingValue": "4.3",
      "bestRating": 5,
      "worstRating": 1
    }
  },
  "aggregateRating": {
    "@type": "AggregateRating",
    "ratingValue": 3.4,
    "ratingCount": 32,
    "bestRating": 5,
    "worstRating": 1
  },
  "address": "Street Address",
  "priceRange": "€€",
  "telephone": "12346789"
}

When tested with the Structured Data Testing Tool I get no errors, and the preview does indeed show the aggregateRating.

What if I also add the "publisher" property? Would it be helpful?
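For illustration only, a sketch of what attaching a publisher to the review might look like (schema.org's Review inherits publisher from CreativeWork; the organization name and URL below are placeholders, and whether Google treats this as evidence of non-self-serving reviews is up to its guidelines):

```json
{
  "@type": "Review",
  "author": { "@type": "Person", "name": "John Doe" },
  "publisher": {
    "@type": "Organization",
    "name": "Example Review Blog",
    "url": "https://example.com/"
  },
  "reviewRating": {
    "@type": "Rating",
    "ratingValue": "4.3",
    "bestRating": 5,
    "worstRating": 1
  }
}
```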

[*] From Google's technical guidelines.

I am unable to understand the logic behind this code (I've added my exact queries as comments in the code)

Our local ninja Naruto is learning to make shadow clones of himself and is facing a dilemma. He only has a limited amount of energy (e) to spare, which he must distribute entirely among all of his clones. Moreover, each clone requires at least a certain amount of energy (m) to function. Your job is to count the number of different ways he can create shadow clones. Example:

e=7;m=2

ans = 4

The following possibilities occur:

Make 1 clone with 7 energy

Make 2 clones with 2, 5 energy

Make 2 clones with 3, 4 energy

Make 3 clones with 2, 2, 3 energy.

Note: <2, 5> is the same as <5, 2>. Make sure the ways are not counted multiple times because of different ordering.

Answer

#include <stdio.h>

int count(int n, int k){
    if((n < k) || (k < 1)) return 0;
    else if ((n == k) || (k == 1)) return 1;
    else return count(n-1, k-1) + count(n-k, k);   // logic behind this?
}

int main() {
    int e, m;                  // e is total energy and m is min energy per clone
    scanf("%d %d", &e, &m);
    int max_clones = e / m;
    int i, ans = 0;
    for(i = 1; i <= max_clones; i++){
        int available = e - ((m-1) * i);   // why is it (m-1)*i instead of m*i?
        ans += count(available, i);
    }
    printf("%d\n", ans);       // print the result (missing in the original)
    return 0;
}
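My own reading of the logic, restated as a runnable sketch: count(n, k) counts the partitions of n into exactly k positive parts, and the recurrence splits on whether the smallest part equals 1:

```python
# count(n, k): number of ways to write n as k unordered positive parts.
#   count(n-1, k-1): partitions whose smallest part is 1 -- remove that part.
#   count(n-k, k):   partitions whose parts are all >= 2 -- subtract 1 from each.
def count(n, k):
    if n < k or k < 1:
        return 0
    if n == k or k == 1:
        return 1
    return count(n - 1, k - 1) + count(n - k, k)

# The loop gives each of the i clones (m-1) units of energy up front, so
# distributing the remaining e - (m-1)*i units into i positive parts is the
# same as giving every clone at least m. Subtracting m*i instead would leave
# parts >= 0, which count() (parts >= 1) does not model.
def ways(e, m):
    return sum(count(e - (m - 1) * i, i) for i in range(1, e // m + 1))

print(ways(7, 2))  # 4, matching the worked example above
```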

Not able to understand this problem (home page latest posts not showing properly)

Hi, I am working on my friend's site. It uses the Gwangi theme (https://community.gwangi-theme.com/), and the website is https://www.tamilpeoples.ml/

It works fine on localhost (XAMPP), but after I upload it to my hosting server, the home page shows extra content below the footer. If I deactivate 4 plugins it works fine, but without those plugins some functions do not work. Could somebody please help?

Need help understanding the math behind the logic for a scheduling problem using Reinforcement Learning

I am working on a problem of scheduling VMs with efficient resource and energy utilisation in mind, and I came across this paper. I understand RL and how Q-learning, which they are trying to use in the paper, works. However, I am not able to achieve an intuitive understanding of the algorithm suggested (page 3).

I understand that equal importance has been given to utilisation and power consumption, but with opposite signs, so to speak. However, Step 3 is not intuitive. Can someone help me get a better understanding of the algorithm?