## Improved Divine Smite Differentiation

Improved Divine Smite (PHB, p. 85) says in part:

> … Whenever you hit a creature with a melee weapon, the creature takes an extra 1d8 radiant damage. *If you also use your Divine Smite with an attack, you add this damage to the extra damage of your Divine Smite.*

Emphasis to show the part I’m focusing on.

So it was my understanding that Improved Divine Smite deals an extra 1d8 radiant damage whenever I hit a creature with a melee weapon, no matter what else I'm adding to the attack (such as Searing Smite, Divine Smite, poison I put on my sword before combat, etc.). But the wording goes out of its way to differentiate what happens if I also use Divine Smite with my attack.

I am more than likely confused or reading too far into the text, but I'd like to know why the book makes such a differentiation. What is it saying? That if I make a weapon attack without Divine Smite, the extra (non-magical?) 1d8 radiant is added to the weapon damage, but if I do include a Divine Smite, then the extra (now magical?) 1d8 radiant is added to the smite damage instead?

Does that change anything at all? Is this differentiation important to some sort of tactic or resistance I’m not considering?

## Numerically stable reverse automatic differentiation of power(x, y)?

I would like to compute the adjoints $$\bar x$$ and $$\bar y$$, from a reverse automatic differentiation perspective, of the expression $$x^y$$. The adjoint $$\bar{x^y}$$ is already known; and we can assume $$x \geq 0$$ and $$y \geq 1$$.

The "easy" solution consists of rewriting the expression $$x^y=e^{y \ln x}$$, and proceed piece-wise from there. However, this solution proves to be unstable numerically, it gives:

$$\bar y = \bar{x^y} x^y \ln x$$

and

$$\bar x = \bar{x^y} x^{y-1} y$$

Is there any way to rewrite and/or approximate these expressions with something that will not diverge numerically when $$x \approx 0$$?
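One common workaround is to special-case $$x = 0$$ using limits rather than evaluating $$\ln x$$ there: for $$y \geq 1$$, $$x^y \ln x \to 0$$ and $$y\,x^{y-1} \to 0$$ (or $$1$$ when $$y = 1$$) as $$x \to 0^+$$. A minimal Python sketch of the two adjoint formulas above with that guard (the function name is hypothetical):

```python
import math

def pow_adjoints(x, y, zbar):
    """Given z = x**y and incoming adjoint zbar, return (xbar, ybar).

    Assumes x >= 0 and y >= 1, as in the question. At x == 0 the naive
    factors y * x**(y-1) and x**y * log(x) are replaced by their limits.
    """
    if x == 0.0:
        # lim_{x->0+} y * x**(y-1) is 1 for y == 1 and 0 for y > 1
        xbar = zbar * (1.0 if y == 1.0 else 0.0)
        # lim_{x->0+} x**y * ln(x) = 0 for y >= 1
        ybar = 0.0
        return xbar, ybar
    xbar = zbar * y * x ** (y - 1)
    ybar = zbar * x ** y * math.log(x)
    return xbar, ybar
```

This only sidesteps the evaluation at $$x = 0$$ itself; for tiny but nonzero $$x$$ in double precision, $$\ln x$$ stays finite (it is bounded below by roughly $$-745$$), so the product $$x^y \ln x$$ underflows toward $$0$$ rather than diverging.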

## Newton method using automatic differentiation

I wrote a large MATLAB code that solves partial differential equations. Now I would like to test the code on nonlinear problems, for which I need the Newton–Raphson iteration for systems of nonlinear algebraic equations.

How do I employ automatic differentiation in MATLAB to drive the Newton–Raphson iteration?
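In MATLAB itself, automatic differentiation is available through toolboxes (for example `dlgradient` in the Deep Learning Toolbox) or third-party packages; which of these fits a large PDE code is a separate question. As a language-agnostic sketch of the mechanism, here is a minimal forward-mode (dual-number) AD driving a scalar Newton iteration, written in Python:

```python
class Dual:
    """Minimal forward-mode AD number: a value plus a derivative (val + eps*der)."""
    def __init__(self, val, der=0.0):
        self.val, self.der = val, der
    def __add__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.val + o.val, self.der + o.der)
    __radd__ = __add__
    def __sub__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.val - o.val, self.der - o.der)
    def __rsub__(self, o):
        return Dual(o).__sub__(self)
    def __mul__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.val * o.val, self.der * o.val + self.val * o.der)
    __rmul__ = __mul__

def newton(f, x0, tol=1e-12, maxit=50):
    """Scalar Newton-Raphson; f and f' come from one evaluation with der seeded to 1."""
    x = x0
    for _ in range(maxit):
        fx = f(Dual(x, 1.0))      # fx.val = f(x), fx.der = f'(x)
        step = fx.val / fx.der
        x -= step
        if abs(step) < tol:
            break
    return x

# Example: root of x^2 - 2, i.e. sqrt(2)
root = newton(lambda x: x * x - 2.0, 1.0)
```

For a system, the same seeding idea yields one Jacobian column per seeded input variable; the scalar case above is just the simplest instance.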

## Double integral differentiation

Could somebody tell me how to get the right-hand side from the left-hand side? Which rule of differentiation of the double integral produces this?

\begin{align*}
\frac{\mathrm{d}}{\mathrm{d}x} \int_{v=x}^\infty \int_{u=0}^x f(v, u)\, g(v)\, g(u)\, \mathrm{d}u\, \mathrm{d}v
={}& \int_0^\infty g(v)\,\mathrm{d}v \cdot \frac{\mathrm{d}}{\mathrm{d}x} \int_0^x f(v, u)\, g(u)\, \mathrm{d}u \\
&+ \int_x^\infty g(u)\,\mathrm{d}u \cdot \frac{\mathrm{d}}{\mathrm{d}x} \int_0^x f(v, u)\, g(v)\, \mathrm{d}v
\end{align*}

My ignorance is probably quite profound, and this is something really simple, but I cannot see. It’s been quite a while since I last dabbled in double integrals.

When I type

`d/dx(integral from v=x to infinity integral from u=0 to x f(u,v)g(u)g(v) du dv)`

into WolframAlpha, I get

\begin{align*}
\frac{\mathrm{d}}{\mathrm{d}x} \Bigg(\int_{v=x}^\infty \int_{u=0}^x f(v, u)\, g(v)\, g(u)\, \mathrm{d}u\, \mathrm{d}v\Bigg)
={}& \int_x^\infty g(v)\, g(x)\, f(x,v)\, \mathrm{d}v \\
&- \int_0^x g(u)\, g(x)\, f(u,x)\, \mathrm{d}u,
\end{align*}

which is correct, but I first need to understand how to arrive at the first formula in order to apply the Leibniz rule.

If someone knows how to align these equations here beautifully, do tell.
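For comparison, applying the one-dimensional Leibniz rule directly to the outer integral (treating the inner integral as the integrand) already yields the WolframAlpha result, in the $$f(v,u)$$ argument order of the first display. Writing $$G(x,v) = \int_0^x f(v,u)\, g(v)\, g(u)\, \mathrm{d}u$$:

\begin{align*}
\frac{\mathrm{d}}{\mathrm{d}x} \int_x^\infty G(x,v)\, \mathrm{d}v
&= -\,G(x,x) + \int_x^\infty \frac{\partial G}{\partial x}(x,v)\, \mathrm{d}v \\
&= -\int_0^x f(x,u)\, g(x)\, g(u)\, \mathrm{d}u + \int_x^\infty f(v,x)\, g(v)\, g(x)\, \mathrm{d}v.
\end{align*}

The first term comes from the moving lower limit $$v = x$$, the second from differentiating the integrand's upper limit $$u = x$$.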

## Partial differentiation of a general function homogeneous of degree n

If $$f$$ is homogeneous of degree $$n$$, i.e. $$f(tx,ty) = t^{n}f(x,y)$$, show that $$f_{x}(tx,ty) = t^{n-1}f_{x}(x,y)$$.

My proof went a little wrong, as follows. With $$u=tx$$ and $$v=ty$$,

$$f_{x}(tx,ty) = \frac{\partial f(u,v)}{\partial u} \cdot \frac{\partial u}{\partial x}+\frac{\partial f(u,v)}{\partial v}\cdot \frac{\partial v}{\partial x} = f_{u}(u,v) \cdot t \tag{1}$$

$$\frac{\partial}{\partial x}\bigl(t^nf(x,y)\bigr)=t^nf_{x}(x,y) \tag{2}$$

Setting (1) equal to (2),

$$f_{u}(u,v)=t^{n-1}f_{x}(x,y), \quad\text{i.e.}\quad f_{u}(tx,ty) = t^{n-1}f_{x}(x,y).$$

On the last line, the subscript on the left-hand side is supposed to be $$x$$; however, I get $$u$$.

## Differential operator with dependence on differentiation variable

I was wondering if it would make sense to define a (generic) total differential operator as follows:

$$\frac{d}{d\alpha} = \frac{\partial}{\partial\alpha} + A \, d\alpha \tag{1}\label{1}$$

where $$\alpha$$ is the differentiation parameter and $$A$$ is a generic term, not explicitly dependent on $$\alpha$$.

The problem with this is that I’m not sure whether it is mathematically correct to have a differential operator that is linearly dependent on the differential of the derivation parameter itself, i.e. $$d\alpha$$. Or rather, that includes a term directly proportional to $$d\alpha$$.

I know of the theta operator defined as:

$$\theta = z\frac{d}{dz}$$

but I think this is different and probably not comparable to the case reported in equation $$\eqref{1}$$.

Thanks for any help.

## Partial Differentiation 0/0

Let

$$f(x,y)=x^2y\sin\left(\frac{y}{x}\right), \quad x\neq0,$$
$$f(x,y)=0, \quad x=0.$$

Partial differentiation is straightforward for $$x\neq0$$. However, for $$x=0$$, the partial derivative with respect to $$x$$ is

$$\lim_{h\to 0}\frac{f(h,y)}{h},$$

which is not obviously equal to $$0$$. Do I need to use L'Hôpital's rule? If so, differentiating with respect to which variable? I'm new to this topic; thank you for your help!
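As a numerical illustration (not a proof), the difference quotient $$f(h,y)/h = h\,y\sin(y/h)$$ is bounded in absolute value by $$|hy|$$, suggesting a squeeze argument. A small Python check of that bound:

```python
import math

def f(x, y):
    # piecewise definition from the question
    return 0.0 if x == 0 else x ** 2 * y * math.sin(y / x)

# The difference quotient f(h, y)/h = h*y*sin(y/h) satisfies
# |h*y*sin(y/h)| <= |h*y|, so it tends to 0 as h -> 0
y = 2.0
for h in (1e-1, 1e-3, 1e-5):
    quotient = f(h, y) / h
    assert abs(quotient) <= abs(h * y)
```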

## Using differentiation to find the gradient at the normal

I have a point $$Q$$ that lies on the curve $$C$$, whose equation is $$y=\frac{3x^2}{4}-4x-10$$. The point $$Q$$ is such that the gradient of the normal to $$C$$ at $$Q$$ is $$-2$$. How can I find the $$x$$-coordinate of $$Q$$?

So far I have tried differentiation, but I am not very good at it, since the fractional coefficient confuses me!
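One way to build confidence with the fractional coefficient is to check the hand derivative against a numerical estimate. A Python sketch, assuming only the term-by-term power rule $$\frac{\mathrm{d}}{\mathrm{d}x}\,\frac{3x^2}{4} = \frac{3x}{2}$$:

```python
def C(x):
    # the curve from the question
    return 3 * x ** 2 / 4 - 4 * x - 10

def dC(x):
    # term-by-term power rule: d/dx(3x^2/4) = 3x/2, d/dx(-4x) = -4
    return 3 * x / 2 - 4

def numeric_deriv(fn, x, h=1e-6):
    # central-difference check of the hand derivative
    return (fn(x + h) - fn(x - h)) / (2 * h)

# A normal of gradient -2 means the tangent gradient is -1/(-2) = 1/2,
# so the condition to solve is 3x/2 - 4 = 1/2, giving x = 3:
x = 3.0
assert abs(dC(x) - 0.5) < 1e-12
assert abs(numeric_deriv(C, x) - dC(x)) < 1e-6
```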

## What’s the correct partial differentiation of this function?

Suppose I have a function:

$$F= \sum \| x_i – \mu – V_q \lambda_i \|^2$$

where $$V_q \in \mathbb{R}^{p \times q}$$, $$x_i \in \mathbb{R}^p$$, $$\lambda_i \in \mathbb{R}^q$$,

and I wish to minimize it with respect to $$\mu$$ and the $$\lambda_i$$.

then shouldn’t $$\mu^{\ast} = \frac{1}{N} \sum (x_i – V_q \lambda_i)$$ ?

and shouldn’t $$\lambda_i^{\ast} = V_q^{T}(x_i - \mu)$$ ?

My book gives the result as $$\mu^{\ast} = \bar{x}$$ and $$\lambda_i^{\ast} = V_q^{T} (x_i - \bar{x})$$.

It doesn’t give a new definition of $$\bar{x}$$, so I suppose it is $$\bar{x} = \frac{1}{N} \sum x_i$$.

The book is The Elements of Statistical Learning, page 535.
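The two answers need not conflict: with the book's $$\lambda_i^{\ast} = V_q^{T}(x_i - \bar{x})$$, the $$\lambda_i$$ average to zero, so the stationarity condition $$\mu^{\ast} = \frac{1}{N}\sum (x_i - V_q \lambda_i)$$ collapses to $$\bar{x}$$. A numeric sanity check in Python, assuming (as in the PCA setting of that page) that $$V_q$$ has orthonormal columns; all variable names below are mine:

```python
import numpy as np

rng = np.random.default_rng(0)
N, p, q = 200, 5, 2

X = rng.normal(size=(N, p))                      # rows are the x_i
V_q, _ = np.linalg.qr(rng.normal(size=(p, q)))   # orthonormal columns (assumption)

xbar = X.mean(axis=0)
lam = (X - xbar) @ V_q                           # rows are lambda_i = V_q^T (x_i - xbar)

# The lambda_i from the book's solution average to zero, so
# mu* = (1/N) * sum(x_i - V_q lambda_i) collapses to xbar:
mu_star = (X - lam @ V_q.T).mean(axis=0)
assert np.allclose(mu_star, xbar)
```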

## Banach subspaces (Banach algebras) of the disc algebra invariant under the differentiation operator

Is there a complete classification of all Banach subspaces of the disc algebra $$\mathcal{A}(\mathbb{D})$$ that are invariant under the differentiation operator? Is there a complete classification of such Banach subspaces on which differentiation is a bounded operator?

Is there a complete classification of all Banach subalgebras of $$\mathcal{A}(\mathbb{D})$$ that are invariant under the differentiation operator?

With these questions in mind, I arrived at the following question:

Is there a holomorphic function on the open unit disc with this property?