Does this method attempting to maximize Assassin damage work?

I have a plan for getting a lot of damage out of the Assassin in D&D 5e, but I wanted to run the order of actions by you guys to see whether it is correct or not.

Assassin 17/warlock 3

  • Apply Serpent Venom poison to my rapier
  • Cast Invisibility via the Warlock invocation Shroud of Shadow (Xanathar’s Guide to Everything)
  • Sneak up to the target
  • Cast Hex on the target as a bonus action, choosing Constitution so the enemy has disadvantage on Constitution saving throws
  • Invisibility drops, since Hex was cast
  • Cast Green-Flame Blade for the melee attack

If the attack hits, the enemy will have to make a Constitution saving throw to prevent Death Strike and the poison; however, they will have disadvantage from Hex. They will then take:
(2d6 Hex + 18d6 Sneak Attack + 6d8 fire + 6d6 poison + 5 Dex modifier + 2d8 rapier) * 2

Is the enemy still surprised even though I cast Hex before my attack? Is there anything wrong with this order of events? And is my calculation correct?
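For reference, here is how I'm averaging that expression, taken exactly as written (whether every term, in particular the flat Dex modifier, should really be doubled is part of what I'm asking):

```python
def avg(n, sides):
    # average roll of n dice with the given number of sides
    return n * (sides + 1) / 2

# The dice expression exactly as written above; whether the flat +5 Dex
# modifier should also be doubled is one of the rules questions here.
total = avg(2, 6) + avg(18, 6) + avg(6, 8) + avg(6, 6) + 5 + avg(2, 8)
print(total * 2)  # → 264.0 on average; the theoretical maximum is 450
```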

Maximizing integer sets intersection (with integer delta)

There are two sets of integers with different numbers of elements in them.

X = { x_0, x_1, ..., x_n }, x_0 < x_1 < ... < x_n
Y = { y_0, y_1, ..., y_m }, y_0 < y_1 < ... < y_m

And there is a function of a single integer defined as

F(delta) = CountOfItems( Intersection( X, { y_0+delta, y_1+delta, ..., y_m+delta } ) )

That is, I add the integer delta to every element of Y and then count how many integers X and the modified Y have in common.

And then the problem: find the delta that maximizes F(delta).

max( F(delta) ), where delta is integer 

Is there some “mathematical” name for such a task, and an optimal algorithm for it? Obviously I can use brute force here and enumerate all possible combinations, but that does not work for big n and m.
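One observation: only shifts of the form delta = x − y can make F nonzero, so counting pairwise differences gives F(delta) for every useful delta in O(n·m) time. This is essentially a cross-correlation of the two sets' indicator functions (so for bounded value ranges it could also be computed with an FFT). A sketch:

```python
from collections import Counter

def best_delta(X, Y):
    # For a pair (x, y), x = y + delta exactly when delta = x - y.
    # Because X and Y are sets, the number of pairs with a given
    # difference equals |X ∩ (Y + delta)|, i.e. F(delta).
    diffs = Counter(x - y for x in X for y in Y)
    delta, count = max(diffs.items(), key=lambda kv: kv[1])
    return delta, count

# Shifting {0, 2, 6} by 3 gives {3, 5, 9}, all of which lie in {1, 3, 5, 9}.
```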

Maximizing a nonnegative linear function over adjacency matrices with node degree constraints

Suppose $A$ is an $n$-by-$n$ symmetric matrix whose entries are all nonnegative, with $A_{ii} = 0$ for all $i$. We want to find an $n$-by-$n$ binary ($0/1$-valued) matrix $X$ that maximizes

$$\sum_{ij} A_{ij} X_{ij}$$

under the constraints that

  1. $X$ is symmetric ($X^\top = X$);
  2. each row of $X$ has at most $k$ ones (the rest being zero);
  3. the total number of $1$s in $X$ is at most $m$.

Here $k \le n$ and $m \le n^2$. I can think of a dynamic programming solution if 2 and 3 are the only conditions, but the symmetry in condition 1 makes it much harder. Does there exist a polynomial-time algorithm that achieves a constant-factor multiplicative approximation (under conditions 1, 2, 3)? Ideally the constant is universal, not dependent on $n$, $k$, or $m$.

If not, is there any hope for the combination of conditions 1 and 2? The combination of 1 and 3 is trivial to handle.
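For concreteness, the natural greedy baseline I have in mind (the helper name `greedy_select` is my own) takes edges in decreasing weight while respecting the degree cap $k$ and the budget $m$; it keeps $X$ symmetric by construction, though I don't know whether it carries any provable constant-factor guarantee for this exact constraint set:

```python
def greedy_select(A, k, m):
    """Greedy heuristic: take the heaviest admissible edge first, respecting
    the per-node degree cap k and the global budget of m ones in X.
    Each chosen pair (i, j) stands for X[i][j] = X[j][i] = 1, so the
    resulting X is symmetric by construction."""
    n = len(A)
    edges = sorted(((A[i][j], i, j) for i in range(n) for j in range(i + 1, n)),
                   reverse=True)
    deg = [0] * n
    chosen = []
    ones = 0  # total number of 1s placed in X; each edge contributes two
    for w, i, j in edges:
        if w <= 0:
            break
        if deg[i] < k and deg[j] < k and ones + 2 <= m:
            chosen.append((i, j))
            deg[i] += 1
            deg[j] += 1
            ones += 2
    return chosen
```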

Thank you.

Maximizing healing and damage for a Cleric

I am planning this character out for levels 1-20. We’re starting at level one and continuing until we’re done or dead.

I’m having some trouble deciding on my feats and spells to maximize damage and healing output.

I understand that warpriest clerics are given armor proficiency to help make creating a 2nd edition battle medic super easy, but I’m trying to balance my character perfectly between doing damage with my deity’s weapon and healing my allies. I’m having a lot of trouble judging which trade-offs to take so that I can hit harder while still being a viable healer.

As a character, I’ve chosen Cleric, Warpriest, Dwarf, Medium Armor, Shelyn, and Glaive.

Right now, my plans for feats are leaning toward Domain Initiate with Zeal, Communal Healing, and Divine Weapon, but the details are starting to get fuzzy. It’s all the different numbers floating around that are confusing. I really just want to increase the amount of healing from my spells and the amount of damage from my glaive. What is the best way to balance these two paths?

Does maximizing damage override or interact with damage reduction?

There are features such as the Evocation Wizard’s Overchannel, which states:

Starting at 14th level, you can increase the power of your simpler spells. When you cast a wizard spell of 1st through 5th-level that deals damage, you can deal maximum damage with that spell.

However, there are also features such as the Battle Master Fighter’s Parry maneuver, which states:

When another creature damages you with a melee attack, you can use your reaction and expend one superiority die to reduce the damage by the number you roll on your superiority die + your Dexterity modifier.

If a Wizard uses Overchannel on a spell, does that force the Battle Master to roll a 1 on their superiority die, as this maximizes the spell’s damage?

Another example of such a feature is the enlarge/reduce spell, which states:

[…] The target’s weapons also shrink to match its new size. While these weapons are reduced, the target’s attacks with them deal 1d4 less damage […]

Would the 1d4 reduction be minimized if a Wizard used Overchannel on a spell such as booming blade?

What are the Effects of “Maximizing” Damage on an Effect?

In 5th edition D&D, there are a few circumstances where a character’s damage might be “maximized”.

For example, the Evocation Wizard’s Overchannel ability:

Starting at 14th level, you can increase the power of your simpler spells. When you cast a wizard spell of 1st through 5th-level that deals damage, you can deal maximum damage with that spell.

Overchannel, Player’s Handbook, pg. 118

Or an entry on the Wild Surge table:

33-34     Maximize the damage of the next damaging spell you cast within the next minute.

Wild Magic Surge, Player’s Handbook, pg. 104

The way I see it, there are two valid ways to treat this effect:

  1. Treat the damage dice as though each die rolled its respective maximum value
  2. Treat the damage as though it is the sum of the maximum values that each possible die could have rolled

These two interpretations might seem similar, and in most situations they are, but there are a few circumstances where they differ. For example, for an attack-roll-based spell, the damage under interpretation 1 is doubled on a crit, because you are doubling the quantity of dice used to calculate damage. Under interpretation 2 it would not be, because critical hits do not double flat damage modifiers, and taking the maximum value of all the rolled dice turns the damage into a flat modifier.
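A quick arithmetic sketch of that crit difference, using a spell whose damage is 2d8 + 1d6 with no flat modifier:

```python
# Damage dice of a 2d8 + 1d6 spell with no flat modifier.
dice = [(2, 8), (1, 6)]

max_total = sum(n * sides for n, sides in dice)

# Interpretation 1: every die shows its maximum, and a crit doubles the dice.
interp1_hit, interp1_crit = max_total, 2 * max_total   # 22 and 44

# Interpretation 2: the total is fixed at the maximum, acting like a flat
# modifier, so a crit adds nothing.
interp2_hit, interp2_crit = max_total, max_total       # 22 and 22
```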

Conversely, there are spells that depend on a specific value rolled on the damage dice to change their behavior, like Chaos Bolt:

You hurl an undulating, warbling mass of chaotic energy at one creature in range. Make a ranged spell attack against the target. On a hit, the target takes 2d8 + 1d6 damage. Choose one of the d8s. The number rolled on that die determines the attack’s damage type, as shown below.

If you roll the same number on both d8s, the chaotic energy leaps from the target to a different creature of your choice within 30 feet of it. Make a new attack roll against the new target, and make a new damage roll, which could cause the chaotic energy to leap again.

Chaos Bolt, Xanathar’s Guide to Everything, pg. 151

Under interpretation 1, Chaos Bolt always deals Thunder damage, and always leaps to a new target on a successful hit, because each of the d8s is treated as having rolled an 8. Under interpretation 2, however, the d8s are rolled and then ignored for the purpose of calculating the total damage, because the damage is simply set to the maximum possible value of 22, without setting the values of the individual dice.

So which is it? Is there rules support to show that Maximizing Damage should be handled one way or the other?

Additionally, since I’ve raised the spectre of an attack-roll-based spell like Chaos Bolt, the issue of the attack roll itself also needs to be raised: should the attack roll be treated as an automatic hit (or crit!), because failing to hit would result in the spell dealing less than maximum damage? Or is “maximizing” damage meant to apply only after a successful attack roll, which would mean the attack roll cannot be overridden?

Maximizing sum of numbers within a sequence

Write an algorithm that, given a sequence seq of n numbers, where 3 <= n <= 1000 and each number k in seq satisfies 1 <= k <= 200, finds the maximum sum obtainable by repeatedly removing one number from seq (any number except the first and last) and adding to the running sum the removed number plus its two current neighbors. The algorithm ends when only two numbers are left.

For example:
[2, 1, 5, 3, 4], sum = 0
[2, 1, 5, 3, 4], sum = 1 + 2 + 5 = 8, 1 removed
[2, 5, 3, 4], sum = 8 + 3 + 5 + 4 = 20, 3 removed
[2, 5, 4], sum = 20 + 5 + 2 + 4 = 31, 5 removed
[2, 4], only two numbers left, so the algorithm ends

So far I’ve written a brute-force algorithm checking all possible combinations, but it’s not well suited to large sequences.

My question is: is there a more efficient algorithm for solving this problem?
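This yields to interval dynamic programming, in the same family as matrix-chain multiplication and the "burst balloons" problem: if you fix which inner element of the open interval (i, j) is removed last, its neighbors at that moment are exactly seq[i] and seq[j], and the two sub-intervals can be solved independently. A sketch of the resulting O(n^3) solution:

```python
def max_sum(seq):
    # dp[i][j]: best total from removing everything strictly between
    # indices i and j, given that seq[i] and seq[j] are still present.
    n = len(seq)
    dp = [[0] * n for _ in range(n)]
    for length in range(2, n):           # distance between the endpoints
        for i in range(n - length):
            j = i + length
            # k is the element of (i, j) removed last; at that point its
            # neighbors are seq[i] and seq[j].
            dp[i][j] = max(dp[i][k] + dp[k][j] + seq[i] + seq[k] + seq[j]
                           for k in range(i + 1, j))
    return dp[0][n - 1]
```

On the example above it returns 31, matching the walkthrough.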

Maximizing a function over a subset of data?

I seek an algorithm for maximizing an accuracy function over $N/2$ out of $N$ total sets of samples:

  • Function: F1-score (binary classification, neural networks)
  • Samples: targets (0 or 1), and real-valued predictions in $(0,1)$

The key problem is to find such a subset together with an optimal prediction threshold, $P$, i.e. the round-off value, which defaults to 0.5 but isn’t always optimal there. It is thus likely a dual optimization.

What is a compute-efficient, scalable algorithm for determining, or accurately approximating, such a subset and threshold?

Solutions considered:

  • Brute force: $N \choose N/2$ subsets, not scalable
  • Leave-one-out: find the best $P$ for all $N$, leave out the sample with the highest F1-score, repeat for the remaining $(N-1)$ samples, and so on. Problem: the best $P$ for subset $n$ may not be the best $P$ for subset $(n+1)$

Below are my Python scripts and toy data, for reference:

Toy data, N=10:

import numpy as np

targets = np.random.randint(0, 2, (10, 32))      # 0's and 1's array w/ shape (10, 32)
preds   = np.random.randn(10, 32)                # float array w/ shape (10, 32)
preds   = np.abs(preds) / np.max(np.abs(preds))  # scaled to (0, 1)

F1-Score Script (simplified):

def f1_score(targets, preds_probabilities, pred_threshold=0.5):
    preds = (np.asarray(preds_probabilities) > pred_threshold).astype(int)

    TP = []; FP = []; FN = []
    # flatten so 2D (set, sample) inputs are scored element-wise
    for (t, p) in zip(np.ravel(targets), np.ravel(preds)):
        TP.append(1 if (t == 1) and (p == 1) else 0)
        FP.append(1 if (t == 0) and (p == 1) else 0)
        FN.append(1 if (t == 1) and (p == 0) else 0)
    TP, FP, FN = np.sum(TP), np.sum(FP), np.sum(FN)

    precision = TP / (TP + FP) if (TP + FP) != 0 else 0
    recall    = TP / (TP + FN) if (TP + FN) != 0 else 0
    if precision + recall == 0:
        return 0
    return 2 * precision * recall / (precision + recall)

Prediction threshold search (brute force):

def get_best_predict_threshold(targets, preds, search_interval=0.01):
    th, best_th, best_acc = 0, 0, 0

    while 0 <= th < 1:
        acc = f1_score(targets, preds, th)
        if acc >= best_acc:
            best_th = round(th, 2)
            best_acc = acc
        th += search_interval

    return best_th
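On the threshold half of the problem: a fixed 0.01 grid can be replaced by sweeping only the observed prediction values, since the set of positive predictions (and hence the confusion matrix) can only change when the threshold crosses one of them. A sketch in plain NumPy (the function name is my own):

```python
import numpy as np

def best_threshold(targets, preds):
    # Only thresholds at observed prediction values can flip a sample from
    # 0 to 1, so sweeping the sorted unique predictions covers every
    # distinct confusion matrix.
    best_th, best_f1 = 0.5, -1.0
    for th in np.unique(preds):
        p = (preds > th).astype(int)
        tp = np.sum((targets == 1) & (p == 1))
        fp = np.sum((targets == 0) & (p == 1))
        fn = np.sum((targets == 1) & (p == 0))
        f1 = 2 * tp / (2 * tp + fp + fn) if tp else 0.0
        if f1 > best_f1:
            best_th, best_f1 = float(th), f1
    return best_th, best_f1
```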

Maximizing matrix average over a set of vectors

Given a symmetric n-by-n matrix A, find a column vector V with elements V(k) = exp(i*alpha(k)), where k = 1, 2, …, n, i is the imaginary unit, and the alpha(k) are unknown real numbers, which maximizes V′AV, where V′ is the Hermitian conjugate of V.

I am a physicist and don’t know whether there is a good algorithm for solving this problem. It arises in the master equation for density matrices.
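This is a quadratic maximization over unit-modulus (phase-only) vectors, a nonconvex problem, so the sketch below is only a heuristic, assuming A is real symmetric (or Hermitian). It uses coordinate ascent: with all other entries fixed, the V(k)-dependent part of V′AV is A(k,k) + 2·Re(conj(V(k))·c) with c = Σ_{j≠k} A(k,j)V(j), which is maximized by aligning the phase of V(k) with c, so each update cannot decrease the objective. The phases of A's leading eigenvector are often a good starting point in place of the random initialization.

```python
import numpy as np

def maximize_quadratic_phase(A, iters=200, seed=0):
    # Coordinate-ascent heuristic for max V'AV over V(k) = exp(i*alpha(k)),
    # assuming A is real symmetric (or Hermitian). Each sweep aligns every
    # V[k] with c = sum_{j != k} A[k, j] * V[j], which maximizes the
    # V[k]-dependent term 2*Re(conj(V[k]) * c) of the objective.
    rng = np.random.default_rng(seed)
    n = A.shape[0]
    V = np.exp(1j * rng.uniform(0.0, 2.0 * np.pi, n))
    for _ in range(iters):
        for k in range(n):
            c = A[k] @ V - A[k, k] * V[k]   # drop the diagonal term
            if abs(c) > 1e-12:
                V[k] = c / abs(c)
    return V
```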

Any advice will be appreciated.

Thanks in advance

How do the balls maximizing the maximal function depend on their centers?

Let $\mu$ be a finite Borel measure on $\mathbb R^N$ and let $f\in L^1(\mu)$ be a non-negative function. Let $M_\mu f$ denote the maximal function of $f$ relative to $\mu$, i.e. $(M_\mu f)(x)=0$ if $\mu(B(x,r))=0$ for some $r>0$, and $(M_\mu f)(x) = \sup_{0<r<\infty} \frac{1}{\mu(B(x,r))} \int_{B(x,r)}f \, d\mu$ otherwise. (Here $B(x,r)$ denotes the open ball of radius $r$ centered at $x$.)

Suppose that $a>0$ and $K\subset \mathbb R^N$ is a compact set such that $M_\mu f > a$ on $K$. Then for each $x\in K$ there exists $r_x>0$ such that $$\frac{1}{\mu(B(x,r_x))} \int_{B(x,r_x)}f \, d\mu > a. \tag{1}$$

Question. Is it possible to choose for each $x\in K$ a radius $r_x>0$ in such a way that (1) holds and the mapping $x\mapsto r_x$ is continuous, or at least Borel?

This question is inspired by another recent question about a Besicovitch-type covering theorem.