Does this character concept involving never taking a long rest and converting spell slots to sorcery points (aka coffeelock) violate RAW?

Does the following, admittedly very cheesy, character concept violate any RAW? Please cite rules or official rulings in your answer. (Apart from RAW, I expect my DM to disallow or limit the concept in the interest of balance; that is not part of my question.)

Elf. Multiclass: Sorcerer 2+ / Warlock 1+ / Bard 1

  • Never takes a long rest. Ever. See the question Must 5e elves take a long rest? Specifically, whether adventuring or not, she makes sure that every 8-hour block includes more than 2 hours of combat or other strenuous activity, so that no interpretation of the long rest rules would allow a long rest to be triggered automatically.
  • Converts warlock spell slots into sorcery points. See @JeremyECrawford’s tweet.
  • Converts sorcery points into sorcerer spell slots (or into multiclass Spellcasting spell slots, once also multiclassing into Bard) via Flexible Casting
  • Spell slots created from sorcery points disappear only upon finishing a long rest, as per Flexible Casting and a tweet from @JeremyECrawford; therefore, for a character who never takes a long rest, these created spell slots persist until used
  • Spell slots created from sorcery points are in addition to, and not restoration of the sorcerer’s spell slots which refresh on a long rest. This is not 100% clear from RAW or clarifications. But:
    (a) Flexible Casting uses the phrase “additional Spell Slots”;
    (b) the rule stating that created spell slots disappear on a long rest is superfluous if created slots could only replace expended ones; for the rule to have meaning, it must be possible to create spell slots that are not replacements;
    (c) Flexible Casting does not use the word “recover”, the word used for the wizard’s Arcane Recovery
  • Restores warlock spell slots on a short rest, and repeats the cycle above, converting warlock spell slots to sorcery points to sorcerer (or spellcasting) spell slots
  • During periods of downtime, takes as many short rests per day as permissible, to build up a stockpile of created sorcerer spell slots
  • Stockpiling requires using bonus actions out of combat, discussed elsewhere
  • Stockpiling requires having short rests on downtime days, discussed in a comment below
  • While adventuring, during combat, uses created spell slots to cast spells, and/or uses flexible casting to convert those spell slots back into sorcery points
  • While adventuring, after combat, will use created spell slots with Bard spells to restore hit points, since restoring hit points via a long rest is unavailable, and via Hit Dice is mostly unavailable

I’m pretty sure this is not RAI, but does it violate RAW in some way?

Problem plotting an expression involving the generalized hypergeometric function $_2F_2\left(\cdot,\cdot;\cdot,\cdot;\cdot\right)$

I’m trying to plot a graph for the following expectation

$$\mathbb{E}\left[ a\, \mathcal{Q}\!\left( \sqrt{b}\, \gamma \right) \right]=a\, 2^{-\frac{\kappa }{2}-1} b^{-\frac{\kappa }{2}} \theta ^{-\kappa } \left(\frac{\, _2F_2\left(\frac{\kappa }{2}+\frac{1}{2},\frac{\kappa }{2};\frac{1}{2},\frac{\kappa }{2}+1;\frac{1}{2 b \theta ^2}\right)}{\Gamma \left(\frac{\kappa }{2}+1\right)}-\frac{\kappa \, _2F_2\left(\frac{\kappa }{2}+\frac{1}{2},\frac{\kappa }{2}+1;\frac{3}{2},\frac{\kappa }{2}+\frac{3}{2};\frac{1}{2 b \theta ^2}\right)}{\sqrt{2} \sqrt{b}\, \theta\, \Gamma \left(\frac{\kappa +3}{2}\right)}\right)$$

where $a$ and $b$ are constant values, $\mathcal{Q}$ is the Gaussian Q-function, defined as $\mathcal{Q}(x) = \frac{1}{\sqrt{2 \pi}}\int_{x}^{\infty} e^{-u^2/2}\,du$, and $\gamma$ is a random variable with Gamma distribution, i.e. with density $f_{\gamma}(y) = \frac{1}{\Gamma(\kappa)\theta^{\kappa}}\, y^{\kappa-1} e^{-y/\theta}$, where $\kappa > 0$ and $\theta > 0$.

This expression was also obtained with Mathematica, so it seems to be correct; I get the same plotting issue in MATLAB.

Below are some examples in which I have checked the analytical results against the simulated ones.

When $ \kappa = 12.85$ , $ \theta = 0.533397$ , $ a=3$ and $ b = 1/5$ it returns the correct value $ 0.0218116$ .

When $ \kappa = 12.85$ , $ \theta = 0.475391$ , $ a=3$ and $ b = 1/5$ it returns the correct value $ 0.0408816$ .

When $ \kappa = 12.85$ , $ \theta = 0.423692$ , $ a=3$ and $ b = 1/5$ it returns the value $ -1.49831$ , which is negative. However, the correct result should be a value around $ 0.0585$ .

When $ \kappa = 12.85$ , $ \theta = 0.336551$ , $ a=3$ and $ b = 1/5$ it returns the value $ 630902$ . However, the correct result should be a value around $ 0.1277$ .

The issue therefore appears as $\theta$ decreases: for $\theta > 0.423692$ the analytical results match the simulated ones, and the problem only occurs when $\theta \leq 0.423692$.

I would like to know whether this is a numerical accuracy issue or whether I am missing something, and whether there is a way to correctly plot a graph that matches the simulation. Perhaps the expectation can be derived in terms of other functions, or simplified in a way that gives more accurate results.
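For reference, here is a minimal sketch of evaluating the closed form directly in Mathematica. The helper name expec and the use of exact input (Rationalize) with N[..., 20] are my own workaround attempt, on the assumption that the two $_2F_2$ terms nearly cancel for small $\theta$:

(* Direct evaluation of the closed-form expectation. The two 2F2 terms can
   nearly cancel for small theta, so exact input plus N[..., digits] is used
   here to keep significant digits (a workaround sketch, not a guaranteed fix). *)
expec[a_, b_, kappa_, theta_] :=
  a 2^(-kappa/2 - 1) b^(-kappa/2) theta^(-kappa)*
   (HypergeometricPFQ[{kappa/2 + 1/2, kappa/2}, {1/2, kappa/2 + 1},
       1/(2 b theta^2)]/Gamma[kappa/2 + 1] -
     kappa HypergeometricPFQ[{kappa/2 + 1/2, kappa/2 + 1}, {3/2, kappa/2 + 3/2},
        1/(2 b theta^2)]/(Sqrt[2] Sqrt[b] theta Gamma[(kappa + 3)/2]))

(* Example from the post: kappa = 12.85, theta = 0.423692, a = 3, b = 1/5;
   the simulated value is about 0.0585. Rationalize makes the inputs exact. *)
N[expec[3, 1/5, Rationalize[12.85, 0], Rationalize[0.423692, 0]], 20]

Whether raising the working precision like this is enough for the smaller values of $\theta$, or whether the expression has to be rewritten, is part of what I am asking.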

Solving an integral involving the absolute value of a vector

I am trying to integrate the following in Mathematica:
$$\int_0^r \frac{\exp\left(-k_d\left(|\vec{r}-\vec{r}_j|+|\vec{r}-\vec{r}_i|\right)\right)}{|\vec{r}-\vec{r}_j|\,|\vec{r}-\vec{r}_i|}\, r^2\, dr.$$
I have first defined the following functions:
$ \vec p(x,y,z)= (x-x_j)\hat i + (y-y_j)\hat j+(z-z_j)\hat k$
Similarly,
$ \vec q(x,y,z)= (x-x_i)\hat i + (y-y_i)\hat j+(z-z_i)\hat k$ .
And,
$ \vec r(x,y,z)=x\hat i + y\hat j+z\hat k $
Then I clicked the integration template in the Classroom Assistant palette and typed the integrand into the expr placeholder. While typing it, I used Abs to take the modulus of the functions $\vec p(x,y,z)$ and $\vec q(x,y,z)$. I entered the limits as $0$ to Abs(r) and the variable (var) as $r$ in the integration template. But when I press Shift + Enter, no output is shown. Can anyone tell me where I have made a mistake?
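For comparison, here is a minimal sketch of entering the same kind of integrand as ordinary input instead of through the palette. The points, the value of $k_d$, the upper limit, and the choice to let $\vec r$ run along the x-axis are placeholders of my own; the main point is that Norm, not Abs, gives the length of a vector:

(* Placeholder values for the fixed points and the constant (not from the post). *)
rj = {1., 1., 0.}; ri = {0., 1., 1.}; kd = 2.;

(* Abs applied to a list acts elementwise; Norm gives the Euclidean length. *)
p[x_, y_, z_] := {x, y, z} - rj
q[x_, y_, z_] := {x, y, z} - ri

(* Radial integrand, with the vector r taken along the x-axis for illustration. *)
integrand[r_] := Exp[-kd (Norm[p[r, 0, 0]] + Norm[q[r, 0, 0]])]/
    (Norm[p[r, 0, 0]] Norm[q[r, 0, 0]]) r^2

NIntegrate[integrand[r], {r, 0, 5}]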

Expression involving square roots not simplifying

I have a relatively simple expression here that is not simplifying:

$$\frac{2 s_0 \left(\sqrt{\gamma ^5 s_0}+\sqrt{\gamma ^9 s_0}\right)+\sqrt{\gamma ^3 s_0}+2 \sqrt{\gamma ^7 s_0}+\sqrt{\gamma ^{11} s_0}+\sqrt{\gamma ^7 s_0^5}}{\gamma \left(\gamma ^2+\gamma  s_0+1\right)^2}$$

$Assumptions = {(s0 | \[Gamma]) \[Element] Reals, \[Gamma] > 0, s0 > 0};

(Sqrt[s0 \[Gamma]^3] + 2 Sqrt[s0 \[Gamma]^7] + Sqrt[s0^5 \[Gamma]^7] +
    Sqrt[s0 \[Gamma]^11] +
    2 s0 (Sqrt[s0 \[Gamma]^5] + Sqrt[s0 \[Gamma]^9]))/(\[Gamma] (1 +
      s0 \[Gamma] + \[Gamma]^2)^2) // Simplify

(Sqrt[s0 \[Gamma]^3] + 2 Sqrt[s0 \[Gamma]^7] + Sqrt[s0^5 \[Gamma]^7] +
    Sqrt[s0 \[Gamma]^11] +
    2 s0 (Sqrt[s0 \[Gamma]^5] + Sqrt[s0 \[Gamma]^9]))/(\[Gamma] (1 +
      s0 \[Gamma] + \[Gamma]^2)^2) == Sqrt[s0 \[Gamma]] // Simplify

The output is:

(Sqrt[s0 \[Gamma]^3] + 2 Sqrt[s0 \[Gamma]^7] + Sqrt[s0^5 \[Gamma]^7] +
   Sqrt[s0 \[Gamma]^11] +
   2 s0 (Sqrt[s0 \[Gamma]^5] + Sqrt[s0 \[Gamma]^9]))/(\[Gamma] (1 +
    s0 \[Gamma] + \[Gamma]^2)^2)

True

Why is Mathematica not simplifying this to the much simpler form $\sqrt{s_0 \gamma}$? I think my assumptions should be enough, and I can do the simplification by hand.
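For what it is worth, a workaround sketch of my own (not necessarily an explanation of the behaviour): since $s_0 > 0$ and $\gamma > 0$, PowerExpand can be applied first to split the radicals, after which Simplify should be able to cancel the common polynomial factor:

expr = (Sqrt[s0 \[Gamma]^3] + 2 Sqrt[s0 \[Gamma]^7] + Sqrt[s0^5 \[Gamma]^7] +
     Sqrt[s0 \[Gamma]^11] +
     2 s0 (Sqrt[s0 \[Gamma]^5] + Sqrt[s0 \[Gamma]^9]))/(\[Gamma] (1 +
      s0 \[Gamma] + \[Gamma]^2)^2);

(* PowerExpand assumes positive variables, which matches the assumptions above;
   the expectation is that the (1 + s0 gamma + gamma^2)^2 factor then cancels. *)
Simplify[PowerExpand[expr], \[Gamma] > 0 && s0 > 0]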

Clarification of the proof involving the regularity condition in the Master Theorem

I was going through the text Introduction to Algorithms by Cormen et al., where I came across the following statement in the proof of the third case of the Master Theorem.

(Master theorem) Let $a \geqslant 1$ and $b > 1$ be constants, let $f(n)$ be a function, and let $T(n)$ be defined on the nonnegative integers by the recurrence (the recursion divides a problem of size $n$ into $a$ subproblems of size $n/b$ each and takes $f(n)$ for the divide and combine steps)

$$T(n) = aT(n/b) + f(n),$$

where we interpret $n/b$ to mean either $\lceil n/b \rceil$ or $\lfloor n/b \rfloor$. Then $T(n)$ has the following asymptotic bounds:

  1. If $f(n) = O(n^{\log_b a - \epsilon})$ for some constant $\epsilon > 0$, then $T(n) = \Theta(n^{\log_b a})$.

  2. If $f(n) = \Theta(n^{\log_b a})$, then $T(n) = \Theta(n^{\log_b a} \lg n)$.

  3. If $f(n) = \Omega(n^{\log_b a + \epsilon})$ for some constant $\epsilon > 0$, and if $a f(n/b) \leqslant c f(n)$ for some constant $c < 1$ and all sufficiently large $n$, then $T(n) = \Theta(f(n))$.

Now, in the proof of the Master Theorem with $n$ an exact power of $b$, the expression for $T(n)$ reduces to:

$$T(n)=\Theta(n^{\log_b a})+\sum_{j=0}^{\log_b n - 1} a^j f(n/b^j)$$

Let us define

$$g(n)=\sum_{j=0}^{\log_b n - 1} a^j f(n/b^j)$$

Then, for the proof of the third case of the Master Theorem, the authors show that

if $a f(n/b) \leqslant c f(n)$ for some constant $c < 1$ and for all $n \geqslant b$, then $g(n) = \Theta(f(n))$.

They say that since $a f(n/b) \leqslant c f(n)$ implies $f(n/b) \leqslant (c/a) f(n)$, iterating $j$ times yields $f(n/b^j) \leqslant (c/a)^j f(n)$.

I could not quite get the mathematics behind iterating $j$ times.
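My best attempt at unpacking the repeated substitution (I am not sure every step is justified, which is exactly my doubt) is

$$f(n/b^2) \leqslant (c/a)\, f(n/b) \leqslant (c/a)^2 f(n), \qquad f(n/b^3) \leqslant (c/a)\, f(n/b^2) \leqslant (c/a)^3 f(n), \qquad \ldots$$

where the $j$-th step applies the regularity condition to the argument $n/b^{j-1}$, which seems to require $n/b^{j-1} \geqslant b$.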

Moreover, I could not quite get the logic behind assuming $n \geqslant b$ as the meaning of “sufficiently large $n$” (which is how the third case of the Master Theorem states it).

Moreover, in the analogous proof of the third case of the general Master Theorem (not assuming that $n$ is an exact power of $b$), the book instead assumes $n \geqslant b + b/(b-1)$ to capture the situation of sufficiently large $n$.

I do not quite understand the role of this specific value, or why it is taken to mean sufficiently large $n$.

(I did not give the details of the second situation, as I expect it to be similar to the first.)

Complicated problem involving mathematical induction


We are collecting donations to buy a new chair. We receive $m$ donations in total, $d_1, d_2, \ldots, d_m \in \mathbb{N}$ ($m \geq 1$, and every donation $d_i$ is whole-numbered, i.e. $d_i \in \mathbb{N}$). A chair costs $c$ dollars, where $c \in [m]$. (Notation: $[m] := \{1, 2, \ldots, m\}$.)

Prove, using mathematical induction over $m$, that there exist two numbers $k, l$ with $k \leq l$ such that the sum of donations $\sum_{s=k}^{l} d_s$ is exactly sufficient to purchase $x$ chairs for some $x \in \mathbb{N}$, with no money left over.
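To make the claim concrete with a small example of my own: for $m = 4$, $c = 3$ and donations $1, 1, 4, 2$, the consecutive block $d_1 + d_2 + d_3 = 6 = 2c$ pays for exactly $x = 2$ chairs with nothing left over.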

I’m unable to find the correct approach to solve this. I’m sure the Pigeonhole Principle will be useful here, but I don’t know how to correctly apply it within the induction proof. Can someone point me in the right direction?

Decidability of equality, and soundness of expressions involving elementary arithmetic and exponentials

Let’s have expressions that are composed of elements of $\mathbb N$, a limited set of binary operations $\{+, \times, -, /\}$, and the functions $\{\exp, \ln\}$. The expressions are always well-formed and form finite trees, with numbers as leaf nodes and operators as internal nodes; binary operations have two child sub-expressions and the functions have one. The value of such an expression is interpreted to mean some number in $\mathbb R$.

There are two limitations on the structure of the expressions: the divisor (the right-hand sub-expression) of $ /$ can’t be 0 and the argument of $ \ln$ must be positive.

I have two questions about this kind of expression:

  • Is it possible to ensure “soundness” of such an expression, in the sense that the two limitations can be checked in finite time?

  • Is an equality check between two such expressions decidable?

These questions seem to be connected in the sense that if you’re able to check equality of the relevant sub-expression to zero, you can decide whether a division parent expression is sound, and it doesn’t seem hard to check whether a sub-expression of $ \ln$ is positive or negative if it’s known not to be zero.
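For instance, in a toy example of my own, the expression $1 / (\ln(\exp(2)) - 2)$ is unsound because its divisor $\ln(\exp(2)) - 2$ equals $0$, and detecting that amounts to deciding equality of a sub-expression to zero.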

I know that equality in $ \mathbb R$ is generally not decidable, whereas equality in algebraic numbers is. However, I wonder how inclusion of {$ \exp, \ln$ } changes the result.

(A side note: I posted an earlier version of this question with similar intent here, but it turned out to have too little thought behind it, and it had unnecessary complications, unrelated to the meat of the question, with negative logarithms.)

Bit manipulation involving the AND operator

Given a special function

F(X, Y, Z) = (X ∧ Z) ⋅ (Y ∧ Z), where ∧ is the bitwise AND operator, X, Y, Z are non-negative integers, and ‘⋅’ represents the ordinary product.

We want to maximize the function F(X, Y, Z) for given X and Y by choosing an appropriate Z. Additionally, we are given limits L and R for Z.

To summarize, we need to find a non-negative integer Z with L ≤ Z ≤ R such that F(X, Y, Z) = max{F(X, Y, k) : L ≤ k ≤ R}. If there is more than one such value of Z, we should find the smallest one in the range [L, R].

Note: X, Y, L and R are chosen in such a way that max{F(X, Y, k) : L ≤ k ≤ R} never exceeds 2^62.

Example: For X, Y, L, R = 7, 12, 4, 17 respectively, answer = 15

I understand the problem, but I need help finding an efficient approach to it.
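For reference, a brute-force check in Mathematica (my own sketch): it reproduces the example above, but it is far too slow when R - L is large, which is why I am looking for a better approach.

f[x_, y_, z_] := BitAnd[x, z]*BitAnd[y, z]

(* MaximalBy keeps the maximizers in the ascending order produced by Range,
   so First picks the smallest Z attaining the maximum of F over [l, r]. *)
bestZ[x_, y_, l_, r_] := First[MaximalBy[Range[l, r], f[x, y, #] &]]

bestZ[7, 12, 4, 17]   (* gives 15, matching the example *)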

Decidability of equality of expressions involving exponentiation

Let’s have expressions that are finite-sized trees, with elements of $\mathbb N$ as leaf nodes and the operations $\{+, \times, -, /\}$ and exponentiation, with their usual semantics, as the internal nodes, with the special note that we allow arbitrary expressions as the right-hand side (the exponent) of the exponentiation operation. Is equality between such expressions (or, equivalently, comparison to zero) decidable? Is the closure under these operations a subset of the algebraic numbers or not?

This question is similar to Decidability of Equality of Radical Expressions, but with the difference that here the exponentiation operator is symmetric in the type of the base and the exponent. That means we could have powers such as $3^{\sqrt 2}$. It isn’t clear to me whether allowing exponentiation with irrational exponents retains the algebraic closure.

This question is also similar to Computability of equality to zero for a simple language, but the answers to that question focused on the transcendental properties of combinations of $\pi$ and $e$, which I consider out of scope here.