Most effective way of improving survivability for an Ancestral Guardian Barbarian?

The Path of the Ancestral Guardian Barbarian (Xanathar’s Guide to Everything, pp. 9-10) is an extremely powerful barbarian subclass. It pretty much makes your allies invulnerable against an enemy boss.

But it does nothing for your own health. It will greatly incentivize enemies to take you down first to be rid of your annoying Guardian benefits.

Assuming I’m currently level 3 as an Ancestral Guardian, and leveling soon to 4, my stats are average (point buy), and I have no healing from allies, what is the best way to maximize my survivability for fights to come?

  • I’m willing to look into multiclassing, if there’s a valid strategy there.
  • I don’t expect anyone else to grab any healing abilities.
  • I’m not interested in specific magical items (potions and other common/uncommon magic items are fine).
  • I expect the campaign to last until about level 10.
  • By character level 10, I would like to have at least 6 levels of Barbarian.
  • Emphasis on surviving against bosses, if possible.
  • I expect about 2 combat encounters per day.

If it helps refine your answer, my party consists of:

  • A Champion Fighter (who is very cowardly and selfish, doesn’t tank much)
  • A Fey Warlock (fairly standard, supportive player)
  • An Evocation Wizard (who lives to blow stuff up)
  • A Ranger/Rogue (who uses stealth and long range)

Improving mesh and NDSolve solution convergence

I have developed the code below to solve two PDEs: first mu[x,y] is solved for, then the result for mu is used to solve for phi[x,y]. The code works and converges on a solution as is; however, I would like to decrease the size of a, b, and d even further. To accurately represent the physical process I am trying to simulate, a, b, and d would need to be roughly 100-1000x smaller. When I make them smaller, I don’t believe the solution has actually converged, because the values of phi along the right boundary change significantly with the mesh size. For example, with smaller values the code below produces phi = -0.764 at the midpoint between y2 and y3 along the right boundary; changing size1 to 10^-17 and size2 to 10^-15 changes that value to -0.763, and changing size2 to 10^-16 changes it to -0.860. However, I cannot make the mesh size any smaller without Mathematica crashing.

Are there any better ways to create the mesh that would be less computationally taxing and allow it to be more refined in the regions of interest? Or are there any ways to make the code in general less computationally expensive so that I can further refine the mesh?

ClearAll["Global`*"]
Needs["NDSolve`FEM`"]

(* 1) Define constants *)
e = 1.60217662*10^-19; F = 96485; kb = 1.381*10^-23;
sigi = 18; sigini = 0; sigeni = 2*10^6;
T = 1000; n = -0.02; c = 1;
pH2 = 0.2; pH2O = 1 - pH2; pO2 = 1.52*^-19;
l = 10*10^-6; a = 100*10^-7; b = 50*10^-7; d = 300*10^-7;
y1 = 0.01; y2 = 0.5*y1; y3 = y2 + a; y4 = y3 + d; y5 = y4 + b;
mu1 = 0; mu2 = -5.98392*^-19; phi1 = 0;

(* 2) Create mesh *)
m = 0.1*l;
size1 = 10^-16; size2 = 10^-15; size3 = 10^-7;
mrf = With[{rmf =
     RegionMember[
      Region@RegionUnion[Disk[{l, y2}, m], Disk[{l, y3}, m],
        Disk[{l, y4}, m], Disk[{l, y5}, m]]]},
   Function[{vertices, area},
    Block[{x, y}, {x, y} = Mean[vertices];
     Which[
      rmf[{x, y}], area > size1,
      (0 <= x <= l && y2 - l <= y <= y2 + l), area > size2,
      (0 <= x <= l && y3 - l <= y <= y3 + l), area > size2,
      (0 <= x <= l && y4 - l <= y <= y4 + l), area > size2,
      (0 <= x <= l && y5 - l <= y <= y5 + l), area > size2,
      True, area > size3]]]];
mesh = DiscretizeRegion[Rectangle[{0, 0}, {l, y1}],
   MeshRefinementFunction -> mrf];

(* 3) Solve for mu *)
bcmu = {DirichletCondition[mu[x, y] == mu1, (x == 0 && 0 < y < y1)],
   DirichletCondition[
    mu[x, y] == mu2, (x == l && y2 <= y <= y3) || (x == l && y4 <= y <= y5)]};
solmu = NDSolve[{Laplacian[mu[x, y], {x, y}] ==
     0 + NeumannValue[0,
       y == 0 || y == y1 || (x == l && 0 <= y < y2) ||
        (x == l && y3 < y < y4) || (x == l && y5 < y < y1)], bcmu},
   mu, {x, y} \[Element] mesh, WorkingPrecision -> 50];

(* 4) Solve for electronic conductivity everywhere *)
pO2data = Exp[(mu[x, y] /. solmu)/kb/T];
sige0 = 2.77*10^-7;
sigedata = Piecewise[{
    {sige0*pO2data^(-1/4), 0 <= x <= l - m},
    {sige0*pO2data^(-1/4), (l - m < x <= l && 0 <= y < y2)},
    {(sigeni - sige0*(pO2data /. x -> l - m)^(-1/4))/m*(x - (l - m)) +
       sige0*(pO2data /. x -> l - m)^(-1/4), (l - m < x <= l && y2 <= y <= y3)},
    {sige0*pO2data^(-1/4), (l - m < x <= l && y3 < y < y4)},
    {(sigeni - sige0*(pO2data /. x -> l - m)^(-1/4))/m*(x - (l - m)) +
       sige0*(pO2data /. x -> l - m)^(-1/4), (l - m < x <= l && y4 <= y <= y5)},
    {sige0*pO2data^(-1/4), (l - m < x <= l && y5 < y <= y1)}}];

(* 5) Solve for phi *)
Irxn = -(2*F)*(c*pO2^n);
A = (Irxn - sigi/(4*e)*(D[mu[x, y] /. solmu, x] /. x -> l))/(-sigi);
B = sigi/(4*e)*(D[mu[x, y] /. solmu, x] /.
       x -> l)/(sigi + sigedata /. x -> l - m);
bcphi = DirichletCondition[phi[x, y] == phi1, (x == 0 && 0 < y < y1)];
solphi = NDSolve[{Laplacian[phi[x, y], {x, y}] ==
     0 + NeumannValue[0,
        y == 0 || y == y1 || (x == l && 0 <= y < y2) ||
         (x == l && y3 < y < y4) || (x == l && y5 < y < y1)] +
      NeumannValue[-A[[1]], (x == l && y2 <= y <= y3)] +
      NeumannValue[-B[[1]], (x == l && y4 <= y <= y5)], bcphi},
   phi, {x, y} \[Element] mesh, WorkingPrecision -> 50];

(* 6) Print values to check for convergence *)
P[x_, y_] := phi[x, y] /. solphi;
P[l, (y3 - y2)/2 + y2]
P[l, (y5 - y4)/2 + y4]

Improving QuickSort Algorithm with pivot as first element

I was trying to improve quicksort, since it is one of the most effective and well-known sorting algorithms. I came across “Quicksort algorithm with an early exit for sorted subfiles” (Roger L. Wainwright, University of Tulsa, 1987); check it out, it’s interesting. Do you know of any other approaches or research in this direction? I think reducing memory usage would help, by reducing the number of arrays and working in place on a single array, but I’m not sure how to do that.

Doing bubble sort or selection sort for large arrays isn’t helpful, and using them to check whether a big array is sorted would only increase the complexity. P.S.: I am just learning and studying, not doing research.

Quicksort algorithm with an early exit for sorted subfiles
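
To make the in-place idea concrete, here is a rough Python sketch of what I have in mind: quicksort on a single array with the first element as pivot, plus an early exit when a subarray is already sorted. This is my loose reading of Wainwright’s idea, not his exact algorithm, and the helper names and partition scheme are just my own choices:

def is_sorted(a, lo, hi):
    # Early-exit check: is a[lo..hi] already in non-decreasing order?
    return all(a[i] <= a[i + 1] for i in range(lo, hi))

def quicksort(a, lo=0, hi=None):
    # In-place quicksort using the first element of each subarray as the pivot.
    if hi is None:
        hi = len(a) - 1
    while lo < hi:
        if is_sorted(a, lo, hi):        # skip subarrays that are already sorted
            return
        pivot = a[lo]
        i, j = lo + 1, hi
        while True:                     # partition around the pivot
            while i <= j and a[i] <= pivot:
                i += 1
            while i <= j and a[j] > pivot:
                j -= 1
            if i > j:
                break
            a[i], a[j] = a[j], a[i]
        a[lo], a[j] = a[j], a[lo]       # move pivot into its final position
        # Recurse into the smaller half and loop on the larger one,
        # which keeps extra memory down to O(log n) stack frames.
        if j - lo < hi - j:
            quicksort(a, lo, j - 1)
            lo = j + 1
        else:
            quicksort(a, j + 1, hi)
            hi = j - 1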

Improving binary recursion calculation

I am trying to write a program in Python for the infamous egg drop puzzle using recursion. In case you do not know the problem statement, here it is:

https://code.google.com/codejam/contest/dashboard?c=32003#s=p2

One solution to this puzzle would be to use a recursive function that returns the maximum number of floors of a building for which $Solvable(F, D, B)$ is true, represented as the function $f$:

$f(D, B) = 1 + f(D - 1, B - 1) + f(D - 1, B)$

… where $D$ is the number of drops left, and $B$ is the number of breaks allowed. This works because if the egg breaks, the answer lies among the $f(D - 1, B - 1)$ floors below, and if it survives, among the $f(D - 1, B)$ floors above.

As you can see, this results in a binary recursion. Combined with the base cases $f(1, B) = 1$ for all values of $B$ and $f(D, 1) = D$ for all values of $D$, this problem should be straightforward for a language that handles recursion well. However, Python does not handle deep recursion as well as some other languages.

As such, I would like to know whether the maximum number of floors returned by the function $f$ can be determined in Python for values where $1 \le D, B \le 200,000,000$.

Here is a list of the techniques I have tried so far, to little avail:
1. Memoisation (caching values of $f(D, B)$)
2. Since $f(d, b)$ for $d \le b$ is equal to $f(d, d)$ (holding $d$ constant), we can reduce the number of cached $(d, b)$ pairs
3. When $d = b$, $f(d, b) = 2^d - 1$, which removes the need for the binary recursion when $d = b$
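
For reference, here is a minimal Python sketch combining the three techniques above. The names are mine and it is only a sketch: for values anywhere near the upper end of the range it still blows up in recursion depth and cache size, which is exactly the problem.

from functools import lru_cache
import sys

sys.setrecursionlimit(100_000)  # plain recursion still limits how deep this can go

@lru_cache(maxsize=None)        # technique 1: memoisation
def f(d, b):
    # Maximum number of floors solvable with d drops and b allowed breaks.
    if b >= d:
        return (1 << d) - 1     # techniques 2 and 3: f(d, b) = f(d, d) = 2^d - 1 for b >= d
    if d == 1:
        return 1                # base case f(1, B) = 1
    if b == 1:
        return d                # base case f(D, 1) = D
    return 1 + f(d - 1, b - 1) + f(d - 1, b)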

Improving time complexity from O(log n/loglog n) to O((log ((nloglog n)/log n))/loglog ((nloglog n)/log n))

Suppose I have an algorithm whose running time is $O(f(n))$, where $f(n) = O\left(\frac{\log n}{\log\log n}\right)$.

And suppose I can change this running time in $O(1)$ steps into $O\left(f\left(\frac{n}{f(n)}\right)\right)$, i.e. I can get an algorithm whose running time is $O(g(n)) = O\left(\frac{\log\frac{n}{\frac{\log n}{\log\log n}}} {\log\log\frac{n}{\frac{\log n}{\log\log n}}}\right) = O\left(\frac{\log\frac{n\log\log n}{\log n}} {\log\log\frac{n\log\log n}{\log n}}\right)$.

I’m pretty sure that $g(n) < f(n)$ for big enough $n$ (based on Wolfram Alpha), but I wasn’t able to prove it.

My questions are:

  1. Is $g(n) < f(n)$ in fact true (starting from some $n$)?

  2. Is $g(n)$ asymptotically better than $f(n)$, i.e. is $g(n) = o(f(n))$?

  3. Assuming it is asymptotically better, I can apply this step again and further improve the running time of the algorithm. That is, with one more step I can make my algorithm run in time $O\left(f\left(\frac{n}{f\left(\frac{n}{f(n)}\right)}\right)\right)$, and I can repeat this process as many times as I want. How many times should the process be repeated to get the best asymptotic running time, and what will that running time be? Obviously, repeating it $O(f(n))$ times will already cost $O(f(n))$ just for the repetitions themselves and will not improve the overall complexity of the algorithm.

Improving bad spells: witch bolt

Witch bolt is situational, mediocre, or borderline useless, depending on who you ask. In play, I have seen it consistently chosen by new players (or players inexperienced with casters), who are frequently disappointed with the spell’s performance.

In this question, I will look at what makes the spell unique, how it falls short, and at my attempt to bring it in line with other spells. First, the original:

Witch Bolt

1st-level evocation

Casting Time: 1 action
Range: 30 feet
Components: V, S, M (a twig from a tree that has been struck by lightning)
Duration: Concentration, up to 1 minute

A beam of crackling, blue energy lances out toward a creature within range, forming a sustained arc of lightning between you and the target. Make a ranged spell attack against that creature. On a hit, the target takes 1d12 lightning damage, and on each of your turns for the duration, you can use your action to deal 1d12 lightning damage to the target automatically. The spell ends if you use your action to do anything else. The spell also ends if the target is ever outside the spell’s range or if it has total cover from you.

At Higher Levels. When you cast this spell using a spell slot of 2nd level or higher, the initial damage increases by 1d12 for each level above 1st.

Notable problems

  • Witch bolt’s damage starts bad and scales worse. This answer covers the math nicely. Its damage comes up short in every practical situation. To make matters worse, only its initial damage scales. A 9th-level witch bolt (which hurts more to type than to be targeted by) does the same damage on subsequent rounds as a 1st-level witch bolt.
  • Witch bolt only works against one creature. Hex and hunter’s mark last through the entire fight, if not through multiple fights. You can transfer them from one target to the next. If your witch bolt target dies, the spell is done. That’s not the only way the spell could end, because…
  • Witch bolt ends if your target takes a leisurely stroll out of range. Or behind a wall. Or a window. Its range is 30 feet. Many creatures don’t even need to try very hard to escape. It also ends if you spend your action doing anything else.

Unique features

  • Witch bolt is the only 1st-level concentration spell that deals damage directly. Hex, hail of thorns, and hunter’s mark all require a separate attack to actually deal their damage.
  • Witch bolt is also the only spell in which one successful attack roll causes damage over more than two rounds. Booming blade and Melf’s acid arrow have lingering damage, but do not last as long.

Goals

  1. Keep what makes the spell unique. Removing concentration or automatic damage may make it easier to improve, but then it would no longer feel like witch bolt.
  2. Bring its damage in line, without making it overpowered. Witch bolt’s automatic damage on subsequent rounds presents a unique balancing challenge. Done properly, a damage-focused caster should seriously consider (but not always select) an improved witch bolt, particularly in Tier 1 and Tier 2.
  3. Reduce the spell’s “noob trap”-ness. Improving its damage will help, but the finicky “stay within 30 feet, maintain line-of-effect, and don’t do anything else” leads to a lot of new player “gotcha” moments. It reads like Palpatine blasting Luke in Return of the Jedi. It plays like scuffing your wool socks on the carpet and chasing your brother around the house. It even includes falling on the hardwood floors (missing your attack roll).

Once more, with usefulness

With those goals in mind, here is my improved version of the spell:

Witch Bolt (improved)

1st-level evocation

Casting Time: 1 action
Range: Self
Components: V, S, M (a twig from a tree that has been struck by lightning)
Duration: Concentration, up to 1 minute

For the spell’s duration, you are surrounded by crackling, blue energy. Make a ranged spell attack against one creature of your choice within 30 feet of you. On a hit, the target takes 1d12 lightning damage.

On each of your turns until the spell ends, you can use your action to target the same creature or a different one. If you target a creature that you have already hit with this casting of witch bolt, you may cause the target to take 1d12 lightning damage automatically, without making an attack roll.

At Higher Levels. When you cast this spell using a spell slot of 2nd level or higher, the damage dealt on a hit increases by 1d12 for each level above 1st. Additionally, the automatic damage dealt increases by 1d12 for every two slot levels above 1st.

This new witch bolt’s targeting was inspired by eyebite. Both have a range of Self and allow you to spend an action each turn to choose a new or existing target. The spell no longer stops when a creature dies or leaves range. The caster merely needs to move back into range to continue using it.

Additionally, the automatic damage is increased every other spell level (like spiritual weapon). I also considered giving the automatic damage full scaling, but that may make it too strong when combined with the other improvements.

Does increasing subsequent turn damage and improving targeting meet my goals for witch bolt? Are there any feats or class features that throw its improved damage and targeting out of line with other spells? Has it become less of a noob trap?

Improving Lie Detection and Credibility Assessment Rules

Many systems have two or more skills/traits/other numeric values that can be pitted against each other in situations where side A tries to assess side B’s credibility, where side B may or may not be lying. Among many systems, these skills/traits/values may carry such names as Empathy/Kinesics/Body Language/Detect Lies/etc. and Subterfuge/Acting/Deception/etc. respectively.

Most of the RP-immersion-oriented/associative/character-stance systems I’ve seen use those two values in an opposed roll of some sort. Usually, if A wins, the referee tells A’s player whether B appears to be lying or not. If B wins, no such information is given. For the purposes of the question, how the ‘win’ is determined is of little concern: some systems count the number of successes scored, some compare margins of success and failure, some have other methods. The point is that at the end of a roll-off, one of the participating characters is deemed the winner. (Also, for the sake of simplicity, let’s not consider ties and critical victories/losses/successes/failures.)

This works OK even with open rolls during some sort of hostile negotiation, where B is already assumed to be interested in concealing some information, and it’s more a matter of where B tries to mislead A.

However, the above framework breaks down if B is telling the truth and wants to convince A, since in that case suddenly B is interested in having a low trait (or foregoing the roll entirely, if permitted), thus allowing A’s lie-detection ability to inform A of the truthfulness involved.

Not only does this produce perverse incentives, but if foregoing a roll is permitted (including by deliberately failing, making A the automatic or near-guaranteed winner), it also results in meta hints: a target that doesn’t resist lie detection is immediately more trustworthy, while one that does is immediately suspicious to the player, even if the character doesn’t know the difference. These factors mean that the mechanic is hostile to attempts to build/play an honest-looking good liar.

I’m looking for an alternative approach to using such skills that can be either used when making a system from scratch, or for houseruling the procedure for making such skills (or similar traits) in systems that use them. These are the improvements I’m seeking and the pitfalls I’m trying to avoid:

  • Minimise perverse incentives (essential), even if one cannot actually follow them after character creation.
  • Minimise possibilities and temptations for metagame ways of figuring out whether a character is lying (essential).
  • Avoid increasing the number of secret rolls required (if possible). In general, making B’s roll secret is more acceptable than making A’s roll secret, but keep in mind that in the default interpretation above, keeping B’s roll secret by itself doesn’t solve the prior two issues.
  • Avoid excessive complexity (if possible), such as having too many rolls for obfuscation purposes.

Does a design pattern exist for resolving lie detection roll-offs in a way that addresses the above concerns?

New developer here, tips/suggestions for improving UI/UX on personal website using ReactJS [on hold]

I am an amateur web developer, and am teaching myself React. Currently, I have been building a personal website hosted on GitHub Pages:

https://roy-05.github.io/website/#/

Now, my primary goal has been to make the page functional, and as far as I understand, it is functional. But the page does not really “look” or “feel” good; that is, I wouldn’t want to spend time on this site at all. I’m now working on making it responsive and improving the UI/UX, but am having a hard time there. As someone who has never made a website before, I cannot come up with ideas that will make the website look appealing/aesthetic. So I was hoping for help with the following:

  1. I want to complete the mobile design first, so I am looking at the site in mobile view (iPhone 6/7/8) in DevTools. What suggestions (font, style, colors, item positioning, etc.) do you have to make the site look more appealing to a user?

As this is a personal website, I think the essential factor is that it should be “eye-catching”, but I don’t understand how to make it so. Are there any online resources/books I can study to get a better understanding of aesthetic web design? I appreciate your help in developing (no pun intended :P) a budding programmer, and please do not hesitate to criticize and highlight the site’s shortcomings.



Improving on Monte-Carlo

Can I improve on a Monte-Carlo search for the problem described below?

(Image: example network of segments with n = 3 agents placed.)

So I have a graph/network consisting of segments a1, a2, …, b1, b2, …, and c1, c2, …

For each of the underlying segments there is some weighting, e.g. a1 = 2, b3 = 0, c3 = 4.

In addition, I have a matrix of the distances from each segment to every other segment in the network, e.g.:

| from/to | a1  | a2  | a3  | b1  | b2  | b3  | c1  | c2  | c3  |
|---------|-----|-----|-----|-----|-----|-----|-----|-----|-----|
| a1      | 0   | 4   | 6   | 84  | 82  | 80  | 150 | 148 | 146 |
| a2      | ... | 0   | ... | ... | ... | ... | ... | ... | ... |
| a3      | ... | ... | 0   | ... | ... | ... | ... | ... | ... |
| b1      | ... | ... | ... | 0   | ... | ... | ... | ... | ... |
| b2      | ... | ... | ... | ... | 0   | ... | ... | ... | ... |
| b3      | ... | ... | ... | ... | ... | 0   | ... | ... | ... |
| c1      | ... | ... | ... | ... | ... | ... | 0   | ... | ... |
| c2      | ... | ... | ... | ... | ... | ... | ... | 0   | ... |
| c3      | ... | ... | ... | ... | ... | ... | ... | ... | 0   |
| ...     | ... | ... | ... | ... | ... | ... | ... | ... | ... |

I want to place n agents (in the image, n = 3) so as to cover as much of the segments’ weighting as possible within a distance of 50, and to be able to optimise for any parameter combination, e.g. any n and any distance.

So far I have tried:

  • a greedy approach: placing an agent where the most segments are covered (a local optimum), then placing the next agent to cover the most remaining segments, and so on up to n.
  • a Monte-Carlo approach: selecting n random segments and evaluating the coverage, repeating many times and choosing the best solution (a rough sketch of both baselines is included at the end of this question).

In reality, the network and the number of agents n may be much larger and more complex.

I’m wondering what other approaches might work better than Monte-Carlo?
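
For concreteness, here is a minimal Python sketch of the two baselines described above. The data layout, function names, and the trials parameter are just illustrative assumptions: weights is a dict mapping segment -> weight, and dist is a nested dict representation of the distance matrix.

import random

def covered_weight(agents, weights, dist, radius=50):
    # Total weight of all segments within `radius` of at least one agent.
    return sum(w for seg, w in weights.items()
               if any(dist[agent][seg] <= radius for agent in agents))

def greedy(weights, dist, n, radius=50):
    # Greedy baseline: repeatedly add the segment that increases coverage the most.
    agents = []
    for _ in range(n):
        best = max((s for s in weights if s not in agents),
                   key=lambda s: covered_weight(agents + [s], weights, dist, radius))
        agents.append(best)
    return agents

def monte_carlo(weights, dist, n, radius=50, trials=10_000):
    # Monte-Carlo baseline: score many random placements and keep the best one.
    segments = list(weights)
    best, best_score = None, float("-inf")
    for _ in range(trials):
        candidate = random.sample(segments, n)
        score = covered_weight(candidate, weights, dist, radius)
        if score > best_score:
            best, best_score = candidate, score
    return best

Both of these run, but as noted above, the greedy version can get stuck in a local optimum and the Monte-Carlo version needs many trials as the network and n grow.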