Mount size calculations

In our latest session a party member purchased a baby triceratops. According to official D&D sources (Tomb of Annihilation), a baby triceratops is a Medium-sized creature, and my party member is also Medium size.

The player asked whether, while mounted on the triceratops, they count as a Large creature or remain Medium. Are there any official rules regarding the sizes of mounted creatures?

Thank you in advance.

Using Alias in Calculations

I know this problem has been addressed before, but my SQL knowledge isn't good enough to figure it out. Maybe you can help me.

    SELECT ProductionLine, Product, POId, BatchNumber,
           SUM(Prime) AS Prime,
           SUM(Offspec) AS OffSpec,
           SUM(InQ) AS InQ,
           SUM(Prime) + SUM(Offspec) + SUM(InQ) AS Total
    FROM @t2
    GROUP BY ProductionLine, BatchNumber, POId, Product;

I now need a column PercentagePrime, in which the column Prime is divided by the alias Total.

I hope you can help me with this
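For reference, the usual reason this fails is that standard SQL does not let one select-list expression reference another expression's alias (Total) in the same SELECT; wrapping the aggregation in a subquery sidesteps that. A runnable sketch using Python's sqlite3, with a plain table standing in for the @t2 table variable and made-up sample data:

```python
import sqlite3

# SQLite stand-in for @t2; column names follow the question, data is invented.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t2 (ProductionLine, Product, POId, BatchNumber, Prime, Offspec, InQ)")
conn.execute("INSERT INTO t2 VALUES ('L1', 'P1', 1, 'B1', 30, 10, 10)")
conn.execute("INSERT INTO t2 VALUES ('L1', 'P1', 1, 'B1', 20, 20, 10)")

# The aggregation goes in a subquery; the outer query can then freely
# reference the Total alias to compute PercentagePrime.
row = conn.execute("""
    SELECT ProductionLine, Product, POId, BatchNumber,
           Prime, Total,
           100.0 * Prime / Total AS PercentagePrime
    FROM (
        SELECT ProductionLine, Product, POId, BatchNumber,
               SUM(Prime) AS Prime,
               SUM(Offspec) AS OffSpec,
               SUM(InQ) AS InQ,
               SUM(Prime) + SUM(Offspec) + SUM(InQ) AS Total
        FROM t2
        GROUP BY ProductionLine, BatchNumber, POId, Product
    )
""").fetchone()

print(row)  # ('L1', 'P1', 1, 'B1', 50, 100, 50.0)
```

On SQL Server, a CROSS APPLY or a CTE would work the same way; the key point is that the alias must be defined one query level below where it is used.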

Can this AC chart be used in DPR calculations?

Inspired by this, I just processed this list of monsters to get the average AC of monsters, by CR. It comprises all the SRD monsters, I believe.

[Chart: average monster AC by CR]

Can this graph be used in DPR charts?

One way would be to match the character’s level to the CR of its enemies. For example, a level 10 character would calculate its accuracy against an average AC of 18. With +7 to hit, that means a 50% hit chance, so an average of 30 damage per round would equate to 15 average DPR.
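The arithmetic in that example can be written out directly: a d20 attack hits when roll + bonus ≥ AC, giving a hit chance of (21 + bonus − AC) / 20. A small sketch (the 5%/95% clamp for natural 1s and 20s is my addition, and crits are ignored):

```python
def hit_chance(bonus, ac):
    """Probability a d20 attack with the given bonus hits the given AC."""
    # Clamp to [0.05, 0.95]: a natural 1 always misses, a natural 20 always hits.
    return min(max((21 + bonus - ac) / 20, 0.05), 0.95)

def dpr(avg_damage, bonus, ac):
    """Expected damage per round, ignoring critical hits."""
    return avg_damage * hit_chance(bonus, ac)

print(hit_chance(7, 18))  # 0.5
print(dpr(30, 7, 18))     # 15.0
```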

If a CR=level matching isn’t adequate, what would be? Can we use standard rules for encounter creation to build such a chart?

Two processes doing extensive calculations – I want one to get ~100% of processor time – how?

I am running a basic Ubuntu server with two processes:

- process 1 – performs calculations 100% of the uptime; I use it to share computing power with the community (running at priority 19)
- process 2 – performs calculations for 5-10 minutes from time to time; I use it to compute for myself (running at priority -19)

I want process 2 to be given 100% of the computing power (process 1 should at those moments get close to 0% of the CPU). But the best I get is 50% of the CPU for process 1 and 50% for process 2 (checked with htop).

I don’t want to manually stop/start either process when I need computing power (both processes must be running all the time); 100% of the CPU must be given to process 2 automatically.

What should I do to achieve my goal? Thanks.
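One direction worth checking (an assumption about the setup, not stated in the question): nice values only weight the scheduler, and if kernel autogrouping is enabled, nice is compared only within a session's autogroup, which can produce exactly the 50/50 split described. Putting the community process into the SCHED_IDLE class makes it yield the CPU whenever anything else is runnable:

```shell
# Assumption: 1234 is a placeholder for the community process's real PID.
# Move it into the SCHED_IDLE class: it then gets CPU time only when no
# other process wants it (requires root).
sudo chrt -i -p 0 1234

# Confirm the scheduling policy took effect.
chrt -p 1234
```

These are illustrative commands, not a tested recipe; a cgroup-v2 `cpu.weight` split is another way to express the same policy.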

Solar Power Calculations

Good afternoon. I have calculated the incident beam radiation over a given location's local sunlight hours.

    Rise = Sunrise[GeoPosition[{57.7053, -3.33917}],
       DateRange[DateObject[{2019, 1, 1}], DateObject[{2019, 12, 31}]]];
    Sunrises = DateString[#, {"Hour", "Minute"}] & /@ Rise["Values"]
    Sets = Sunset[GeoPosition[{57.7053, -3.33917}],
       DateRange[DateObject[{2019, 1, 1}], DateObject[{2019, 12, 31}]]];
    Sunsets = DateString[#, {"Hour", "Minute"}] & /@ Sets["Values"];

    Sunrises[[1]]
    Sunsets[[1]]

    n = Table[0 + i, {i, 1, 1}]; (*Days of the year*)
    LSH = Table[i + 0.01, {i, 8.93, 15.616667, 0.01}]; (*Local solar hour*)
    H = 360/24*(LSH - 12); (*Hour angle*)
    L = 57.7053; (*Latitude*)

    \[Delta] = 23.45*Sin[360/365*(n - 81) Degree]; (*Solar declination*)
    \[Beta] = (Cos[L Degree]*Cos[First[\[Delta]] Degree]*Cos[H Degree]) +
       (Sin[L Degree]*Sin[First[\[Delta]] Degree]);
    Arc\[Beta] = ArcSin[\[Beta]]*180/Pi;

    A = 1160 + 75 Sin[360/365*(n - 275) Degree]; (*Extraterrestrial flux*)
    m = 1/(Sin[Arc\[Beta] Degree]); (*Air mass ratio for every hour of the day*)
    k = 0.174 + 0.035*Sin[360/365*(n - 100) Degree]; (*Optical depth*)

    IB = First[A]*E^(-First[k]*m); (*Direct beam radiation, W m^-2*)
    test = Transpose[{LSH, IB}];

    ListLinePlot[test, PlotRange -> {{8.5, 16}, {0, 600}}]

The ListLinePlot should show solar radiation for the daylight hours of the location, where sunrise is 08:56 and sunset is 15:37 (considering the time of year). However, the graph shows a sudden increase before the given sunset time, where it should instead have a bell-shaped curve between 08:56 and 15:37 (8.933333 and 15.616667 in decimal hours). I've been going over this code for days trying to solve this, so any help would be appreciated.
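For comparison, here is the same clear-sky model for day n = 1 in Python, with one guess at the culprit: as the elevation angle β approaches zero near sunset, the air-mass ratio 1/sin β diverges (and flips sign once the sun is below the horizon), so E^(−km) blows up unless those hours are clamped. This sketch clamps them to zero; constants follow the question, variable names are mine:

```python
import math

lat = 57.7053  # latitude, degrees north
n = 1          # day of the year

decl = 23.45 * math.sin(math.radians(360 / 365 * (n - 81)))        # solar declination, deg
A = 1160 + 75 * math.sin(math.radians(360 / 365 * (n - 275)))      # extraterrestrial flux, W/m^2
k = 0.174 + 0.035 * math.sin(math.radians(360 / 365 * (n - 100)))  # optical depth

def direct_beam(lsh):
    """Direct beam radiation at local solar hour `lsh` in W/m^2 (0 below horizon)."""
    H = 360 / 24 * (lsh - 12)  # hour angle, degrees
    sin_beta = (math.cos(math.radians(lat)) * math.cos(math.radians(decl))
                * math.cos(math.radians(H))
                + math.sin(math.radians(lat)) * math.sin(math.radians(decl)))
    if sin_beta <= 0:          # sun below horizon: no direct beam, avoid 1/sin blow-up
        return 0.0
    m = 1 / sin_beta           # air mass ratio
    return A * math.exp(-k * m)

# Peaks at solar noon, zero outside the ~08:56-15:37 daylight window.
print(round(direct_beam(12.0)))
```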

Luke

How did Babbage’s Analytical Engine make calculations of unlimited extent?

Brian Randell, in his article The COLOSSUS, explains how computing was ushered in during the 20th century by the development of the Colossus computer. He cites a passage from Passages from the Life of a Philosopher, illustrating how Charles Babbage described the Analytical Engine:

That the whole of the conditions which enable a finite machine to make calculations of unlimited extent are fulfilled in the Analytical Engine.

I don’t understand how calculations of unlimited extent can be performed by a finite machine like the Analytical Engine, as Babbage describes. An example would be helpful too.

What’s the best kind of test for complex calculations without access to external resources?

I have two libraries that handle the mapping from one family of objects to another. I had to create an intermediate set of objects for other transformations.

So, the NativeConverters library converts NativeElement objects to MiddleElement, and the ViewModelConverters library converts MiddleElement to ViewModelElement.

I have unit tests (with NUnit) for both NativeConverters and ViewModelConverters. So the single conversion works well.

Now, I want to test the whole process: given a converter from NativeConverters and another one from ViewModelConverters, I want to test that a NativeElement gets converted correctly into a ViewModelElement.

I don’t need access to a DB, the file system, or whatever, so I’m not sure integration tests are the best choice. But I’m not testing a single method, so it shouldn’t be a unit test either.
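For reference, this kind of chained check is often called a component test (or a small, in-process integration test): it exercises two units together while still needing no external resources. A sketch of the shape in Python, with hypothetical stand-in converters (the C#/NUnit version would look the same structurally):

```python
# Stand-ins for the two converter libraries from the question
# (hypothetical logic, used only to show the chained-test shape).
def native_to_middle(native_element):
    """Represents a converter from NativeConverters."""
    return {"middle_value": native_element["native_value"] * 2}

def middle_to_viewmodel(middle_element):
    """Represents a converter from ViewModelConverters."""
    return {"vm_value": middle_element["middle_value"] + 1}

# The component test: NativeElement -> MiddleElement -> ViewModelElement,
# asserting only on the end-to-end result of the chain.
def test_native_converts_to_viewmodel():
    native = {"native_value": 10}
    vm = middle_to_viewmodel(native_to_middle(native))
    assert vm == {"vm_value": 21}

test_native_converts_to_viewmodel()
print("chained conversion ok")
```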

What kind of test do you think could best fit this case?
Do you know any library for C#?

Understanding CR calculations

I’ve been working on a tool to help me design monsters, specifically doing CR calculations as described in the DMG. In order to test it, I’ve tried it out on some monsters in the Monster Manual, which of course are giving me different results.

I just want to say up front that I’m fully aware that the DMG method doesn’t align with what’s actually in the MM, and that a different calculation was used for it. And for my purposes that’s ok, but it does make testing it rather difficult. I also understand that playtesting can influence the CR.

So what I want to check is: For these 3 test monsters, is my calculation correct, or have I overlooked something? And if it is correct, is there any widely accepted or documented reason for these monsters specifically to have different CRs?


Wolf (MM CR: 1/4, calculated CR: 1/2)

HP: 11
AC: 13
Damage per round: 7
Attack bonus: 4 + 1 (for Pack Tactics) = 5

Defensive CR: 1/8
Offensive CR: 1
Average CR: 0.5625 rounded = 1/2


Wight (MM CR: 3, calculated CR: 2)

HP: 45 x 2 (for damage resistances) = 90
AC: 14
Damage per round: 2 x 6 (longsword) = 12
Attack bonus: 4

Defensive CR: 2
Offensive CR: 1
Average CR: 1.5 rounded = 2


Planetar (MM CR: 16, calculated CR: 15)

HP: 200 x 1.25 (for damage resistances) = 250
AC: 19 + 2 (magic resistance) + 2 (3 saving throws) = 23
Damage per round: 2 x 43 (angelic greatsword) = 86
Attack bonus: 12

Defensive CR: 12 + 3 (adjust for AC) = 15
Offensive CR: 13 + 2 (adjust for attack) = 15
Average CR: 15
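As a cross-check, the averaging step used above can be written out. The rounding rule here (snap the mean to the nearest standard CR, rounding up on ties) is my assumption, chosen so that it reproduces the three results above:

```python
# The standard CR ladder: 0, 1/8, 1/4, 1/2, then whole numbers up to 30.
CR_LADDER = [0, 0.125, 0.25, 0.5] + list(range(1, 31))

def average_cr(defensive, offensive):
    """Mean of defensive and offensive CR, snapped to the nearest
    standard CR; on an exact tie, the higher CR wins (my assumption)."""
    mean = (defensive + offensive) / 2
    return min(CR_LADDER, key=lambda cr: (abs(cr - mean), -cr))

print(average_cr(0.125, 1))  # wolf: 0.5
print(average_cr(2, 1))      # wight: 2
print(average_cr(15, 15))    # planetar: 15
```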

Are calculations or checks made in the for loop's condition penalising?

Consider the following segment of code:

    for (let i = 0; i < (maxResults || currentResults.length); i += 1) {
      const item = currentResults[i];
      if (results.has(item)) {
        // do something
      }
    }

Does the i < (maxResults || currentResults.length) piece of code get optimised away at any point before runtime? (perhaps by transpilation?)

Would the cost of the calculation made in the for loop’s condition hinder performance at all, or is it calculated once for the entirety of the loop?

e.g. We could write it like this, if the calculation were only performed once per loop:

    const max = maxResults || currentResults.length;
    for (let i = 0; i < max; i += 1) {
      const item = currentResults[i];
      if (results.has(item)) {
        // do something
      }
    }
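Semantically, a JS for loop evaluates its condition before every iteration, so any work in it repeats unless the engine can prove it loop-invariant and hoist it (in this particular case `currentResults.length` and `||` are cheap, so hoisting is a micro-optimisation). A Python sketch mirroring those semantics, with a counter on the guard to make the repetition visible:

```python
# Mirror of `for (let i = 0; i < (maxResults || currentResults.length); i++)`:
# the guard is a function so each evaluation can be counted.
current_results = [10, 20, 30, 40, 50]
max_results = 0  # falsy, so the guard falls back to len(current_results)
evaluations = {"count": 0}

def guard(i):
    evaluations["count"] += 1
    return i < (max_results or len(current_results))

i = 0
while guard(i):  # runs once per iteration, plus once more to terminate
    i += 1

print(i, evaluations["count"])  # 5 6
```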

Is it good practice to create a static class for database table calculations?

I need to implement a system (using a Python pandas DataFrame in my case) that parses raw data, then adds calculated data, and then validates that calculated data (boolean output on some columns).

I couldn’t make this problem simpler than what is below, so please bear with me.

Assumption: this is and always will be single threaded

The flow I think of implementing:

  1. Read raw data into a db
  2. Query the db, to obtain result (some other table)
  3. Parse the result (map or reduce operation, or multiple operations)
  4. Put parsed result back into original table, or into a new table.
  5. Validate result

I was thinking how to design this, and came up with the following:

  1. create a class that would hold all types of tables. It would allow for read and update operations only, on needed columns.
  2. create a static class for calculations on tables. Each query would take as arguments the tables it requires to make its calculations and return a new table.
  3. Create validator interface which has a method that takes in a table and outputs whether it is valid or not.

Now, I would do something like

    db = DB()
    df_raw = raw_parser.parse_raw()
    db.add_raw(df_raw)
    calculation1 = Calculations.calculation1(df_raw)
    calculation2 = Calculations.calculation2(df_raw, calculation1)
    calculation3 = Calculations.calculation3(calculation1, calculation2)
    db.add_calculation3(calculation3)
    validator3 = Calculation3Validator(db.get_calculation3())
    validator3.validate()

Is this a problem?

This doesn’t seem very object oriented to me, because all the calculations sit together, statically, in Calculations.

Maybe it would have been smarter to somehow create a class for each calculated column, and assign it responsibility for calculating its own properties? Seems like too much of an abstraction, but I am not sure.


Question

Is it ok to have a static class holding queries that accept and output tables, and that holds no state? Does it risk becoming a god class that can't be separated later?

Is there a standard way of achieving my use case which I am very far from?


Sorry for the length of this question, I wanted to be very clear in my intent.
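For what it's worth, the design described (a stateless namespace of table-in, table-out functions, plus separate validators) can be sketched as below. Plain dicts of columns stand in for pandas DataFrames so the sketch is dependency-free, and every name and calculation is hypothetical:

```python
class Calculations:
    """Holds no state: every method is a pure table -> table function."""

    @staticmethod
    def calculation1(raw):
        # Derive a new table from the raw columns.
        return {"total": [a + b for a, b in zip(raw["a"], raw["b"])]}

    @staticmethod
    def calculation2(raw, calc1):
        # A calculation that depends on an earlier result.
        return {"ratio": [t / a for t, a in zip(calc1["total"], raw["a"])]}

class Calculation2Validator:
    """One validator per calculated table, per the interface in step 3."""

    def __init__(self, table):
        self.table = table

    def validate(self):
        return all(r > 1 for r in self.table["ratio"])

raw = {"a": [1, 2], "b": [3, 4]}
calc1 = Calculations.calculation1(raw)         # {"total": [4, 6]}
calc2 = Calculations.calculation2(raw, calc1)  # {"ratio": [4.0, 3.0]}
print(Calculation2Validator(calc2).validate())  # True
```

Because each method is a pure function of its table arguments, the class is really just a namespace; if it grows unwieldy, it can be split into several such namespaces (one per table family) without touching any state.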