Various official sources include hex maps with scales on them: the map of the Sword Coast from Dragon of Icespire Peak says "1 hex = 5 miles." Is that 5 miles measured across the center of each hex, or is it the length of an edge? I can't find any clarification in the DMG or elsewhere.
In D&D 5e, dragon scale mail grants several benefits; one of them is advantage on saving throws against the breath weapons of dragons. Would this apply to the breath weapon of a chimera's dragon head?
As the title says, I need to decompose 4×4 TRS transformation matrices and extract the proper scale vectors and rotation vectors (or rotation quaternions).
I know how to extract that information when the determinant of the upper 3×3 matrix is not negative. The problem is that these matrices can have a negative determinant, and as far as I understand, a negative determinant indicates a flip or mirrored transformation.
What do I have to do to extract the proper values for scale and rotation in those cases?
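One common way to handle the mirrored case (a sketch, assuming column-major matrices with the translation in the last column; `decompose_trs` is my own name, not a standard API) is to take the column norms as scale magnitudes, and when the determinant of the upper 3×3 block is negative, fold the mirror into a single axis by negating one scale component before dividing the columns out to recover a proper rotation:

```python
import numpy as np

def decompose_trs(m):
    """Decompose a 4x4 column-major TRS matrix into translation,
    rotation (3x3) and scale, handling negative determinants (mirrors)."""
    m = np.asarray(m, dtype=float)
    translation = m[:3, 3].copy()
    a = m[:3, :3]
    # Scale magnitudes are the lengths of the basis columns.
    scale = np.linalg.norm(a, axis=0)
    # A negative determinant means an odd number of axes is mirrored.
    # Which axis carries the flip is not recoverable, so conventionally
    # fold the whole mirror into one axis (here: x).
    if np.linalg.det(a) < 0:
        scale[0] = -scale[0]
    # Divide each basis column by its (signed) scale to get a proper
    # rotation matrix with determinant +1.
    rotation = a / scale
    return translation, rotation, scale
```

The resulting `rotation` always has determinant +1, so it can be converted to a quaternion afterwards (e.g. with `scipy.spatial.transform.Rotation.from_matrix`). Note the ambiguity: a mirror on one axis is indistinguishable from a mirror on another combined with a rotation, so the sign can only be assigned by convention.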
The heat metal spell can target "a manufactured metal object", including "a suit of heavy or medium metal armor".
A suit of Dragon Scale Mail is clearly a manufactured object, but if it’s made from metallic dragon scales, is it metal, and thus a valid target? Are bronze dragon scales actually made of bronze, or are they just colored like that?
I’m just assuming chromatic dragon scales are inarguably organic.
I just advanced a Canoloth (MM 3, p. 200) because I wanted to use it against a level 14 party. It is originally CR 5, and I wanted to raise it to CR 8, so I added 4 HD (CR +2) and the elite array (CR +1). This raises a lot of stats, especially saves, but against a level 14 party Spell Resistance 18 is miserable: relative to the party's caster level, the advanced Canoloth has effectively lost half its Spell Resistance compared to a basic Canoloth matched against a level 11 party. Would you judge that raising SR to 21 would push the advanced Canoloth's CR above 8, or should this simply be part of the advancement to CR 8?
Hi, I have a problem with the Meta Scraper add-on.
I'm trying to scrape 1533 keywords, but I only get 500 results.
Afterwards I renewed the proxies, set the delay to 30 seconds and reduced connections to 1; now I get 900 results, but the run never finishes (the start button stays greyed out) – it simply does not continue.
Afterwards I checked the proxies, and all of them passed both tests.
What am I doing wrong?
Hello, my name is Chris.
I am completely new to programming (but I love it! I am learning Swift).
I am currently looking for a solution to the problem described below. I'm not sure whether it is an ML task or whether it is better to do it the traditional way – maybe by comparing and interpolating RGB values. The video may make what I have in mind easier to understand.
VIDEO (is there a better way to provide a video?)
The color values below were taken before the adjustment and assigned to the reference values 0, 2, 5, 10, 25, 50 and 100. The round section is taken from a photo; because of light and shadow, not every color value is a perfect match. I hope that won't be a problem, though, if I compare the pixels and pick the reference color that matches best. My spontaneous manual evaluation gives a value that should be around 19-20.
I have already gone back and forth on whether it is possible to do the whole thing using color values (e.g. R, G and B), but that is not so easy, since the results could also differ depending on the color format I am working with.
On the other hand, I don't know how to solve it using ML. It would be easy to recognize the exact reference values 0, 2, 5, 10, 25, 50 and 100 – but how can I determine intermediate values? What do you think – which approach is better? A tip pointing me in the right direction would be great, and if you have an idea how I can tackle the problem I would be very grateful! Otherwise I expect to spend the coming days looking for a solution.
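To make the "traditional way" concrete, here is a rough sketch of what I mean (in Python rather than Swift for brevity; the reference RGB triples are made-up placeholders, not my real measurements): find the two reference swatches closest to the sampled pixel and interpolate their assigned values, weighted by inverse color distance.

```python
import math

# Hypothetical reference swatches: the RGB measured for each known value.
# These numbers are placeholders and must be replaced with calibrated readings.
REFERENCE = [
    (0,   (255, 255, 255)),
    (2,   (240, 235, 200)),
    (5,   (230, 220, 160)),
    (10,  (210, 195, 120)),
    (25,  (180, 150,  80)),
    (50,  (140, 100,  50)),
    (100, ( 90,  60,  30)),
]

def rgb_distance(a, b):
    """Euclidean distance in RGB space (a rough proxy for color difference)."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def estimate_value(rgb):
    """Interpolate a reading between the two closest reference swatches,
    weighting each by the inverse of its color distance."""
    ranked = sorted(REFERENCE, key=lambda ref: rgb_distance(rgb, ref[1]))
    (v1, c1), (v2, c2) = ranked[:2]
    d1, d2 = rgb_distance(rgb, c1), rgb_distance(rgb, c2)
    if d1 == 0:
        return float(v1)  # exact match with a reference swatch
    w1, w2 = 1.0 / d1, 1.0 / d2
    return (v1 * w1 + v2 * w2) / (w1 + w2)
```

Plain RGB distance is only an approximation of perceived color difference; a perceptual space such as CIELAB would likely give better intermediate values, but the structure of the approach stays the same.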
Thank you in advance! Chris
I am working on a campaign, and there is a very good chance the PCs may find themselves involved in a large mass battle, a large siege, or both, depending on their actions and success through the campaign. (Of course, they may just as easily decide to get out of Dodge if it all goes sideways, but that is the joy of running a free-form open campaign.)
I have experience with a system for large battles from 1st edition Legend of the Five Rings: the players attempt heroic actions to sway the battle, combined with behind-the-scenes skill rolls for the generals to gauge its flow. Are there any officially published 5th edition rules or suggestions for how to run these kinds of large-scale battles in D&D?
Feats that grant damaging effects scaling with character level seem to be uncommon, but they definitely exist – for example, for monk-type (Superior Unarmed Strike) and rogue-type (Craven) builds. I can't find any designed for blaster mage-type builds, however.
Reserve feats effectively scale based on caster level, since their damage is based on your highest available spell level, which isn’t really the same as scaling based on character level. The Shape Soulmeld feat can get you Dissolving Spittle, which you can kinda force to scale off of character level, but only by buying more essentia feats. That’s closer to what I want, but it’s pretty feat-expensive.
So… is there any other feat that can give a heavily multiclassed character some sort of pseudo-eldritch-blast that remains (at least somewhat) useful at higher levels?
Do the maps of cities in the 3rd Edition Forgotten Realms Campaign Books accurately reflect the statistics of those cities?
I recently decided to use the city of Everlund from the Forgotten Realms Campaign Setting as a model for a location in a one-shot scenario – the setting was not exactly the same as Faerûn, but a similar generic setting of my own creation, with a similar version of the city as the focus of the story. Upon comparing the map and the statistics provided in the book (information which is also available online), however, I found that the numbers did not seem to line up:
The population seems far too large, and the physical city and number of possible residences far too small, for them to match relative to each other. I have not counted the exact number of buildings, but estimate a couple hundred – a number that would suggest around 100 people per house. Even with the expectation that people live in larger families and with a higher density-per-house than our modern real-world, this seems like an absurd number. I might expect it to be something more like an order of magnitude smaller. Making sure that this was not just an isolated case, I found that most other cities with drawn maps seemed to have a similar situation.
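As a back-of-the-envelope check of the estimate above (using round numbers of my own, not the exact figures printed in the book):

```python
# Rough sanity check on people-per-building, using my own round estimates
# rather than the book's exact statistics.
population = 20_000   # the order of magnitude of the printed population
buildings = 200       # eyeballed count of structures on the map

people_per_building = population / buildings
print(people_per_building)  # roughly 100 residents per building
```

Even halving the population or doubling the building count only brings this down to 25-50 per building, still far beyond plausible household sizes.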
I’ve considered that the creators might have intended these maps to not be taken literally, but rather abstractly – each square not representing a literal single building, but rather the general shape of areas being taken as general districts. This does not really seem to make sense, though, when you consider what they have detailed – individual bridges, roads, larger key buildings, and an important river, all drawn to scale and with an appropriate measuring ruler in the bottom corner. If this is meant to be taken abstractly, it is certainly hard to wrap my head around.
Are the maps simply too small for their statistics? Are they actually reasonable, contrary to my beliefs? Are they meant to be literally accurate or interpreted more broadly?