Why is my algorithm version so slow with this input?

I’ve written my F# code to solve Advent of Code Day 18 Part 1. While it seems to work fine with other simple demo inputs, it gets stuck on the following input:

#################
#i.G..c...e..H.p#
########.########
#j.A..b...f..D.o#
########@########
#k.E..a...g..B.n#
########.########
#l.F..d...h..C.m#
#################

There is a reference solution in Python which is correct and quick, but I fail to see the fundamental difference between the two algorithms, beyond the other minor code differences (also because the languages are different).

I’ve tried with concurrency vs. a queue, and with a tree vs. a grid/map (see my GitHub history), but with no luck so far.

The principal part of the code is described below. It should fall under a breadth-first search (BFS) algorithm.

Here is the single step by which I build up the solution:

let next (step:Step) (i:int) (solution: Solution) (fullGrid: Map<char,Map<char,Grid>>) : Solution =
    let branches = solution.tree.branches
    let distance = solution.tree.distance + branches.[i].distance
    let area = branches.[i].area
    //let newbranches, back, keys =
    match step with
    | SpaceStep ->
        failwith "not expected with smart grid"
    | KeyStep ->
        let keys = area :: solution.keys
        let grid = fullGrid.[area]
        let tree = grid2tree area distance keys grid
        {keys=keys; tree=tree}

The fullGrid is supposed to contain the matrix of distances. The wrapping solver is simply a recursion- or queue-based version of the BFS.

let findSolution (keynum:int) (solution: Solution) (fullGrid: Map<char,Map<char,Grid>>) : Solution option =
    let mutable solution_queue : queue<Solution> = MyQueue.empty
    solution_queue <- enqueue solution_queue solution
    let mutable mindistance : int option = None
    let mutable alternatives : Solution list = List.empty

    while (MyQueue.length solution_queue > 0) do
        let solution = dequeue &solution_queue
        let solution = {solution with tree = grid2tree solution.tree.area solution.tree.distance solution.keys fullGrid.[solution.tree.area]}
        let branches = solution.tree.branches
        if (branches = [||]) then
            if solution.keys.Length = keynum
            then updateMin &mindistance &alternatives solution
        else
        match mindistance with
        | Some d when d < solution.tree.distance + (solution.tree.branches |> Array.map (fun t -> t.distance) |> Array.min) -> ()
        | _ ->
        let indexes =
            [|0..branches.Length-1|]
            |> Array.sortBy(fun idx -> ((if isKey branches.[idx].area then 0 else 1), branches.[idx].distance))
        for i in indexes do
            if branches.[i].area = '#' then
                failwith "not expected with smart grid"
            else
            if branches.[i].area = Space then
                failwith "not expected with smart grid"
            else
            if (Char.IsLower branches.[i].area) then
                let solutionNext = next KeyStep i solution fullGrid
                if solutionNext.keys.Length = keynum
                then updateMin &mindistance &alternatives solutionNext
                else
                solution_queue <- enqueue solution_queue solutionNext
            else
            if (Char.IsUpper branches.[i].area) then
                failwith "not expected with smart grid"

    match alternatives with
    | [] -> None
    | alternatives ->
        alternatives |> List.minBy(fun a -> a.tree.distance) |> Some
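For comparison, here is my understanding of the kind of search the fast solutions use, as a minimal Python sketch (the names, e.g. `key_dists`, are my own stand-ins for what I assume `fullGrid` precomputes, not code from either project): Dijkstra over (current key, set of collected keys) states, where each state is expanded at most once.

```python
import heapq

def min_steps(start, key_dists, all_keys):
    """key_dists[a] -> list of (b, dist, required) edges from key/start a,
    where `required` is the frozenset of keys needed to pass the doors
    on the way from a to b (my assumed analogue of fullGrid)."""
    # State = (current key, frozenset of collected keys).  The crucial
    # part is `seen`: each state is expanded at most once, so the search
    # is bounded by the number of states rather than by the number of
    # possible key orderings.
    heap = [(0, start, frozenset())]
    seen = {}
    while heap:
        dist, cur, keys = heapq.heappop(heap)
        if keys == all_keys:
            return dist
        if seen.get((cur, keys), float("inf")) <= dist:
            continue  # already expanded this state at a shorter distance
        seen[(cur, keys)] = dist
        for nxt, d, required in key_dists[cur]:
            if nxt not in keys and required <= keys:
                heapq.heappush(heap, (dist + d, nxt, keys | {nxt}))
    return None  # not all keys reachable
```

The point of `seen` is that two different orders of collecting the same set of keys collapse into a single state; without that collapsing, an input like the one above (many interchangeable key orderings) blows up combinatorially.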

How do I fix ‘Bad protocol version identification’ errors?

I’m a beginner at using SSH, and I’m trying to connect to a VM from a tablet using an app. The app says that it connects successfully, but it soon loses the connection to the server.
I have installed OpenSSH, and when I check the systemctl status, I receive the following log:
Bad Protocol Version Identification ‘0.0,0.0,0.0,0.0,
Bad Protocol Version Identification ‘0.0,0.0,0.0,0.0,

Accepted password from [my VM username] from 127.0.0.1 port 58982 ssh2
Bad Protocol Version Identification ‘0.0,0.0,0.0,0.0,
Bad Protocol Version Identification ‘0.0,0.0,0.0,0.0,
I’ve set a port redirect already, but it’s not working. I wonder if it’s an issue with the app or with my SSH settings.
Any help is appreciated.

Is this homebrew Dragon Rider ranger archetype balanced? [Version 2]


Introduction

This is a follow up question to: Is this homebrew Dragon Rider ranger archetype balanced?

If you are not familiar with that question, I’d recommend reading it before this one, since this question won’t make much sense otherwise (and I don’t want to repeat loads of it here, since this post is long enough already).

From Weaveworker89’s answer to my previous question, I can see that the damage output of my previous version was too high, largely because the dragon companion can make its attacks using your bonus action, and because the dragon starts making two attacks/Multiattack after you reach level 11 (which also means that a Dragon Rider ranger would never use their bonus action for anything else ever again, which reflects bad design on my part).

Changes

The changes I plan on making are to adjust Draconic Bond from this:

You can use your bonus action to verbally command it to take the Attack, Dash, Disengage, Dodge or Help action.

to this:

You can use your action to verbally command it to take the Attack, Dash, Disengage, Dodge or Help action. Once you have the Extra Attack feature, you can make one weapon attack yourself when you command your dragon to take the Attack action.

Basically, the same as the RAW Beastmaster (PHB, p. 94). For the context of this question, assume that the Dragon Rider archetype from my previous question includes the above change (almost like “errata”, I guess).

Problem

However, I imagine the damage output will still be too high, even with this change, but I’m reluctant to drop the Draconic Fury feature (which is the same as the Beastmaster’s “Bestial Fury” feature). Instead, I’d rather look at weakening the dragon itself so that, both before and after reaching level 11 and getting Draconic Fury, its damage output is still within reason.

I’m happy enough with the damage output of the pseudodragon that you get between levels 3 and 6; it’s the wyrmling that you get between levels 7-14 and the young dragon you get at levels 15+ that I’m concerned with. I’m also generally happy enough with the rest of my archetype, which is why I’m focusing on the dragon’s CR as the way to bring the damage output down, and therefore the whole archetype into balance.

Question

What CR wyrmling/young dragon stats should I pick such that the damage output (bearing in mind that it is not triggered from a bonus action anymore, but rather sacrifices one of your attacks) is on par with, say, a Gloom Stalker ranger? I’d like answers to consider the damage output of my archetype at levels 7-10 (Wyrmling, one attack), levels 11-14 (Wyrmling, two attacks) and 15+ (Young Dragon with Multiattack).

Note that, in my previous version, I outlined that each variation of wyrmling/young dragon is essentially homogenised into the same CR creature, based on the stats of one specific dragon (i.e. a Blue Dragon Wyrmling and a Young Red Dragon), but with minor tweaks such as different elemental damage resistance, different breath weapon damage, and different speeds and other minor features such as Amphibious or Ice Walk, as befits the chosen dragon’s colour.

So what I’m really looking for is a specific statblock of a specific Wyrmling/Young Dragon (ideally chromatic, so that we don’t have to worry about extra breath weapons, but I can work with metallic dragon statblocks and just explicitly exclude any extra breath weapons), which I can then swap some elemental damage and speeds around to match the flavour of the chosen dragon colour, such that its damage output is balanced for this archetype.

maximum coverage version of dominating set

The dominating set problem is:

Given an $n$-vertex graph $G=(V,E)$, find a set $S \subseteq V$ such that $|N[S]|$ is exactly $n$, where $$N[S] := \{x \mid \text{either $x$ or a neighbor of $x$ lies in $S$}\}$$

My question is whether the following (new) problem has a definite name in the literature, and if not, what the most appropriate name would be.

New Problem: Given an $n$-vertex graph $G=(V,E)$ and an integer $k$, find a set $S \subseteq V$ of size $k$ such that $|N[S]|$ is maximized.

For the second problem, some of the names I have seen in the literature are maximum graph coverage, partial coverage, and $k$-dominating set (however, the exact same names are also used in other contexts).
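For what it’s worth, the new problem can be viewed as an instance of maximum coverage where the sets are the closed neighbourhoods $N[v]$; since coverage is submodular, the standard greedy algorithm gives a $(1 - 1/e)$ approximation. A sketch of that greedy (my own illustration, not taken from a specific paper):

```python
def greedy_partial_domination(adj, k):
    """Greedy (1 - 1/e)-approximation for choosing k vertices
    maximizing |N[S]|.  adj: dict mapping vertex -> set of neighbours."""
    covered = set()
    S = []
    vertices = set(adj)
    for _ in range(k):
        # Pick the vertex whose closed neighbourhood {v} | adj[v]
        # adds the most newly covered vertices.
        best = max(vertices - set(S),
                   key=lambda v: len(({v} | adj[v]) - covered))
        S.append(best)
        covered |= {best} | adj[best]
    return S, len(covered)
```

For example, on a star with centre 0 plus a disjoint edge {5, 6}, the greedy picks the centre first and then one endpoint of the edge.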

Is it “reasonable” for me to start skipping any PHP version ending in “.0”?

Recently, the PHP team released PHP 7.4.0. I was running 7.3.12 at the time. I immediately upgraded and important stuff broke.

The most serious new bug (which I actually noticed) was that it started ignoring fgets(STDIN);, meaning all my security measures where I require the administrator (me) to press Enter before the PHP CLI scripts perform various “potentially dangerous” operations were bypassed!

You can imagine how scary that was for me.

I reverted to 7.3.12 and waited for 7.4.1, which fixed that bug, to be released. I worked to fix the other, less serious breakage (something related to booleans and ifs).

This made me think. Looking back, this is not the first time that a PHP version ending with “.0” has caused issues for me. For this reason, I have made the decision to simply ignore any future version ending with “.0”, because, despite countless alphas/betas/RCs, they never seem to be “actually ready”. The most likely reason is that those alpha/beta/RC versions are tried by a very small group of people, and only when a “sharp” version is released do you get actual, real-world feedback.

For this reason, I now view PHP versions ending with “.0” as “dangerous test versions, unsuitable for production”. I will not be installing those versions ever again.

Of course, this is quite selfish of me. I’m basically relying on everyone else “biting the bullet” for me. If I had the time and energy, I would be far more involved and help test early versions in proper test environments and whatnot, but, as many times as I have tried to set up even one test environment for critical, basic tests before upgrading the “real thing”, it always ends with me giving up because it’s such a massive chore to maintain. Not to mention it’s often impossible to replicate something which relies on all kinds of external connections/data all the time; you cannot replicate that inside a black box.

So, is it “reasonable” for me to do this? Obviously, if everyone did this, the .0 versions would just be another “Release Candidate” and the .1 versions would start being the new .0s… I don’t have a good answer to this myself. What do you all do to avoid this horrible breakage when upgrading to a new “major” (7.3 => 7.4 or 5.x => 7.x, etc.) PHP version?

Custom version control

I develop an open source power systems software and I would like to include a simple collaborative mode so that more than one person could work on building a power system model.

So far I have a table-based file format which is composed of many csv files bundled into a .zip file.

My thinking is that when a synchronisation event is triggered the following should happen:

  • Import the remote file into a model
  • Compare the imported model with the current model
    • Omit the equal fields
    • Omit the additions from the current model to the remote
    • Flag the conflicts
    • Flag the additions in the remote model that are not present in the current model.
  • Ask the user about what to do with the conflicts (overwrite theirs, overwrite mine, omit for now)

I would like to know about other approaches to the comparison part, as well as corrections and suggestions for what I have come up with.
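To make the comparison step concrete, here is a sketch of the pass I have in mind, assuming each CSV table can be loaded as a dict keyed by a stable record ID (the names here are hypothetical, not from my actual code). It produces exactly the four categories from the list above:

```python
def diff_models(mine, theirs):
    """mine/theirs: dict mapping record-id -> dict of field values
    (one such dict per CSV table).  Returns the four categories:
    equal records, conflicts, my additions, their additions."""
    equal, conflicts = {}, {}
    my_ids, their_ids = set(mine), set(theirs)
    for rid in my_ids & their_ids:
        if mine[rid] == theirs[rid]:
            equal[rid] = mine[rid]                     # omit from the merge dialog
        else:
            conflicts[rid] = (mine[rid], theirs[rid])  # ask the user
    my_additions = {rid: mine[rid] for rid in my_ids - their_ids}
    their_additions = {rid: theirs[rid] for rid in their_ids - my_ids}
    return equal, conflicts, my_additions, their_additions
```

This is a two-way diff; keeping a copy of the last synchronised state would allow a three-way diff, which can auto-resolve the cases where only one side changed a field.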

Thanks.

How to write the invariants for one version of binary search insertion point (or leftmost entry) algorithm?

If we compare the binary search algorithm (leftmost or insertion point) on Wikipedia:

Algorithm 1:

function binary_search_leftmost(A, n, T):
    L := 0
    R := n
    while L < R:
        m := floor((L + R) / 2)
        if A[m] < T:
            L := m + 1
        else:
            R := m
    return L

with the one on Rosetta Code:

Algorithm 2:

BinarySearch_Left(A[0..N-1], value) {
  low = 0
  high = N - 1
  while (low <= high) {
      // invariants: value > A[i] for all i < low
      //             value <= A[i] for all i > high
      mid = (low + high) / 2
      if (A[mid] >= value)
          high = mid - 1
      else
          low = mid + 1
  }
  return low
}

This binary search is the somewhat more complicated type of binary search: it finds the leftmost index, or it can find the insertion point (so it is not the “exact match” binary search), as follows:

If the numbers in the array are:

11 22 22 33 44 55 66 

then if the target is 3, the result should be 0 (3 would be inserted at index 0).

If the target is 59, then the result should be 6 (inserted where 66 is), and if the target is 77, then the result should be 7 (inserted to the right of the maximum index, i.e. appended as the new, final element).

I found it very easy to establish the invariants and correctness for Algorithm 1:

function binary_search_leftmost(A, n, T):
    // Invariant:
    // [L, R] inclusive is where the answer could be.
    //   so note R is not n - 1 like the "exact match" case, but n,
    //   because we can go "one step to the right" to insert
    //   as index n
    L := 0
    R := n

    // Invariant:
    // note that we keep on looping while L < R, meaning the range
    // [L, R] is not "closed".  When L == R, then we have reached
    // the answer. Note that unlike the exact match case, here we
    // always have an answer, so when L == R, we already have
    // an answer. (because [L, R] is where the answer is, and if
    // L == R, the answer is L (== R) and it cannot
    // be anything else)
    while L < R:
        // Invariant:
        // It might be better to use m := L + floor((R - L) / 2)
        // here, because of the possible overflow bug. Here we consider:
        // if L and R differ by 1, such as L == 35 and R == 36, then
        // m becomes min(L, R), and down below, since there can be
        // only two cases, L := m + 1 which is 36, or R := m which is 35,
        // that means the range [L, R] always shrinks.  If L and R
        // differ by 2 or more, then it is obvious that [L, R] will
        // always shrink.  When L and R differ by 0, the loop will not
        // continue.  So here we established that there will be no
        // infinite loop
        m := floor((L + R) / 2)
        // if the target is, say, 55, and A[m] is 25 or 54, we are
        // disqualifying 25...A[m], so we set the lower bound L
        // to m + 1 (so we disqualify m as well)
        if A[m] < T:
            L := m + 1
        // Here it is T <= A[m]
        // if the target is, say, 55, and A[m] is 55, 56, 100, or 1000,
        // our answer could still be m, because m could be the
        // leftmost (we don't know yet), so we don't disqualify m;
        // we disqualify m + 1 all the way to the end
        // of the array, so we set R := m
        else:
            R := m
    // Finally, when L == R, we could return either L or R
    // In fact, it may be good to assert L == R here
    return L

However, for Algorithm 2, I found it quite difficult to establish invariants. The invariant would be something like: [low, high + 1] is the range where the answer could be. And if I imagine a high2 == high + 1, then maybe everything can fall into place, with

high = N - 1
high2 = high + 1 = N - 1 + 1 = N    (same as Algorithm 1)

while (low <= high)
while (low < high + 1)              (same as Algorithm 1, see below)
while (low < high2)                 (same as Algorithm 1)

high = mid - 1
high2 = high + 1 = mid - 1 + 1
high2 = mid                         (same as Algorithm 1)

but the line

mid = (low + high) / 2 

becomes

mid = (low + high2 - 1) / 2 

(I think it assumes integer arithmetic), so it sometimes shifts the midpoint 0.5 to the left (and therefore 1 less after taking the floor). But if we look at [low, high] == [35, 37], [35, 36], or [35, 35], the cases still work out. Overall, the invariants seem somewhat awkward and unclear. Are there good ways to establish invariants and correctness for Algorithm 2?
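One way to make this concrete: the comment in Algorithm 2 already states the two invariants (value > A[i] for all i < low, and value <= A[i] for all i > high), so the closed range [low, high] is exactly the set of still-undecided candidate indices, and on exit (low == high + 1) the two invariants together pin low as the leftmost occurrence or insertion point. A Python transcription of Algorithm 2 with the invariants checked as assertions on every iteration (my own sketch, not from either source):

```python
def binary_search_left(A, value):
    low, high = 0, len(A) - 1
    while low <= high:
        # Invariant 1: everything left of low is strictly less than value
        assert all(A[i] < value for i in range(low))
        # Invariant 2: everything right of high is >= value
        assert all(A[i] >= value for i in range(high + 1, len(A)))
        mid = (low + high) // 2
        if A[mid] >= value:
            high = mid - 1   # mid now satisfies invariant 2
        else:
            low = mid + 1    # mid now satisfies invariant 1
    # Here low == high + 1, so A[0..low-1] < value <= A[low..n-1]:
    # low is the leftmost index of value, or its insertion point.
    return low
```

Running it against the examples from the question (targets 3, 59, and 77 on 11 22 22 33 44 55 66) gives 0, 6, and 7 respectively, with no assertion ever firing.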

What spellcasting ability is used for this version of Detect Thoughts?

The Detect Thoughts spell states:

…the creature can use its action on its turn to make an Intelligence check contested by your Intelligence check; if it succeeds, the spell ends.

This spell is given to the Githzerai race in MToF pg. 96 at 5th level. It is listed as using Wisdom as the spellcasting ability.

…When you reach 5th level, you can cast the detect thoughts spell once with this trait…Wisdom is your spellcasting ability for these spells…

Does this spell still need the check made by the Githzerai to be an Intelligence check? I assume it would.

Is my nerfed version of the Healing Spirit spell in line with the relative power level of other sources of healing?

I have concerns about the immense healing potential that the Healing Spirit spell can put out.

As it appears in Xanathar’s Guide to Everything (page 157), the Healing Spirit spell is able to heal any creature that starts its turn within its effect, or passes through it during their turn. In practice, this spell is limited by the mobility of the party that tries to take advantage of it, but in my experience, the spell is still able to dramatically improve the hitpoint recovery of the whole party, along with minimizing the impact of Unconsciousness or Death Saving Throws. In my experience, campaigns where Healing Spirit is accessible tend to trivialize or obsolete the use of features like Hit Dice and Second Wind.

For reference, at its base level, it’s capable of healing any creature that runs through it for 1d6 hit points once per turn for 10 turns. So even if you only hit a single creature, you’re still reliably healing an average of 35 hit points to that creature, and much more than that across multiple creatures. And all of that scales with the level of the spell. Once upcast as a 3rd-level spell, it is strictly better than the paladin-exclusive spell Aura of Vitality (also a 3rd-level spell), which has action-economy restrictions and does not scale with level; and unlike Aura of Vitality, Healing Spirit is accessible to a full spellcaster capable of upcasting the spell and casting it many more times in a day.
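The arithmetic behind that comparison, under the assumptions stated above (one heal per creature per round, the full 10 rounds of duration, average of 1d6 = 3.5):

```python
def avg_dice(n, sides):
    """Expected value of rolling n dice with the given number of sides."""
    return n * (sides + 1) / 2

rounds = 10  # 1 minute duration, assuming every round is used

# Healing Spirit at base (2nd) level, one creature healed per round:
healing_spirit_base = rounds * avg_dice(1, 6)   # 35 hit points on average

# Upcast at 3rd level (+1d6 per slot above 2nd), still one creature:
healing_spirit_3rd = rounds * avg_dice(2, 6)    # 70 hit points on average

# Aura of Vitality (3rd level): 2d6 once per round as a bonus action:
aura_of_vitality = rounds * avg_dice(2, 6)      # 70 hit points on average

# Unlike Aura of Vitality, Healing Spirit multiplies per creature that
# passes through it, e.g. a party of four:
healing_spirit_3rd_party = 4 * healing_spirit_3rd
```

So the per-creature output at 3rd level merely matches Aura of Vitality; the “strictly better” claim rests on the multi-creature scaling and the lack of an ongoing action cost.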

It’s also been my experience that banning “out of combat” use of the spell doesn’t help much, since it only takes a single combat encounter being dragged out to permit optimal use of this spell.

I could choose to ban the spell entirely, but I want to consider a compromise first before I resort to that. So for my campaigns, I’m proposing the following version of the spell designed to keep some of its power as a healing spell while tempering its more ludicrous features:

Healing Spirit

2nd-level conjuration
Casting Time: 1 bonus action
Range: 60 feet
Components: V, S
Duration: Concentration, up to 1 minute

You call forth a nature spirit to soothe the wounded. The intangible spirit appears in a space that is a 5-foot cube you can see within range. The spirit looks like a transparent beast or fey (your choice).

Until the spell ends, whenever you or a creature you can see moves into the spirit’s space for the first time on a turn or starts its turn there, you may use your reaction to cause the spirit to restore 1d6 hit points to that creature. The spirit can’t heal constructs or undead.

As a bonus action on your turn, you can move the spirit up to 30 feet to a space you can see.

The two changes are the following:

  • Instead of being a free action, healing with this feature now requires the use of the spellcaster’s Reaction
  • The ability to gain benefits by upcasting the spell has been removed

So in general, this is a significant nerf to the spell. It becomes unable to be upcast, and it can only heal one creature per turn. It puts the spell more in line with the relative power of spells like Aura of Vitality while still making it accessible very early for low level druids. I think the spell will still be competitive in terms of raw healing output and for helping keep allies alive, while keeping it from trivializing all other possible sources of healing.

Does this version of the spell bring it more in line with the relative power level of other sources of Healing? Or are my assumptions/observations about the power of the original spell off-base?