## Algorithm for optimal spacing in intervals

Is there an algorithm to optimally space points within multiple intervals? Optimal here means maximizing the smallest distance between any two points, so that every pair of points is at least some distance X apart. For example, in the intervals (1,3) and (5,7) you can space out three points with pairwise distance at least 2 (at 1, 5, and 7), but you can’t space out three points with pairwise distance at least 3. Is there an easy way to do this with a program?

## Query: Given a graph, is edge x in an optimal TSP tour?

Consider the decision problem: given a graph, decide whether a particular edge belongs to some optimal solution of the traveling salesman problem on that graph.

It may be argued that the complexity of this problem is strictly greater than that of any co-NP problem. The intuition is that there is no obvious short counterexample for a “no” instance: to refute membership of the edge we would seemingly need an optimal tour in hand first, and no optimal tour candidate is given in the problem statement.

On the other hand, it may be argued that the complexity of this problem is strictly smaller than that of any PSPACE-complete problem, as our problem may be written as

$$\exists \text{ tour } A \text{ containing } x \;\; \forall \text{ tours } B : \mathrm{cost}(A) \le \mathrm{cost}(B),$$

whereas a canonical PSPACE-complete problem, the quantified Boolean formula problem (QBF), has $$O(n)$$ alternations of $$\exists$$ and $$\forall$$.

Based on this argument, is it reasonable to expect a complexity class strictly between co-NP and PSPACE? Does this particular class have a name? And can we obtain arbitrarily many more such classes by adding one more alternation of $$\forall$$ or $$\exists$$ to the previously found class?

## Optimal TSP Path with Branch and Bound

I just wanted a bit of clarification on the above picture. I understand the general idea of building out a tree in DFS order and pruning a branch once its bound exceeds the best value found so far. For example, when the value of a path becomes $$14$$ or $$\infty$$, the algorithm stops exploring it because a tour of value $$11$$ was already found. But I am quite confused about where these numbers (the lower bounds on the cost) are coming from. For example, the edge from vertex $$A$$ to vertex $$B$$ has length $$1$$, but in the tree, the node for the path from $$A$$ to $$B$$ has a lower bound of $$10$$.

So, I would greatly appreciate if anyone could let me know where the numbers in the branch-and-bound search tree are coming from!
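For context on how such bounds arise in general (I can't reconstruct the exact bound used in the picture): each tree node gets a bounding function that never overestimates the cheapest completion of its partial tour. One common cheap bound adds, for every city not yet departed, its cheapest outgoing edge. A sketch under that assumption:

```python
import math

def tsp_branch_and_bound(cost):
    """Branch and bound over partial tours starting at city 0.
    Node bound = cost of the partial tour + cheapest outgoing edge of every
    city still to be departed; this is a valid lower bound, so pruning is safe."""
    n = len(cost)
    best = [math.inf]

    def cheapest_out(cities):
        return sum(min(cost[i][j] for j in range(n) if j != i) for i in cities)

    def search(path, used, so_far):
        if len(path) == n:
            best[0] = min(best[0], so_far + cost[path[-1]][path[0]])  # close the tour
            return
        # We must still leave the current city and every unvisited city once each.
        bound = so_far + cheapest_out([path[-1]] + [i for i in range(n) if i not in used])
        if bound >= best[0]:
            return  # prune: no completion can beat the best tour found so far
        for nxt in range(n):
            if nxt not in used:
                search(path + [nxt], used | {nxt}, so_far + cost[path[-1]][nxt])

    search([0], {0}, 0)
    return best[0]
```

Stronger bounds (e.g. reduced cost matrices, as in many textbook presentations) prune more but cost more per node; the structure of the search is the same.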

## Optimal algorithm for making queries to a database

There is a database of, let’s say, 500k English two-word combinations (e.g. “clover arc”, “minister horse”). I can search for an arbitrary string and I will get a list of the alphabetically first 1000 entries containing this string; the time each query takes is proportional to the number of results it returns, plus some constant overhead. I have a certain dynamic number of the unique results I want to get (e.g. 400k, 490k, 499k) and I want to spend as little time as possible sending queries to get them. By what algorithm should I craft my queries to achieve this?

One possible naive approach would be as follows:

1. Search for every single letter.
2. Check which queries have maxed out the 1000 result limit.
3. For each of those, make 26 new queries, appending every letter of the alphabet to them.
4. Go to 2, until all queries give fewer than 1000 results.
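The naive expansion in steps 1–4 might look like this in code (assuming a `search(q)` callable standing in for the database API, returning the alphabetically first `limit` entries containing `q`):

```python
from collections import deque

def harvest(search, target_count, alphabet="abcdefghijklmnopqrstuvwxyz", limit=1000):
    """Naive breadth-first expansion: any query that hits the result cap is
    refined by appending every letter of the alphabet to it."""
    results = set()
    queue = deque(alphabet)          # step 1: every single letter
    while queue and len(results) < target_count:
        q = queue.popleft()
        hits = search(q)
        results.update(hits)
        if len(hits) >= limit:       # step 2: query maxed out the cap
            queue.extend(q + c for c in alphabet)  # step 3: 26 refinements
    return results
```

Note this sketch inherits the flaws described below: refined queries re-fetch almost everything the parent query already returned, and it issues hopeless combinations like “quq”.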

However, this is obviously quite suboptimal: every time we expand the tree, the previous results become essentially obsolete, since almost all of them (except for entries where the letter combination occurred at the end of a word) will reappear across the queries generated from it. There is also wasted overhead on impossible combinations: if we had a maxed-out query for “qu”, on the next level we will be requesting “quq” and “qux”, which will certainly return no results.

How would you approach this?

(I apologize in advance if this is the wrong SE to ask this kind of question, but I couldn’t find a better match.)

## Optimal substructure of rod cutting?

How do you show the optimal substructure of the rod cutting problem (defined as in Optimal substructure and dynamic programming for a variant of the rod cutting problem)? I am trying to follow the guideline steps.

So suppose someone told us one of the cuts of the rod $$R$$ in an optimal solution OPT for $$R$$.

This cut of OPT partitions $$R$$ into two smaller rods $$R_1$$ and $$R_2$$, one of length $$i$$ and the other of length $$n-i$$.

Here is the only argument that I can think of: (Again, trying to follow steps 3 and 4 now but the argument does not seem like one that would suffice)

Suppose that $$R_1$$ is not cut optimally within OPT. Then replacing OPT’s cuts of $$R_1$$ with an optimal solution for $$R_1$$ would yield a strictly better solution for $$R$$, which contradicts our starting supposition that OPT is an optimal solution.

Also, the argument seems too “easy” to be anything more than informal.

How would you prove the optimal substructure?
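For intuition, the optimal substructure property is exactly what justifies the standard bottom-up recurrence $$r_j = \max_{1 \le i \le j}(p_i + r_{j-i})$$. A minimal sketch, assuming a price table `p` indexed by length:

```python
def rod_cut(p, n):
    """p[i] = price of a rod of length i (p[0] = 0). Returns the maximum revenue
    for a rod of length n. The recurrence r[j] = max over i of (p[i] + r[j-i])
    is valid precisely because of optimal substructure: after the first cut of
    length i, the remainder of length j-i must itself be cut optimally."""
    r = [0] * (n + 1)
    for j in range(1, n + 1):
        r[j] = max(p[i] + r[j - i] for i in range(1, j + 1))
    return r[n]
```

The cut-and-paste argument you are after is the reason the `max` on the right-hand side may quantify only over optimal values of the subproblems.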

## Optimal strategy for tossing three dependent coins

Suppose that I have three correlated coins. The marginal probability of Head of coin $$i$$ is denoted by $$p_i$$.

The conditional probability of heads for coin $$i$$ given the outcomes of coins $$j$$ and $$k$$ is denoted by $$p_{i \mid x_j, x_k}$$, where $$x_j, x_k \in \{H,T\}$$. We can similarly define the conditional probability of $$i$$ given $$x_j$$ alone.

Each coin can be tossed at most once, and you receive \$1 for a head and lose \$1 for a tail. You don’t have to toss all the coins, and your objective is to maximize the total expected reward.

What would be the optimal sequence of tossing coins in this case?

If the coins were independent of each other, the order wouldn’t matter: the optimal strategy would simply be “toss coin $$i$$ iff $$p_i>\frac{1}{2}$$”. For the case of two coins, it can be shown that it is always best to first flip the coin with the higher marginal $$p_i$$. However, this need not be optimal in the three-coin case. I’ve been thinking about this problem for quite a long time but can’t come up with a general solution, or even an intuition that might help.
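Absent a closed-form strategy, the three-coin case is small enough to solve exactly by backward induction over all states (which coins have been tossed, and their outcomes). A brute-force sketch, assuming the full joint distribution is given as a dictionary mapping outcome triples like `('H','T','H')` to probabilities:

```python
from itertools import product

def optimal_value(joint):
    """Optimal expected reward for three correlated coins, +1 per head and
    -1 per tail, where you may stop at any point. Exhaustive backward
    induction over states (set of tossed coins, their observed outcomes)."""
    coins = range(3)

    def value(tossed, outcomes):
        best = 0.0  # stopping now earns nothing further
        for i in coins:
            if i in tossed:
                continue
            # condition the joint distribution on the observed outcomes
            cond = {o: p for o, p in joint.items()
                    if all(o[j] == outcomes[j] for j in tossed)}
            z = sum(cond.values())
            if z == 0:
                continue
            ev = 0.0
            for x in ('H', 'T'):
                px = sum(p for o, p in cond.items() if o[i] == x) / z
                if px > 0:
                    reward = 1 if x == 'H' else -1
                    ev += px * (reward + value(tossed | {i}, {**outcomes, i: x}))
            best = max(best, ev)
        return best

    return value(frozenset(), {})
```

Instrumenting `value` to record which coin attains the `max` at the empty state gives the optimal first toss; comparing that against "toss the highest marginal first" over random joint distributions is a cheap way to hunt for counterexamples.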

## How can I prove that my greedy algorithm for least guards is optimal?

This is the problem:

An art gallery hired you to place guards to monitor the artworks in a hallway. The goal is to minimize the number of guards needed. Each guard has a range of 10 meters (5 m to the left and 5 m to the right, with the guard in the middle).

No matter where the artworks are placed in the hallway, we want to use as few guards as possible.

My Greedy Algorithm:

Start at the beginning of the hallway. Find the position of the closest uncovered artwork from the entrance. Place a guard 5 m past that artwork, so the artwork sits at the left edge of the guard’s range. Eliminate all the artworks covered by this guard. Repeat the process, this time starting from where the previous guard’s range ended.
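For reference, the algorithm as described can be sketched as follows (treating artwork positions as points on a line; the parameter name `reach` is mine, set to the 5 m half-range):

```python
def place_guards(artworks, reach=5.0):
    """Greedy sweep left to right: for each artwork not yet covered, place a
    guard `reach` metres past it, covering [artwork, artwork + 2*reach]."""
    guards = []
    covered_until = float("-inf")
    for a in sorted(artworks):
        if a > covered_until:
            guards.append(a + reach)
            covered_until = a + 2 * reach
    return guards
```

The usual proof strategy for such sweeps is an exchange ("greedy stays ahead") argument: show by induction that after k guards, the greedy solution covers at least as far right as any solution using k guards.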

Now how do I write a proof for this? How do I prove that my algorithm is indeed optimal? Which mathematical proof techniques can I use?

## What is the most optimal build for keeping an infinite Crab Swarm apocalypse at bay?

My friends and I were discussing a meme we saw when our imaginations took us way too far, and now I’m curious about how many Crab Swarms it would take to kill the most efficient Crab Swarm killer, and who the most efficient Crab Swarm killer could be.

Setup:

You are an adventurer who happened upon some hijinks and now suddenly, you’re in the middle of a Crab Swarm apocalypse. That is,

• You’re in the center of a 20 sq × 20 sq (100 ft × 100 ft) flat square plain.
• You have one week to prepare.
• For the purposes of this hypothetical, you may assume you have any necessary resources in infinite amounts.
• After that week, Crab Swarms begin to appear from all directions in an infinite stream.
• There is nothing special about any individual Crab Swarm; each is exactly as described.
• They are all hostile toward you, specifically, and will do anything within their crablike powers to murder you.
• The stream will not stop and cannot be halted until you are dead.

Specifically, I am interested in two scenarios: a level 5 adventurer (because Crab Swarms are CR 4, so one level 5 adventurer should theoretically be able to defeat a single Crab Swarm), and any arbitrary level 20 adventurer (for whom 250 CR 4 Crab Swarms would make a CR 20 encounter). What are the most effective builds at these levels for eliminating Crab Swarms and/or prolonging survival?

Caveats:

Spells like Teleport or maintaining indefinite amounts of Rope Tricks, while technically valid for the definition of prolonging survival, are not in the spirit of the scenario, and shouldn’t be considered. Running away is not an option.

By “murder”, I don’t necessarily believe that killing is required. Simply teleporting them to another Plane via skills like, say, Initiate of the Seventhfold Veil’s Violet Veil skill is an equally valid strategy (as well as being hilarious in concept).

I am open to basically any valid Pathfinder solution to this problem, from published books. Psionics, Path of War, whatever, bring it on.

## Cover marked squares with rectangles with optimal tradeoff

I’m trying to solve a problem where I have to cover all the marked squares in an N x N grid with rectangles, trading off solution cost against computation time. These are the constraints:

• Rectangles may be of any width and height
• Rectangles may cover any cell, whether marked or not
• Rectangles must be axis-aligned with the grid (i.e., not diagonal)
• Rectangles cannot overlap
• Each rectangle costs 10, plus 5 per unit of area covered

Some possible solutions are shown in the example images (not reproduced here).

I would ideally like to create a random test grid and try out code for this. The grid can be any size with any number of marked squares, and the tradeoff I care about is solution cost vs. computation time.
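As a baseline for such experiments, one trivially valid (though usually far from optimal) cover uses a single 1-cell-high rectangle per maximal horizontal run of marked cells. A sketch under the stated cost model:

```python
def cover_rows(grid):
    """Baseline: cover each maximal horizontal run of marked cells in each row
    with one 1-cell-high rectangle. Returns (rectangles, total_cost), where a
    rectangle is (row, col_start, col_end) inclusive. Cost = 10 + 5 per cell."""
    rects = []
    for r, row in enumerate(grid):
        c = 0
        while c < len(row):
            if row[c]:
                start = c
                while c < len(row) and row[c]:
                    c += 1
                rects.append((r, start, c - 1))
            else:
                c += 1
    cost = sum(10 + 5 * (c1 - c0 + 1) for _, c0, c1 in rects)
    return rects, cost
```

Any candidate algorithm can then be judged by how much it beats this baseline's cost and at what computational price; merging vertically adjacent runs of equal width is an obvious first improvement, since each merge saves one fixed cost of 10.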

## What is the optimal way to maneuver into and out of the Healing Spirit spell to maximize healing?

The healing spirit spell states:

You call forth a nature spirit to soothe the wounded. The intangible spirit appears in a space that is a 5-foot cube you can see within range. The spirit looks like a transparent beast or fey (your choice).

Until the spell ends, whenever you or a creature you can see moves into the spirit’s space for the first time on a turn or starts its turn there, you can cause the spirit to restore 1d6 hit points to that creature (no action required). The spirit can’t heal constructs or undead.

As a bonus action on your turn, you can move the spirit up to 30 feet to a space you can see.

I’m wondering what the optimal way to maneuver into and out of the spirit’s space for maximized healing is. Note that the following question already exists:

• How does the spell Healing Spirit work?

There, some facts about how this sort of spell works were established:

1. Creating the spell on top of a creature does not restore any hit points to them
2. Moving the spell onto a creature with your bonus action does not restore any hit points to them
3. You don’t need to end your turn in the healing spirit’s space, you only have to move through it.

For the purposes of this question I am not interested in class features that modify healing like the Life Domain Cleric’s Discipline of Life and Supreme Healing features or the Warlock’s Gift of the Ever-Living Ones Eldritch Invocation. I am only interested in ways to maneuver into and out of the space most effectively.

Another way to think of this is the following: What is the maximum number of times a creature can be healed by this spell per round?

## Rules/Constraints:

1. From the section on “Moving Around Other Creatures”:

You can move through a nonhostile creature’s space. […] Remember that another creature’s space is difficult terrain for you.

Whether a creature is a friend or an enemy, you can’t willingly end your move in its space.

2. This is a party of four, and they do not have any mounts available unless they summon them.

3. You only “move into the spirit’s space” when you use your own movement to enter said space; being grappled and dragged into the space, being hurled into it by thunderwave, and being carried into it on a mount do not count.