How are Fighters Linear but Wizards Quadratic?

The phrase “Linear Fighters, Quadratic Wizards” gets bandied about a lot, but I’ve found I don’t have a good way to explain it to newer players.

The tier system post has some examples of how wizards are better than fighters in specific situations, but I don’t find the examples very satisfactory: the wizards in the examples mostly seem to rely on cheesy abuses that wouldn’t happen in an actual game. For example, the post says a wizard can kill a dragon using shivering touch from Frostburn, or using mindrape and love’s pain from the Book of Vile Darkness, but many games won’t allow those books.

In an actual play scenario, with no access to any expansion books, and assuming a group of characters that aren’t grossly evil: what sorts of trends make wizards (or, more generally, full spellcasters) more powerful than non-spellcasting classes? At what character level does this start to happen, and what spells available at that level are responsible for the change?

I’m interested in responses pertaining to both 3.5e and Pathfinder; if there are important differences between the two, I’d be interested in hearing about those as well.

Maximizing a nonnegative linear function over adjacency matrices with node degree constraints

Suppose $A$ is an $n$-by-$n$ symmetric matrix whose entries are all nonnegative, with $A_{ii} = 0$ for all $i$. We want to find an $n$-by-$n$ binary ($0/1$-valued) matrix $X$ that maximizes

$$\sum_{i,j} A_{ij} X_{ij}$$

under the constraints that

  1. $X$ is symmetric ($X^\top = X$);
  2. each row of $X$ has at most $k$ ones (the rest being zero);
  3. the total number of $1$s in $X$ is at most $m$.

Here $k \le n$ and $m \le n^2$. I can think of a dynamic-programming solution if 2 and 3 are the only conditions, but the symmetry in condition 1 makes it much harder. Is there a polynomial-time algorithm that achieves a constant-factor (multiplicative) approximation under conditions 1, 2, and 3? Ideally the constant is universal, not dependent on $n$, $k$, or $m$.

If not, is there any hope for the combination of conditions 1 and 2? The combination of 1 and 3 is trivial to handle.
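For small instances the problem can be solved exactly by enumeration, which is useful for sanity-checking any proposed approximation. Since $A_{ii} = 0$, diagonal ones only waste budget, so a symmetric $X$ can be identified with a set of edges $\{i, j\}$; each edge contributes two ones to $X$ and $2A_{ij}$ to the objective. (Note that for $k = 1$ the problem is essentially maximum-weight matching with an edge-count cap, which is polynomial.) A brute-force sketch:

```python
from itertools import combinations

def brute_force(A, k, m):
    """Exact answer on a tiny instance, by enumerating edge sets.
    Each chosen edge (i, j), i < j, puts ones at X[i][j] and X[j][i],
    so it adds 2 to the count of ones and 2*A[i][j] to the objective."""
    n = len(A)
    edges = [(i, j) for i in range(n) for j in range(i + 1, n)]
    best = 0
    for r in range(len(edges) + 1):
        for subset in combinations(edges, r):
            deg = [0] * n
            for i, j in subset:
                deg[i] += 1
                deg[j] += 1
            if max(deg, default=0) <= k and 2 * len(subset) <= m:
                best = max(best, 2 * sum(A[i][j] for i, j in subset))
    return best

A = [[0, 3, 1, 0],
     [3, 0, 2, 5],
     [1, 2, 0, 4],
     [0, 5, 4, 0]]
print(brute_force(A, k=1, m=4))  # 14: the matching {(0,1), (2,3)}
```

This is exponential, of course; it is only meant to pin down what an optimal $X$ looks like on toy cases.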

Thank you.

Linear Actuator 8000N

Input options: 24/12 V DC
Limit switches: Built in
Hall sensor: Optional
Inner tube material: Stainless steel
Duty cycle: 10%
Stroke options: 0-2000 mm
Noise: Less than 48 dB
Color: Black or grey
Operating temperature: -25 °C to +60 °C
Customized order example:
A - B - C - D - E - F - G
A: input voltage; B: load; C: no-load speed; D: stroke; E: retracted length; F: cable length; G: customer’s special requirement
Lmin = minimum retracted length: the length when the linear actuator is 100% retracted, at which point it touches the tail limit switch.
Lmax = maximum extended length: the length when the linear actuator is 100% extended, at which point it touches the head limit switch.
LA20-4000N retracted length (S = stroke in mm):
Stroke ≤ 110 mm: retracted length = 283 mm
Stroke > 110 mm: retracted length = S + 175 mm
LA20-6000N/8000N retracted length:
Stroke ≤ 110 mm: retracted length = 325 mm
Stroke > 110 mm: retracted length = S + 175 mm
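The retracted-length rules above are a simple piecewise formula; a small sketch that transcribes them directly (model names as given in the listing):

```python
def retracted_length(stroke_mm, model="LA20-8000N"):
    """Retracted length in mm, per the spec table above.
    Below a 110 mm stroke the length is fixed; above it,
    it is the stroke plus a 175 mm body allowance."""
    if model == "LA20-4000N":
        return 283 if stroke_mm <= 110 else stroke_mm + 175
    if model in ("LA20-6000N", "LA20-8000N"):
        return 325 if stroke_mm <= 110 else stroke_mm + 175
    raise ValueError("unknown model")

print(retracted_length(100, "LA20-4000N"))  # 283
print(retracted_length(300, "LA20-8000N"))  # 475
```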
[Charts in the original listing: speed vs. load and current vs. load]

Floyd’s cycle detection algorithm, why is it linear time, and how do you prove that tortoise and hare will meet?

I haven’t been able to find a full proof of Floyd’s cycle detection algorithm. All proofs that I have been able to find just explain why the distance from the start of the graph to the start of the cycle is equal to the distance that hasn’t been traveled within the cycle.

But 1) how do we prove that tortoise and hare will meet inside the cycle? And 2) how do we prove that this algorithm is linear time? Also, 3) when proving x mod L = z, where x is the distance from the start of the graph to the beginning of the cycle, z is the distance that hasn’t been covered within the cycle, and L is the length of the cycle, why do we assume that x>=z?
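For reference, here is a sketch of the algorithm I’m asking about (assuming the sequence x0, f(x0), f(f(x0)), … is eventually periodic). Regarding 1): once both pointers are inside the cycle, the hare gains exactly one position on the tortoise per step modulo L, so their gap shrinks to zero within L steps; that also bounds the running time by O(mu + L), hence 2):

```python
def floyd(f, x0):
    """Floyd's tortoise-and-hare. Returns (mu, lam): the index of the
    first node of the cycle and the cycle length."""
    # Phase 1: hare moves twice as fast; they meet inside the cycle.
    tortoise, hare = f(x0), f(f(x0))
    while tortoise != hare:
        tortoise, hare = f(tortoise), f(f(hare))
    # Phase 2: restart the tortoise at x0; stepping both once per turn,
    # they meet at the cycle's first node (this is where x mod L = z
    # style reasoning about the meeting point is used).
    mu, tortoise = 0, x0
    while tortoise != hare:
        tortoise, hare = f(tortoise), f(hare)
        mu += 1
    # Phase 3: walk the hare around once to measure the cycle length.
    lam, hare = 1, f(tortoise)
    while tortoise != hare:
        hare = f(hare)
        lam += 1
    return mu, lam

# f maps 0->1->2->3->4->2: the cycle starts at index 2 and has length 3.
print(floyd(lambda x: {0: 1, 1: 2, 2: 3, 3: 4, 4: 2}[x], 0))  # (2, 3)
```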

Linear algorithm to measure how sorted an array is

I’ve just attended an algorithms course, in which I’ve seen many sorting algorithms perform better or worse depending on how sorted the elements of an array already are. Typical examples are quicksort, which degrades to $O(n^2)$ time on already-sorted arrays (with a first-element pivot), and natural mergesort, which operates in linear time on sorted arrays. Conversely, quicksort performs better in case we are dealing with an array sorted from the highest to the lowest value.

My question is if there is a way to measure in linear time how sorted the array is, and then decide which algorithm is better to use.
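One common linear-time measure of sortedness is the number of maximal non-decreasing runs: 1 means fully sorted, n means strictly decreasing. This is exactly what adaptive sorts such as natural mergesort and Timsort exploit. A sketch:

```python
def runs(a):
    """Count maximal non-decreasing runs in a, in O(n) time.
    Each descent (a[i+1] < a[i]) starts a new run."""
    if not a:
        return 0
    r = 1
    for x, y in zip(a, a[1:]):
        if y < x:
            r += 1
    return r

print(runs([1, 2, 3, 4, 5]))  # 1  (sorted)
print(runs([5, 4, 3, 2, 1]))  # 5  (reverse sorted)
print(runs([2, 1, 4, 3, 6]))  # 3
```

A dispatcher could then, say, pick a run-merging sort when `runs(a)` is small and a general-purpose sort otherwise; other linear-time measures (e.g. sampling inversions) are possible too.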

Need help with adding elements to hashtable with linear probing

Here is an example problem which I am having trouble figuring out. The red text is the answer.

[image: the example problem, showing the table before and after resizing]

I get how the values are added before the hashtable is resized… that is common sense. (Insert 0 at index 3, 5 at index 1, etc.)

But when the table is resized, each element has a new position. HOW is 1’s new index 0? HOW is 5’s new index 7? How did each element of the array get assigned their new index upon table resize?

Any help would be appreciated.
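The key point is that on a resize, the old indices are thrown away entirely: every key is re-inserted from scratch using the new capacity, so h(key) changes and the probe sequences start over. The hash function in the pictured problem isn’t stated, so this sketch assumes the common choice h(key) = key % capacity:

```python
class LinearProbingTable:
    """Minimal open-addressing hash table with linear probing,
    assuming h(key) = key % capacity."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.slots = [None] * capacity

    def insert(self, key):
        i = key % self.capacity
        while self.slots[i] is not None:   # probe forward on collision
            i = (i + 1) % self.capacity
        self.slots[i] = key

    def resize(self, new_capacity):
        old_keys = [k for k in self.slots if k is not None]
        self.capacity = new_capacity
        self.slots = [None] * new_capacity
        for k in old_keys:                 # old indices are irrelevant now
            self.insert(k)

t = LinearProbingTable(5)
for k in (0, 5, 6):
    t.insert(k)
print(t.slots)   # [0, 5, 6, None, None]  (5 and 6 were bumped by probing)
t.resize(11)
print(t.slots)   # [0, None, None, None, None, 5, 6, None, None, None, None]
```

Notice that 5 moved from index 1 to index 5: with capacity 11 it no longer collides with 0, so it lands at its home slot 5 % 11 = 5. The same mechanism explains the new positions in your example, just with whatever hash function that exercise uses.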


A linear time algorithm to find the array of indices of bigger numbers

Suppose you have an array nums of integers, unsorted, containing the values from 1 to n. You have a second array, call it firstBigger, initially empty, which you want to populate with integers such that firstBigger[i] contains j if j is the least index such that i < j and nums[j] > nums[i]. A brute force search of nums runs in n^2 time. I need to find a linear time search.

For example, if nums = [4,6,3,2,1,7,5] and we use 1-indexing, then firstBigger = [2,6,6,6,6,None,None].

I have considered first computing the array of differences in linear time. Certainly anywhere in the array of differences with a positive value, this indicates a place in firstBigger where it should store i+1. But I’m not seeing how to fill any other coordinates efficiently.

I might have gotten close when I started analyzing the array from end to start. The nth (last) coordinate of firstBigger is going to be None, and the (n-1)th has to be compared directly with the nth. As we proceed backward, if the number at i is smaller than the one at i+1, we make this assignment. Otherwise we look up the first number bigger than the one at i+1; if that is still too small, we again look up the first number bigger than that.

On average this does better than the naive algorithm, but in the worst case it’s still n^2. I can’t see any room to optimize this.
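For comparison, a single left-to-right pass with a stack of still-unanswered indices runs in O(n), because each index is pushed and popped at most once (this is the standard “monotonic stack” technique, not one of the attempts described above). A sketch:

```python
def first_bigger(nums):
    """first_bigger[i] = 1-based index of the first later element
    larger than nums[i], or None. Single pass, O(n) amortized."""
    res = [None] * len(nums)
    stack = []  # indices whose first-bigger element hasn't appeared yet
    for j, x in enumerate(nums):
        while stack and nums[stack[-1]] < x:
            res[stack.pop()] = j + 1   # x is the first bigger value
        stack.append(j)
    return res

print(first_bigger([4, 6, 3, 2, 1, 7, 5]))
# [2, 6, 6, 6, 6, None, None]
```

The stack holds a decreasing subsequence of values, so each arriving element resolves exactly the suffix of pending indices it dominates.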

Time complexity of linear programming

I have a linear program with $n$ variables, $m$ constraints and $O(nm)$ total bit length (the constraint matrix contains only zeros and ones). The time complexity of solving the linear program is known to be polynomial, $O(n^a m^b)$ for some integers $a$ and $b$. What is the best known pair $a, b$ for which the value of $a$ is the minimal possible?