The problem

What are the best practices for overcoming the “complexity paralysis” that can strike a GM trying to learn and immerse themselves in a setting that has a lot of intricate background information?

An example

For example, let’s say you’re a GM who decides to give a try to running Shadowrun for the first time in your life (no matter how experienced you are in other games and/or settings.) You settle on an edition — only to realize that the game has been around for decades, and it has so much background info it could fill half the Encyclopædia Britannica (well, not literally, obviously, but you get the point).

Hesitantly, you decide to get some focus, cut away a huge chunk of the looming material, and set your story in Redmond, Seattle. Sure, you get an official handbook for Seattle, and read through it (quickly, because gaming night is upon you) — but in doing so, you discover that there are tons of relevant, related sourcebooks (like… on magic, critters, the matrix, cyberware, etc.).

Sure, you can ignore the sourcebooks and go for a minimal approach… but even so, when you start designing your adventure (already feeling out of touch with the world of Shadowrun simply because of knowing how much you don’t know), you realize that besides the setting info, there are tons of in-game factors to consider, think through, and work into even the simplest campaign. Corporate politics and workings, magical aspects and relationships, shadow politics, gang politics and workings, the nuances of running the shadows, and so on. Of course, without reading as much as possible of the official sourcebooks, you’ll have no idea about the existence of a ton of factors — as if the sketchy stuff you learned from the core rulebook and the city sourcebook weren’t complicated enough.

And, by this time, you’re gripped by the title’s setting complexity paralysis. You’d love to run the game, but you feel you have no real idea how things should work, and no time to find a reliable entry point into the hypercomplicated, cross-referenced lore. And with that, you return to running something you know, be it a world you’ve been following since its inception or one that you’ve built yourself. You skip running Shadowrun.

Mind you, I’m definitely not looking for answers focused on Shadowrun. It’s just an example. (Sure, it’s okay if you use it as an example for general suggestions.) I could just as easily have brought up any number of other worlds: the Star Wars EU, WoD and nWoD, the Forgotten Realms, Warhammer FRP, and so on.

Summary

What I’m looking for are methods that help you, the GM, through this “setting complexity paralysis”: the disheartening, disappointing block that hits you when you face a huge amount of background material without which your game won’t feel authentic, just a bad copy, an alternate universe of an alternate universe.

Computational complexity in Boolean networks

A Boolean control network can be expressed as $$\left\{\begin{array}{l}{x_{1}(t+1)=f_{1}\left(x_{1}(t), \cdots, x_{n}(t), u_{1}(t), \cdots, u_{m}(t)\right),} \\ {x_{2}(t+1)=f_{2}\left(x_{1}(t), \cdots, x_{n}(t), u_{1}(t), \cdots, u_{m}(t)\right),} \\ {\vdots} \\ {x_{n}(t+1)=f_{n}\left(x_{1}(t), \cdots, x_{n}(t), u_{1}(t), \cdots, u_{m}(t)\right),} \end{array}\right.$$ where $$x_i,~i=1,\dots,n,$$ are state nodes and $$x_i(t)\in\{0,1\}$$ is the value of state node $$x_i$$ at time $$t$$; $$u_i,~i=1,\dots,m,$$ are control nodes and $$u_i(t)\in\{0,1\}$$ is the value of control node $$u_i$$ at time $$t$$; and $$f_i:\{0,1\}^{n+m}\rightarrow \{0,1\},~i=1,\dots,n,$$ are Boolean functions.

Consider the above system and denote its state space by $$\mathcal{X}=\{(x_1,\cdots,x_n)\mid x_i\in\{0,1\},\,i=1,\cdots,n\}.$$

Given an initial state $$x(0) = x^0\in \mathcal{X}$$ and a destination state $$x^d\in \mathcal{X}$$, the destination state $$x^d$$ is said to be reachable from $$x^0$$ at time $$s>0$$ if there exists a sequence of controls $$\{u(t)\mid t=0,1,\cdots,s-1\}$$, where $$u(t)=(u_1(t),\cdots,u_m(t))$$, such that the trajectory of the above system with initial value $$x^0$$ reaches $$x^d$$ at time $$t=s.$$

The above system is said to be controllable if, for any $$x^0,x^d\in \mathcal{X},$$ $$x^d$$ is reachable from $$x^0.$$

The $$M$$-step controllability problem is defined as follows.

Input: a Boolean control network with $$n$$ state variables $$x_1,\cdots,x_n,$$ $$m$$ controls $$u_1,\cdots,u_m,$$ and Boolean functions $$f_1,\cdots,f_n:\{0,1\}^{n+m}\rightarrow \{0,1\},$$ together with a constant $$M.$$

Problem: for every destination state $$x^d$$ and initial state $$x^0$$, does there exist a sequence of controls $$\{u(0),\cdots,u(M-1)\}$$ such that $$x^d$$ is reachable from $$x^0$$ at time $$M$$?
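Since the definitions above are purely combinatorial, they can be checked directly on very small networks. Below is a brute-force sketch (the function names are my own) that decides $$M$$-step controllability by enumerating every control value at every step; its cost is exponential in $$n+m$$, which is exactly why the complexity question is interesting.

```python
from itertools import product

def step(x, u, fs):
    """One synchronous update: x_i(t+1) = f_i(x(t), u(t))."""
    return tuple(f(x + u) for f in fs)

def m_step_controllable(n, m, fs, M):
    """Brute-force check: for every initial state x0, the set of states
    reachable in exactly M steps must be the whole state space.
    Only feasible for tiny n + m."""
    states = set(product((0, 1), repeat=n))
    controls = list(product((0, 1), repeat=m))
    for x0 in states:
        frontier = {x0}
        for _ in range(M):
            frontier = {step(x, u, fs) for x in frontier for u in controls}
        if frontier != states:  # some destination x^d is unreachable at time M
            return False
    return True
```

For instance, the one-node network $$x_1(t+1)=u_1(t)$$ is 1-step controllable, while $$x_1(t+1)=x_1(t)$$ (a control-independent update) is not.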

To solve this problem, I convert it into the following logical form:

$$\begin{split} &\forall x_1(0)\cdots\forall x_n(0)\,\forall x_1(M)\cdots\forall x_n(M)\,\exists u_1(0)\cdots\exists u_m(0)\cdots\exists u_1(M-1)\cdots\exists u_m(M-1)\\ &\exists x_1(1)\cdots\exists x_n(1)\cdots\exists x_1(M-1)\cdots\exists x_n(M-1)\\ &\quad\bigwedge_{i=1}^{n}\bigl(f_i(x(0),u(0))\leftrightarrow x_i(1)\bigr)\wedge \bigwedge_{i=1}^{n}\bigl(f_i(x(1),u(1))\leftrightarrow x_i(2)\bigr)\wedge\cdots\wedge \bigwedge_{i=1}^{n}\bigl(f_i(x(M-1),u(M-1))\leftrightarrow x_i(M)\bigr). \end{split}$$

From this expression I can prove the upper bound for the problem, but I have no idea how to prove that it is $$\Pi_2^p$$-hard.

Time complexity of counting pairs in an array with a double loop

I know that the following is O(n^2):

    int count = 0;
    for (int i = 0; i < array.length; i++) {
        for (int j = i + 1; j < array.length; j++) {
            if (array[i] == array[j]) {
                count = count + 1;
            }
        }
    }

But should something like count = count + 1; be taken into account when predicting or building up an exact time equation, for instance using sum notation?

$$n + (n-1) + (n-2) + (n-3) + \cdots + 1$$
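Since the inner loop starts at i+1, the if-test (and with it the constant-cost increment) executes exactly (n-1) + (n-2) + … + 1 = n(n-1)/2 times, so counting the increment changes only the constant factor, never the O(n^2) class. A small sketch (names are mine) that confirms the comparison count:

```python
def count_equal_pairs(arr):
    """Direct translation of the double loop; also counts how many
    times the if-test (and thus the constant-cost body) runs."""
    count = comparisons = 0
    n = len(arr)
    for i in range(n):
        for j in range(i + 1, n):
            comparisons += 1      # executed once per (i, j) pair
            if arr[i] == arr[j]:
                count = count + 1
    return count, comparisons

# For n elements, comparisons == n * (n - 1) // 2.
```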

In what cases is solving Binary Linear Program easy (i.e. **P** complexity)? I’m looking at scheduling problems in particular


The reason I’m asking is to understand if I can reformulate a scheduling problem I’m currently working on in such a way to guarantee finding the global optimum within reasonable time, so any advice in that direction is most welcome.

I was under the impression that, when solving a scheduling problem where a variable value of 1 represents that a particular (timeslot × person) pair is part of the schedule, a result containing non-integers means that multiple valid schedules exist and the result is a linear combination of them. To obtain a valid integer solution, one simply re-runs the algorithm from the current solution, with an additional constraint fixing one of the fractional variables to either 0 or 1.

Am I mistaken in this understanding? Is there a particular subset of (scheduling) problems where this would be a valid strategy? Any papers / textbook chapter suggestions are most welcome also.
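Binary linear programming is NP-hard in general, so the honest baseline is exhaustive search; what makes certain scheduling formulations easy is special structure, e.g. assignment-type constraint matrices are totally unimodular, which guarantees that the LP relaxation has integral optimal vertices. A toy brute-force sketch over a hypothetical 2-slot × 2-person instance (all names, costs, and constraints below are made up for illustration):

```python
from itertools import product

def solve_binary_lp(costs, feasible):
    """Enumerate all 0/1 assignments and keep the cheapest feasible one.
    Exponential in the number of variables -- a correctness baseline,
    not a practical method."""
    n = len(costs)
    best, best_x = None, None
    for x in product((0, 1), repeat=n):
        if feasible(x):
            cost = sum(c * xi for c, xi in zip(costs, x))
            if best is None or cost < best:
                best, best_x = cost, x
    return best, best_x

# Hypothetical instance: variable x[2*t + p] = 1 means person p covers
# timeslot t; each of the 2 slots must be covered by exactly one person.
costs = [3, 1, 2, 4]
feasible = lambda x: x[0] + x[1] == 1 and x[2] + x[3] == 1
```

On this instance the optimum assigns person 1 to slot 0 and person 0 to slot 1, at total cost 3.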

Complexity of “does a directed graph have at most one Hamiltonian cycle?”

I am studying complexity classes and I am confused about the complexity of the following problem: "Does a directed graph G have at most 1 Hamiltonian cycle?"

So, from my knowledge and understanding:

1. Existence of a Hamiltonian cycle is an NP-complete problem.
2. Existence of exactly $$k$$ ($$k>1$$) Hamiltonian cycles in a graph belongs to PSPACE.
3. The stated problem is the complement of "Does a directed graph G have at least 2 Hamiltonian cycles?"

However, I still fail to see which class the stated problem belongs to. Originally I thought it would belong to PSPACE, but not to any proper subclass of PSPACE. However, I consulted my teacher and she said that, while it belongs to PSPACE, it also belongs to some subset of PSPACE. Which subset would that be? Does it belong to co-NP, with the problem from 3. then belonging to NP? If yes, how do I prove it (for now I’m just guessing based on my teacher’s hint)? If not, where does it belong?

What is the complexity of $i^i$?

What is the complexity of the following algorithm in Big O:

    for (int i = 2; i < n; i = i^i) {
        // ...do something (here ^ is meant as exponentiation; in Java it is XOR)
    }

I’m not sure there is a standard notation for this type of complexity. My initial thought was as follows:

After $$k$$ iterations we want (using tetration?):

$${^{k}i} = n \implies k=\underbrace{\log\log\cdots\log}_{k}\, n\implies\mathcal{O}(\underbrace{\log\log\cdots\log}_{k}\, n)$$ (where the log function is applied $$k$$ times), but I’m not sure this is even a valid way of writing it. In any case, we get a complexity that itself includes $$k$$, which does not seem right to me.
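Reading `i = i^i` as exponentiation (note that in C/Java `^` is bitwise XOR, which would set `i` to 0 and the loop would never terminate), the loop variable explodes as 2, 4, 256, 256^256 (about 10^616), … so the iteration count grows extraordinarily slowly with n. A quick empirical sketch (function name is mine):

```python
def count_iterations(n):
    """Count loop-body executions of `for (i = 2; i < n; i = i**i)`,
    with ** as exponentiation."""
    i, count = 2, 0
    while i < n:
        count += 1
        i = i ** i   # i takes the values 2, 4, 256, 256**256, ...
    return count
```

Already for any n up to roughly 10^616 the loop body runs at most 3 times.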

I recently came across a function called the strawman algorithm, whose pseudocode looks like this:

    StrawmanSubarray(A):
        Initialize bestsum = A[0]
        For left = 0 to n-1:
            For right = left to n-1:
                Compute sum A[left] + . . . + A[right]
                If sum > bestsum: bestsum = sum

The time complexity is Θ(n^3), and I don’t quite get where the 3rd n comes from.
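The third factor of n comes from the line "Compute sum A[left] + . . . + A[right]": for each of the Θ(n^2) (left, right) pairs, that sum walks up to n elements. A direct Python translation (a sketch, with the costly step marked):

```python
def strawman_subarray(a):
    """Maximum-subarray by brute force, translated from the pseudocode."""
    best = a[0]
    for left in range(len(a)):                # Theta(n) choices of left
        for right in range(left, len(a)):     # Theta(n) choices of right
            s = sum(a[left:right + 1])        # the hidden third loop:
            if s > best:                      # up to n additions per pair
                best = s
    return best
```

Keeping a running sum instead (`s += a[right]` inside the inner loop) removes the third factor and gives Θ(n^2).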

What would be the time complexity of the DBSCAN algorithm?

What would be the time complexity of the DBSCAN algorithm if we use it for (sparse) graph clustering: O(n^2) or O(n log(n))?

Bubble Sort: Runtime complexity analysis like Cormen does

I’m trying to analyze Bubble Sort’s runtime in a manner similar to how Cormen does for Insertion Sort in "Introduction to Algorithms, 3rd ed." (shown below). I haven’t found a line-by-line analysis like Cormen’s for this algorithm online, only multiplied summations of the outer and inner loops.

For each line of bubblesort(A), I have worked out the run counts below. I’d appreciate any guidance on whether this analysis is correct or incorrect; if incorrect, how should it be analyzed? Also, I do not see the best case where $$T(n) = n$$, as the inner loop appears to always run completely. Maybe that only applies to "optimized" bubble sort, which is not shown here?

Times for each line, with constant cost $$c_k$$ for line $$k$$:

Line 1: $$c_1 n$$

Line 2: $$c_2 \sum_{j=2}^n j$$

Line 3: $$c_3 \sum_{j=2}^n (j-1)$$

Line 4: $$c_4 \sum_{j=2}^n (j-1)$$

Worst case:

$$T(n) = c_1 n + c_2 (n(n+1)/2 - 1) + c_3 (n(n-1)/2) + c_4 (n(n-1)/2)$$

$$T(n) = c_1 n + c_2 (n^2/2) + c_2 (n/2) - c_2 + c_3 (n^2/2) - c_3 (n/2) + c_4 (n^2/2) - c_4 (n/2)$$

$$T(n) = (c_2/2+c_3/2+c_4/2)\, n^2 + (c_1 + c_2/2 - c_3/2 - c_4/2)\, n - c_2$$

$$T(n) = an^2 + bn - c$$
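The derivation above can be checked empirically: the inner loop’s comparison count is always n(n-1)/2 regardless of input order, which is why this version has no Θ(n) best case; that only appears with a "swapped" early-exit flag. A sketch of Cormen-style BUBBLESORT with a counter (names are mine):

```python
def bubble_sort(a):
    """BUBBLESORT as in CLRS: bubble the smallest remaining element
    leftward.  Returns the sorted list and the comparison count, which
    is always n(n-1)/2."""
    a = list(a)
    comparisons = 0
    n = len(a)
    for i in range(n):                        # line 1 of the pseudocode
        for j in range(n - 1, i, -1):         # line 2: j from n down to i+2
            comparisons += 1
            if a[j] < a[j - 1]:               # line 3
                a[j], a[j - 1] = a[j - 1], a[j]  # line 4: exchange
    return a, comparisons
```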

Finding the Time Complexity – Worst Case (Big-Θ) – Array List, BST

Hi, I’m a bit confused about how to find the worst-case time complexity of the following operations in terms of big-Θ. I’ve figured out (1) and (2).

What is the worst-case time complexity, in terms of big-Θ, of each of these operations?

(1) insert an element in the array list = Θ(1)

(2) remove an element from the array list (e.g. remove an occurrence of the number 5) = Θ(n)

(3) remove the second element from the array list (i.e. the one in position 1)

(4) count the number of unique elements it contains (i.e. the number of elements excluding duplicates; e.g. [6,4,1,4,3] has 4 unique elements)

Suppose you have an initially empty array list with an underlying array of length 10. What is the length of the underlying array after:

(5) inserting 10 elements in the array list

(6) inserting 10 more elements in the array list (i.e. 20 elements in total)

(7) inserting 10000 more elements in the array list (i.e. 10020 elements in total)

What is the worst-case time complexity, in terms of big-Θ, of each of these operations on binary search trees?

(8) add an element to the tree (assuming that the tree is balanced)

(9) add an element to the tree (without assuming that the tree is balanced)

(10) find the largest element in the tree (assuming that the tree is balanced)

After each operation, we should still have a valid tree.
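For (5)-(7), the answer depends on the growth policy of the underlying array. A sketch assuming the common "double the array when full" policy; note that real implementations differ (Java's ArrayList, for example, grows by roughly 1.5x), so the exact figures are policy-dependent:

```python
def capacity_after(n_elements, initial_capacity=10):
    """Simulate an array list that doubles its underlying array
    whenever an insert finds it full (one common policy, assumed
    here for illustration)."""
    capacity, size = initial_capacity, 0
    for _ in range(n_elements):
        if size == capacity:
            capacity *= 2   # allocate a new array twice as long, copy over
        size += 1
    return capacity
```

Under this assumption the capacity after 10020 inserts is the first value of the form 10·2^k that reaches 10020.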