How to reduce the time complexity of this code?

I have a graph G=(V,E) and a list of nodes NODE ⊆ V. For each node in NODE, I want to find all of its neighbors and add an edge between any pair of those neighbors whose distance is greater than 2. Can anyone here please help me reduce the time complexity of this code:

```python
import random

import networkx as nx

G = nx.erdos_renyi_graph(30, 0.05)

# erdos_renyi_graph labels its nodes 0..29, so sample from that range
# (randint(1, 30) could return 30, which is not a node of the graph).
node = [random.randint(0, 29) for _ in range(5)]

for i in node:
    lst = list(G.neighbors(i))
    if len(lst) > 1:
        for j in range(len(lst)):
            for k in range(j + 1, len(lst)):
                if len(nx.shortest_path(G, lst[j], lst[k])) > 2:
                    G.add_edge(lst[j], lst[k])
```
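One observation that may help: since both endpoints in the inner check are neighbors of the same node i, their distance is at most 2, so the shortest-path call can be replaced by a constant-time adjacency test. Below is a sketch of that rewrite (my own, untested against the original; `connect_distant_neighbors` is a name I made up):

```python
from itertools import combinations
import random

import networkx as nx


def connect_distant_neighbors(G, nodes):
    """For each node, link every pair of its neighbors that are not
    already adjacent (i.e. whose distance is exactly 2)."""
    for i in nodes:
        # Snapshot the neighborhood before we start adding edges to it.
        neighbors = list(G.neighbors(i))
        for u, v in combinations(neighbors, 2):
            # Both u and v neighbor i, so dist(u, v) <= 2; a missing
            # direct edge means the shortest path has length exactly 2,
            # so G.has_edge replaces the O(V + E) shortest-path search.
            if not G.has_edge(u, v):
                G.add_edge(u, v)


G = nx.erdos_renyi_graph(30, 0.05, seed=1)
connect_distant_neighbors(G, random.sample(range(30), 5))
```

This turns the per-pair cost from a full shortest-path computation into an O(1) lookup, so the work per sampled node drops to O(deg(i)^2).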

KD-tree range search/query complexity

I’m currently reading up on the time complexity of the range search/query for an unbalanced KD-tree.

I see different articles stating that the complexity is O(sqrt(N)), where N is the number of points, and that this order of growth is proportional to the number of points a vertical line can intersect in a KD-tree. But if we create a KD-tree with 7 points, the maximum number of points a vertical line can intersect should be sqrt(7) ≈ 2.64; let’s assume it’s rounded up to 3. Drawing some “test” KD-trees, you can see this relation holds. But when you draw unbalanced KD-trees, it does not.

How would you go about analyzing the range search complexity of an unbalanced KD-tree?

My thoughts: an unbalanced KD-tree would be O(N), since the points/nodes are all aligned to one side (like a linked list).
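To make the linked-list intuition concrete, here is a small sketch (my own illustration, with hypothetical helper names): a KD-tree built by naive insertion, with no median splitting or rebalancing, degenerates into a chain when the points arrive pre-sorted, and a range query then visits all N nodes rather than ~sqrt(N):

```python
class Node:
    def __init__(self, point):
        self.point = point
        self.left = None
        self.right = None


def insert(root, point, depth=0):
    """Naive KD-tree insertion with no rebalancing."""
    if root is None:
        return Node(point)
    axis = depth % 2  # alternate x/y splitting in 2-D
    if point[axis] < root.point[axis]:
        root.left = insert(root.left, point, depth + 1)
    else:
        root.right = insert(root.right, point, depth + 1)
    return root


def range_count(root, lo, hi, depth=0, visited=None):
    """Count points in the box [lo, hi], tracking how many nodes we touch."""
    if root is None:
        return 0
    if visited is not None:
        visited.append(root.point)
    axis = depth % 2
    count = 1 if all(lo[a] <= root.point[a] <= hi[a] for a in (0, 1)) else 0
    if lo[axis] < root.point[axis]:   # query box overlaps the left half-plane
        count += range_count(root.left, lo, hi, depth + 1, visited)
    if hi[axis] >= root.point[axis]:  # query box overlaps the right half-plane
        count += range_count(root.right, lo, hi, depth + 1, visited)
    return count


root = None
for i in range(7):            # pre-sorted input -> right-leaning chain
    root = insert(root, (i, i))

visited = []
range_count(root, (0, 0), (6, 6), visited=visited)
print(len(visited))           # visits all 7 nodes, not ~sqrt(7) of them
```

The O(sqrt(N)) bound comes from the balanced-tree argument that a splitting line is crossed in only one of every two consecutive levels; once the tree is a chain, that argument no longer applies.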

How do you represent an r.e. complexity class with a list of TMs?

In the book ‘Theory of Computation’ by Dexter Kozen, page 313, Exercise 127, he says: “A set of total recursive functions is recursively enumerable (r.e.) if there exists an r.e. set of indices representing all and only functions in the set. For example, the complexity class P is r.e., because we can represent it by an r.e. list of TMs with polynomial-time clocks.” How do you do what he is talking about for any collection of languages that is r.e.? How do you represent an r.e. complexity class with a list of TMs? What is an example of an enumerator that does this for an arbitrary r.e. class C?
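The clocked-TM construction in the quote can be pictured as a dovetailed enumeration of pairs (a sketch of my own, not from Kozen): pair (i, k) stands for “machine $M_i$ run with an $n^k + k$ step clock on inputs of length $n$”. Every such clocked pair decides a language in P, and every language in P is decided by some pair, so enumerating the pairs yields an r.e. list of indices for exactly the class P:

```python
# Sketch (my own illustration): enumerate all (machine index, clock degree)
# pairs by walking the diagonals of N x N, so every pair appears after
# finitely many steps. Reading pair (i, k) as "M_i clocked to n^k + k
# steps" turns this into an enumeration of (indices for) the class P.

def clocked_indices():
    """Yield (i, k) pairs in a dovetailed order covering all of N x N."""
    diagonal = 0
    while True:
        for i in range(diagonal + 1):
            yield (i, diagonal - i)
        diagonal += 1


gen = clocked_indices()
first = [next(gen) for _ in range(6)]
print(first)  # [(0, 0), (0, 1), (1, 0), (0, 2), (1, 1), (2, 0)]
```

The same pattern works for any class defined by a computable resource bound attached to an arbitrary machine index; the open question above is what to do when no such syntactic "clock" is available.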

Time complexity of code running at most summation(N) times in a loop

Let’s say I have a JavaScript loop iterating over an input of size N. Let’s say all elements of the input are unique, so the includes method traverses the entire output array on each loop iteration:

```javascript
let out = []
for (const x of N) {
  if (!out.includes(x)) {
    out.push(x)
  }
}
```

The worst-case runtime of the code inside the loop, totalled over all iterations, seems to be not N · O(N) but the summation 1 + 2 + … + N, which is substantially faster.

Is this properly expressed as O(N^2) overall or is there a standard way to convey the faster asymptotic behavior given the fact that the output array is only of size N at the end of the loop?
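To see what the summation works out to, here is a small sketch (my own, in Python for brevity) counting the comparisons an includes-style scan performs on all-unique input. The i-th iteration scans an output of size i, so the total is 0 + 1 + … + (N−1) = N(N−1)/2: roughly half of N², but still Θ(N²), since big-O notation discards the constant factor:

```python
# Sketch (my own illustration): count the comparisons made by the
# includes-style dedup when every input element is unique, so each
# membership test scans the whole output array before failing.

def dedup_with_count(items):
    out = []
    comparisons = 0
    for x in items:
        comparisons += len(out)  # length of the linear scan, like includes
        if x not in out:         # the scan itself
            out.append(x)
    return out, comparisons


N = 100
out, comparisons = dedup_with_count(list(range(N)))
print(comparisons, N * (N - 1) // 2)  # both 4950
```

So O(N²) is the standard way to state it; the triangular sum only changes the hidden constant, not the growth order.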

Complexity of numerical differentiation for general nonlinear functions

In the classical optimization literature, numerical differentiation of functions is often mentioned as a computationally expensive step. For example, Quasi-Newton methods are presented as a way to avoid computing first and/or second derivatives when these are “too expensive” to compute.

What are the state-of-the-art approaches to computing derivatives, and what is their time complexity? If this is heavily problem-dependent, I am particularly interested in the computation of first- and second-order derivatives for nonlinear least-squares problems, specifically the part concerning first-order derivatives (Jacobians).
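For reference, here is a minimal sketch (my own, not from any particular solver) of the classic forward-difference Jacobian for a residual function r: Rⁿ → Rᵐ. It needs n + 1 evaluations of r, so the cost is O(n) function calls (O(m·n) arithmetic overall), which is exactly why analytic Jacobians or automatic differentiation are preferred when a single evaluation of r is already expensive:

```python
# Sketch (my own illustration): forward-difference Jacobian. One base
# evaluation plus one perturbed evaluation per input coordinate, giving
# n + 1 calls to r in total.

def numerical_jacobian(r, x, h=1e-6):
    base = r(x)                       # 1 evaluation
    m, n = len(base), len(x)
    J = [[0.0] * n for _ in range(m)]
    for j in range(n):                # n more evaluations
        xp = list(x)
        xp[j] += h
        rp = r(xp)
        for i in range(m):
            J[i][j] = (rp[i] - base[i]) / h
    return J


# Toy residuals: r(x) = (x0^2 - x1, x0 + x1); the exact Jacobian at
# (1, 2) is [[2, -1], [1, 1]].
def r(x):
    return [x[0] ** 2 - x[1], x[0] + x[1]]


J = numerical_jacobian(r, [1.0, 2.0])
```

Central differences halve the truncation error at the price of 2n evaluations; automatic differentiation removes the step-size issue entirely.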

BFS complexity: why is the complexity $O(nd)$?

I’m having a hard time understanding the reasoning in the solution of 18.7 in Elements of Programming Interviews (EPI):

Let $s$ and $t$ be strings and $D$ a dictionary, i.e., a set of strings. Define $s$ to produce $t$ if there exists a sequence of strings from the dictionary $P = \langle s_0, s_1, \ldots, s_{n-1} \rangle$ such that the first string is $s$, the last string is $t$, and adjacent strings have the same length and differ in exactly one character. The sequence $P$ is called a production sequence. For example, if the dictionary is {bat, cot, dog, dag, dot, cat}, then ⟨cat, cot, dot, dog⟩ is a valid production sequence.

Given a dictionary $D$ and two strings $s$ and $t$, write a program to determine if $s$ produces $t$. Assume that all characters are lowercase. If $s$ does produce $t$, output the length of a shortest production sequence; otherwise, output -1.

Hint: treat strings as vertices in an undirected graph, with an edge between $u$ and $v$ if and only if the corresponding strings differ in one character.

Here is the solution:

The number of vertices is $d$, the number of words in the dictionary. The number of edges is, in the worst case, $O(d^2)$. The time complexity is that of BFS, namely $O(d + d^2) = O(d^2)$. If the string length $n$ is less than $d$, then the maximum number of edges out of a vertex is $O(n)$, implying an $O(nd)$ bound.

So I agree that the number of vertices is $d$ and that the worst-case number of edges is $O(d^2)$. We know that the complexity of BFS is $O(V + E)$, hence $O(d + d^2) = O(d^2)$, though if we used a set to record visited vertices (or removed each vertex from the graph once visited), that should reduce BFS’s complexity to $O(d)$. But then things get funky.

If the string length $n$ is less than $d$, then the maximum number of edges out of a vertex is $O(n)$.

I don’t agree with this. Imagine we have 5 words in our dictionary, $\{ab, ac, ad, ae, af\}$, so $d = 5$ and $n = 2$. All these vertices are connected, and you can see that each vertex has 4 edges leaving it, which is more than $n$. You can have $26n$ possible edges leaving a vertex, but you only have $d$ vertices in the graph, so the number of edges leaving a single vertex should be $O(d)$.

I ultimately agree that the final complexity of the algorithm is $O(nd)$, but I calculated it simply from the fact that we can visit up to $d$ vertices (we use a visited set to prevent cycles), and for each visited vertex we iterate over the string of length $n$, trying each lowercase character at each position: $O(26nd) = O(nd)$.
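The counting argument above can be made concrete with a sketch (my own version of the approach described, not EPI’s code): BFS over the dictionary, generating each word’s neighbors by trying all 26 letters in each of the $n$ positions and testing membership in a set. Each dequeued word does $O(26n)$ work, and at most $d$ words are ever dequeued, giving the $O(26nd) = O(nd)$ bound without ever materializing the edge list:

```python
from collections import deque
from string import ascii_lowercase


def shortest_production_sequence(D, s, t):
    """Length of a shortest production sequence from s to t, or -1."""
    if s == t:
        return 1
    words = set(D) | {s}
    dist = {s: 1}               # doubles as the visited set
    queue = deque([s])
    while queue:
        w = queue.popleft()
        for i in range(len(w)):             # n positions ...
            for c in ascii_lowercase:       # ... times 26 candidate letters
                cand = w[:i] + c + w[i + 1:]
                if cand in words and cand not in dist:
                    dist[cand] = dist[w] + 1
                    if cand == t:
                        return dist[cand]
                    queue.append(cand)
    return -1


D = {"bat", "cot", "dog", "dag", "dot", "cat"}
print(shortest_production_sequence(D, "cat", "dog"))  # 4: cat,cot,dot,dog
```

Note this implicit-neighbor generation is exactly why the edge count never enters the analysis: the $26n$ candidates per word replace an adjacency list.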

Interested to hear what people think. Thanks 🙂

Are any two complexity classes equipped with an oracle for the halting problem equivalent?

Equip any two complexity classes $C$ and $B$ (to be more specific: any complexity classes that contain only standard decidable problems) with an oracle $O$ that solves the halting problem. Is $C^O = B^O$ for any $B$ and $C$ that contain only problems decidable by a normal TM (meaning a TM with no access to an oracle, i.e. only the empty oracle)?

Importance of space constructibility in the time-space relation in complexity

I am reading Arora-Barak’s Complexity book. In Chapter 4, they state and prove the following theorem.

[Theorem 4.2: for every space-constructible $S \colon \mathbb{N} \to \mathbb{N}$, $\mathbf{DTIME}(S(n)) \subseteq \mathbf{SPACE}(S(n)) \subseteq \mathbf{NSPACE}(S(n)) \subseteq \mathbf{DTIME}\big(2^{O(S(n))}\big)$.]

Why should $S$ be space constructible? Wouldn’t all three containments of the theorem hold even if $S$ were not space constructible?

My other question is about Remark 4.3. The book claims that if $S$ is space constructible, then you can make an $NSPACE(S(n))$ machine halt on every sequence of non-deterministic choices by keeping a counter up to $2^{O(S(n))}$. I am not sure how we can keep such a counter in $S(n)$ space. The space constructibility of $S$ implies that we can compute $S(n)$ in $O(S(n))$ space, not that we can count to $2^{O(S(n))}$ in $O(S(n))$ space.