## Big O notation, code time complexity

Does appending an element to a list inside a for loop take O(1) or O(n) time? And what is the time complexity of using "".join to turn that list into a string?
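A sketch of the usual answer, assuming CPython's dynamic-array lists: a single append is amortized O(1) (occasional reallocations are spread over many cheap appends), so the loop is O(n) overall, and `"".join` makes a single pass over every character, i.e. O(total output length):

```python
def build_string(parts_count):
    """Build a string from many pieces: O(n) appends + one O(n) join."""
    pieces = []
    for i in range(parts_count):
        pieces.append(str(i))   # amortized O(1) each, O(n) total
    return "".join(pieces)      # one pass over all characters: O(n)

print(build_string(5))  # -> "01234"
```

The pattern to avoid is `s += piece` in the loop, which in the worst case copies the whole accumulated string each time and degrades to O(n²).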

## What does the notation $[ i \neq k ]$ mean?

I can’t figure out what the notation $$[i \neq k ]$$ means. Here’s a bit of context:

The formula is: $$Pr[A_i^k = 1] = \frac{[i\neq k]}{|k-i| + 1} = \begin{cases} \frac{1}{k-i+1} & \text{if } i < k \\ 0 & \text{if } i = k \\ \frac{1}{i-k+1} & \text{if } i > k \end{cases}$$

and it is part of a chapter that proves the expected running time of operations on a randomised treap.

$$A_i^k$$ is an indicator variable defined as $$[ x_i \text{ is a proper ancestor of }x_k ]$$ where $$x_n$$ is the node with the $$n$$-th smallest search key. That probability comes up because $$\text{depth}(x_k) = \sum_{i=1}^{n} A_i^k$$ and $$\mathbf{E}[\text{depth}(x_k)] = \sum_{i=1}^nPr[A_i^k = 1]$$.
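Reading the case split literally, the bracket evaluates to 1 when its condition holds and 0 otherwise; that is the only reading consistent with the three cases. A small Python sketch of the expected-depth sum under that reading (exact arithmetic via `fractions`):

```python
from fractions import Fraction

def iverson(cond):
    """[cond]: 1 if the condition holds, 0 otherwise."""
    return 1 if cond else 0

def expected_depth(n, k):
    """E[depth(x_k)] = sum_{i=1}^{n} [i != k] / (|k - i| + 1)."""
    return sum(Fraction(iverson(i != k), abs(k - i) + 1)
               for i in range(1, n + 1))

print(expected_depth(4, 2))  # -> 4/3  (= 1/2 + 0 + 1/2 + 1/3)
```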

I have no access to the pages that explain the notation, since I’m studying from a PDF of a few pages taken from a book.

## Estimating the bit operations using big O notation

Using big-O notation, estimate, in terms of a simple function of $$n$$, the number of bit operations required to compute $$3^n$$ in binary.

I need some help with the above question. The number of bit operations required to multiply two k-bit numbers is $$O(k^2)$$. In the first step I am multiplying two 2-bit numbers, in the second step a 4-bit and a 2-bit number, and so on. So I feel the total number of bit operations will be $$O(k^2) + O(k^2 \cdot k) + \dots + O(k^{n-1} \cdot k)$$ with $$k = 2$$.

How will the above sum be estimated as a function of n?
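The sum above may be misleading. A quick sketch (assuming the intended algorithm is the straightforward one of n−1 successive multiplications by 3) shows that the operand sizes grow linearly rather than doubling, which points to an O(n²) total with schoolbook multiplication:

```python
# Sketch, assuming repeated multiplication by 3: the partial product
# 3^i has floor(i * log2(3)) + 1 (about 1.58 * i) bits, so operand
# sizes grow *linearly*, not geometrically. Multiplying a Theta(i)-bit
# number by the 2-bit constant 3 costs O(i) bit operations, and
# O(1) + O(2) + ... + O(n) = O(n^2) overall.
def bit_sizes(n):
    """Bit lengths of the partial products 3^1, 3^2, ..., 3^n."""
    sizes, p = [], 3
    for _ in range(n):
        sizes.append(p.bit_length())
        p *= 3
    return sizes

print(bit_sizes(5))  # -> [2, 4, 5, 7, 8]
```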

## Order notation subtractions in Fibonacci Heap

Can order notation on its own imply:

$$O(D(n)) + O(t(H)) - t(H) = O(D(n))$$

My guess is that it cannot, since if the constant c hidden in O(t(H)) is greater than 1, a (c - 1)t(H) term would survive the subtraction.

Well, this is actually the case, but there are underlying factors. This equation appears in the Fibonacci heap analysis in CLRS (p. 518). The justification for this step comes from the underlying potential function. According to the authors, “we can scale up the units of potential to dominate the constant hidden in $$O(t(H))$$”. I want to know how this happens, but don’t really know how to ask this complicated question.
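A sketch of how the scaling works, using constants $c_1, c_2$ of my own (not CLRS's notation): suppose the actual cost of extract-min is at most $c_1 D(n) + c_2\,t(H)$, and define the potential in scaled units, $\Phi(H) = c_2\,(t(H) + 2m(H))$. Since at most $D(n) + 1$ trees remain after consolidation, the change in potential satisfies $\Delta\Phi \le c_2\,(D(n) + 1 - t(H))$, and the $c_2\,t(H)$ terms cancel exactly:

$$\underbrace{c_1 D(n) + c_2\, t(H)}_{\text{actual cost}} + \underbrace{c_2\,\big(D(n) + 1 - t(H)\big)}_{\ge\ \Delta\Phi} = (c_1 + c_2)\, D(n) + c_2 = O(D(n))$$

This is what "scaling up the units of potential" buys: the coefficient on $t(H)$ inside the potential is chosen large enough to match the constant hidden in $O(t(H))$, so the subtraction removes that term entirely rather than leaving a $(c-1)\,t(H)$ residue.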

## Same computation order using postfix notation?

I’m trying to understand arithmetic using stacks, specifically converting infix notation to postfix notation. My question is how to convert an expression like 1 + (2 + 3) + (4 + 5) so that it computes in the exact order that the order of operations dictates.

Meaning: 1 + (2 + 3) + (4 + 5) becomes 1 + 5 + (4 + 5), then 1 + 5 + 9, then 6 + 9.

If you do:

Push 1, Push 2, Push 3, Add, Push 4, Push 5, Add

Where Add pops the top two operands off the stack, adds them, and then pushes the sum back on the stack.

Alternative notation: 1 2 3 + 4 5 +

How do you add 1 + (the sum of 2 and 3) next? If you do another Add, then the sum of 2 and 3 will be added to the sum of 4 and 5, because these are the top two on the stack. Is it impossible to do it in that exact order: each parenthesised group first, left to right, then the rest of the computation left to right?
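With only Push and a binary Add on the top two items, the 9 from 4 + 5 sits above both the 1 and the 5, so 1 + 5 cannot come next from `1 2 3 + 4 5 +`. One workaround (a sketch of one convention, not the only one) is to emit the postfix as `1 2 3 + + 4 5 + +`, so the 1 is consumed immediately after 2 + 3:

```python
def eval_postfix(expr):
    """Evaluate a space-separated postfix string with a stack.
    '+' pops the top two operands, adds them, pushes the sum."""
    stack = []
    for tok in expr.split():
        if tok == "+":
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        else:
            stack.append(int(tok))
    return stack[0]

# Order of additions here: 2+3, then 1+5, then 4+5, then 6+9.
print(eval_postfix("1 2 3 + + 4 5 + +"))  # -> 15
```

Note this performs 1 + (2 + 3) before (4 + 5). If you insist on both parenthesised groups being evaluated before either outer addition, a plain binary-Add stack machine cannot reach the buried 1 and 5, so that exact order needs extra stack operations (e.g. a swap).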

## Is grammar that describes an equation in prefix (Polish) notation always unambiguous?

I recently completed a problem in which I was asked to generate a parse tree for the expression $$+ \, 5 \, * \, 4 \, 3$$ using the following grammar and rightmost derivation:

$$Expr \rightarrow + \, Expr \, Expr \, | \, * \, Expr \, Expr \, | \, 0 \, | \, \dots \, | \, 9 \,$$

While I have no trouble taking the derivation and creating its parse tree, the question also asks whether the grammar is ambiguous. In the scope of what I’ve been taught, my only tool for proving ambiguity has been to find a different parse tree for whatever leftmost or rightmost derivation I have, thus proving multiple valid parses and ambiguity. However, I have not been told how to prove unambiguity. I am fairly confident that the grammar described above is unambiguous based partially on intuition, and partially because it’s designed for prefix notation. I tried to generate new trees for a given string to prove ambiguity, but since the operator is always leftmost, I could not find any string in which multiple parse trees could be created. If I am mistaken, please let me know.

My question is this: Is it possible for a grammar that describes strings using prefix (Polish) notation, such as the one above, to ever be ambiguous? My intuition tells me that it will always be unambiguous, but I was wondering why this might be the case.
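One way to see why the intuition holds for this particular grammar: the first token of every Expr uniquely determines which production was used, so a recursive-descent parse is forced at every step and there is never a second tree to build. A minimal Python sketch (single-character tokens; the function name is my own):

```python
def parse(tokens, pos=0):
    """Parse one Expr starting at tokens[pos]; return (tree, next_pos).
    The token at pos alone selects the production (+, *, or a digit),
    so the parse is completely determined at every step."""
    tok = tokens[pos]
    if tok in "+*":
        left, pos = parse(tokens, pos + 1)
        right, pos = parse(tokens, pos)
        return (tok, left, right), pos
    return int(tok), pos + 1

tree, end = parse("+5*43")
print(tree)  # -> ('+', 5, ('*', 4, 3))
```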

## Converting Context-sensitive grammar to set notation

I have this context-sensitive grammar:

S -> XSY | a | b
Xa -> aa
Xb -> bb
Y -> a

I know what it does, as it always ends in a and is preceded by 3 a's or 3 b's. I’m just not sure how to write this in set notation and would appreciate any help. Would it be something like $$L = \{a^n, b^m \mid n \geq 1,\ 0 < m \leq 3\}$$? (Sorry, I don’t know LaTeX.)
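One way to check a guess like this is to enumerate the short strings the grammar actually derives. A brute-force Python sketch (the rule encoding is my own) that expands sentential forms breadth-first, which is safe here because no rule shortens a form:

```python
from collections import deque

RULES = [("S", "XSY"), ("S", "a"), ("S", "b"),
         ("Xa", "aa"), ("Xb", "bb"), ("Y", "a")]

def language(limit):
    """All terminal strings of length <= limit derivable from S."""
    seen, out = {"S"}, set()
    queue = deque(["S"])
    while queue:
        form = queue.popleft()
        if len(form) > limit:
            continue                 # rules never shrink a form
        if form.islower():
            out.add(form)            # no nonterminals left
            continue
        for lhs, rhs in RULES:       # apply every rule at every position
            i = form.find(lhs)
            while i != -1:
                nxt = form[:i] + rhs + form[i + len(lhs):]
                if nxt not in seen:
                    seen.add(nxt)
                    queue.append(nxt)
                i = form.find(lhs, i + 1)
    return sorted(out, key=lambda s: (len(s), s))

print(language(5))
```

For lengths up to 5 this prints a, b, aaa, bba, aaaaa, bbbaa, which suggests the language is the odd-length strings of a's together with strings of the form b^(m+1) a^m, so the set above may need revising.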

## Is the usage of asymptotic notation for these algorithms correct?

So after reading a lot about the asymptotic analysis of algorithms and the use of Big O, Big Ω, and Θ, I’m trying to grasp how best to use them when describing algorithms and operations on data structures.

For example, there is a recommended website from which I got a screenshot describing Quicksort, and I’ve noticed a few issues that stand out to me based on what I’ve learnt.

1. Is it possible for all notations to represent “Best”, “Average”, and “Worst” cases, and if so, how? For example, for a “Worst” case, how can Big Ω represent the upper bound? The upper bound is tied to Big O.
2. I thought that in order to find Θ, Big O and Big Ω had to be the same? In the screenshot the “Best” case is n log(n) and the “Worst” case is n^2, so how can it be Θ(n log(n))?
3. Take, for instance, a hash table data structure: if you were to analyse the time complexity of inserting an element, would I be correct in saying you could interchangeably say Ω(1) and O(N), or conversely “Average case is O(1)” and “Worst case is O(N)”?
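A toy sketch (my own illustration, not from the screenshot) of why hash-table insertion is quoted as average O(1) but worst-case O(n): with a well-spread hash every bucket stays short, while an adversarial hash sends every key to one bucket, so a membership-checking insert scans a chain of length n:

```python
class ToyHashTable:
    """Chained hash table, just enough to show bucket growth."""
    def __init__(self, nbuckets=8, hash_fn=hash):
        self.buckets = [[] for _ in range(nbuckets)]
        self.hash_fn = hash_fn

    def insert(self, key):
        bucket = self.buckets[self.hash_fn(key) % len(self.buckets)]
        if key not in bucket:   # scan costs O(len(bucket))
            bucket.append(key)

good = ToyHashTable()                       # keys spread across buckets
bad = ToyHashTable(hash_fn=lambda k: 0)     # worst case: all keys collide
for k in range(100):
    good.insert(k)
    bad.insert(k)
print(max(len(b) for b in good.buckets))    # short chains: O(1)-ish inserts
print(len(bad.buckets[0]))                  # -> 100: one O(n) chain
```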

## Decoding Music in Staff Notation using ImageIdentify

I wrote this code fragment:

which shows an E (quarter) note in treble clef. Instead, Mathematica simply classified it as a “musical note”. Can we do better?

## Big O notation of $\binom{n}{n/2}$

What is the O-notation (or $$\Theta$$-notation) of $$\binom{n}{n/2}$$?
Can I use the Stirling approximation $$n! = \Theta\left(\sqrt{n}\left(\frac{n}{e}\right)^n\right)$$ and evaluate my function as $$\Theta\left(\frac{2^n}{\sqrt{n}}\right)$$?
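For reference, a sketch of the computation with Stirling's approximation substituted into $n!$ and into each $(n/2)!$:

$$\binom{n}{n/2} = \frac{n!}{\left(\left(\frac{n}{2}\right)!\right)^{2}} = \Theta\!\left(\frac{\sqrt{n}\,\left(\frac{n}{e}\right)^{n}}{\frac{n}{2}\left(\frac{n}{2e}\right)^{n}}\right) = \Theta\!\left(\frac{2^{n}}{\sqrt{n}}\right)$$

since $(n/e)^n \big/ \left(\frac{n}{2e}\right)^n = 2^n$ and $\sqrt{n}\big/\frac{n}{2} = \frac{2}{\sqrt{n}}$, so the proposed $\Theta\!\left(\frac{2^n}{\sqrt{n}}\right)$ bound does check out.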