Proving a language is not regular using the Myhill-Nerode theorem

Let $L = \{\alpha\in\{a,b,c\}^{*} \mid \alpha \text{ is a palindrome}\}$. Show that $L$ is not regular using the Myhill-Nerode relation.

I don’t know how to show that $L$ has infinitely many equivalence classes given that $\alpha$ is a palindrome. I tried something like the following, but I don’t know if it’s correct:

$\alpha \equiv_{L} \beta \iff (\forall k \in \mathbb{N}:\ \alpha (aba)^k \in L \iff \beta (aba)^k \in L)$, which would imply that for every $k$ there is an equivalence class, because of the repetition of $aba$ $k$ times.
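As a quick brute-force sanity check of the general idea (not a proof; the choice of prefixes $a^i b$ and suffixes $a^i$ is just one candidate family), a small script can confirm that these prefixes are pairwise distinguishable with respect to $L$:

```python
# Quick brute-force sanity check (not a proof): for L = {palindromes over
# {a, b, c}}, the prefixes a^i b (i = 0, 1, 2, ...) are pairwise
# distinguishable, e.g. by suffixes of the form a^i, so they would fall
# into distinct Myhill-Nerode classes.

def in_L(w: str) -> bool:
    """Membership test for L (is w a palindrome?)."""
    return w == w[::-1]

def distinguishable(x: str, y: str, suffixes) -> bool:
    """True if some suffix z separates x and y with respect to L."""
    return any(in_L(x + z) != in_L(y + z) for z in suffixes)

prefixes = ["a" * i + "b" for i in range(6)]
suffixes = ["a" * i for i in range(6)]
for i, x in enumerate(prefixes):
    for y in prefixes[i + 1:]:
        assert distinguishable(x, y, suffixes)
print("all tested prefixes are pairwise distinguishable")
```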

Iterative-substitution method yields a different solution for T(n)=3T(n/8)+n than expected from the master theorem

I’d like to guess the running time of the recurrence $T(n)=3T(n/8)+n$ using the iterative-substitution method. Using the master theorem, I can verify that the running time is $O(n)$. Using the substitution method, however, I arrive at a different answer than expected.

$T(n)=3T(n/8)+n\ =3(3T(n/8^2)+n)+n=3^2T(n/8^2)+3n+n\ =3^2(3T(n/8^3)+n)+3n+n=3^3T(n/8^3)+3^2n+3n+n$

We can see the pattern: $3^iT(n/8^i)+n\cdot\frac{3^i -1}{2}$

The recursion finishes when $i=\log_8 n$.

Substituting $i$ into the discovered pattern: $3^{\log_8 n}T(1)+n\cdot\frac{3^{\log_8 n} -1}{2} = n^{\log_8 3}\cdot c+0.5n\cdot n^{\log_8 3} - 0.5n = n^{\log_8 3}\cdot c+ 0.5n^{\log_8 3+1}-0.5n \in O(n^{1.52})$.

What am I doing wrong? Why is my answer not $O(n)$?
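For reference, a small numeric check (with an assumed base case $T(1)=1$ and integer division, which are my own choices) is consistent with the $O(n)$ answer from the master theorem: the ratio $T(n)/n$ stays bounded.

```python
# Numeric sanity check (not a proof): evaluate T(n) = 3*T(n//8) + n with an
# assumed base case T(1) = 1 and print the ratio T(n)/n, which stays bounded,
# consistent with the Theta(n) bound from the master theorem.
from functools import lru_cache

@lru_cache(maxsize=None)
def T(n: int) -> int:
    if n <= 1:
        return 1
    return 3 * T(n // 8) + n

for k in range(1, 8):
    n = 8 ** k
    print(n, T(n), round(T(n) / n, 3))
```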

$\Omega(m)$ and $O(m)$ meaning in a theorem proof about dynamic array complexity

My algorithms and data structures book states that a dynamic array is maintained by the following procedure:

Let $d$ be the length of an array $a$ and $n$ the number of elements stored in it. Each time an insertion is performed, if there is enough space ($n+1<d$), $n$ grows by 1; otherwise, if $n=d$, we allocate an array $b$ of size $2d$, update $d$ to $2d$, copy all elements into $b$, and then set $a=b$. Similarly, every time a deletion is performed, $n$ decreases by 1; when $n=d/4$, we allocate an array $b$ of size $d/2$, update $d$ to $d/2$, copy all elements of the array $a$ into the array $b$, and set $a=b$.

The pseudocode of the functions that perform the array doubling and halving is shown in the picture.

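Roughly, the policy described above looks like the following sketch (my own reconstruction in Python, not the book's pseudocode):

```python
# Rough reconstruction of the growth/shrink policy described above
# (not the book's pseudocode): the capacity d doubles when the array is full
# and halves when only a quarter of it is in use.

class DynamicArray:
    def __init__(self):
        self.d = 1                  # current capacity
        self.n = 0                  # number of stored elements
        self.a = [None] * self.d

    def _resize(self, new_d: int) -> None:
        b = [None] * new_d          # allocate array b of size new_d
        b[:self.n] = self.a[:self.n]  # copy all n elements: O(m) work
        self.a, self.d = b, new_d

    def insert(self, x) -> None:
        if self.n == self.d:        # full: double the capacity
            self._resize(2 * self.d)
        self.a[self.n] = x
        self.n += 1

    def delete(self) -> None:
        if self.n == 0:
            raise IndexError("delete from empty array")
        self.n -= 1
        self.a[self.n] = None
        if self.d > 1 and self.n == self.d // 4:  # quarter full: halve
            self._resize(self.d // 2)
```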

Then the following theorem is proved:

The execution of $n$ insertion or deletion operations on a dynamic array requires $O(n)$ time; moreover, $d=O(n)$.

Proof: Let $d$ be the length of an array and $n$ the number of elements stored in it. After doubling the array's size there are $m=d+1$ elements in the new array of $2d$ positions. We need at least $m-1$ insertion requests for a new doubling and at least $m/2$ deletion requests for a halving. Similarly, after a halving, there are $m=d/4$ elements in the new array of $d/2$ positions, for which at least $m+1$ insertion requests are needed for a new doubling and at least $m/2$ deletion requests are needed for a new halving.

In any case, the $O(m)$ time required for the resizing can be virtually distributed among the $\Omega(m)$ operations that caused it (starting from the last resizing). Finally, supposing that when the array is created we have $n=1$ and $d=1$, the number of elements of the array is always at least one fourth of its size, so $d=O(n)$.
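If I try to write out the distribution step as an inequality, I get something like

$$\frac{\text{resize cost}}{\text{operations charged}} \le \frac{c\,m}{m/2} = 2c = O(1)\ \text{per operation,}$$

but I’m not sure whether this is exactly what the $\Omega(m)$ is meant to capture.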

I am a beginner with this $\Omega(m)$ and $O(m)$ notation; I know the mathematical definitions and I’ve been reading a lot about them, but I am not able to understand them well in context. I know that big O should indicate an upper bound on the time complexity of an algorithm and big Omega a lower bound.

I don’t understand the last paragraph where they use these symbols. Why is the time cost $O(m)$? Why do they use $\Omega(m)$ for the number of operations that caused the resizing (and, specifically, which operations are they referring to)? And why do they write $d=O(n)$, and what should I understand from it? Any help will be greatly appreciated.

How to use the master theorem to solve $T(n)=4T(n/8) + \sqrt n (\log_2 n)^2$

I want to solve the following using the master theorem.

$T(n)=4T(n/8) + \sqrt n\, (\log_2 n)^2$

I have: $a=4$, $b=8$, $f(n)=\sqrt n\, (\log_2 n)^2$.

I calculate $n^{\log_b a} = n^{\log_8 4} = n^{2/3}$.

I compare $f(n)=\sqrt n\, (\log_2 n)^2$ with $n^{2/3}$, and see that $f(n)$ grows faster than $n^{2/3}$.

So, using the master theorem, rule 3, $f(n) \in \Omega(n^{2/3 + c})$ for some constant $c>0$, which means $T(n) \in \Theta(f(n))$.

---

But I guess this isn’t correct, according to the master theorem calculator I used to check my answer: https://www.nayuki.io/page/master-theorem-solver-javascript

Where’s the mistake?
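For what it’s worth, here is a quick numeric look at the comparison I made above (my own check, not a proof), printing the ratio $f(n)/n^{2/3}$ for growing $n$:

```python
# Quick numeric check (not a proof): print the ratio of
# f(n) = sqrt(n) * (log2 n)^2 to n^(2/3) for growing n.
# The printed values shrink toward 0 over this range of n.
import math

for n in [2**20, 2**40, 2**80, 2**160]:
    f = math.sqrt(n) * math.log2(n) ** 2
    g = n ** (2 / 3)
    print(f"n = 2**{n.bit_length() - 1}: f/g = {f / g:.6f}")
```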

Minimization of DFA, Table Filling Method or Myhill-Nerode Theorem

I want to know what to do when the DFA has merged states. For example, say my DFA starts at q0 and, on an ‘a’, goes to a merged state (q0,q1). How do I do the table-filling method then?

I tried renaming the merged state, for example turning (q0,q1) into q4. But then the issue comes when I try to do the table filling: with some inputs the automaton gives me only one state, sometimes the final state. So do I mark it?

For example, I renamed (q0,q1) to q4. Now, in the table, I want to work out the marking for the pair q0 and q4. With an ‘a’ I get q4,q4 for both of the states, and with a ‘b’ I get nothing from q0 and q2 (a final state) from q4. I know that since the pair q4,q4 does not exist in the table, I do not mark it. But with ‘b’ I only get one state, which happens to be the final state. Do I mark the q0,q4 entry of the table, or do I leave that blank as well?
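To make the bookkeeping concrete, here is a small table-filling sketch on a made-up, complete DFA (the states, transitions, and the trap state qT below are hypothetical, not the DFA from the question; adding a trap state is just one common way of dealing with missing transitions before filling the table):

```python
# Sketch of the table-filling (marking) procedure on a hypothetical,
# complete DFA over {a, b}; q4 plays the role of a renamed merged state
# and qT is a trap state standing in for missing transitions.

states = ["q0", "q2", "q4", "qT"]
finals = {"q2"}
delta = {
    ("q0", "a"): "q4", ("q0", "b"): "qT",
    ("q4", "a"): "q4", ("q4", "b"): "q2",
    ("q2", "a"): "q2", ("q2", "b"): "q2",
    ("qT", "a"): "qT", ("qT", "b"): "qT",
}

pairs = [(p, q) for i, p in enumerate(states) for q in states[i + 1:]]
# Step 1: mark every pair where exactly one of the two states is accepting.
marked = {(p, q) for p, q in pairs if (p in finals) != (q in finals)}

# Step 2: repeatedly mark (p, q) if some symbol sends it to a marked pair.
changed = True
while changed:
    changed = False
    for p, q in pairs:
        if (p, q) in marked:
            continue
        for c in "ab":
            succ = tuple(sorted((delta[(p, c)], delta[(q, c)])))
            if succ[0] != succ[1] and succ in marked:
                marked.add((p, q))
                changed = True
                break

print("marked (distinguishable) pairs:", marked)
print("unmarked pairs can be merged:", [pq for pq in pairs if pq not in marked])
```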

Standardisation Theorem versus Leftmost reduction Theorem

According to Chris Hankin in his book (Lambda Calculi: A Guide for Computer Scientists), a reduction sequence $\sigma: M_0 \to^{\Delta_0} M_1 \to^{\Delta_1} M_2 \to^{\Delta_2}\ldots$ is a standard reduction if, for any pair $(\Delta_i, \Delta_{i+1})$, $\Delta_{i+1}$ is not a residual of a redex to the left of $\Delta_{i}$ relative to the given reduction from $M_{i}$ to $M_{i+1}$. The Standardization Theorem says that if a term $A$ $\beta$-reduces to $B$, then there is a standard path from $A$ to $B$. On the other hand, in the leftmost strategy, the leftmost (outermost) redex is always reduced first. The Leftmost Reduction Theorem says that if a term $A$ $\beta$-reduces to $B$, and $B$ is in $\beta$-normal form, then the leftmost strategy reaches $B$. My questions are:

1- I’m not seeing the difference between the Standardization and Leftmost reductions; to me, they seem to say the same thing (see the small example after these questions).

2- Why is the Leftmost Reduction Theorem a particular case of the Standardization Theorem?
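To make my confusion concrete, here is a tiny example I tried to compare the two notions on (I may well be misreading the definitions):

$$(\lambda x.\,z)\,((\lambda y.\,y)\,w) \;\to_\beta\; (\lambda x.\,z)\,w$$

This single step contracts the inner redex rather than the leftmost one, yet as a one-step sequence it seems to count as a standard reduction, and the Leftmost Reduction Theorem says nothing about it because $(\lambda x.\,z)\,w$ is not a $\beta$-normal form.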

Solve $T(n) = 16T(n/2)+[n\log_2 {n}]^4$ using Master Theorem

The problem is to solve this using the Master Theorem:

$$T(n) = 16T(n/2)+[n\log_2 {n}]^4$$

My attempt: I said that the master theorem does not apply because the ratio $f/g = [\log_2 {n}]^4$ (where $g = n^{\log_b a} = n^4$), and this has no polynomial factor. But when I looked at similar problems online, I saw some people apply case #2 here.

The version of the Master Theorem we use is as follows:

Master Theorem

Our book also notes that in the first case not only must $f$ be smaller than $n^{\log_b{a}}$, it must be polynomially smaller; that is, $f$ must be asymptotically smaller than $n^{\log_b{a}}$ by a factor of $n^\varepsilon$ for some constant $\varepsilon>0$. Similarly, in the third case, not only must $f$ be larger than $n^{\log_b{a}}$, it must be polynomially larger.
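As a rough numeric illustration of the "polynomially larger/smaller" condition (my own check with an arbitrary $\varepsilon=0.1$, not a proof), one can look at $\log_2$ of the ratio $[\log_2 n]^4 / n^{\varepsilon}$ as $n = 2^L$ grows; it eventually becomes very negative, i.e. a polylogarithmic gap is not a polynomial gap:

```python
# Numeric illustration (not a proof): is (log2 n)^4 at least n^eps for some
# fixed eps > 0?  Working with L = log2(n) in log-space avoids overflow:
#   log2( (log2 n)^4 / n^eps ) = 4*log2(L) - eps*L
# which eventually becomes (very) negative, so the gap is only polylogarithmic.
import math

eps = 0.1  # an arbitrary choice of epsilon for the experiment
for L in [10, 100, 400, 1000, 10000]:  # n = 2**L
    print(f"L = {L:6d}: 4*log2(L) - eps*L = {4 * math.log2(L) - eps * L:10.2f}")
```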

Solve the following recurrences using the Master Theorem

Solve the following:

  1. $ T(n) = 2T(n/2)+\log_2 {n}$

  2. $ T(n) = 16T(n/2)+[n\log_2 {n}]^4$

  3. $ T(n) = 7T(n/3)+n$

For number 1 I am stuck because I want to say the master theorem doesn’t apply since $\log_2{n}$ cannot be written as a polynomial, but the ratio $f/g$ is $n/\log_2{n}$, which has an $n$ term in it. And $f$ is clearly smaller than $g$. So, does case 1 apply or not? I have found very mixed results online.

For number 2 I said that the master theorem does not apply because the ratio $f/g = [\log_2 {n}]^4$. But when I looked at similar problems online, I saw some people apply case #2 here.

For number 3, I said it was case #1 of the Master Theorem, so the answer is $T(n)=\Theta(n^{\log_3 {7}})$.
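For reference, here are the $n^{\log_b a}$ benchmarks for the three recurrences (a quick check of my own):

```python
# Quick check of the n^(log_b a) benchmark exponent for each recurrence:
# (a, b) = (2, 2), (16, 2), (7, 3) give n^1.000, n^4.000, n^1.771.
import math

for a, b in [(2, 2), (16, 2), (7, 3)]:
    print(f"a = {a:2d}, b = {b}: n^(log_b a) = n^{math.log(a, b):.3f}")
```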

Applying the Parameter Theorem to show that a function is not computable


Show that $g: \mathbb{N} \to \mathbb{N}$ such that $$g(x)=\begin{cases} 1 & \text{if halt}(2833,x) \\ 0 & \text{otherwise} \end{cases}$$ is not computable.

We know that

$$g(x)=\begin{cases} 1 & \text{if }\Phi_x(2833)\downarrow \\ 0 & \text{if }\Phi_x(2833)\uparrow \end{cases}$$

How can I use the parameter theorem to reduce $ g$ to $ \text{halt}(x,x)$ ? I’m very confused.