Union of every language within group of decidable languages is also decidable?

So I was trying to solve following exercise:

Let $ (L_{i})_{i \in \mathbb{N}}$ be a family of decidable languages – this means that every $ L_{i}$ is decidable. Then $ \cup_{i \in \mathbb{N}}L_{i} $ is also decidable. Right or wrong?

The solution states that this statement is wrong, because we can set $ L_{i}:=\{f(i)\}$ with a surjective $ f : \mathbb{N} \rightarrow H_{0}$ and therefore obtain $ \cup_{i \in \mathbb{N}}L_{i} = H_{0}$ . At the same time, each language $ L_{i}$ is still decidable, for every $ i \in \mathbb{N}$ , because it is finite.
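To make the counterexample concrete, here is a small sketch (my own illustration, not part of the exercise): a decider for any singleton language just compares its input against the one stored word, and such a decider exists no matter how that word was chosen.

```python
def make_decider(w0):
    """Return a decider for the singleton language {w0}.

    The decider exists for every fixed word w0: it simply compares
    its input against w0. Nothing about how w0 was produced (e.g. as
    f(i) for some i) matters for decidability.
    """
    def decide(w):
        return w == w0
    return decide

# hypothetical example word standing in for some f(i)
decide = make_decider("0101")
```

The subtlety is that decidability of each $ L_i$ only requires such a decider to *exist*; it does not require $ f$ itself to be computable.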

However, I still struggle to understand how exactly a language $ \{f(i)\}$ can be decidable when $ f : \mathbb{N} \rightarrow H_{0}$ , because I thought that $ H_{0}$ was not decidable.

Thanks in advance!!

Difficulty with a few steps in the proof of "Amortized cost of a $\text{Find-Set}$ operation is $\Theta(\alpha(n))$", assuming union by rank and path compression

I was reading the section on data structures for disjoint sets in the text CLRS, and I faced difficulty understanding a few steps in the proof of the lemma given in the question title. Here we assume we follow the union-by-rank and path-compression heuristics. Before we move on to our target lemma, a few definitions and a lemma are required as prerequisites.

The prerequisites:

$$level(x)=\max\{k : rank[p[x]]\geq A_k(rank[x])\}$$

$$iter(x)=\max\{i : rank[p[x]]\geq A_{level(x)}^{(i)}(rank[x])\}$$

$$\phi_q(x) = \begin{cases} \alpha(n)\cdot rank[x] &\quad\text{if $x$ is a root or $rank[x]=0$,}\\ (\alpha(n)-level(x))\cdot rank[x]-iter(x) &\quad\text{if $x$ is not a root and $rank[x]\geq 1$.} \end{cases}$$

Lemma 21.9: Let $ x$ be a node that is not a root, and suppose that the $ q$ th operation is either a $ \text{Link}$ or $ \text{Find-Set}$ . Then after the $ q$ th operation, $ \phi_q(x) \leq \phi_{q-1}(x)$ . Moreover, if $ rank[x] \geq 1$ and either $ level(x)$ or $ iter(x)$ changes due to the $ q$ th operation, then $ \phi_q(x) \leq \phi_{q-1}(x) - 1$ . That is, $ x$ 's potential cannot increase, and if it has positive rank and either $ level(x)$ or $ iter(x)$ changes, then $ x$ 's potential drops by at least $ 1$ .

Now, in the proof below, I mark the steps where I face problems.

Lemma 21.12: The amortized cost of each $ \text{Find-Set}$ operation is $ \Theta(\alpha(n))$ .

Proof: Suppose that the $ q$ th operation is a $ \text{Find-Set}$ and that the find path contains $ s$ nodes. The actual cost of the $ \text{Find-Set}$ operation is $ O(s)$ . We shall show that no node's potential increases due to the $ \text{Find-Set}$ and that at least $ \max\{0, s - (\alpha(n) + 2)\}$ nodes on the find path have their potential decrease by at least $ 1$ .

To see that no node's potential increases, we first appeal to Lemma 21.9 for all nodes other than the root. If $ x$ is the root, then its potential is $ \alpha(n) \cdot rank[x]$ , which does not change.

Now we show that at least $ \max\{0, s - (\alpha(n) + 2)\}$ nodes have their potential decrease by at least $ 1$ . Let $ x$ be a node on the find path such that $ rank[x] > 0$ and $ x$ is followed somewhere on the find path by another node $ y$ that is not a root, where $ level(y) = level(x)$ just before the $ \text{Find-Set}$ operation. (Node $ y$ need not immediately follow $ x$ on the find path.) $ \require{color}\colorbox{yellow}{All but at most $ \alpha(n) + 2$ nodes on the find path satisfy these constraints on $ x$ . Those that do not satisfy them are the first node on the find path (if it has rank $ 0$ ), the last node on the path (i.e., the root), and the last node $ w$ on the path for which $ level(w) = k$ , for each $ k = 0, 1, 2, \dots, \alpha(n) - 1$ .}$

Let us fix such a node $ x$ , and we shall show that $ x$ ‘s potential decreases by at least $ 1$ . Let $ k = level(x) = level(y)$ . Just prior to the path compression caused by the $ \text{Find-Set}$ , we have

$ rank[p[x]] \geq A_k^{(iter(x))}(rank[x])$ (by definition of $ iter(x)$ ),

$ rank[p[y]] \geq A_k(rank[y])$ (by definition of $ level(y)$ ),

$ rank[y] \geq rank[p[x]]$ (by Corollary 21.5 and because $ y$ follows $ x$ on the find path).

Putting these inequalities together and letting $ i$ be the value of $ iter(x)$ before path compression, we have

$ rank[p[y]] \geq A_k(rank[y]) \geq A_k(rank[p[x]])$ (because $ A_k(j)$ is strictly increasing) $ \geq A_k(A_k^{(i)}(rank[x])) = A_k^{(i+1)}(rank[x])$ .

Because path compression will make $ x$ and $ y$ have the same parent, we know that after path compression, $ rank[p[x]] = rank[p[y]]$ and that the path compression does not decrease $ rank[p[y]]$ . Since $ rank[x]$ does not change, after path compression we have that $ \require{color}\colorbox{pink}{$ rank[p[x]] \geq A_k^{(i+1)}(rank[x])$ . Thus, path compression will cause either $ iter(x)$ to increase (to at least $ i + 1$ ) or $ level(x)$ to increase (which occurs if $ iter(x)$ increases to at least $ rank[x] + 1$ ). In either case, by Lemma 21.9, we have $ \phi_q(x) \leq \phi_{q-1}(x) - 1$ . Hence, $ x$ 's potential decreases by at least $ 1$ .}$

The amortized cost of the $ \text{Find-Set}$ operation is the actual cost plus the change in potential. The actual cost is $ O(s)$ , and we have shown that the total potential decreases by at least $ \max\{0, s - (\alpha(n) + 2)\}$ . The amortized cost, therefore, is at most $ O(s) - (s - (\alpha(n) + 2)) = O(s) - s + O(\alpha(n)) = O(\alpha(n))$ , since we can scale up the units of potential to dominate the constant hidden in $ O(s)$ . ■

In the proof above I could not get the mathematics behind the statements highlighted in yellow and pink. Can anyone help me out?

Union by Rank vs Union by Size

I was studying Union-Find, and according to Wikipedia, there are two types of union: union by rank and union by size. My question is, what is the runtime difference between the two (if any)?

Intuitively it feels like union by size would always be better, since each time we merge, we increase the depth of every node in one tree by 1, and to minimize overall runtime, we want the tree that grows deeper to be the one with the smaller number of nodes, even though the final tree height might be greater.

But does this make a big difference in runtime?
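For reference, a minimal side-by-side sketch of the two heuristics (my own illustration; `parent`, `rank`, and `size` are the usual bookkeeping arrays). Both keep tree heights within $ O(\log n)$ on their own, and both achieve the same $ O(\alpha(n))$ amortized bound when combined with path compression, so the asymptotic runtime is the same; the differences in practice are small constant factors.

```python
def make_dsu(n):
    # parent pointers, ranks (height upper bounds), and subtree sizes
    return list(range(n)), [0] * n, [1] * n

def find(parent, v):
    # find with path halving, a common path-compression variant
    while parent[v] != v:
        parent[v] = parent[parent[v]]
        v = parent[v]
    return v

def union_by_rank(parent, rank, a, b):
    a, b = find(parent, a), find(parent, b)
    if a == b:
        return
    if rank[a] < rank[b]:
        a, b = b, a              # attach the shorter tree under the taller
    parent[b] = a
    if rank[a] == rank[b]:
        rank[a] += 1

def union_by_size(parent, size, a, b):
    a, b = find(parent, a), find(parent, b)
    if a == b:
        return
    if size[a] < size[b]:
        a, b = b, a              # attach the smaller tree under the larger
    parent[b] = a
    size[a] += size[b]
```

Note that the two can disagree on which root becomes the parent (a tall, sparse tree can have fewer nodes than a short, bushy one), which is exactly the trade-off the question is about.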

Sedgewick Quick Union Analysis

I am currently studying Algorithms, Fourth Edition by Sedgewick et al. On page 226, there is an analysis of the quick union algorithm’s find() method’s worst case. This is the algorithm:

    private int find(int p)
    {
        while (p != id[p]) p = id[p];
        return p;
    }

Now I count N comparisons in total (inside the while) plus N-1 accesses (the last comparison is false) to rewrite the value of p. This would give me 2N - 1 accesses to the array. The textbook, however, says there are 2N+1 accesses. What am I doing wrong? Thanks for your time.
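One way to sanity-check the count is to instrument the loop and tally accesses on a worst-case chain. This is a sketch under my own cost model (one access per evaluation of `id[p]`), not the book's accounting:

```python
def find_counting(p, ids):
    # quick-union find, counting array accesses: one access for each
    # comparison p != ids[p], and one more when ids[p] is read again
    # for the assignment p = ids[p]
    accesses = 0
    while True:
        accesses += 1              # read ids[p] for the comparison
        if p == ids[p]:
            break
        accesses += 1              # read ids[p] again for the assignment
        p = ids[p]
    return p, accesses

N = 5
chain = [i + 1 for i in range(N - 1)] + [N - 1]   # worst case: 0 -> 1 -> ... -> 4
root, cost = find_counting(0, chain)
```

Under this counting, a chain of N nodes costs 2(N-1) + 1 = 2N - 1 accesses, reproducing the question's figure; the gap to the book's 2N+1 is exactly the point at issue.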

Same notation/terminology for union of sets and concatenation (Kleene star)?

For the union of sets we use the union operator $ \cup$ (or $ \bigcup$ ). And for concatenation (the Kleene star) we also use the union operator. The operations are different, so why the same terminology and operator?

The following is my understanding of the union of sets versus the concatenation of sets (Kleene star). Please correct me if I’m wrong.

Union of sets

For the two sets $ \{a,b\}$ and $ \{a,b\}$ we have the union \begin{align} \{a,b\}\cup\{a,b\}=\{a,b\} \end{align}

Concatenation of sets (Kleene star)

The iterated concatenation of $ \{a,b\}$ with itself is also a union (same notation?!) of sets \begin{align} \{a,b\}^*&=\bigcup _{i=0}^{\infty} \{a,b\}^i=\{a,b\}^0 \cup \{a,b\}^1 \cup \{a,b\}^2 \cup \dots\\ &=\{\epsilon,a,b,aa,ab,ba,bb,aaa,aab,aba,abb,baa,\dots\} \end{align}
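The two operations can be made concrete (my own sketch): concatenation builds the powers $ \{a,b\}^i$ , and the star is the *union* of those powers. Truncating at length 3 to keep the set finite:

```python
from itertools import product

alphabet = ["a", "b"]

def power(sigma, i):
    # {a,b}^i: all strings obtained by concatenating i symbols
    return {"".join(t) for t in product(sigma, repeat=i)}

# truncated Kleene star: the union of {a,b}^0, {a,b}^1, ..., {a,b}^3
star_up_to_3 = set()
for i in range(4):
    star_up_to_3 |= power(alphabet, i)
```

The empty string `""` plays the role of $ \epsilon$ , contributed by $ \{a,b\}^0$ ; so concatenation and union are genuinely different operations that both appear in the definition of the star.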

Why is a swap used in the union-find algorithm? How is the rank or size array used for optimization?

    void union_sets(int a, int b) {
        a = find_set(a);
        b = find_set(b);
        if (a != b) {
            if (size[a] < size[b])
                swap(a, b);
            parent[b] = a;
            size[a] += size[b];
        }
    }

Q1. Why do we need this swapping? Can't we do it like this, without swapping:

    if (size[a] < size[b]) {
        parent[a] = b;
        size[b] += size[a];
    }

Q2. What is the difference between the size array and the rank array? Does rank mean the height of a node, while size means the number of nodes in the tree that contains it?
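Regarding Q1: a swap-free version does work, but then both attachment directions must be written out explicitly, which is exactly the duplication the swap avoids. A sketch (a Python stand-in for the C++ snippet, with an illustrative iterative `find_set`):

```python
def find_set(v, parent):
    # find the root, then compress the path to point directly at it
    root = v
    while parent[root] != root:
        root = parent[root]
    while parent[v] != root:
        parent[v], v = root, parent[v]
    return root

def union_no_swap(a, b, parent, size):
    # swap-free union by size: the two branches mirror each other,
    # which is the duplication that swap(a, b) removes
    a, b = find_set(a, parent), find_set(b, parent)
    if a == b:
        return
    if size[a] < size[b]:
        parent[a] = b
        size[b] += size[a]
    else:
        parent[b] = a
        size[a] += size[b]
```

For Q2, the usual convention is indeed as the question guesses: `size[r]` counts the nodes in `r`'s tree, while `rank[r]` is an upper bound on its height (only an upper bound once path compression starts shortening paths).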

Merging 4 tables that also include a UNION ALL

I have this code, which is a join of 3 tables:

    SELECT [Provider] AS Publisher
          ,[Price_Rate]
          ,[Source_ID]
          ,Media.Date
          ,Media.Impressions
          ,Media.Clicks
          ,'0' AS [Opt-buyers]
          ,'0' AS [Completed buyers]
          ,Media.accounting
    FROM [dbo].[budget] AS BUDGET
    LEFT JOIN
    (
        SELECT CASE WHEN [Ad_Set_Name] LIKE 'tw_%' THEN 'twitter'
                    WHEN [Ad_Set_Name] LIKE 'IN_%' THEN 'insta'
                    ELSE '?'
               END AS Publisher
              ,CAST([Day] AS Date) AS Date
              ,SUM(CAST([Impressions] AS INT)) AS Impressions
              ,SUM(CAST([Link_Clicks] AS INT)) AS Clicks
              ,SUM(CAST([Amount_Spent__USD_] AS money)) AS Spend
        FROM [dbo].[twitter]
        GROUP BY Day, [Ad_Set_Name]

        UNION ALL

        SELECT CASE WHEN [Site__PCM_] = 'acuits.com' THEN 'acqt'
                    WHEN [Site__PCM_] = 'PulsePoint' THEN 'plpt'
                    WHEN [Site__PCM_] = 'SRAX' THEN 'srax'
                    ELSE [Site__PCM_]
               END AS Publisher
              ,CAST(Date AS Date) AS Date
              ,SUM(CAST(impressions AS INT)) AS Impressions
              ,SUM(CAST(clicks AS INT)) AS Clicks
              ,SUM(CAST(media_cost AS money)) AS Spend
        FROM [dbo].[pcm]
        GROUP BY [Site__PCM_], Date
    ) AS new_sources_budget
    ON BUDGET.Source_ID = Media.Publisher
    WHERE source_id IS NOT NULL

and I'm trying to join another table **called Email** to what this code currently provides, but I'm having a tough time getting past this point. The goal is to add this code:

    SELECT SUM(CAST(_Send2 AS INT)) AS [Email Sent]
          ,SUM(CAST(_Open2 AS INT)) AS [Email Open]
          ,SUM(CAST(Click2 AS INT)) AS [Email Click]
    FROM [dbo].[behaviour]
    GROUP BY _Send2, _Open2, Click2

Any help will be appreciated.

Algorithm to compute decomposition of a union of sets to a disjoint union of intersections

A union of sets can be decomposed into a disjoint union of intersections. Rather than writing confusing notation, this is easiest to see in an example of three sets. [Venn-diagram figure of the three-set decomposition omitted.]

This clearly generalizes. If we want to calculate the disjoint pieces, we have to compute $ 2^n$ intersections, where $ n$ is the number of sets.
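For finite, explicitly listed sets there is a per-element shortcut worth noting (my own sketch, not an answer to the $ O(n)$ question below): instead of forming all $ 2^n$ intersections, classify each element of the union by the "signature" of sets containing it; only the nonempty pieces ever materialize.

```python
def disjoint_pieces(sets):
    # map each element of the union to the frozenset of indices of the
    # sets containing it; elements sharing a signature form one piece
    pieces = {}
    for x in set().union(*sets):
        sig = frozenset(i for i, s in enumerate(sets) if x in s)
        pieces.setdefault(sig, set()).add(x)
    return pieces

# three-set example: pieces correspond to regions of the Venn diagram
example = disjoint_pieces([{1, 2, 3}, {2, 3, 4}, {3, 4, 5}])
```

This is linear in the number of elements rather than exponential in $ n$ , but it presupposes that the elements can be enumerated, which the abstract setting of the question may not grant.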

However, if our sets were nested, i.e. $ A_1 \subseteq A_2 \subseteq \dots \subseteq A_n$ , we need only compute $ n$ intersections of the form $ A_{i+1} \cap A_i^c$ . The point is that if we did our original exhaustive computation of all $ 2^n$ intersections, we would find that most of them are empty, so we are wasting computations.

My question is: what if we had $ n$ sets where the vast majority are nested, but a few aren't, and we don't know which? Can we compute an $ O(n)$ number of intersections to find the disjoint pieces?