$\Omega(m)$ and $O(m)$ meaning in theorem proof about dynamic array complexity

My algorithms and data structures book states that a dynamic array is maintained by the following procedure:

Let $d$ be the length of an array $a$ and $n$ the number of elements stored in it. Each time an insertion operation is requested, if there is enough space ($n < d$), $n$ grows by 1; otherwise, if $n = d$, we allocate an array $b$ of size $2d$, update $d$ to $2d$, copy all elements to $b$, and then set $a = b$. Similarly, every time a deletion operation is performed, $n$ decreases by 1; when $n = d/4$, we allocate an array $b$ of size $d/2$, update $d$ to $d/2$, copy all elements of the array $a$ to the array $b$, and set $a = b$.

The pseudocode of the functions that perform the array doubling and halving is shown in the picture:

[image: the book's pseudocode for the doubling and halving functions]
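In case the image doesn't load, here is my own Python sketch of the procedure described above (the names `DynamicArray`, `insert`, `delete`, and `_resize` are mine, not the book's):

```python
class DynamicArray:
    """My understanding of the book's dynamic array, as described above."""

    def __init__(self):
        self.d = 1                 # length (capacity) of the underlying array a
        self.n = 0                 # number of elements actually stored
        self.a = [None] * self.d

    def _resize(self, new_d):
        # Allocate an array b of size new_d and copy the n elements into it;
        # this copying is the O(m)-time cost the proof refers to.
        b = [None] * new_d
        for i in range(self.n):
            b[i] = self.a[i]
        self.a = b                 # a = b
        self.d = new_d

    def insert(self, x):
        if self.n == self.d:       # no free position: double the size
            self._resize(2 * self.d)
        self.a[self.n] = x
        self.n += 1

    def delete(self):
        assert self.n > 0, "array is empty"
        self.n -= 1
        x = self.a[self.n]
        self.a[self.n] = None
        if self.d > 1 and self.n == self.d // 4:   # quarter full: halve the size
            self._resize(self.d // 2)
        return x
```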

Then the following theorem is proved:

The execution of $n$ insertion or deletion operations in a dynamic array requires $O(n)$ time; moreover, $d = O(n)$.

Proof: Let $d$ be the length of an array and $n$ the number of elements stored in it. After doubling the array's size, there are $m = d+1$ elements in the new array of $2d$ positions. We need at least $m-1$ insertion requests for a new doubling and at least $m/2$ deletion requests for a halving. Similarly, after a halving, there are $m = d/4$ elements in the new array of $d/2$ positions, for which at least $m+1$ insertion requests are needed for a new doubling and at least $m/2$ deletion requests are needed for a new halving.
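To check that I follow the counting, here is a concrete instance with my own numbers: suppose a doubling happens when $d = 8$, so the new array has $2d = 16$ positions and $m = d + 1 = 9$ elements. The next doubling can only be triggered by an insertion request that finds $n = 16$, which takes $16 - 9 = 7$ insertions to fill the array plus the request that triggers the resize, i.e. $m - 1 = 8$ insertion requests in total; the next halving can only be triggered when $n$ drops to $16/4 = 4$, which takes $9 - 4 = 5 \ge m/2$ deletion requests.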

In any case, the $O(m)$ time required for the resizing can be virtually distributed among the $\Omega(m)$ operations that caused it (counting from the last resizing). Finally, supposing that when the array is created we have $n = 1$ and $d = 1$, the number of elements in the array is always at least one fourth of its size, so $d \le 4n = O(n)$.
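I even tried to check the linear-time claim empirically with a small experiment (my own code, not from the book): it performs $N$ random insertions and deletions, keeping track only of the capacity $d$ and the count $n$, and counts the total number of element copies made by all the resizings. The ratio of copies to $N$ stays below a constant, which seems consistent with distributing each $O(m)$ resizing cost over $\Omega(m)$ operations.

```python
import random

def total_copy_cost(num_ops):
    """Run num_ops random insertions/deletions on a dynamic array
    (tracking capacities only) and return the total number of
    element copies performed by all the resizings."""
    n, d = 0, 1
    copies = 0
    for _ in range(num_ops):
        if n > 0 and random.random() < 0.5:
            n -= 1                          # deletion
            if d > 1 and n == d // 4:       # quarter full: halve
                copies += n                 # the halving copies n elements
                d //= 2
        else:
            if n == d:                      # full: double
                copies += n                 # the doubling copies n elements
                d *= 2
            n += 1                          # insertion
    return copies

for N in (10**3, 10**4, 10**5):
    print(N, total_copy_cost(N) / N)        # ratio stays bounded by a constant
```

So I can see that the theorem seems to hold in practice, but I still don't follow the argument itself.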

I am a beginner with this $\Omega(m)$ and $O(m)$ notation. I know the mathematical definitions and I've been reading a lot about them, but I am not able to understand them well in context. I know big O should indicate an upper bound on the time complexity of an algorithm and big omega a lower bound.

I don't understand the last paragraph of the proof, where these symbols are used:

- Why is the time cost of a resizing $O(m)$?
- Why do they use $\Omega(m)$ for the number of operations that caused the resizing (and which operations, specifically, are they referring to)?
- Why do they write $d = O(n)$, and what should I understand from it?

Any help will be greatly appreciated.