Intuition behind the entire concept of Fibonacci Heap operations

The following excerpts are from the chapter on Fibonacci heaps in the text Introduction to Algorithms by Cormen et al.


The potential function for a Fibonacci heap $H$ is defined as follows:

$$\Phi(H) = t(H) + 2m(H)$$

where $ t(H)$ is the number of trees in the root list of the heap $ H$ and $ m(H)$ is the number of marked nodes in the heap.
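For instance (a toy example of my own, not from the text), a heap whose root list holds $t(H) = 3$ trees and which contains $m(H) = 2$ marked nodes has potential $\Phi(H) = 3 + 2 \cdot 2 = 7$.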

Before diving into the Fibonacci Heap operations the authors try to convince us about the essence of Fibonacci Heaps as follows:

The key idea in the mergeable-heap operations on Fibonacci heaps is to delay work as long as possible. There is a performance trade-off among implementations of the various operations.($ \color{green}{\text{I do not get why}}$ ) If the number of trees in a Fibonacci heap is small, then during an $ \text{Extract-Min}$ operation we can quickly determine which of the remaining nodes becomes the new minimum node( $ \color{blue}{\text{why?}}$ ). However, as we saw with binomial heaps, we pay a price for ensuring that the number of trees is small: it can take up to $ \Omega (\lg n)$ time to insert a node into a binomial heap or to unite two binomial heaps. As we shall see, we do not attempt to consolidate trees in a Fibonacci heap when we insert a new node or unite two heaps. We save the consolidation for the $ \text{Extract-Min}$ operation, which is when we really need to find the new minimum node.


Now the problem I am facing with the text is that the authors dive into proving the amortized costs mathematically using the potential method, without giving a vivid intuition of how or when the "credits" are stored as potential in the heap data structure and when that potential is actually used up. Moreover, in most places the analysis is "asymptotic" rather than an explicit calculation, so it is not possible to tell whether the constant in the $O(1)$ amortized cost ($\widehat{c_i}$) is greater or less than the constant in the $O(1)$ actual cost ($c_i$) of an operation.
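To make the accounting concrete, here is the general potential-method identity and what it gives for $\text{Fib-Heap-Insert}$ (my own spelling-out of the calculation summarized in the table below, with $a$ standing for the unknown constant bounding the actual cost):

$$\widehat{c_i} = c_i + \Phi(H_i) - \Phi(H_{i-1}), \qquad \widehat{c}_{\text{Insert}} \le a + \big[(t(H) + 1) + 2m(H)\big] - \big[t(H) + 2m(H)\big] = a + 1 = O(1).$$

The extra unit of potential can be read as a credit left sitting on the newly created root; it is spent later, when $\text{Extract-Min}$ consolidates that root.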


$$\begin{array}{|c|c|c|c|c|c|c|}
\hline
\text{Sl no.} & \text{Operation} & \widehat{c_i} & c_i & \text{Method of cal. of } \widehat{c_i} & \text{Cal. steps} & \text{Intuition}\\
\hline
1 & \text{Make-Fib-Heap} & O(1) & O(1) & \text{Asymptotic} & \Delta\Phi = 0;\ \widehat{c_i} = c_i = O(1) & \text{None}\\
\hline
2 & \text{Fib-Heap-Insert} & O(1) & O(1) & \text{Asymptotic} & \Delta\Phi = 1;\ \widehat{c_i} = c_i + 1 = O(1) + 1 = O(1) & \text{None}\\
\hline
3 & \text{Fib-Heap-Min} & O(1) & O(1) & \text{Asymptotic} & \Delta\Phi = 0;\ \widehat{c_i} = c_i = O(1) & \text{None}\\
\hline
4 & \text{Fib-Heap-Union} & O(1) & O(1) & \text{Asymptotic} & \Delta\Phi = 0;\ \widehat{c_i} = c_i = O(1) & \text{None}\\
\hline
5 & \text{Fib-Extract-Min} & O(D(n)) & O(D(n) + t(H)) & \text{Asymptotic} & \Delta\Phi \le D(n) - t(H) + 1 & \dagger\\
\hline
6 & \text{Fib-Heap-Decrease-Key} & O(1) & O(c) & \text{Asymptotic} & \Delta\Phi \le 4 - c & \ddagger\\
\hline
\end{array}$$


$ \dagger$ – The cost of performing each link is paid for by the reduction in potential due to the link’s reducing the number of roots by one.
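To spell out the $\text{Extract-Min}$ row of the table above (my own restatement of the calculation CLRS performs): the actual cost is $O(D(n) + t(H))$, since up to $D(n)$ children of the old minimum join the root list and every root is then touched during consolidation; afterwards at most $D(n) + 1$ roots remain and no node becomes newly marked, so

$$\Delta\Phi \le \big[(D(n) + 1) + 2m(H)\big] - \big[t(H) + 2m(H)\big] = D(n) - t(H) + 1,$$

and the amortized cost is $O(D(n) + t(H)) + D(n) - t(H) + 1 = O(D(n))$, the $-t(H)$ from the potential drop cancelling the $t(H)$ work of consolidation (after scaling the units of potential so that they dominate the constant hidden in the $O$).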

$\ddagger$ – This is why the potential function was defined to include a term that is twice the number of marked nodes: when a marked node $y$ is cut by a cascading cut, its mark bit is cleared, so the potential is reduced by $2$. One unit of potential pays for the cut and the clearing of the mark bit, and the other unit compensates for the unit increase in potential due to node $y$ becoming a root.
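Similarly, for the $\text{Decrease-Key}$ row (again my own restatement of the CLRS calculation): if a call performs $c$ cuts in total (the initial cut plus $c - 1$ cascading cuts), the actual cost is $O(c)$; the root list gains $c$ trees, and the number of marked nodes becomes at most $m(H) - c + 2$ ($c - 1$ nodes are unmarked by the cascading cuts, and the last call may mark one node), so

$$\Delta\Phi \le \big[(t(H) + c) + 2(m(H) - c + 2)\big] - \big[t(H) + 2m(H)\big] = 4 - c,$$

and the amortized cost is $O(c) + 4 - c = O(1)$. This is where the $2m(H)$ term earns its keep: without it the $-c$ would not appear, and a long cascade could not be charged against previously stored potential.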


Moreover, the authors introduce the notion of marking the nodes of a Fibonacci heap, with the background that the marks are used to bound the amortized running time of the $\text{Decrease-Key}$ and $\text{Delete}$ operations, but not much intuition is given for their use. What would go wrong if we did not use markings at all, or if $\text{Cascading-Cut}$ were triggered only after a node has lost not just $2$ children but possibly more? The corresponding excerpt is as follows:

We use the mark fields to obtain the desired time bounds. They record a little piece of the history of each node. Suppose that the following events have happened to node $ x$ :

  1. at some time, $ x$ was a root,
  2. then $ x$ was linked to another node,
  3. then two children of $ x$ were removed by cuts.

As soon as the second child has been lost, we cut $x$ from its parent, making it a new root. The field $mark[x]$ is true if steps $1$ and $2$ have occurred and one child of $x$ has been cut. The Cut procedure, therefore, clears $mark[x]$ in line $4$, since it performs step $1$. (We can now see why line $3$ of $\text{Fib-Heap-Link}$ clears $mark[y]$: node $y$ is being linked to another node, and so step $2$ is being performed. The next time a child of $y$ is cut, $mark[y]$ will be set to $\text{TRUE}$.)


Strictly speaking, I do not get the intuition behind the $mark$ field in the block quote above, especially the logic behind the part in bold italics.

[EDIT: The intuition for why the marking is used in the way stated was made clear to me by the lucid answer here, but I still do not get the cost benefit we obtain by using markings.]
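For reference, here is a minimal C++ sketch of the Cut / Cascading-Cut logic described in the excerpt above (the node layout and function names are my own, and the circular sibling-list surgery is elided); it only shows where the mark bit is set and cleared:

```cpp
#include <vector>

struct Node {
    int key;
    Node* parent = nullptr;
    int degree = 0;       // number of children
    bool mark = false;    // "has this node lost a child since it last became a root?"
};

// CUT(H, x, y): make x a root.
// (Removal of x from y's circular child list is elided in this sketch.)
void cut(std::vector<Node*>& rootList, Node* x, Node* y) {
    --y->degree;              // x is no longer counted among y's children
    x->parent = nullptr;
    x->mark = false;          // step 1 of the history: x is a root again, so its record resets
    rootList.push_back(x);    // x joins the root list
}

// CASCADING-CUT(H, y): y has just lost a child; propagate upwards if needed.
void cascadingCut(std::vector<Node*>& rootList, Node* y) {
    Node* z = y->parent;
    if (z == nullptr) return;      // y is already a root: roots may lose children freely
    if (!y->mark) {
        y->mark = true;            // first child lost since y was linked: just remember it
    } else {
        cut(rootList, y, z);       // second child lost: cut y itself, ...
        cascadingCut(rootList, z); // ... and z has now lost a child, so recurse upwards
    }
}
```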


Note: This is quite a difficult question in the sense that it involves describing the problem I am facing in understanding the intuition behind Fibonacci heap operations, which is in fact related to an entire chapter of the CLRS text. If it demands too much in a single question, then please do tell me and I shall split it into parts accordingly. I have made my utmost attempt to make the question clear. If the meaning is not clear at places, then please do tell me and I shall rectify it. The entire corresponding portion of the text can be found here. (Even the authors say that it is a difficult data structure, having only theoretical importance.)

Method to change the value of a key in a min heap

How would you write a method to change the value of a key in a min heap, with signature bool changeKey(int oldKey, int newKey)? The keys are unique; no duplicate keys are permitted. If there is a key in the heap with value oldKey and no existing entry with value newKey, the method changes it to newKey, keeping the heap properties, and returns true. If there is no key in the heap with value oldKey, or an existing entry already has value newKey, it takes no action and returns false. Also, an auxiliary map is used to store the locations of the keys in an array. I know how to write methods for insertion but haven't seen any implementations of this.
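Not an authoritative answer, but here is one way it could look as a minimal C++ sketch, assuming an array-backed min-heap with an auxiliary std::unordered_map from key to array index (all names here are my own; insert and extract-min are omitted, but they would maintain the map the same way):

```cpp
#include <unordered_map>
#include <utility>
#include <vector>

class MinHeap {
    std::vector<int> a;                   // heap stored as an array
    std::unordered_map<int, int> pos;     // key -> index of that key in 'a'

    void swapAt(int i, int j) {           // swap two slots and keep 'pos' consistent
        std::swap(a[i], a[j]);
        pos[a[i]] = i;
        pos[a[j]] = j;
    }
    void siftUp(int i) {                  // move a[i] up while it is smaller than its parent
        while (i > 0 && a[(i - 1) / 2] > a[i]) {
            swapAt(i, (i - 1) / 2);
            i = (i - 1) / 2;
        }
    }
    void siftDown(int i) {                // move a[i] down while a child is smaller
        int n = static_cast<int>(a.size());
        while (true) {
            int s = i, l = 2 * i + 1, r = 2 * i + 2;
            if (l < n && a[l] < a[s]) s = l;
            if (r < n && a[r] < a[s]) s = r;
            if (s == i) break;
            swapAt(i, s);
            i = s;
        }
    }

public:
    // Replace oldKey by newKey if oldKey exists and newKey does not; restore heap order.
    bool changeKey(int oldKey, int newKey) {
        auto it = pos.find(oldKey);
        if (it == pos.end() || pos.count(newKey)) return false;
        int i = it->second;
        pos.erase(it);
        a[i] = newKey;
        pos[newKey] = i;
        if (newKey < oldKey) siftUp(i);   // key decreased: it may need to rise
        else siftDown(i);                 // key increased: it may need to sink
        return true;
    }
};
```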

Any exploit details regarding CVE-2019-3846 : Linux Kernel ‘marvell/mwifiex/scan.c’ Heap Buffer Overflow Vulnerability

How can I get this exploit working, or is there any other method for this?

I have seen and read a lot about this issue in various references.

It is reported that various Linux versions < 8 are vulnerable to this issue.

Linux Kernel ‘marvell/mwifiex/scan.c’ Heap Buffer Overflow Vulnerability

Issue Description: A flaw that allowed an attacker to corrupt memory and possibly escalate privileges was found in the mwifiex kernel module while connecting to a malicious wireless network.

Can you share exploit details regarding this?

https://vulners.com/cve/CVE-2019-3846
https://www.securityfocus.com/bid/69867/exploit : NO exploit there

Any tips on how to exploit this.

Order notation subtractions in Fibonacci Heap

Can order notation on its own imply:

$O(D(n)) + O(t(H)) - t(H) = O(D(n))$

My guess is that it cannot, since the constant $c$ hidden in $O(t(H))$ would still remain after the subtraction if $c > 1$.

Well, this is actually the case, but there are underlying factors. This equation appears in the Fibonacci heap analysis in CLRS (p. 518). The justification for this step comes from the underlying potential function. According to the authors, “we can scale up the units of potential to dominate the constant hidden in $O(t(H))$”. I want to know how this happens, but don’t really know how to ask this complicated question.
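One way to make the authors’ remark concrete (my own spelling-out, not a quote from CLRS): suppose the actual work of $\text{Extract-Min}$ is at most $k\,(D(n) + t(H))$ for some constant $k$. If we scale the potential to $\Phi(H) = k\,(t(H) + 2m(H))$, the potential drop is scaled by $k$ as well, and

$$\widehat{c_i} \le k\,(D(n) + t(H)) + k\big[(D(n) + 1) - t(H)\big] = 2k\,D(n) + k = O(D(n)).$$

So the subtraction never happens inside the $O$-notation; it happens between the actual cost and the (suitably scaled) potential difference, and only the final result is summarized asymptotically.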

Does the heap property in the definition of binary heaps apply recursively?

The definition of a binary heap says that it should be a complete binary tree and that it should satisfy the heap property, according to which the key stored in each node is either greater than or equal to (in a max-heap) or less than or equal to (in a min-heap) the keys in the node’s children.

(Figure: binary tree)

In the above tree, the node with value 70 is greater than its parent 10, breaking the heap property. However, 70 is also greater than 40 and lies in the subtree of 40. Would we say that the heap property is also violated at 40, even though 40 is greater than both of its children, 10 and 2?
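For what it is worth, here is a small C++ sketch (my own, assuming a max-heap stored in an array) of how the property is usually checked under the standard parent-child formulation: each node is compared only against its direct children, so a violation is reported at the parent whose child breaks the order, not at every ancestor above it:

```cpp
#include <vector>

// Max-heap stored as an array: the children of index i are 2*i + 1 and 2*i + 2.
// Returns the indices of parents whose direct children break the heap order.
std::vector<int> heapViolations(const std::vector<int>& a) {
    std::vector<int> bad;
    int n = static_cast<int>(a.size());
    for (int i = 0; i < n; ++i) {
        int l = 2 * i + 1, r = 2 * i + 2;
        if ((l < n && a[l] > a[i]) || (r < n && a[r] > a[i]))
            bad.push_back(i);   // only the parent of the offending edge is recorded
    }
    return bad;
}
```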

The place of the median in a binary heap

I need to prove that in a binary heap whose elements are all distinct, the median can be at any level except the root. I know that it’s true because I can manipulate the positions of the elements so that I keep the heap property and put the median in any place, but I don’t have an idea of how to prove it in a formal way. Thanks for the help.
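As a concrete illustration (my own example, not from the question): take the keys $\{1, \dots, 7\}$, whose median is $4$. In a min-heap the root must hold the minimum $1$, so $4$ can never sit at the root; on the other hand, the array $[1, 4, 2, 5, 6, 3, 7]$ is a valid min-heap with the median $4$ at depth $1$, and a similar rearrangement such as $[1, 2, 3, 4, 5, 6, 7]$ places it at depth $2$.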

Modify Fibonacci Heap to Have a Linear Chain of Marked/Unmarked Nodes Only

In the CLRS book there is an exercise (19.4-2) whose aim is to create a linear chain of nodes by a sequence of Fibonacci-heap operations. I have solved the problem by recursively taking the union with a chain of two nodes, inserting a new minimum node and extracting the minimum, after which consolidation takes place and returns a new linear chain. Since there are no DECREASE-KEY or DELETE-NODE operations, no node is ever marked.

My question is: is it possible to create a linear chain consisting of marked nodes only? If so, how?

I have tried several strategies. In one case I am getting a linear chain with all but the last node marked and I cannot proceed from there.

Another possibility is to get a chain as follows for $ n$ nodes:

(Figure: example of the resulting Fibonacci heap)

From here one can delete all the nodes on the shortest path for each sub-tree starting from the bottom to mark all of the nodes on the longer path. However, I cannot find a way to get this Fibonacci-Heap in the first place (and I am not sure whether it is possible). Any help would be appreciated.

Is destructuring a heap (taking down a heap) also O(n) like building a heap? If so, can the selection problem be solved by this method in O(n) time?

If we can build up a heap in O(n) time, can we also take down a heap in O(n) time (by repeated delete-max)?

Intuitively, it may feel like it is, because it is like the reverse of building it up.

If building a heap is O(n) in the worst case, including when the numbers are all added in ascending order, then taking the heap down is exactly the “reverse in time” operation, and it is O(n); but this reversal may not be the “worst case” of taking it down.

If taking down a heap is really O(n), can’t the selection problem be solved by building a heap and then taking it down (k – 1) times, to find the kth max number?
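For reference, the standard accounting (not specific to any one textbook) for emptying a binary heap of $n$ elements by repeated delete-max is

$$\sum_{i=2}^{n} O(\lg i) = O(\lg(n!)) = O(n \lg n),$$

and, unlike building the heap, this cannot be improved to $O(n)$ in general in the comparison model: a full teardown outputs the elements in sorted order, so it inherits the $\Omega(n \lg n)$ lower bound for comparison sorting, which is why the “reverse in time” intuition for the $O(n)$ build does not carry over.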