I read a C++ implementation of a hash map here: https://www.geeksforgeeks.org/implementing-hash-table-open-addressing-linear-probing-cpp/

Let’s say key `k1` has a hash index of `h`. Suppose there’s a collision with key `k2` such that `hash(k1) == hash(k2)`. Then won’t the new hash index of `k2` become `h+1`?

```
Key: k1    Index: h
Key: k2    Index: h+1
```

Suppose we introduce a third key `k3` such that `hash(k3) == h+1`. Then when we insert `k3` into the hash map, its hash index will become `h+2`.

```
Key: k1    Hash value: h      Hash table index: h
Key: k2    Hash value: h      Hash table index: h+1
Key: k3    Hash value: h+1    Hash table index: h+2
```

This can cause a key's table index to be shifted over by one from its hash value in the case of a collision (as seen above), and if collisions happen frequently, a key can end up shifted over several slots from its original hash index.
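To make the probing behaviour concrete, here is a minimal sketch of a linear-probing insert. This is my own toy code (with a simple `key % capacity` hash and no resizing or deletion), not the GeeksforGeeks implementation itself:

```cpp
#include <optional>
#include <utility>
#include <vector>

// Toy open-addressing table with linear probing.
struct ProbingTable {
    static const int CAPACITY = 8;
    // Each slot is either empty or holds a (key, value) pair.
    std::vector<std::optional<std::pair<int, int>>> slots =
        std::vector<std::optional<std::pair<int, int>>>(CAPACITY);

    // Stand-in hash function for illustration.
    static int hash(int key) { return key % CAPACITY; }

    // Insert a key and return the index where it actually lands.
    int insert(int key, int value) {
        int idx = hash(key);
        // Home slot occupied by a different key? Probe the next slot.
        while (slots[idx].has_value() && slots[idx]->first != key) {
            idx = (idx + 1) % CAPACITY;  // linear probe: h, h+1, h+2, ...
        }
        slots[idx] = {key, value};
        return idx;
    }
};
```

With `CAPACITY = 8`, the keys 3 and 11 both hash to index 3, so 11 lands at index 4; a third key 4, whose hash index 4 is now taken, lands at index 5 — exactly the `h`, `h+1`, `h+2` shifting described above.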

Is this a bad implementation of a hash map? From an aesthetic point of view I prefer the linked approach, where each hash node has a pointer to a next node, such that if multiple nodes have the same hash index, they are all part of the same linked list starting at that hash index.

In the linked approach, we at least have the assurance that a key logically corresponds to its hash index in the hash table: even if the key sits somewhere inside a linked list (and so doesn’t physically occupy that slot), the head of that list is stored at the hash index, so the correspondence still holds.

Is the implementation of the hash map in GeeksForGeeks bad? Is the linked approach more logical and intuitive? What are your thoughts?

**Note:** The linked approach I refer to is simply to store a linked list at each hash index in the hash table, such that if multiple keys are hashed to that index, they are stored in this linked list.
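For comparison, here is a minimal sketch of the chaining approach I mean. Again, this is my own toy code with hypothetical names (`ChainingTable`, `find`), using the same `key % capacity` hash as before:

```cpp
#include <list>
#include <optional>
#include <utility>
#include <vector>

// Toy separate-chaining table: each bucket is a linked list of all
// (key, value) pairs whose keys hash to that index.
struct ChainingTable {
    static const int CAPACITY = 8;
    std::vector<std::list<std::pair<int, int>>> buckets =
        std::vector<std::list<std::pair<int, int>>>(CAPACITY);

    static int hash(int key) { return key % CAPACITY; }

    void insert(int key, int value) {
        auto& chain = buckets[hash(key)];
        for (auto& kv : chain) {
            if (kv.first == key) { kv.second = value; return; }  // update in place
        }
        chain.push_back({key, value});  // collisions stay in the same bucket
    }

    // Look a key up in the one bucket its hash points to.
    std::optional<int> find(int key) const {
        for (const auto& kv : buckets[hash(key)]) {
            if (kv.first == key) return kv.second;
        }
        return std::nullopt;
    }
};
```

Here, colliding keys (e.g. 3 and 11 with `CAPACITY = 8`) both live in bucket 3’s list, so every key is always found via its own hash index — no neighbouring slots are ever disturbed.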