# Counting occurrences of words in a text

Let’s say I have a long text of 1M words and I would like to create a table of all the words ordered by the number of occurrences in the text.

One approach would be to populate a dynamic array with each word and linear-search it to count the occurrences, which is $$O(n^2)$$, then sort the array by occurrences in $$O(n \log n)$$.
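A minimal sketch of this naive approach (function names are mine, just for illustration): each word is matched against the running list of `(word, count)` pairs by a linear scan, giving the quadratic behavior, followed by the final sort.

```python
def count_occurrences_naive(words):
    # list of (word, count) pairs, searched linearly per word: O(n^2) overall
    counts = []
    for w in words:
        for i, (word, c) in enumerate(counts):
            if word == w:
                counts[i] = (word, c + 1)
                break
        else:
            counts.append((w, 1))
    # final sort by occurrences: O(n log n)
    counts.sort(key=lambda p: p[1], reverse=True)
    return counts
```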

Another approach would be to use a priority queue and a trie. Insertion into the priority queue is $$O(\log n)$$ and building the trie is $$O(n)$$. But the cost of traversing the trie to build the priority queue is somewhat difficult to evaluate.
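One way this could look (a sketch under my own naming, not a canonical implementation): insert every word into a trie, storing a count at each terminal node, then depth-first traverse the trie and push `(-count, word)` pairs onto a heap so the most frequent word pops first.

```python
import heapq

class TrieNode:
    def __init__(self):
        self.children = {}
        self.count = 0

def build_trie(words):
    # O(total characters), i.e. O(n) for bounded word length
    root = TrieNode()
    for w in words:
        node = root
        for ch in w:
            node = node.children.setdefault(ch, TrieNode())
        node.count += 1
    return root

def trie_to_heap(root):
    # DFS over the trie; each push costs O(log k) for k distinct words
    heap = []
    stack = [(root, "")]
    while stack:
        node, prefix = stack.pop()
        if node.count:
            heapq.heappush(heap, (-node.count, prefix))
        for ch, child in node.children.items():
            stack.append((child, prefix + ch))
    return heap

heap = trie_to_heap(build_trie("a b a c a b".split()))
```

Popping the heap then yields words in decreasing order of frequency.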

Finally, using a hash map seems to be the best solution, although computing the hash could cost a little time, even if it is only a constant factor. With this approach you have $$n$$ insertions/lookups in $$O(1)$$ each, then a final sort of the hash map by occurrences in $$O(n \log n)$$.
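The hash-map approach is essentially what Python's `collections.Counter` does, so a sketch is short:

```python
from collections import Counter

def count_with_hashmap(words):
    # n insert/lookup operations, each expected O(1)
    counts = Counter(words)
    # most_common() sorts by occurrences: O(k log k) for k distinct words
    return counts.most_common()
```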

So it seems clear that the first approach is the worst and the last is the best. But how can I evaluate the complexity of the second one?