Autosuggest at scale – trie sharding

While reading about the design of autosuggest for large-scale systems (like Google's), I can follow the use of a trie and how the top "n" terms are stored at each node for quick retrieval. However, I can't get my head around an efficient way of "sharding" the trie in a distributed system. Sharding on the first letter (or first two letters) obviously isn't a neat solution, since some prefixes are far hotter than others. I've read elsewhere about sharding on a hash of the term – but that requires an aggregation server that pulls results from all the shards and merges them, which doesn't sound efficient at "web" scale.
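To make my current understanding concrete, here is a toy sketch (my own, not from any reference) of a trie where each node caches the top-n completions for its prefix, so a lookup only has to walk the prefix and read the cached list:

```python
class TrieNode:
    def __init__(self):
        self.children = {}
        self.top = {}  # term -> count: the top-n terms under this prefix

class Trie:
    def __init__(self, n=3):
        self.root = TrieNode()
        self.n = n

    def add(self, term, count):
        # Update the cached top-n along the whole path for this term.
        node = self.root
        for ch in term:
            node = node.children.setdefault(ch, TrieNode())
            node.top[term] = count
            if len(node.top) > self.n:
                # Evict the lowest-count term to keep only n entries.
                node.top.pop(min(node.top, key=node.top.get))

    def suggest(self, prefix):
        # Walk the prefix, then return the cached list, highest count first.
        node = self.root
        for ch in prefix:
            if ch not in node.children:
                return []
            node = node.children[ch]
        return sorted(node.top, key=node.top.get, reverse=True)
```

This single-machine version is the part I do understand; my question is how to split such a structure across servers.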

Would the ideal approach be to measure the actual query density and break up the tree accordingly (a sort of application-managed sharding/partitioning)? I suspect, though, that this would incur a lot of maintenance and re-balancing.
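By "density based" I mean something like the following hypothetical sketch: sort the terms lexicographically and cut them into contiguous ranges of roughly equal total query volume, one range per shard (all names here are my own illustration):

```python
def partition_by_volume(term_counts, num_shards):
    """term_counts: dict of term -> query count.
    Returns a list of contiguous term ranges, one per shard,
    each carrying roughly the same total query volume."""
    terms = sorted(term_counts)  # lexicographic order keeps prefixes together
    total = sum(term_counts.values())
    target = total / num_shards
    shards, current, weight = [], [], 0
    for t in terms:
        current.append(t)
        weight += term_counts[t]
        # Close this shard once it reaches its fair share of the volume.
        if weight >= target and len(shards) < num_shards - 1:
            shards.append(current)
            current, weight = [], 0
    if current:
        shards.append(current)
    return shards
```

Since the ranges are contiguous in sort order, any single prefix falls in at most a couple of shards – but the cut points would have to move as the query distribution shifts, which is the re-balancing cost I'm worried about.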

Can someone advise, or point me to a reference?

A related question: what if I wanted to store the top "n" results for different time windows – say, the top 10 in the last day, the top 10 in the last month, and the top 10 of all time? What's the best approach – storing a separate list at each tree node for every window? And what if the set of windows isn't finite?
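For the finite-window case, the naive version of what I'm describing would look like this (again just my own sketch, with an assumed fixed set of windows):

```python
WINDOWS = ("day", "month", "all")  # assumed finite, known in advance

class Node:
    def __init__(self, n=10):
        self.children = {}
        self.n = n
        # One independent top-n map per window: window -> {term: count}
        self.top = {w: {} for w in WINDOWS}

    def record(self, term, counts):
        """counts: {window: count} for this term within each window."""
        for w, c in counts.items():
            top = self.top[w]
            top[term] = c
            if len(top) > self.n:
                # Evict the lowest-count term for that window.
                top.pop(min(top, key=top.get))

    def suggest(self, window):
        top = self.top[window]
        return sorted(top, key=top.get, reverse=True)
```

This multiplies the per-node storage by the number of windows, which is exactly why I don't see how it extends to an unbounded set of windows.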

Note: I originally posted this question on Stack Overflow, but after reading some guidelines I thought it more appropriate to ask here instead.