Why do agents always employ the same algorithm when playing a congestion game?


I’ve been conducting research into congestion games and have come across many papers that study how a game's outcome is affected when all agents employ the same particular algorithm, e.g. how quickly a Nash equilibrium is approached when every agent uses a modified version of fictitious play.

Is there a particular reason why there hasn’t been research into agents using different algorithms while playing a single congestion game? For example, agents who use fictitious play playing alongside agents who use Q-learning.
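
To make the kind of setup I mean concrete, here is a minimal sketch of a mixed-population simulation: a two-resource congestion game with linear cost functions, one fictitious-play agent, and several stateless Q-learners. The cost functions, parameter values, and the exact variants of the two algorithms are just illustrative assumptions on my part, not taken from any of the papers I mentioned.

```python
import random
from collections import Counter

# Minimal sketch: a repeated two-resource congestion game with linear costs,
# played by one fictitious-play agent and several Q-learning agents.
# All costs and parameters below are illustrative assumptions.

N_AGENTS = 10
RESOURCES = [0, 1]
COST_SLOPE = {0: 1.0, 1: 2.0}   # cost of resource r = COST_SLOPE[r] * (number of users of r)
ROUNDS = 2000

def costs(choices):
    """Return each agent's cost given everyone's resource choice."""
    load = Counter(choices)
    return [COST_SLOPE[c] * load[c] for c in choices]

class FictitiousPlayAgent:
    """Best-responds to the empirical average load placed by the other agents."""
    def __init__(self):
        self.opponent_load = {r: 1.0 for r in RESOURCES}  # running totals (with a small prior)
        self.rounds = 1.0

    def act(self):
        # Expected cost of r: slope * (me + average number of others on r)
        exp_cost = {r: COST_SLOPE[r] * (1 + self.opponent_load[r] / self.rounds)
                    for r in RESOURCES}
        return min(exp_cost, key=exp_cost.get)

    def update(self, my_choice, all_choices, my_cost):
        load = Counter(all_choices)
        for r in RESOURCES:
            self.opponent_load[r] += load[r] - (1 if r == my_choice else 0)
        self.rounds += 1

class QLearningAgent:
    """Stateless epsilon-greedy Q-learner; reward is the negative congestion cost."""
    def __init__(self, alpha=0.1, epsilon=0.1):
        self.q = {r: 0.0 for r in RESOURCES}
        self.alpha, self.epsilon = alpha, epsilon

    def act(self):
        if random.random() < self.epsilon:
            return random.choice(RESOURCES)
        return max(self.q, key=self.q.get)

    def update(self, my_choice, all_choices, my_cost):
        self.q[my_choice] += self.alpha * (-my_cost - self.q[my_choice])

agents = [FictitiousPlayAgent()] + [QLearningAgent() for _ in range(N_AGENTS - 1)]

for t in range(ROUNDS):
    choices = [a.act() for a in agents]
    round_costs = costs(choices)
    for agent, choice, cost in zip(agents, choices, round_costs):
        agent.update(choice, choices, cost)

print("Final loads:", Counter(a.act() for a in agents))
```

In a setup like this one could ask, for instance, whether the joint play still converges to (or near) a Nash equilibrium split of the resources, and whether one class of learners systematically ends up with lower cost than the other. That heterogeneous-population question is what I haven't been able to find studied.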