Estimating a user's standard deviation given the avg, min, and max for various tests


Given a series of tests, where for each test we are given one user's score, the overall minimum, the overall maximum, and the overall average, how would I estimate that user's standard deviation on total score (i.e. the sum of all their tests)?
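To make the setup concrete (these numbers are made up): on test 1 the user might score 72 while the overall min/max/avg are 40/95/68; on test 2 they score 81 with 55/99/75; and so on for each test. I want the standard deviation of the user's total across all of the tests.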

We cannot assume that the lowest-scoring person on one test was also the lowest-scoring on the next, but I think it is fair to assume that people generally stay within some score band (although if this can be done without that assumption, that would be better).

My intuition tells me that this is some sort of application of Monte Carlo, but I can't figure out how to actually set it up.
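In case it clarifies what I mean, below is a very rough sketch (in Python) of the Monte Carlo idea I was imagining. Everything in it is an assumption on my part: that per-test scores follow a Beta distribution rescaled to the reported min/max with the reported mean, the arbitrary concentration parameter, the ±10 percentile "band" the user is allowed to wander within between tests, and the example numbers themselves.

```python
# Rough sketch of the Monte Carlo idea (not sure it is statistically sound).
# Assumptions: per-test scores ~ Beta rescaled to [min, max] with the reported mean;
# the user's percentile on each test stays within a band around the percentile
# implied by their observed score. All numbers are made up for illustration.

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical data: one row per test -> (user_score, overall_min, overall_max, overall_avg)
tests = [
    (72.0, 40.0, 95.0, 68.0),
    (81.0, 55.0, 99.0, 75.0),
    (64.0, 30.0, 90.0, 60.0),
]

def beta_for_test(lo, hi, avg, concentration=10.0):
    """Beta distribution on [lo, hi] whose mean equals avg (concentration is a guess)."""
    m = (avg - lo) / (hi - lo)           # mean mapped onto the unit interval
    a, b = m * concentration, (1.0 - m) * concentration
    return stats.beta(a, b, loc=lo, scale=hi - lo)

n_sims = 100_000
band = 0.10  # assume the user wanders +/- 10 percentile points from test to test

totals = np.zeros(n_sims)
for score, lo, hi, avg in tests:
    dist = beta_for_test(lo, hi, avg)
    p_user = dist.cdf(score)                                       # percentile implied by the observed score
    p_sim = np.clip(p_user + rng.uniform(-band, band, n_sims), 0.001, 0.999)
    totals += dist.ppf(p_sim)                                      # simulated score on this test

print("estimated std dev of total score:", totals.std())
```

I have no idea whether the Beta assumption or the percentile-band trick is justified, which is really the heart of my question.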