I have a cubic-order (O(n³)) algorithm that must be executed sequentially; there appears to be no way to parallelize it.
I need to come up with an estimate of the maximum input size that can be solved using today's technology. So the essential problem seems to be how to relate the cost of a single step to execution time on "typical" hardware. Is there a gold-standard way of doing this, e.g. something an attentive peer reviewer would expect to see done in a manuscript?
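For concreteness, the naive approach I can think of is to time the implementation on a few small instances, fit the cubic constant by least squares, and extrapolate to a time budget. Here is a minimal sketch of that idea; `run_algorithm` is a hypothetical stand-in for the real algorithm, and the one-week budget is an arbitrary choice:

```python
import time

def run_algorithm(n):
    # Hypothetical stand-in for the real O(n^3) sequential algorithm.
    total = 0
    for i in range(n):
        for j in range(n):
            for k in range(n):
                total += i ^ j ^ k
    return total

# Time the algorithm at a few small input sizes.
sizes = [40, 60, 80, 100]
times = []
for n in sizes:
    t0 = time.perf_counter()
    run_algorithm(n)
    times.append(time.perf_counter() - t0)

# Fit T(n) ~ c * n^3 by least squares: c = sum(t * n^3) / sum(n^6).
c = sum(t * n**3 for t, n in zip(times, sizes)) / sum(n**6 for n in sizes)

# Extrapolate: largest n solvable within a fixed time budget (here, one week).
budget_s = 7 * 24 * 3600
n_max = (budget_s / c) ** (1 / 3)
print(f"fitted constant c = {c:.3e} seconds per n^3")
print(f"estimated maximum n within one week: {n_max:.0f}")
```

But this is tied to the one machine I benchmarked on, which is why I am asking whether there is an accepted way to express the per-step cost in hardware-independent terms.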