In their discussion of priority-queue applications, Sedgewick and Wayne mention that sorting and, in particular, priority queues are used to improve accuracy in floating-point calculations: https://algs4.cs.princeton.edu/25applications/
> Scientific computing is often concerned with accuracy (how close are we to the true answer?). Accuracy is extremely important when we are performing millions of computations with estimated values such as the floating-point representation of real numbers that we commonly use on computers. Some numerical algorithms use priority queues and sorting to control accuracy in calculations.
What “numerical algorithms” use priority queues and sorting like this?
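For context, here is a toy sketch of the kind of pattern I imagine they mean (this is my own guess, not something from the book): summing floats by repeatedly combining the two smallest-magnitude values via a min-heap, so that small terms accumulate before they meet large ones.

```python
import heapq

def heap_sum(values):
    """Sum floats by repeatedly combining the two smallest magnitudes.

    A min-heap (priority queue) keyed on absolute value keeps the
    intermediate sums small, which can reduce rounding error when the
    inputs span many orders of magnitude.
    """
    heap = [(abs(v), i, v) for i, v in enumerate(values)]  # i breaks ties
    heapq.heapify(heap)
    counter = len(heap)
    while len(heap) > 1:
        _, _, a = heapq.heappop(heap)
        _, _, b = heapq.heappop(heap)
        s = a + b
        heapq.heappush(heap, (abs(s), counter, s))
        counter += 1
    return heap[0][2] if heap else 0.0

values = [1.0] + [1e-16] * 100_000
naive = 0.0
for v in values:
    naive += v          # 1.0 + 1e-16 rounds back to 1.0 every time
print(naive)            # 1.0 -- the small terms are lost entirely
print(heap_sum(values)) # ~1.00000000001 -- small terms accumulate first
```

Is this roughly the idea, and are there named algorithms in numerical computing that work this way?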