How does rounding affect subsequent calculations?

When we are doing calculations in mathematics, we often express exact values, like $ \sqrt 2$ or $ \arctan (1)$ , as decimals and round these to a finite number of decimal places/significant figures before using these approximations in subsequent calculations. We might also round off a very long but finite decimal and use it in subsequent calculations. My question is: what is the minimum number of decimal places/significant figures to which all of my intermediate values must be rounded if I want my final answer to be accurate to a particular number of decimal places/significant figures?
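To make this concrete, here is a small Python sketch (the specific numbers are just illustrative) of how rounding $ \sqrt 2$ before using it propagates into a later calculation:

```python
import math

# Round sqrt(2) to 3 decimal places before using it in a later step.
approx = round(math.sqrt(2), 3)   # 1.414

# Using the exact value, squaring recovers (essentially) 2.
exact_square = math.sqrt(2) ** 2

# Using the pre-rounded value, the result drifts away from 2.
approx_square = approx ** 2       # about 1.9994, noticeably off from 2

print(exact_square, approx_square)
```

The rounding error in the intermediate value survives, and is magnified or shifted by, whatever is done with it afterwards.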

Example: I’m doing a $ \chi^2$ test. I want to find $ \chi^2$ accurate to $ 1$ decimal place. The exact $ \chi^2$ contributions are: $ (0.56,0.32,1.76,1.99,0.72,0.88)$ . If I sum these, the exact value of $ \chi^2$ is $ 6.23$ , which becomes $ 6.2$ rounded to $ 1$ decimal place. But if I first round the contributions to $ 1$ decimal place, so they become $ (0.6,0.3,1.8,2.0,0.7,0.9)$ , and then sum them, I get $ 6.3$ , which is not accurate to $ 1$ decimal place, as we have shown the answer should be $ 6.2$ .
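The example above can be reproduced in a few lines of Python, comparing the sum of the exact contributions with the sum of the pre-rounded ones:

```python
# Chi-squared contributions from the example above.
contributions = [0.56, 0.32, 1.76, 1.99, 0.72, 0.88]

# Summing the exact values, then rounding once at the end.
exact_sum = sum(contributions)                            # 6.23
final_round = round(exact_sum, 1)                         # 6.2

# Rounding each contribution to 1 d.p. first, then summing.
premature = sum(round(c, 1) for c in contributions)
premature_round = round(premature, 1)                     # 6.3

print(final_round, premature_round)
```

Rounding once at the end gives $6.2$; rounding each term first accumulates six small rounding errors that happen to push the sum up to $6.3$.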

How can I be sure that my intermediate values retain enough decimal places/significant figures that my final answer is accurate to a desired number of decimal places/significant figures? Not just for this particular example, but for a general calculation.
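For what it's worth, in the special case of a plain sum there is a standard worst-case bound I've seen used (this is an illustration, not a general answer): rounding each of $n$ terms to $d$ decimal places perturbs each term by at most $0.5 \times 10^{-d}$, so the sum can drift by at most $n \times 0.5 \times 10^{-d}$. A quick numerical check:

```python
import random

def worst_case_drift(n, d):
    """Worst-case error in a sum of n terms, each rounded to d decimal places."""
    return n * 0.5 * 10 ** -d

# Randomized check that the drift never exceeds the bound (for sums only;
# multiplication, division, etc. propagate errors differently).
random.seed(0)
n, d = 6, 1
terms = [round(random.uniform(0, 2), 2) for _ in range(n)]
drift = abs(sum(terms) - sum(round(t, d) for t in terms))
print(drift <= worst_case_drift(n, d))
```

In the $ \chi^2$ example, $n = 6$ and $d = 1$ gives a possible drift of up to $0.3$, which is why rounding the contributions first could (and did) change the first decimal place of the answer.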