Floating point binary number to a 7-segment decimal display


I have covered 32-bit floating point conversion from float to decimal and decimal to float. I am happy with the theory, and I have built a conversion tool in Excel VBA that follows IEEE 754 and works just fine. I am also happy with add, subtract, multiply and divide of 32-bit floating point binary numbers.

What I cannot understand or find anywhere online is the answer to this simple question: how do computers / calculators do the final conversion from the floating point binary number onto a display?

For example, I have built a BCD to decimal converter in Logisim (combinational logic gates) and a binary to decimal converter in Logisim using the double dabble algorithm, so I can see how these can drive a set of 7-segment displays. But how does the number 0 10000001 01001100110011001100110, which is the floating point binary for the decimal number 5.2, actually get converted using logic circuits?
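To make the question concrete, here is a rough C sketch of the arithmetic I imagine the missing step would have to perform (this is only my own guess at one possible method, fixed to one decimal place and to a small positive value like 5.2, not how any real calculator necessarily does it). It decodes the IEEE 754 fields and scales the significand into an ordinary binary integer, which is exactly the kind of number my double dabble circuit can already turn into BCD for the displays:

    #include <stdio.h>
    #include <stdint.h>
    #include <string.h>

    int main(void)
    {
        float value = 5.2f;
        uint32_t bits;
        memcpy(&bits, &value, sizeof bits);      /* view the 32 raw bits */

        uint32_t sign     = bits >> 31;
        int32_t  exponent = (int32_t)((bits >> 23) & 0xFF) - 127; /* remove bias */
        uint32_t mantissa = (bits & 0x7FFFFF) | 0x800000;         /* add implicit 1 */

        /* The value is mantissa * 2^(exponent - 23).  Multiply by 10 to keep
           one decimal digit, then shift right so the result is a plain
           integer.  (Only valid here because the exponent is less than 23.) */
        uint64_t scaled = (uint64_t)mantissa * 10;
        int shift = 23 - exponent;
        uint64_t tenths = (scaled + (1ULL << (shift - 1))) >> shift; /* rounded */

        /* For 5.2 this gives tenths = 52: a binary integer ready for
           double dabble, with the decimal point placed one digit from
           the right on the 7-segment displays. */
        printf("%s%llu.%llu\n", sign ? "-" : "",
               (unsigned long long)(tenths / 10),
               (unsigned long long)(tenths % 10));
        return 0;
    }

So my question is really whether real hardware does something like this scaling step in logic, or whether it uses a completely different approach.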