I know it’s a silly scenario, but I would like to know whether it’s relevant — maybe there is some theory or there are studies on it, or it’s merely a nonsensical situation.
Imagine a language very close to a Turing machine that is compiled to a higher-level language (let’s say C). This language can accept an integer as input, but through its syntax I don’t have any access to its internal representation of this integer; that is, the integer would be located in one single cell, and the only operations I can do on a cell are:
+1, -1, copy/paste from a register, set to 0, check if it’s non-zero.
Now, if I want to output the sign of the integer, I would create a copy of it, so that in one cell the value gets incremented and in the other it gets decremented, until one of them reaches 0; then I would stop and correctly report the sign.
The interesting part is that, under the hood, this program would surely use the information about the sign of the integer to execute +1 and -1 correctly…
So I’m using operations that rely on information x in order to obtain information x… Normally I would have access to the representation of the integers, and checking the sign would consist in looking at one particular bit. But in this case that information is hidden from me.
It reminds me of evaluating the limit of sin(x)/x as x goes to 0 by L’Hôpital’s rule… It’s a logical fallacy, because that very limit is required in order to differentiate sin(x).
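Spelling the circularity out (a standard calculus observation, not something specific to my scenario):

```latex
% L'Hôpital appears to give the answer instantly:
\lim_{x \to 0} \frac{\sin x}{x}
  \;\stackrel{\text{L'H}}{=}\;
  \lim_{x \to 0} \frac{\cos x}{1} = 1,
% but the cos above comes from the derivative of sin, whose
% definition at 0 is exactly the limit we were trying to compute:
\qquad
(\sin)'(0) = \lim_{h \to 0} \frac{\sin h - \sin 0}{h}
           = \lim_{h \to 0} \frac{\sin h}{h}.
```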
Could there be situations where this circularity is not introduced explicitly by us, and is instead inevitable? Does it have more profound consequences?