Is there such a notion as “effectively computable reductions”, or would this not be useful?

Most reductions in NP-hardness proofs I have encountered are effective in the sense that they give an explicit polynomial-time algorithm that maps instances of the known hard problem to instances of the problem in question. All reductions among the 21 classic problems considered by R. Karp work this way. A very simple example is the reduction from INDEPENDENT_SET to CLIQUE: just build the complement graph of your input.
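For concreteness, here is a minimal sketch (in Python, with an adjacency-set graph representation and function name I chose only for illustration) of what I mean by an explicit reduction:

```python
import itertools

def independent_set_to_clique(graph, k):
    """Map an INDEPENDENT_SET instance (graph, k) to a CLIQUE instance.

    `graph` is a dict mapping each vertex to the set of its neighbours
    (an undirected graph).  An independent set of size k in `graph` is
    exactly a clique of size k in the complement graph, so the reduction
    just builds the complement; this clearly runs in polynomial time.
    """
    vertices = list(graph)
    complement = {v: set() for v in vertices}
    for u, v in itertools.combinations(vertices, 2):
        if v not in graph[u]:       # non-edge in the original graph...
            complement[u].add(v)    # ...becomes an edge in the complement
            complement[v].add(u)
    return complement, k            # the parameter k stays the same
```

The point is that this is an ordinary algorithm I can actually run on any given instance, not just a proof that some mapping exists.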

But when considering the proof of the famous Cook-Levin theorem that SAT is NP-complete, the reduction starts from a non-deterministic TM together with some polynomial bounding its running time, which exists by definition. It is not clear to me how to get this polynomial effectively: given a non-deterministic TM which I know runs in polynomial time, I see no way to compute such a polynomial bound, and I highly suspect it is not computable at all.
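To make the issue concrete, here is a rough sketch of the interface of the Cook-Levin construction as I understand it; the function name and the dictionary-based machine encoding are just placeholders of mine, and the actual clause generation is elided, since it is not where my question lies:

```python
def cook_levin_reduction(ntm, p, x):
    """Sketch of the Cook-Levin construction's interface.

    ntm -- a description of a nondeterministic TM M (here a dict with
           "states" and "symbols" entries; the encoding is a placeholder)
    p   -- an explicit time bound, given as a function n -> int, with the
           promise that M decides its language within p(n) steps
    x   -- the concrete input string to be mapped to a CNF formula

    The formula describes a t x t tableau of configurations with
    t = p(len(x)) + 1, so the bound p is used explicitly when the
    variables are laid out, yet the proof only asserts that some such
    p exists; it does not say how to obtain it from ntm alone.
    """
    t = p(len(x)) + 1
    # one Boolean variable per (step, tape cell, symbol-or-state) triple
    variables = [(i, j, s)
                 for i in range(t)
                 for j in range(t)
                 for s in ntm["symbols"] + ntm["states"]]
    clauses = []
    # ... clauses forcing row 0 to encode the start configuration on x,
    # ... clauses forcing row i+1 to follow from row i by one move of M,
    # ... clauses forcing some row to contain the accepting state
    #     (clause generation elided in this sketch)
    return variables, clauses
```

So the construction itself is perfectly explicit once `p` is handed to it; my problem is where that `p` is supposed to come from.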

So solely from the encoding of NP-complete problems by the class of non-deterministic polynomial-time TMs (the polynomial itself is not part of the encoding, as far as I know), I see no way to give the reduction effectively; the aforementioned proof just shows that one exists, but not how to get it.

Maybe I have misunderstood something, but I have the impression that the reductions usually given are stronger in the sense that they are indeed computable, i.e. given the problem we can compute the reduction, and not merely know of its existence.

Has this ever been noted? And if so, is there a notion of an “effectively computable reduction”, or would it be impractical to restrict reductions to be themselves computable? For me, from a more practical perspective, and also given the way I sometimes see reductions introduced (“we have an algorithm to convert one instance into another”), it would be highly desirable to know how to obtain this algorithm/reduction, so it actually seems more natural to demand it. Why is this not done?