Turing machines take inputs that are strings of symbols from some alphabet, and they give outputs that are strings of symbols from the same alphabet. To show that a function is computable, we have to exhibit a Turing machine that computes it. In the case where the function is not from strings to strings, something about this bothers me.
I will introduce the problem with a simple example. Suppose that I define a function from binary trees to binary trees, and I want to show, formally, that it’s computable. To do that I would have to exhibit a Turing machine that computes it. Since binary trees are not strings, I first have to design a string representation of binary trees. For example, I could represent a tree as a string of nested parentheses, such as ((*,*),(*)). Having done this, I can then pass binary trees to Turing machines in string form, and get trees back in string form, no problem.
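To make the idea concrete, here is a sketch of such an encoding in Python. For simplicity I assume strictly binary trees (a leaf is `None`, an internal node is a pair of subtrees), which is a slight variant of the representation above; the scheme and function names are my own illustrative choices.

```python
def encode(tree):
    """Encode a binary tree as a string: a leaf is '*', a node is '(l,r)'."""
    if tree is None:
        return "*"
    left, right = tree
    return "(" + encode(left) + "," + encode(right) + ")"

def decode(s):
    """Inverse of encode: parse a string back into a tree."""
    tree, rest = _parse(s)
    assert rest == "", "trailing characters after a complete tree"
    return tree

def _parse(s):
    """Parse one tree from the front of s; return (tree, remaining string)."""
    if s.startswith("*"):
        return None, s[1:]
    assert s.startswith("(")
    left, rest = _parse(s[1:])   # left subtree
    assert rest.startswith(",")
    right, rest = _parse(rest[1:])  # right subtree
    assert rest.startswith(")")
    return (left, right), rest[1:]

t = ((None, None), (None, None))
s = encode(t)            # "((*,*),(*,*))"
assert decode(s) == t    # the round trip recovers the tree
```

Of course, this only pushes the question back a level: the code above is itself just an informal witness that the encoding is computable, which is exactly the point at issue.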
But here’s my problem: the representation of binary trees as strings is itself a function (in fact an injection) from binary trees to strings. How can I know, formally, that this function is a computable one?
On an intuitive level I have no problem seeing that it is, but on a formal level I can’t exhibit a Turing machine to verify it, because then I would need a string representation of binary trees, and we quickly get into an infinite regress. For simple things like binary trees the computability of the representation is obvious, but for more complicated sets it might not be.
Can this circle be squared — is there a formal definition of computability for functions of arbitrary countable sets that might not admit an obvious mapping to strings of symbols?
Here is another way of explaining my question: we might choose to define a model of computation that operates inherently on trees, or on some other set, rather than on strings. Indeed, many such models of computation have been proposed, including the lambda calculus and combinator calculi (which are arguably more naturally seen as operating on trees than on strings), the recursive functions (which operate on integers), and various cellular automata (which operate on infinite strings or two-dimensional lattice configurations, rather than finite strings). It’s really only historical accident that led string-based Turing machines to become the gold standard against which other models of computation are judged.
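To illustrate what a tree-native model looks like, here is a minimal sketch (my own illustration, not a standard presentation) of SK combinator reduction: a term is either the atom "S" or "K", or a pair (f, x) meaning "apply f to x", so every program and every value is a binary tree, and reduction rewrites trees directly, with no strings involved anywhere.

```python
def step(t):
    """Perform one reduction step on a term, or return None if t is a normal form."""
    if not isinstance(t, tuple):
        return None          # atoms "S" and "K" don't reduce
    f, x = t
    # K a b -> a, where t = (("K", a), b)
    if isinstance(f, tuple) and f[0] == "K":
        return f[1]
    # S a b c -> (a c)(b c), where t = ((("S", a), b), c)
    if isinstance(f, tuple) and isinstance(f[0], tuple) and f[0][0] == "S":
        a, b, c = f[0][1], f[1], x
        return ((a, c), (b, c))
    # otherwise, reduce inside a subterm
    rf = step(f)
    if rf is not None:
        return (rf, x)
    rx = step(x)
    if rx is not None:
        return (f, rx)
    return None

def normalize(t, fuel=1000):
    """Reduce t to normal form, giving up after `fuel` steps (reduction may not terminate)."""
    for _ in range(fuel):
        nxt = step(t)
        if nxt is None:
            return t
        t = nxt
    raise RuntimeError("no normal form found within fuel limit")

I = (("S", "K"), "K")              # S K K acts as the identity combinator
assert normalize((I, "K")) == "K"  # applying it to K gives back K
```

Asking whether this model is "equivalent to Turing machines" already presupposes a computable correspondence between trees and strings, which is exactly the mapping whose formal status is in question.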
The problem is that without being able to talk about computable mappings between these sets, it’s not strictly formally possible to say whether these other models of computation are equivalent to Turing machines or not. While it usually doesn’t raise any practical problems to just think of everything in terms of string representations, it seems dissatisfying that our theory of computation doesn’t treat all models of computation equally, but instead privileges those whose input and output are from a particular kind of set.
So really my question is whether a theory of computation has been formulated that is model-agnostic, in precisely this sense of not privileging one particular kind of set for input and output. It’s quite an abstract question, motivated more by a desire for formal niceness than by any practical concerns.
One suspects, for example, that the category theorists might have done some work in this direction. This is because category theory tends to talk about the properties of functions without talking much about the sets they map between, so a “category of computable functions” would probably not know or care whether its underlying objects were sets of strings or something else. This is an approach I would be particularly keen to read about if it exists. (I’ve asked another question, specifically about the category theory aspect, over at MathOverflow.)