This is a very basic question, but I spent some time reading and found no answer. I did not major in computer science, but I have read some basic algorithm material, for example some basic sorting algorithms, and I have some basic knowledge of how computers operate. However, I am really interested in the idea of a Turing machine, especially the non-deterministic one.

I have read the Wikipedia definition of a Turing machine (and watched some YouTube videos), and I sort of accept it, although I really feel that this is a huge jump from an algorithm to an abstract machine. My understanding is the following (you are more than welcome to correct me):

- A Turing machine is a machine performing work specified in a cookbook (the algorithm).
- The pages of the cookbook represent the "states" of your machine, and each page contains a table saying which state and which cell your machine will move to, given the symbol the machine reads and its current state. (NB: this is not a function but a partial function, because it is possible that the machine stops.)
- So, to grasp the idea and motivation behind defining an abstract Turing machine, I imagine that the algorithm corresponds to the partial map, the memory of the computer corresponds to the (infinitely long) tape, and what is finally on the tape is the answer to the question you want to solve.
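To make the picture above concrete for myself, here is a minimal sketch of that partial transition map in Python (all names and the bit-flipping example are my own, purely illustrative): `delta` maps a (state, symbol) pair to a (new state, new symbol, move) triple, and a missing entry is exactly the "the machine stops" case of a partial function.

```python
# Minimal deterministic Turing machine sketch (illustrative names only).
# delta is a *partial* function: a missing (state, symbol) key means "halt".

def run_tm(delta, tape, state="q0", head=0, max_steps=1000):
    """Run a TM; delta maps (state, symbol) -> (new_state, new_symbol, move)."""
    tape = dict(enumerate(tape))          # sparse tape; unwritten cells are blank "_"
    for _ in range(max_steps):
        symbol = tape.get(head, "_")
        if (state, symbol) not in delta:  # partial function: no rule => halt
            break
        state, tape[head], move = delta[(state, symbol)]
        head += 1 if move == "R" else -1
    # what's finally on the tape is the answer
    return "".join(tape[i] for i in sorted(tape)).strip("_")

# Example "cookbook": flip every bit, then halt at the first blank.
flip = {
    ("q0", "0"): ("q0", "1", "R"),
    ("q0", "1"): ("q0", "0", "R"),
}
# run_tm(flip, "0110") -> "1001"
```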

So a Turing machine looks like a machine for realizing any algorithm that solves a problem. One just "translates" the algorithm into a set of mysteriously simple rules (i.e., the partial function), lets the machine do the laborious job, and then we get the solution.

In this respect, a Turing machine is always deterministic, because algorithms are deterministic: they tell you precisely what to do next. There is no uncertainty. A Turing machine is just a machine for realizing any algorithm.

OK, this is very abstract, and I sort of accept it. However, I then read about something called a non-deterministic Turing machine (NTM), and it knocked me down. An NTM is pretty much the same as a Turing machine, except that the partial function is now replaced by a "relation". That is, it is a one-to-many map, and it is no longer a (partial) function.
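For comparison with the deterministic picture, here is a sketch of how I understand the relation (again, all names and the toy example are my own): each (state, symbol) pair now maps to a *list* of allowed moves, and one common convention is that the NTM accepts if and only if *some* branch of choices reaches the accept state, which a simulation can check by exploring all branches.

```python
# Sketch of a nondeterministic transition *relation* (illustrative only).
# delta maps (state, symbol) to a LIST of allowed moves; the machine
# accepts iff SOME branch of choices reaches the accept state.
from collections import deque

def ntm_accepts(delta, tape, start="q0", accept="qa", max_steps=20):
    frontier = deque([(start, 0, tuple(tape))])   # explore all branches (BFS)
    for _ in range(max_steps):
        next_frontier = deque()
        while frontier:
            state, head, t = frontier.popleft()
            if state == accept:
                return True                       # one accepting branch suffices
            sym = t[head] if 0 <= head < len(t) else "_"
            for new_state, new_sym, move in delta.get((state, sym), []):
                nt = list(t)
                if 0 <= head < len(nt):
                    nt[head] = new_sym
                next_frontier.append(
                    (new_state, head + (1 if move == "R" else -1), tuple(nt)))
        frontier = next_frontier
    return False

# Toy relation: on reading "1" the machine has TWO options -- keep
# scanning, or "guess" that this is the position it wanted and accept.
guess = {
    ("q0", "0"): [("q0", "0", "R")],
    ("q0", "1"): [("q0", "1", "R"), ("qa", "1", "R")],
}
# ntm_accepts(guess, "001") -> True; ntm_accepts(guess, "000") -> False
```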

Could someone explain to me why we need such multiple options? I would never expect to encounter anything uncertain in the implementation of an algorithm. It is like telling the machine: first do A, then if you find yourself in state B and find the data is now B', choose for yourself one of the 10 allowed next steps?

Do NTMs correspond to a set of algorithms that need uncertainty, for example the generation of random numbers? If not, why do we need to allow multiple choices for a Turing machine?

Any help will be appreciated!