Wikipedia states on the halting problem page: “For any program f that might determine if programs halt, a ‘pathological’ program g called with an input can pass its own source and its input to f and then specifically do the opposite of what f predicts g will do.”
Suppose we have two neural networks approximating f and g (allowing unbounded size and depth). Is it wrong to assume that these two NNs can play a zero-sum game against each other, and that the Nash equilibrium of that game (if one exists) contains the solution to the halting problem?
A question was posed to me about the relevance of zero-sum games to heuristic search algorithms.
I know that a zero-sum game is one in which one person’s gain is exactly balanced by another person’s loss, but I am unsure how this applies to heuristics.
I think it may have something to do with parts of the problem space being pruned away, so that the search over the remaining space is optimised?
You may know of the paper on the “Memory” game – sometimes the best strategy is turning known cards (here: https://www.math.kth.se/xComb/x1.pdf). Here is a simpler toy example: you and your opponent have to guess a time on the clock (4, 8, or 12). You say “4”, “8” or “12”, and the referee says “Yes” if correct, “Nein” if the true time is closer clockwise and “Njet” if it is closer anticlockwise. Clearly the second player has a 1/3 : 2/3 advantage, since he gets more information from the answer than you do.
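To make the advantage concrete: with three equally spaced positions, a wrong first guess plus the referee’s directional answer pins the secret down exactly, so the second player wins whenever the first player misses. A minimal simulation (my own encoding of the rules described above, with “Nein” meaning one step clockwise):

```python
import random

POSITIONS = [4, 8, 12]  # three equally spaced clock positions

def referee(secret, guess):
    """'Yes' if correct; 'Nein' if the secret is one step clockwise
    of the guess; 'Njet' if it is one step anticlockwise."""
    i, j = POSITIONS.index(guess), POSITIONS.index(secret)
    if i == j:
        return "Yes"
    return "Nein" if (j - i) % 3 == 1 else "Njet"

def player2_deduce(guess, answer):
    """After a wrong first guess, the answer identifies the secret."""
    step = 1 if answer == "Nein" else -1
    return POSITIONS[(POSITIONS.index(guess) + step) % 3]

rng = random.Random(0)
trials = 30_000
p1_wins = 0
for _ in range(trials):
    secret = rng.choice(POSITIONS)
    guess = rng.choice(POSITIONS)
    answer = referee(secret, guess)
    if answer == "Yes":
        p1_wins += 1
    else:
        # the referee's answer always reveals the secret to player 2
        assert player2_deduce(guess, answer) == secret

print(p1_wins / trials)  # close to 1/3
```

So the first player only wins the 1/3 of games where the initial blind guess happens to be right, matching the 1/3 : 2/3 split.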
Do you have more references for games with this effect? In particular, can scanning the whole game tree (for longer games like Memory) be avoided by computing a local entropy, at least approximately? (For example, if the referee only says “yes” and “no”, the remaining possibilities have a 1:1 chance, which favors you again. So ask a question that spreads them out as equally as possible.)
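The greedy version of the heuristic hinted at in the question can be sketched directly: assume the secret is uniform over the remaining candidates, and pick the question whose answer distribution has maximal Shannon entropy (i.e. is spread out as equally as possible). The function names and the comparison example are my own illustration, not from any specific reference:

```python
import math
from collections import Counter

def answer_entropy(candidates, guess, answer_fn):
    """Shannon entropy (bits) of the referee's answer, assuming the
    secret is uniform over `candidates`."""
    counts = Counter(answer_fn(secret, guess) for secret in candidates)
    n = len(candidates)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

def best_question(candidates, questions, answer_fn):
    """Greedy heuristic: ask the question whose answers are most
    spread out, i.e. whose answer distribution has maximal entropy."""
    return max(questions, key=lambda q: answer_entropy(candidates, q, answer_fn))

# Example: a referee who answers "yes" / "higher" / "lower".
# The entropy-maximising first question is the median candidate.
def cmp_answer(secret, guess):
    if secret == guess:
        return "yes"
    return "higher" if secret > guess else "lower"

secrets = list(range(1, 16))
print(best_question(secrets, secrets, cmp_answer))  # prints 8, the median
```

This is only a one-step (myopic) criterion: it greedily maximises the information gained by the next answer rather than searching the full game tree, so it is exactly the kind of local approximation the question asks about, and it can be suboptimal in games where answers interact across moves.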