The basic result of the incompleteness proofs is pretty simple. The proofs are complicated because the idea was literally unthinkable until programming was invented. The heroic efforts of Gödel, Skolem, Post, and Church to create a mathematics in which doing mathematics could be represented as some sort of calculation were an incredible feat of imagination. But today, programming is kind of obvious. So the general result should be simpler.

Suppose we have a computer with an infinite memory that holds arbitrary integers and can execute a simple programming language whose programs are themselves numbers. The key thing about the computer is that numbers in memory can be interpreted as integers or as instructions, as we please. It's certainly possible that a number encodes a program that makes no sense, but every program is just an integer and every integer encodes some program. Suppose that the computer has the usual arithmetic operations and conditionals plus recursion. Because numbers are programs, we can take a number, say x, and a second number y, and compute x(y), because x "encodes" some program.
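One way to see that every integer encodes some program is to pick a tiny instruction set and read the integer's digits as instructions. Here is a toy sketch in Python; the two-instruction language is invented purely for illustration and is nothing like the machine the argument actually needs:

```python
# Toy illustration: read an integer's binary digits as a program in a
# two-instruction language acting on an accumulator:
#   digit 0 -> "add 1",  digit 1 -> "double".
# Every non-negative integer decodes to *some* program this way, even if
# it is not a useful one.

def run(code, acc=0):
    """Run the program encoded by `code`, starting from `acc`."""
    for bit in bin(code)[2:]:        # most-significant digit first
        acc = acc + 1 if bit == "0" else acc * 2
    return acc

# run(5) reads digits 1, 0, 1: double (0), add 1 (1), double (2)
```

Because every digit is a valid instruction, no integer fails to decode; "programs that make no sense" here are just programs that compute something uninteresting.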

Programs as values means that we can pass programs that compute functions as arguments to other functions. For example, we should be able to write "evaluate" so that "evaluate(x,y) = x(y)". If f(x) = x*x then evaluate(f,5) = 25. It would be useful to have a function "S" which computes a "solution" to a function, so that "S(f,x) = the smallest number y ≥ x such that f(y)=0". In fact, such a function is easy to define in our language: S(f,x) = ( if f(x)=0 then x; else S(f,x+1) ). It is easy to see that S(f,x) will never terminate if there is no y ≥ x with f(y)=0. For example, if g(x) = x+1 then S(g,1) never terminates. The question is whether we can write a function E(f,x) so that E(f,x) always terminates, with E(f,x)=1 if S(f,x) terminates and E(f,x)=0 if S(f,x) does not terminate. E doesn't find the solution; it just has to tell us whether S will find one. However, we can prove that we cannot write E with these specifications.
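Here is how evaluate and S might look in Python, using ordinary Python functions to stand in for the number-encoded programs. This is a sketch of the definitions above, not the number-coded machine itself:

```python
def evaluate(x, y):
    # evaluate(x, y) = x(y): apply the program x to the argument y
    return x(y)

def S(f, x):
    # Smallest y >= x with f(y) == 0; recurses forever if no such y exists
    return x if f(x) == 0 else S(f, x + 1)

# evaluate(lambda x: x * x, 5) == 25
# S(lambda y: y - 7, 0) == 7
# S(lambda x: x + 1, 1) would never return, just as in the text
```

Python's finite recursion depth means this S gives up on long searches where the idealized machine would keep going, but for small examples it behaves exactly as specified.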

The proof is by contradiction. Suppose we could write E. Then someone could write a “diagonal” program

D(f) = { if E(f,f)=0 then 0; else S(g,1) }

where g(x) = x+1 as before, so the "else" branch never terminates. What about D(D)? Suppose E(D,D)=1. By the specification of E, S(D,D) terminates. But S(D,D) starts by computing D(D), and since E(D,D)=1, D(D) runs S(g,1) forever, so S(D,D) can never terminate. Now suppose E(D,D)=0. Then S(D,D) does not terminate. But E(D,D)=0 means D(D)=0, so S(D,D) finds a solution at y = D immediately and terminates. So

S(D,D) terminates if and only if S(D,D) does not terminate.
Whatever answer E(D,D) gives, it is wrong.

Assuming that the world makes logical sense (a big leap), to get to this paradoxical result we must have assumed something that is false: namely, that we could somehow define E(f,x) so that it always terminates and correctly computes the answer to whether S(f,x) terminates.

For mathematics, this result has all sorts of interesting implications. For example, if it were not true, we could, in principle, write a function that tested numbers to see if they were odd perfect numbers and marched up the positive integers until it found one. If E could evaluate that function, we'd know whether such a number exists – something nobody seems to know right now. Any theorem "Exists x, P(x)" where P can be represented in our language could be tested by evaluating E on a function that tests P on consecutive integers until it finds a solution. Any theorem "Forall x, P(x)" could be evaluated by seeing if a program that looks for a counterexample ever terminates. All of that should indicate that E is implausible as well as impossible. For computer science the implications are more subtle, because the fact that we don't really have computers with infinite memory matters in computer science. That's also interesting in pure mathematics: see Gödel's Lost Letter.
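The "test P on consecutive integers" reduction is easy to sketch in Python, reusing the S from before. Since searching for an odd perfect number might never return, the example below substitutes a decidable stand-in – plain perfect numbers – so the search actually terminates:

```python
def S(f, x):
    # Smallest y >= x with f(y) == 0; recurses forever if none exists
    return x if f(x) == 0 else S(f, x + 1)

def exists_witness(P, start=1):
    # Turn the theorem "Exists x, P(x)" into the kind of search S performs:
    # f(y) == 0 exactly when P(y) holds
    return S(lambda y: 0 if P(y) else 1, start)

def is_perfect(n):
    # n equals the sum of its proper divisors (6 = 1 + 2 + 3)
    return n > 1 and sum(d for d in range(1, n) if n % d == 0) == n

# exists_witness(is_perfect) finds 6, the smallest perfect number.
# Swap in a test for *odd* perfect numbers and the call may never return --
# E is exactly the function that could have told us that in advance.
```

This is the whole point of the impossibility result: the search itself is trivial to write; what cannot be written is the total function E that predicts whether the search succeeds.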


A hard theorem becomes easy over time.