# On the Implausibility that the Human Mind is Turing Equivalent

Regrettably, I have much better things to do than write this article, like trying to get funding for my A.I. venture, but upon reflection this morning while exercising, I realized that the idea of the human mind being Turing Equivalent is comically implausible. In fact, I was instantly offended once I understood why, because people really do take these incredibly stupid ideas seriously, and this is often how they justify the awful manner in which they treat each other.

Here’s the mathematical argument:

Can a program produce a copy of itself if it does not have access to its own code?

This is distinct from a quine: a program that produces a copy of itself because it contains print instructions that end up emitting its own source. What I am asking instead is whether there is a program that can reproduce itself, or a mathematical equivalent of itself, when its original code contains no description of itself.
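For contrast, here is a sketch (in Python, purely illustrative) of the kind of self-reproduction the question excludes. The two-line program below works only because it carries a description of itself in the string `s`:

```python
import io
import contextlib

# The two-line program below is a classic quine: it reproduces itself
# only because it carries a description of itself in the string s.
quine = "s = 's = %r\\nprint(s %% s)'\nprint(s % s)"

buf = io.StringIO()
with contextlib.redirect_stdout(buf):
    exec(quine)                        # running the quine...
assert buf.getvalue() == quine + "\n"  # ...prints its own source exactly
```

The check passes precisely because the quine's source is baked into it as data; stripping that description away is what makes the question above hard.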

So, for example, a program that generates all possible programs up to some fixed length would not qualify as a solution, since in order for this program to terminate, it would have to know its own length, which is forbidden by the problem.
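To make this concrete, here is a sketch of such an enumerator over a toy two-letter alphabet (my illustration, not part of the original argument). Note that it terminates only because the bound `max_len` is handed to it from outside, which is exactly the self-knowledge the problem forbids:

```python
from itertools import product

def all_programs(alphabet, max_len):
    # Enumerate every string over `alphabet` of length 1..max_len.
    # Each string is a candidate program; whether it is syntactically
    # valid or halts is a separate question. The bound max_len must be
    # supplied externally -- the program cannot derive its own length.
    for n in range(1, max_len + 1):
        for chars in product(alphabet, repeat=n):
            yield "".join(chars)

progs = list(all_programs("ab", 2))
# -> ['a', 'b', 'aa', 'ab', 'ba', 'bb']
```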

Now assume that the program somehow did reproduce itself. How would it test this? It can't: since it can't read its own code, there is no way for it to know for sure. It can only test some finite number of inputs and thereby develop some degree of belief that it has in fact succeeded.

Therefore, it's not possible to write such a program and know that it has succeeded, since the program can test only a finite number of inputs.
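A toy illustration (my own, not the author's formalism) of why finite testing yields only a degree of belief: two programs can agree on every input we try and still differ somewhere we never looked.

```python
def agree_on(f, g, inputs):
    # Agreement on finitely many inputs is evidence, never proof,
    # that f and g compute the same function.
    return all(f(x) == g(x) for x in inputs)

def f(n):
    return n * n

def g(n):
    return n * n if n < 100 else 0  # deviates only past the sample

assert agree_on(f, g, range(100))   # every test we ran passed...
assert f(100) != g(100)             # ...yet the programs differ
```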

Now consider a program that can tell you how long its own code is, without access to its own code. This is also impossible: by definition, the best you can do is write a program that simulates the underlying program on some finite number of inputs. But multiple distinct programs can produce the same outputs on all inputs. Therefore, even if you produce a correct mathematical equivalent of the underlying program, you can't be sure it's the same length as the underlying program.
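The point that behavior does not pin down length can be sketched directly (again, toy source texts of my own invention): two programs with identical outputs on every input, but different lengths.

```python
# Two source texts that compute the same function but have different
# lengths: behavior alone cannot tell you which text, and hence which
# length, lies behind it.
short_src = "def f(n):\n    return n + n\n"
long_src = "def f(n):\n    total = n\n    total = total + n\n    return total\n"

short_ns, long_ns = {}, {}
exec(short_src, short_ns)
exec(long_src, long_ns)

# Identical behavior on every tested input...
assert all(short_ns["f"](x) == long_ns["f"](x) for x in range(1000))
# ...yet the source texts have different lengths.
assert len(short_src) != len(long_src)
```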

Any sufficiently intelligent person is at least as powerful as a Turing Machine, since I can teach any such person a Turing Complete language, and that person can then perform any algorithm written in that language, given enough time. In summary, sufficiently intelligent, literate people are at least as powerful as a Turing Machine.

Now assume that the mind of Alan Turing was Turing Equivalent; that is, that Alan Turing was a Turing Machine. Alan Turing produced a paper that describes the Turing Machine. This is exactly the scenario described above: a program (i.e., Alan Turing) wrote another program that is mathematically equivalent to itself (i.e., his description of the Turing Machine) without having access to its own code (i.e., Alan Turing had no idea how his brain actually operates).

The epistemology here is a bit subtle, in that the proof above tells us only that there is no way for a program to know that it has produced another program equivalent to itself, if it doesn't have access to its own code. However, in the case of a Turing Equivalent program, you do have information about the program even if you don't know its implementation, since it is by definition Turing Equivalent. As a result, it is in this case at least possible for a UTM to test whether or not a given program is Turing Equivalent, since this can be established by deduction, which can be mechanized.

But Alan Turing didn't know what Turing Equivalence was until he described the Turing Machine. So we're back where we started: Alan Turing had no idea, and no way to test, whether or not what he produced was in fact equivalent to himself. And neither does anyone else, absent experimentation.

Again, human beings are necessarily at least as powerful as Turing Machines, for the reasons described above, but there's plenty of experimental evidence suggesting that Turing Machines will never produce the types of artifacts that some human beings produce constantly, in particular, novel mathematics. This doesn't imply the non-existence of other types of machines, beyond a UTM, that might be able to do these things. The point is that human beings can do everything a UTM can do, yet it seems quite clear that a UTM cannot do many of the things that some people can do.

This suggests the obvious: some people are more powerful than UTMs.

The contrary view assumes that Alan Turing just happened to produce a copy of his own code, even though this is not a testable hypothesis. It assumes that he accidentally reproduced everything that describes his mind, without having access to that information, and absent any means by which this hypothesis could be tested.

This suggests the obvious as well: some people find it morally convenient to assume that we’re machines, and would rather not consider the issue any further.