Can the concept of “my” reasoning, as opposed to “someone else’s” reasoning, be formalized? Suppose I manage to write an AI that can run on my laptop. I boot my laptop, run the program, and it finds a proof of X. It writes “a proof of X exists” to the “which statements have proofs” database on my hard drive. I shut my laptop down, reboot, and restart the program. It checks the “which statements have proofs” database and finds the “a proof of X exists” statement. Is that “its” reasoning? It was created by that program, but it wasn’t created by that instance of the program. What if a bunch of people are running AIs, all with the same source code, on different laptops? If one of those AIs queries another, is that “its own reasoning”, or another AI’s? What if, on Saturday, an AI says “I knew on Friday that X is true; I didn’t record my proof, but I know that I am a rational thinker, so I know it’s true”? But the reason it knew on Friday that X is true was that it knew on Thursday that X is true, and so on. Is this the same paradox?
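To make the setup concrete, here is a minimal sketch of the proof-cache scenario in Python. Everything in it is hypothetical: the file name, the helper names, and the `prover` callback are illustrative stand-ins, not a real system. The point it shows is that the cache entry records only the *claim* that a proof was found, so a fresh instance that trusts it is trusting a past instance rather than its own reasoning.

```python
import json
import os

CACHE_PATH = "proved_statements.json"  # hypothetical on-disk "database"

def load_cache():
    """Load the set of statements some prior instance recorded as proved."""
    if os.path.exists(CACHE_PATH):
        with open(CACHE_PATH) as f:
            return set(json.load(f))
    return set()

def record_proved(statement, cache):
    """Persist the claim 'a proof of <statement> exists' for future instances."""
    cache.add(statement)
    with open(CACHE_PATH, "w") as f:
        json.dump(sorted(cache), f)

def knows(statement, cache, prover):
    """Trust the cache if the statement is there; otherwise try to prove it.

    Note what a cache hit actually carries: not a proof, just the claim
    that some instance of this program once found one.
    """
    if statement in cache:
        return True   # accepted on the word of a past (or other) instance
    if prover(statement):
        record_proved(statement, cache)
        return True
    return False
```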
Also, there is a difference between creating an AI that you trust and finding one. One can imagine a universe in which an infinite past chain of AIs exists, but the idea of creating an infinite chain of reasoning is a lot harder to imagine.
There are some problems with the idea that PA can be perfectly instantiated physically, but putting those aside, there are even further problems in extrapolating into the past. What’s to stop me from creating an AI with the same source code as you and putting “2+2=5 is true” in its memory banks? Can you tell the difference between an AI that put “there is a proof of X” in its memory banks according to its programming, and an AI that was simply created with “there is a proof of X” in its memory banks to begin with? Even if it “remembers” going through the process of coming up with a proof, that memory could have been there from the start. If it remembers all of its reasoning, you could check that that reasoning is consistent with its source code, but that would take as much work as just running the calculations over again.
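A small sketch of that indistinguishability point (the `Agent` class and the trivial stand-in prover are hypothetical): an agent that honestly records “there is a proof of X” and an agent seeded with that string at creation end up in identical states, so no inspection of the memory alone can separate them.

```python
class Agent:
    """Toy agent whose 'memory banks' are just a set of recorded claims."""

    def __init__(self, memory=None):
        self.memory = memory if memory is not None else set()

    def prove_and_record(self, statement, prover):
        """Record the claim only if the (stand-in) prover succeeds."""
        if prover(statement):
            self.memory.add(f"there is a proof of {statement}")

honest = Agent()
honest.prove_and_record("X", prover=lambda s: True)  # stand-in for real proof search

seeded = Agent(memory={"there is a proof of X"})     # claim planted at creation

# Identical memory, very different histories; the state alone can't say which is which.
assert honest.memory == seeded.memory
```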