Hi.
What, no bad ending?
Unless it’s unFriendly AI that revives you and tortures you forever.
Let’s suppose the Church-Turing thesis is true.
Are all mathematical problems solvable?
Are they all solvable by humans?
If there is a proof* for every true theorem, then we need only enumerate all possible texts and look for one that proves—or disproves—say, Goldbach’s conjecture. The procedure will halt every time.
(* Proof not in the sense of “formal proof in a specific system”, but “a text understandable by a human as a proof”.)
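(To make the enumeration concrete, here is a schematic sketch in Python. The function `is_valid_proof` is hypothetical: it stands in for “a human would recognize this text as a proof of the statement”, and nothing here supplies it, so this is an illustration of the argument, not a working procedure.)

```python
import string
from itertools import count, product

def is_valid_proof(text: str, statement: str) -> bool:
    """Hypothetical oracle: would a human accept `text` as a proof of
    `statement`? Whether this can exist is the whole question."""
    raise NotImplementedError

def decide(statement: str) -> bool:
    """Search all finite texts for a proof or a disproof of `statement`.

    If every true statement (or its negation) has a humanly-recognizable
    proof, this procedure halts on every input.
    """
    alphabet = string.printable
    for length in count(1):  # texts of length 1, 2, 3, ...
        for chars in product(alphabet, repeat=length):
            text = "".join(chars)
            if is_valid_proof(text, statement):                  # hypothetical oracle
                return True
            if is_valid_proof(text, "not (" + statement + ")"):  # hypothetical oracle
                return False
```

Feeding in statements of the form “Turing machine M halts on input x” is what would turn this into a halting-problem solver, which is the contradiction developed below.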
But this can’t possibly be right—if the human mind that looks at the proofs is Turing-computable, then we’ve just solved the Halting Problem—after all, we can pose the halting of any Turing machine as a mathematical problem.
So what does that mean?
Not all true theorems have a proof? (what does that even mean)
Not all proofs are possible to follow by a human? (very pessimistic, in my opinion)
Some other answer I’m missing?
You can also extend the question to any human-made AIs/posthuman minds, but this doesn’t help much—if the one looking at proofs can reliably self-improve, then the Halting Problem would still be solved.
EDIT: A longer explanation of the problem, by a friend.
Correct. Read about Gödel’s Incompleteness Theorem, preferably from Gödel, Escher, Bach by Douglas Hofstadter.
I read GEB. Doesn’t Gödel’s theorem talk about proofs in specific formal systems?
Pessimistic perhaps, but also correct. If I remember correctly, theorem-proving programs have already produced correct proofs that are easily machine-verifiable but intractably long and complicated and apparently meaningless to humans.
I consider this a question of scale. Besides, the theorem-proving program is written by humans and humans understand (and agree with) its correctness, so in some sense humans understand the correct proofs.
It applies to any formal system capable of proving theorems of number theory.
Right. So if human reasoning follows some specific formal system, there are true statements it can’t prove. But does it really follow one?
We can’t, for example, point to some Turing machine and say “It halts because of (...), but I can’t prove it”—because in doing so we’re already providing some sort of reasoning.
But then what do you mean by “possible to follow by a human”?
Maybe “it’s possible for a human, given enough time and resources, to verify the validity of such a proof”.
I’m suggesting that there are aspects of the universe which are as explicable as Newtonian mechanics—except that we, even with our best tools and with improved brains, won’t be able to understand them.
Is that really a ‘physical’ aspect, or a mathematical one? Newtonian mechanics can (I think) be derived from lower-level principles.
So do you mean something that is a consequence of a possible ‘theory of everything’, or a part of it?
I’ll have to think some more about it, but this looks like a correct answer. Thank you.
A theory of everything as I see it (and apparently Wikipedia agrees) would allow us (in principle—given full information and enough resources) to predict every outcome. So every other aspect of the physical universe would be (again, in principle) derivable from it.
I think a relatively simple theory of everything is possible. This is however not based on anything solid—I’m a Math/CS student and my knowledge of physics does not (yet!) exceed high school level.
One thing I haven’t elaborated on here (and probably more hand-waving/philosophy than mathematics):
If the Church-Turing thesis is true, there is no way for a human to solve every mathematical problem. However, does it have to follow that not every true theorem has a proof?
What if every true theorem has a proof, not necessarily understandable to humans, yet somehow sound? That is, there exists a (Turing-computable) mind that can understand/verify this proof.
(Of course there is no one ‘universal mind’ that would understand all proofs, or this would obviously fail. And for the same reason there can be no procedure of finding such a mind/verifying one is right.)
Does the idea of not-universally-comprehensible proofs make sense? Or does it collapse in some way?
What you’re describing is an evil AI, not just an unFriendly one—unFriendly AI doesn’t care about your values. Wouldn’t an evil AI be even harder to achieve than a Friendly one?
Agreed, an AI based on a human upload gives no guarantee about its values… actually, right now I have no idea how the Friendliness of such an AI could be ensured.
Why do you think that an evil AI would be harder to achieve than a Friendly one?
Maybe not harder, but less probable: ‘paperclipping’ seems to be a more likely failure of Friendliness than an AI wanting to torture humans forever.
I have to admit I haven’t thought much about this, though.
Well, what if the AI took some liberty in the extrapolation and made up what it was missing? Being a simulation, you wouldn’t know how the “real you” differs from you.
In fact, combined with ‘write dreams down as soon as you wake up’ and ‘consume substance X’, it more or less summarizes the techniques.
There are also trance/self-hypnosis methods, like WILD; some people seem to be very successful with them.
How to check that you aren’t dreaming: make up a random number that’s too large for you to factor in your head, factor it with a computer, then check the correctness by pen and paper. If the answer fits, you know the computing hardware actually exists outside of you.
A similar method was used by the protagonist of Solaris to check whether he was hallucinating.
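(For concreteness, a minimal sketch of this reality check in Python, assuming sympy is available; the library choice and number range are my own, not from the original comment.)

```python
import random
from sympy import factorint  # the computer does the factoring

# 1. Make up a number too large to factor in your head.
n = random.randrange(10**12, 10**13)

# 2. Let the computer factor it.
factors = factorint(n)  # maps each prime factor to its exponent
print(n, "=", " * ".join(f"{p}^{e}" for p, e in factors.items()))

# 3. The pen-and-paper step: re-multiplying the factors stands in for
#    checking the answer by hand.
product = 1
for p, e in factors.items():
    product *= p ** e
assert product == n, "the 'external hardware' gave a wrong answer"
```

The point is the asymmetry: verifying a factorization by hand is easy even when finding it isn’t, so a correct answer is evidence that the computation happened outside your head.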
You’re not supposed to choose by looking at the numbers, only decide if the fact was Hard / Good / Easy to remember.
Here is some information on the algorithm used by Anki; it’s modified from one of SuperMemo’s algorithms (SM-2).
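(For reference, a hedged sketch of the classic SM-2 update rule that Anki’s scheduler descends from; Anki’s actual implementation differs in details, e.g. its four answer buttons and learning steps, so treat this as the underlying idea rather than Anki’s code.)

```python
def sm2_update(interval_days: float, ease: float, quality: int):
    """Classic SM-2: `quality` is a 0-5 self-rating of recall.

    Returns (next_interval_days, new_ease)."""
    if quality < 3:
        return 1.0, ease  # failed recall: see the card again soon
    # Ease grows or shrinks with answer quality; the 1.3 floor is SM-2's.
    ease = max(1.3, ease + 0.1 - (5 - quality) * (0.08 + (5 - quality) * 0.02))
    if interval_days <= 1:
        return 6.0, ease                   # second successful review
    return interval_days * ease, ease      # later reviews grow geometrically

# Example: a card at a 6-day interval, answered with quality 4 ("good"):
print(sm2_update(6.0, 2.5, 4))  # -> (15.0, 2.5)
```

The interval multiplier (the ease factor) is per-card state, which is why rating honestly matters: the numbers you see are outputs of this recurrence, not something to optimize by hand.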
I’m happy to see something about Anki here, but how are the Sequences supposed to be SRS-able in any way? “Learning” them involves understanding, not memorization of text/facts.
Or is that the joke (“mysterious answers”) and I failed to see it?
SRS software is very efficient, but it still makes you realize how terribly inefficient the brain is at remembering things. Wouldn’t it be better to just read a word list once, decide “this is important”, and simply remember it, instead of spending half an hour every day making the words stick?
How to deal with a program that has become self aware? - April Fools on StackOverflow.