Are you open to the possibility that, after going through all this “deep knowledge of a huge spectrum of science and philosophy,” you will find that there is a heretofore undetected fault in the concept?
I bet they are open to the possibility, but it will never arise because they don’t know how their beliefs are supposed to pay rent in future anticipations, short of having an AI take over the universe.
Do their hypotheses make any testable predictions? I don’t see any reason why the SIAI couldn’t sustain its position even a thousand years from now, because its arguments would still work.
The reason for this is that their concepts are simply too vague to be updated on new evidence.
Surely a computer simulation of the intelligence explosion can be created, given some simplifying assumptions.
This will either require so many simplifying assumptions that it effectively proves nothing, or be as difficult to do as creating the intelligence explosion itself.
How do you know? Have you done any? Gotta start somewhere.
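For what it’s worth, here is a minimal sketch of what “starting somewhere” might look like. It is not anyone’s actual model: it just Euler-integrates a toy growth law dI/dt = I**r under the single simplifying assumption that capability grows at a rate proportional to its current level raised to an illustrative “returns” exponent r. The exponent and the cap are made-up parameters, and the qualitative outcome (plateau, steady growth, or blow-up within the horizon) hinges almost entirely on the assumed r, which cuts both ways in the argument above: such a simulation is easy to write, and on its own it proves very little.

```python
# Toy sketch of recursive self-improvement under one simplifying assumption:
# capability I grows at a rate proportional to I**r. The exponent r
# ("returns on intelligence") and the cap are illustrative, not taken from
# any actual proposal; the conclusion is essentially baked into r.

def simulate(r, i0=1.0, dt=0.01, steps=500, cap=1e12):
    """Euler-integrate dI/dt = I**r; stop early if I exceeds the cap."""
    intelligence = i0
    for step in range(steps):
        intelligence += (intelligence ** r) * dt
        if intelligence > cap:
            return step * dt, intelligence  # blow-up within the horizon
    return steps * dt, intelligence

for r in (0.5, 1.0, 1.5):
    t, final = simulate(r)
    print(f"r={r}: I = {final:.3g} at t = {t:.2f}")
```

With these numbers, r = 0.5 and r = 1.0 stay modest over the horizon, while r = 1.5 blows past the cap; change the assumed exponent and you change the verdict, which is exactly the disputed part.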