A calibration question: Are you open to the possibility that, after going through all this “deep knowledge of a huge spectrum of science and philosophy” you will find that there is a heretofore undetected fault in the concept?
Of course.
But this is harder to do than to say. Part of my brain would of course be relieved to discover that, say, I have less reason to expect human extinction this century than I had previously thought. On the other hand, I know that, as crazy as it sounds, part of my brain is probably committed to my past stated beliefs and to my current mission, and would prefer to operate under confirmation bias.
Which means I need to do all kinds of things to be genuinely curious about the truth, like leaving a line of retreat for myself by visualizing what it would be like to make such a discovery, and how the world would still be okay given that result.
Which, since you gave me the opportunity, I just spent 20 seconds doing. :)
I bet they are open to the possibility, but it will never arise because they don’t know how their beliefs are supposed to pay rent in future anticipations, short of having an AI take over the universe.
Do their hypotheses make any testable predictions? I don’t see any reason why SIAI couldn’t sustain its position even a thousand years from now, because its arguments would still work just as well.
The reason for this is that their concepts are simply too vague to be updated on new evidence.
Surely a computer simulation of the intelligence explosion could be created, given some simplifying assumptions.
This will either require so many simplifying assumptions that it effectively proves nothing, or be as difficult to do as creating the intelligence explosion itself.
How do you know? Have you done any? Gotta start somewhere.
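For what it’s worth, here is a minimal sketch of what such a toy simulation might look like, and why the “proves nothing” worry bites. Everything in it is a made-up assumption for illustration, not anyone’s actual model: the reinvestment rate (0.1), the returns-on-intelligence exponent k, and the “explosion” threshold are all free parameters, and the qualitative outcome is dictated entirely by the choice of k rather than by any evidence.

```python
# A toy recursive self-improvement model: intelligence I is repeatedly
# reinvested into improving itself via I <- I + 0.1 * I**k.
# Both the 0.1 rate and the exponent k are pure assumptions.

def run(k: float, steps: int = 50, i0: float = 1.0) -> str:
    """Iterate the growth rule and report whether growth 'explodes'."""
    i = i0
    for t in range(steps):
        i += 0.1 * i ** k
        if i > 1e12:  # arbitrary threshold standing in for "explosion"
            return f"k={k}: explosion by step {t}"
    return f"k={k}: I = {i:.2f} after {steps} steps (no explosion)"

# Sub-linear, linear, and super-linear returns give qualitatively
# different futures; the "result" is baked into the assumed curve.
for k in (0.5, 1.0, 1.5):
    print(run(k))
```

Running it, k=0.5 and k=1.0 plateau at modest values while k=1.5 blows past the threshold within a few dozen steps, which is exactly the point: the simulation “shows” an intelligence explosion if and only if you assumed super-linear returns going in.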