“Eventually, some AI will go FOOM, locally self-improvingly rather than global-economically”
Ouch. This statement smells to me like backtracking from your actual position. If you honestly have no time estimate beyond “eventually”, why does the SIAI exist? Don’t you have any more urgent good things to do?
(edited to remove unrelated arguments; there will be time for them later)
I honestly don’t understand your objection. “Eventually” means “sometime between tomorrow and X years from now,” where my probability distribution over X peaks in the 20-40 year range and then starts dropping off, but with a long tail because, hey, gotta widen those confidence intervals.
If I knew for an absolute fact that nothing was going to happen for the next 100 years, it would still be a pretty damned urgent problem; you wouldn’t want to just let things slide until we ended up in a position as awful as the one we probably occupy in real life.
I still feel shocked when I read something like this and remember how short people’s time horizons are, how they live in a world that is so much tinier than known space and time, a world without a history or a future or an intergalactic civilization that bottlenecks through it. Human civilization has been around for thousands of years; on that scale, anything within the next century constitutes the last minutes of the endgame.
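As a rough illustration of the timeline distribution described a few paragraphs up (a peak in the 20-40 year range with a long right tail), here is a minimal numerical sketch. The lognormal family and the specific parameters are my assumptions, chosen only to match that verbal description; nothing in the comments above commits to them.

```python
# Illustrative sketch only: the comment above describes a probability distribution
# over "X years until AI goes FOOM" that peaks in the 20-40 range with a long tail.
# A lognormal is one simple family with that shape; the family and parameters are
# assumptions for illustration, not anything stated in the thread.
import numpy as np

rng = np.random.default_rng(0)

# Lognormal with mode ~30 years: mode = exp(mu - sigma^2), so fix sigma and solve for mu.
sigma = 0.7
mu = np.log(30) + sigma**2

samples = rng.lognormal(mean=mu, sigma=sigma, size=100_000)

print("mode target:        ~30 years")
print(f"median of samples:  {np.median(samples):.1f} years")
print(f"90th percentile:    {np.percentile(samples, 90):.1f} years (long tail)")
print(f"P(X > 100 years):   {np.mean(samples > 100):.2%}")
```

The point of the shape is only that probability mass drops off slowly past the peak, so "eventually" is compatible with both "soon enough to be urgent" and "possibly much later."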
20 to 40 years? Didn’t you disavow that?
There is a path of retreat from belief in sudden FOOM that still calls for working on FAI (whatever turns out to be feasible, we still need to preserve human value as effectively as possible, and FAI is pretty much this project, FOOM or not):
Relevance of intelligence explosion