So if I am understanding you… You think the doomsday scenario (an unaligned, all-powerful AI creating a risk of extinction for humanity) is internally consistent, but you want to know whether it is actually possible or likely. And you want to make this judgment in a Popperian way.
Since you undoubtedly know more about Popperian methods than I do, can I first ask how a Popperian would approach a proposition like “a nuclear war in which hundreds of cities were bombed would be a disaster”? Like certain other big risks, it is a proposition we would like to evaluate in some way without just letting the event happen and seeing how bad it is… In short, can you clarify for me how falsificationism is applied to claims that a certain event is possible but must never be allowed to happen?
Quick thought. It is easy to test a nuclear weapon with relatively little harm done (in some desert), note its effects, and show (though somewhat less convincingly) that if many such weapons were used on cities we would have a disaster on our hands. The case for superintelligence is not analogous. We cannot first build it and test it safely to see its destructive capabilities; we cannot even safely test whether we can build it at all, because if we succeeded it would already be too late.
I cannot clarify how falsificationism is applied to claims like that, and I am unsure whether doing so is even possible. I do think that if it is not possible, this undermines the theory in some ways. Classical Marxists, for example, still think it is only a matter of time until their global revolution.
I think there are ways to set up a falsifiable argument the other way, e.g. we will not reach AGI because (1) the human mind processes information above the Turing Limit and (2) all AI is within the Turing Limit. We do not even need to reach AGI to disprove this: we can try to show that human minds are within the Turing Limit, or that AI is, or can be, above it.
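That anti-AGI argument can be sketched as a simple syllogism (the propositional labels $H$, $A$, and $C$ are mine, added here for illustration):

```latex
\begin{align*}
P_1 &: H && \text{(the human mind processes information above the Turing Limit)} \\
P_2 &: A && \text{(all AI operates within the Turing Limit)} \\
\therefore\ C &: \neg\mathrm{AGI} && \text{(no AI can fully replicate the human mind)}
\end{align*}
```

The point about falsifiability is that each premise can be attacked independently: establishing $\neg H$ (human minds are within the Turing Limit) or $\neg A$ (some AI exceeds it) removes the argument's support, without anyone ever having to build AGI.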