I find it impossible to predict without knowing the specifics of the future scenario. As we get closer to creating an AI, we are almost guaranteed to discover more difficulties associated with it.
Maybe in 10 years we will find some unforeseen problem that we have no idea how to resolve, in which case my probability estimate would of course drop significantly.
Or, if we have not seen any significant progress in the field, I predict my estimate would remain constant for the first 30 years, then decrease with each year that progress is not made.
If there is a continuous stream of progress that doesn’t also reveal huge new barriers, then I don’t believe my estimate would ever go down. But I find it hard to imagine any scenario that features continual progress and no major roadblocks, yet still has not managed to produce AI more than 200 years from now.