Counterargument: People will be able to cause significant destruction long before they are able to cause the end of the world, and if people start using powerful AI to wreak significant destruction for the lulz, that will motivate a serious lockdown on AI access.
Yeah, that's part of the silver lining; I should have been clearer. We'll have a chance to iterate on issues like that, including a potential lockdown, before superintelligence becomes inevitable.
I’ve referred to the possibility of significant destruction in this context as the first ‘Chernobyl-type event’ involving AI; it will greatly inform policy and legislation on AI tech. We’d have to predict what form the disaster takes before we could discuss the efficacy of such legislation or a lockdown on access.
It could be, for example, that many self-driving cars malfunction all at once and cause a lot of damage and grief. That scenario would probably lead to specific policies but little if any broad oversight. Alternatively, the disaster could be psychological (see virtual romance in China) or economic in nature, causing much suffering over a long period that goes unnoticed for a time.
In that latter case I can see strong political lines forming over AI safety, with pro- and anti-tech/lulz camps. The prospect of humanity’s destruction, then, is at least partially dependent on our ability to govern ourselves. So I can’t blame the alignment community for focusing on the technical aspects of alignment, as difficult as they are, rather than the social aspects. The social aspects may be easier, all things considered, but they’re emotionally exhausting, which is why so many are firmly resigned to doom.