I’ve referred to the possibility of significant destruction in this context as the first ‘Chernobyl-type event’ involving AI; such an event would greatly inform policy and legislation regarding AI tech. We’d have to make predictions about what form the disaster takes to discuss the efficacy of such legislation or any lockdown on access.
It could be, for example, that many self-driving cars malfunction all at once and cause a lot of damage and grief. This first scenario would probably lead to specific policies but little if any broad oversight. Another example: the AI disaster could be psychological (see virtual romance in China) or economic in nature, causing much suffering over a long period that goes unnoticed for a time.
If it’s the latter scenario, I can see strong political lines forming over AI safety, with pro- and anti-tech/lulz supporters. The prospect of humanity’s destruction, then, is at least partially dependent on our ability to govern ourselves. So I can’t blame the alignment community for focusing more on the technical aspects of alignment, as difficult as they are, instead of the social aspects. The social aspects may be easier, all things considered, but they are emotionally exhausting, which is why so many are firmly resigned to doom.