I have a reasonably low value for p(Doom). I also think these approaches (to the extent they are courses of action) are not really viable. However, as long as they don't increase p(Doom), it's fine to pursue them. Two important considerations here: an unviable approach may still slightly reduce p(Doom) or delay Doom, and the resources used for unviable approaches don't necessarily detract from the resources used for viable approaches.
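To state the criterion a bit more explicitly (a rough sketch of my own; the symbols $\Delta p$ and $c$ are shorthand I'm introducing here, not anything standard): let $\Delta p$ be an intervention's marginal effect on p(Doom) and $c$ the resources it diverts away from viable approaches. The claim is then roughly

$$\text{worth pursuing} \iff \Delta p \le 0 \;\text{ and }\; c \approx 0,$$

and the argument below is that corporate pressure weakly satisfies both conditions ($\Delta p$ slightly negative, $c$ near zero).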
For example, “we’ll pressure corporations to take these problems seriously,” while unviable as a solution, will tend to marginally reduce the amount of money flowing to AI research, marginally increase the degree to which AI researchers have to consider AI risk, and marginally increase the resources focused on AI risk. Resources used in pressuring corporations are unlikely to have any effect which increases AI risk. So, while this approach is unviable, in the absence of a viable strategy, pursuing it seems slightly positive.
Resources used in pressuring corporations are unlikely to have any effect which increases AI risk.
Devil’s advocate: If this unevenly delays corporations sensitive to public concerns, and those are also the corporations taking alignment at least somewhat seriously, we get a later but less safe takeoff. Though this goes for almost any intervention, including, to some extent, regulatory ones.
Yes. An example of how this could go disastrously wrong is if US research gets regulated but Chinese research continues apace, and China ends up winning the race with a particularly unsafe AGI.