I definitely agree with this, and I’ll probably change the title to focus on AI alignment.
My general view on the other problems of AI safety is that removing accident risk would make the following strategies much less positive-EV:

- General slowdowns of AI, since misuse can be handled in other, less negative-EV ways.
- Trying to break the Overton window, as Eliezer Yudkowsky did, since governments and companies already have incentives to restrict misuse.
In particular, I think that removing accident risk probably ought to lower a lot of people's p(doom), especially for those whose main story for why people will die involves accident risk. That's my sense of many people's models on LW, and it's arguably the main reason people are scared of AI.
Also, I think the type of governance that makes sense would change if there were no accident risk.