I think you’ve substantially misunderstood what Will is talking about. He’s not recommending that people rush through things. He’s noting what he believes to be huge weaknesses in the book’s argument (and I mostly agree with him).
Similarly, he’s not saying labs have alignment in the bag. He’s just noting holes in the book’s arguments that extreme catastrophic misalignment is overwhelmingly likely.
All of this together makes me extremely confused, if his real view is basically just “I agree with most of MIRI’s policy proposals, but I think we shouldn’t rush to enact a halt or slowdown tomorrow”.
I assume that he disagrees with MIRI’s headline policy proposal of banning AI research, in the sense that he thinks it’s a poor choice of policy recommendation, both because of tractability and because the proposal might cause bad things to happen (like uneven bans on AI research). I don’t know what he thinks about whether it would be good to magically institute the MIRI policy proposal; I think it’s fundamentally unclear what hypothetical you’re even supposed to consider in order to answer that question.
I summarized my view on MIRI’s policy suggestions as “poor”, but I definitely think it will be extremely valuable to have the option to slow down AI development in the future.
What are the mechanisms you find promising for causing this to occur? If we all agree on “it will be extremely valuable to have the option to slow down AI development in the future”, then I feel silly for arguing about other things; it seems like the first priority should be to talk about ways to achieve that shared goal, whatever else we disagree about.
(Unless there’s a fast/easy way to resolve those disagreements, of course.)
(I also would have felt dramatically more positive about Will’s review if he’d kept everything else unchanged but just added the sentence “I definitely think it will be extremely valuable to have the option to slow down AI development in the future.” anywhere in his review. XP If he agrees with that sentence, anyway!)
You don’t feel like “I think the risk of misaligned AI takeover is enormously important.” suffices?