Mm, yeah, maybe. The key question here is, as usual, “who is implementing this plan?” Specifically, even if someone solves the preference-agglomeration problem (which may be possible for a small group of researchers), why would we expect it to end up implemented at scale? There are tons of great-on-paper governance ideas which governments around the world are busy ignoring.
For things like superbabies (or brain-computer interfaces, or uploads), there’s at least a more plausible pathway to wide adoption: the same profit-maximizing and geopolitical-power motives that drive AGI.
I think there is a fourth option (although it’s not likely to happen):
1. Indefinitely pause AI development.
2. Figure out a robust way to do preference agglomeration (a toy sketch of what this means is below).
3. Encode #2 into law.
4. Resume AI development (after solving all other safety problems too, of course).
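
For concreteness, here’s a minimal toy sketch of one naive agglomeration rule (Borda count over ranked alternatives) in Python. It only illustrates what agglomerating preferences means mechanically; the ballots and option names are made up, and classic social-choice results (e.g. Arrow’s impossibility theorem, Borda’s vulnerability to strategic voting) are part of why the “robust” version in step 2 is so much harder than this.

```python
# Toy preference agglomeration via Borda count. This is a deliberately
# naive rule, not the robust scheme step 2 calls for; all names and
# ballots below are hypothetical.
from collections import defaultdict

def borda_count(rankings: list[list[str]]) -> dict[str, int]:
    """Score each option: in an n-item ranking, 1st place gets n-1 points, last gets 0."""
    scores: dict[str, int] = defaultdict(int)
    for ranking in rankings:
        n = len(ranking)
        for place, option in enumerate(ranking):
            scores[option] += n - 1 - place
    return dict(scores)

# Three voters rank three (hypothetical) policy options.
votes = [
    ["pause", "slow", "race"],
    ["slow", "pause", "race"],
    ["race", "slow", "pause"],
]
print(borda_count(votes))  # {'pause': 3, 'slow': 4, 'race': 2} -> 'slow' wins
```

Note how the compromise option wins even though it’s nobody’s first choice; whether that counts as a feature or a bug is exactly the kind of question a robust agglomeration scheme would have to settle.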
I was going to say step 2 is “draw the rest of the owl” but really this plan has multiple “draw the rest of the owl” steps.