One good scenario with low payoffability is one in which ASI is aligned to humanity as a whole, such that only a few prestige goods remain meaningfully scarce and money matters much less to material quality of life.
This could be the default outcome of good policy if alignment is moderately hard: not so easy that we get it for free with capabilities, but not so hard that a moonshot program or temporary pause couldn't let it catch up. (We can throw in "aligned to human or sentient interests as a whole, rather than to some corporate or political egomaniac" as part of "good.")
Of course, no policy can guarantee that alignment is moderately hard!