You assume the conclusion:
Much of AI alignment success seems to me to stem from whether the problem is easy or hard, and not to be very elastic to human effort.
AI races are bad because they select for contestants that put in less alignment effort.
I do assume that not being in a race lowers the probability of doom by 5%, and that MAGIC can lower it by more than two shannons (from 10% to 2%).
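One way to read "more than two shannons" is as a shift of more than two bits in the log-odds of doom. This is my assumed interpretation, not stated in the original; under it, the 10% → 2% drop checks out:

```python
import math

def log_odds_bits(p):
    # Log-odds of probability p, measured in bits (shannons).
    return math.log2(p / (1 - p))

# Shift in log-odds when P(doom) falls from 10% to 2%.
drop = log_odds_bits(0.10) - log_odds_bits(0.02)
print(round(drop, 2))  # ≈ 2.44 bits, i.e. more than two shannons
```

So on this reading the parenthetical is consistent: a 10% → 2% reduction is about 2.44 bits of evidence against doom.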
Maybe it was a mistake on my part to put the elasticity front and center, since this is actually quite elastic.
I guess it could be more elastic than that, but my intuition is skeptical.