Agreed on most counts, but one: what makes you think the humanist values described in HPMOR will be encoded in AI? Alignment is materially useful; companies that have better aligned models can sell them to do a wider variety of tasks. With no universally convergent morality, models will increasingly become aligned to the desires of those who control them.
If AI technology has strong economies of scale, it will naturally concentrate. If it has strong diseconomies of scale, it will spread out. In the latter case, I can easily see it aligned to a rough amalgamation of human values; I can even see a (in aggregate) more intelligent set of agents working out the coordination problems that plague humanity.
But we’re in the scale case. There are ~four AI conglomerates in the United States and I trust none of their leaders with the future of the lightcone. The morals (or lack thereof) that allow for manipulation and deceit to acquire power are not the same morals that result in a world of cooperative, happy agents.
Absurd 1984-style dystopias require equally absurd concentrations of power. Firearms democratized combat, to an extent; armed citizens are not easily steamrolled. We are on the eve of perhaps the most power-concentrating technology there is; given the fantasies of the typical Bay Area entrepreneur, I'm not sure WW3 sounds so terrible by comparison.