I feel like intelligence amplification is plenty destabilising. Consider how toxic the discourse around intelligence already is:
some people argue that some ethnic groups (usually their own) have inherently higher intelligence, which makes them better
other people, wanting to push back against the former, go to the other extreme and claim intelligence doesn’t exist at all as even a partially measurable quantity
And what would you do with your intelligence amplification method? Sell it? Then richer people, and richer countries, are the first to reap the benefits, widening inequality gaps, which again has destabilising effects.
A lot of this ends up in similar places as aligned ASI, if you only consider the political side of it. Similar issues.
Similar issues but less extreme. The degree of concentration of power, misalignment risk, etc. from intelligence amplification would be smaller than from AGI.
It would be slower for sure, at least, being bound to human dynamics. But “same problems but slower” isn’t the same as a solution or alternative. Admittedly it’s better in the limited sense that it’s less likely to end in straight-up extinction, but it’s a rather grim world either way.
Since it’s slower, the tech development cycle is fast relative to it. Tech development --> less expensive tech --> more access --> less concentration of power --> more moral outcomes.