[cross-posted from EAF]
Agreed that extreme power concentration is an important problem, and this is a solid writeup.
Regarding ways to reduce risk: My favorite solution (really a stopgap) to extreme power concentration is to ban ASI [until we know how to ensure it’s safe], an option notably absent from the article’s list. I wrote more about my views here, and about how I wish people would stop ignoring this option. It’s a shame the 80K article did not consider what is, IMO, the best idea.
Thanks for the comment Michael.
A minor quibble: I think it’s not clear you need ASI to end up with dangerous levels of power concentration. So you might need to ban AGI, and to do that you might need to ban AI development pretty soon.
I’ve been meaning to read your post though, so will do that soon.
I think that to ban ASI you’d need to ban something like AGI anyway, because of intelligence explosion dynamics, so it’s not clear this makes a big difference.
Extreme power concentration was supposed to rely on AIs being used for most cognitive work. In theory, one could develop the AIs but use them only for things like automated teaching, which don’t undermine human potential or humans’ bargaining power.