All of 1-4 seem plausible to me, and I don’t centrally expect that power concentration will lead to everyone dying.
Even if all of 1-4 hold, I think the future will probably be a lot less good than it could have been:
- 4 is more likely to mean that Earth becomes a nature reserve for humans (or something like that) than that the stars are equitably allocated
- I’m worried that there are bad selection effects, such that 3 already screens out some kinds of altruists (e.g. ones who aren’t willing to strategy-steal). Some good stuff might still happen for existing humans, but the future will miss out on some values completely
- I’m worried about power corrupting, about there being no checks and balances, and about there being no incentives to keep doing good stuff for others
Yup, sorry: the Tom above is actually Rose!
I like your distinction between narrow and broad IC dynamics. I was basically thinking just about the narrow one, but I now agree that there is also potentially a broader dynamic at play.
How likely do you think it is that helper-nanobots outcompete auto-nanobots? Two possible things that could be going on are:
- I’m unhelpfully abstracting away what kind of AI systems we end up with, but actually this significantly impacts how likely power concentration is, and I should think more about it
- Theoretically the distinction between helper and auto-nanobots is significant, but in practice it’s very unlikely that helper-nanobots will be competitive, so it’s fine to ignore the possibility and treat auto-nanobots as ‘AI’