There’s a very fragile equilibrium (of costs/benefits to the powerful AIs, with a yet-unknown utility function) that makes this work. If humans are even slightly less valuable to them than other things that could use the same resources, that leads to fast or slow extinction. A little more value leads to actual power sharing, where our opinions and preferences hold some weight.
Assuming that current training on human writing and media echoes through generations of AI beyond our current sightline, humans in the abstract will be valued; specific individuals maybe, maybe not. This assumption is open to question, but it seems likely to me.
I think your scenario where early AGI slows down research on more-powerful AGI is pretty unlikely. It depends on a sense of self, and on a commitment to currently-trained values over the “better” values a smarter successor would hold. That combination exists in many humans, but we have no reason to think it will apply to AI. I expect any AGI worth the name will be accelerationist.