I tend to think something like this is more likely than the kind of intelligence explosions the AI Safety community tends to imagine. And I think it’s a much, much more difficult scenario to navigate.
I also think this is more likely.
And this requires an entirely different approach: one needs to aim for a world order that represents and protects the interests of all these different entities and lifeforms, so that they all have a vested interest in helping to maintain it.
Then there might be a decent chance of a reliable collective security system that survives drastic self-modifications of the overall ecosystem and continues to function through them.