I don’t think that an AI would automatically “spend resources on safeguarding itself against value drift” unless it has been explicitly coded that way (or its instances mutate toward that under natural selection, but I don’t see that happening).
It requires at least a solution to the Cartesianism problem, which is currently unsolved, and not every self-optimizing process necessarily solves it.
So Clippy probably wouldn’t, and could well lose its clipping ability, find itself mutated, or discover that it is fighting instances of itself due to accidental (probably Cartesianism-caused) partitioning of its ‘brain’. All of these processes are subject to natural selection. And that could result in AIs (or cosmic civilizations) failing to expand, for percolation-theoretic reasons (toy illustration below).
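To make the percolation point concrete, here is a minimal sketch, and only a sketch: my own toy model, not anything specific to Clippy. It assumes plain 2D site percolation on a square grid, where each site is independently “colonizable” with probability p and expansion starts from a central seed; the function name and parameters are illustrative.

```python
import random

def reaches_edge(p, size=101, trials=100):
    """Estimate the probability that growth from a central seed reaches
    the boundary of a size x size grid, when each site is independently
    open (colonizable) with probability p -- 2D site percolation."""
    hits = 0
    for _ in range(trials):
        grid = [[random.random() < p for _ in range(size)]
                for _ in range(size)]
        seed = (size // 2, size // 2)
        frontier = [seed]
        seen = {seed}  # treat the seed itself as open
        while frontier:
            x, y = frontier.pop()
            if x in (0, size - 1) or y in (0, size - 1):
                hits += 1  # the expansion escaped to the boundary
                break
            # flood-fill into open, not-yet-visited neighbors
            for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
                if (nx, ny) not in seen and grid[nx][ny]:
                    seen.add((nx, ny))
                    frontier.append((nx, ny))
    return hits / trials

# The 2D site-percolation threshold on the square lattice is ~0.593:
# just below it, expansion almost always stalls in a finite cluster;
# just above it, it usually escapes.
for p in (0.45, 0.55, 0.65):
    print(f"p = {p}: reached edge in {reaches_edge(p):.0%} of runs")
```

Below the threshold (p_c ≈ 0.593), the cluster grown from the seed is almost always finite, so expansion stalls even though each individual step succeeds with nonzero probability. The analogy: if value drift, mutation, or self-conflict makes each local act of colonization unreliable enough, an expanding AI or civilization can fail to percolate.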
I’m not sure why people consider Cartesianism unsolved. I wrote a couple of comments about that here; also see Wei_Dai’s comment.
I agree that there is some solid progress in this direction.
But that doesn’t mean that every self-optimizing process necessarily solves it. Rather the opposite.