What do you think of the argument here (if you read that far) for building this into progenitor models? As I clarify in the text, this idea does not apply to an SI.
Let's look at this from today's vantage point: current AI cannot hide all of its deception from us. It seems reasonable to me that a switch would trigger before a stealth SI could be deployed.