creates an infinite risk for minimal gain. A rational ASI calculating expected utility would recognize that:
- The reward for hostile expansion is finite (limited cosmic resources).
- The risk is potentially infinite (destruction by more advanced ASIs).
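To make the asymmetry concrete, here is a minimal sketch of that expected-utility comparison. Every number in it (the finite reward, the ruin probabilities, infinity as a stand-in for the penalty) is a hypothetical placeholder, not a value from the original argument.

```python
# Minimal sketch of the expected-utility asymmetry described above.
# Every number here is a hypothetical placeholder, not a claim about real ASIs.

FINITE_REWARD = 1e6          # bounded payoff from hostile expansion (arbitrary units)
RUIN_PENALTY = float("inf")  # stand-in for destruction by a more advanced ASI

def expected_utility(p_ruin: float) -> float:
    """Expected utility of expansion: finite upside, unbounded downside."""
    if p_ruin == 0:
        return FINITE_REWARD  # no downside term at all
    return (1 - p_ruin) * FINITE_REWARD - p_ruin * RUIN_PENALTY

# Any nonzero probability of ruin drags the expectation to -inf:
for p in (0.5, 1e-3, 1e-12):
    print(f"p_ruin={p:g}: EU={expected_utility(p)}")  # all print EU=-inf
```

Under these assumptions the conclusion is insensitive to how small the ruin probability is: any nonzero chance of an unbounded loss dominates a bounded reward.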
Depending on the shape of the reward function, it could also be almost exactly the other way around.
I think the main idea I was pushing for here is that the probability function is likely to have the gradient described, because of the unknowables involved and the infinite loss curve.
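For contrast, here is a companion sketch of the reversal the first reply points at: if the reward grows without bound while the worst-case loss is capped, the same calculation flips sign. Both utility shapes are illustrative assumptions, which is why the argument hinges on the loss curve actually being unbounded.

```python
# Companion sketch: the reversal case. Unbounded reward, bounded loss.
# Both utility shapes are illustrative assumptions, not the original claim.

CURRENT_HOLDINGS = 1e6  # a capped loss: the most that ruin can take away

def expected_utility(p_ruin: float, resources_captured: float) -> float:
    """Expected utility when reward grows without bound but loss is capped."""
    return (1 - p_ruin) * resources_captured - p_ruin * CURRENT_HOLDINGS

# With enough resources at stake, expansion wins even at a 99% ruin risk:
print(expected_utility(0.99, resources_captured=1e12))  # ~1e10 > 0
```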