I’ve read Omohundro’s paper, and while I buy the weak form of the argument, I don’t buy the strong form. Or rather, I can’t accept the strong form without a solid model of the algorithm/mind-design I’m looking at.
I’d say the main reason it’s so counterintuitive is that this behaviour shows up strongly for expected utility maximisers, and we ourselves are so unbelievably far from being expected utility maximisers.
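For concreteness, here is a minimal sketch of what “expected utility maximiser” means in this context: the agent scores each available action by its probability-weighted utility and picks the argmax. The action names and numbers below are purely illustrative assumptions, not anything from Omohundro’s paper; the point is only that the entire decision procedure reduces to “maximise one number”, which is nothing like how we ourselves decide.

```python
# Minimal sketch of an expected-utility maximiser over a toy, hand-coded
# action/outcome model (all names and numbers are illustrative only).

from typing import Dict, List, Tuple

# Each action maps to a list of (probability, utility) outcome pairs.
ACTIONS: Dict[str, List[Tuple[float, float]]] = {
    "acquire_resources": [(0.7, 10.0), (0.3, -2.0)],
    "self_preserve":     [(0.9, 5.0),  (0.1, 0.0)],
    "do_nothing":        [(1.0, 1.0)],
}

def expected_utility(outcomes: List[Tuple[float, float]]) -> float:
    """Sum of probability-weighted utilities for one action."""
    return sum(p * u for p, u in outcomes)

def choose_action(actions: Dict[str, List[Tuple[float, float]]]) -> str:
    """Pick the action with the highest expected utility."""
    return max(actions, key=lambda a: expected_utility(actions[a]))

if __name__ == "__main__":
    # With these made-up numbers the maximiser picks "acquire_resources".
    print(choose_action(ACTIONS))
```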
In which case we should be considering building agents that are not expected utility maximizers.