But where did that number come from? At some point, an intelligent system that was not handed a budget selects a budget for itself. Presumably the number is set according to some cost-benefit criterion, rather than chosen because it's three hands' worth of fingers on a log scale whose base is two hands' worth of fingers.
Of course, my point is to build all intelligent systems so that the probability of them handing themselves a new budget stays within our risk budget (which we choose arbitrarily).
If it isn’t, how do you expect the agent to actually stick to such a budget?
I hope that the survival of humanity dominates the utility functions of the people who build AI, and that they will do their best to carry it over to the AI. You can individually have another utility function, if it serves you well in your life (as long as you won't build any AIs). But that was the wrong way to answer your previous point:
One, it looks like simple utility maximization (go to the movie if the benefits outweigh the costs) gives the right answer, and being more or less cautious than it suggests is a mistake (or at least a mistake in how the utility is measured).
Not in the case of multiple agents who cannot easily coordinate. E.g., what if each human's utility function makes it look reasonable to take a 1/1000 risk of destroying the world in exchange for potentially huge personal gains?
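To make the coordination problem concrete, here is a back-of-the-envelope sketch (the agent counts are purely illustrative): if each of N agents independently accepts a 1/1000 risk, the combined risk compounds roughly as 1-(1-1/1000)^N.

```python
# Illustrative sketch: each of N agents independently accepts a 1/1000
# chance of destroying the world; the combined risk compounds.
p_per_agent = 1 / 1000
for n_agents in (1, 100, 10_000, 1_000_000):
    p_destroyed = 1 - (1 - p_per_agent) ** n_agents
    print(f"{n_agents:>9} agents -> P(world destroyed) ~ {p_destroyed:.5f}")
# With 10,000 agents the world is destroyed with ~99.995% probability;
# with a million agents it is essentially certain.
```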
If I view the seven pulls as independent events, it depletes my budget by 7/6, but if I treat them as one event, it depletes my budget by only 1-(5/6)^7, which is about 72%.
I am well aware of this, but the effect is negligible when we are speaking of small probabilities.
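For concreteness, here is the arithmetic behind both figures above, plus an illustration of the small-probability point (a minimal sketch; the p = 1e-4 case is chosen only for illustration):

```python
# Naive per-event sum vs. treating the n pulls as a single event.
def naive_sum(p, n):
    return n * p

def at_least_once(p, n):
    return 1 - (1 - p) ** n

# The seven-pull case: p = 1/6.
print(naive_sum(1/6, 7))       # ~1.167  (7/6 -- overshoots the whole budget)
print(at_least_once(1/6, 7))   # ~0.721  (the ~72% figure)

# For small p the two agree to within ~0.03%, e.g. p = 1e-4:
print(naive_sum(1e-4, 7))      # 7.0e-4
print(at_least_once(1e-4, 7))  # ~6.998e-4
```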