At present, colonizing other planets probably would not increase the chance of UFAI, because we will probably develop AI before any colonized planet could develop to the point of competing with Earth in R&D.
Another good reason why the number of colonized planets is irrelevant. If you agree with me about that, then why did you mention it?
It means that all “UFAI” outcomes are considered equivalent. Instead of asking what the real utility function is, you just make these two categories, FAI and UFAI, and say “UFAI bad”, and thus don’t have to answer detailed questions about what specifically is bad, and how bad it is.
Assigning one utility value to all UFAI outcomes is obviously stupid, which is why I don’t think anyone actually does it (please stop strawmanning). What some people (including myself) do assume is that, given their current state of knowledge, they have no way of telling which UFAI projects will work out better than others, so they assign them all the same *expected* utility.
You claim that this is a mistake, and that it has led to the disturbing conclusions you reach. I cannot see how allowing for more than one possible UFAI outcome has any effect on your argument, since any of the policies you suggest could still be ‘justified’ on the grounds of avoiding the worst kind of UFAI. There are plenty of mistakes in your argument already; there is no need to assume the existence of another one.
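To make the expected-utility point concrete (this is only an illustrative sketch; the outcome labels and the symmetry assumption are mine): suppose there are n possible UFAI outcomes o_1, …, o_n with genuinely different utilities U(o_1), …, U(o_n). For any particular UFAI project i,

$$E[U \mid \text{project } i] \;=\; \sum_{j=1}^{n} P(o_j \mid \text{project } i)\,U(o_j).$$

If my state of knowledge gives the same distribution P(o_j | project i) for every project i, these sums are identical across projects even though the individual U(o_j) are not. Same *expected* utility, without ever assigning one utility value to all UFAI outcomes.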
Also, your statement is not correct. When someone says “a utility function is unchanged by affine transformations”, what they mean is that the outcome of a decision process using that utility function will be the same.
I am aware that that is what it means. Since the only purpose of utility functions is to determine the outcomes of decision processes, saying that an outcome is assigned “zero utility” without giving any other points on the utility function is a meaningless statement.
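To illustrate why a bare “zero” carries no information (a sketch using the standard fact about positive affine transformations): if I replace my utility function U with

$$U'(x) = aU(x) + b, \qquad a > 0,$$

then for any two lotteries A and B, E[U'(A)] > E[U'(B)] exactly when E[U(A)] > E[U(B)], so every decision comes out the same. In particular, by choosing b I can put the “zero” of the scale wherever I like; the claim “this outcome has zero utility” only acquires content once at least one other point on the scale is pinned down.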
And that is not true if we define zero utility as “that utility level at which I am indifferent to life.” An outcome leading to utility epsilon for eternity has infinite utility. An outcome leading to utility minus epsilon means it is better to destroy ourselves, or the universe.
You and I seem to be using different domains for our utility functions: yours is computed over instants of time, whereas mine is computed over outcomes. I may be biased here, but I think mine is better, on the grounds that it does not lead to infinities (which tend to screw up expected utility calculations).
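To spell out the infinity problem with the per-instant domain (a sketch of the simplest case, taking your definition of zero at face value): an outcome that delivers a constant utility of ε > 0 per unit time forever has total utility

$$\int_0^{\infty} \varepsilon\, dt = \infty,$$

and a constant −ε gives −∞. Any two everlasting outcomes with positive per-instant utility then tie at +∞ no matter how different they are, and expected-utility comparisons break down. A utility function over whole outcomes just assigns each possible history a single finite number (provided U is bounded), so none of this arises.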
If you assume FOOM is the only possible outcome. See the long debate between Eliezer & Robin.
I have seen it, and I agree that it is an interesting question with no obvious answer. However, since UFAI is not really much of a danger unless FOOM is possible, your whole post is only really relevant to FOOM scenarios.