We want to minimize the amount of the universe eventually controlled by unaligned ASIs because their values tend to be absurd and their very existence is abhorrent to us.
No. We want to optimize the universe in accordance with our values. That’s not at all the same thing as minimizing the existence of agents with absurd-to-us values. Life is not a zero-sum game: if we think of a plan that increases the probability of Friendly AI and the probability of unaligned AI (at the expense of the probability of “mundane” human extinction via nuclear war or civilizational collapse), that would be good for both us and unaligned AIs.
Thus, if you’re going to be thinking about galaxy-brained acausal trade schemes at all—even though, to be clear, this stuff probably doesn’t work because we don’t know how to model distant minds well enough to form agreements with them—there’s no reason to prefer other biological civilizations over unaligned AIs as trade partners. (This is distinct from us likely having more values in common with biological aliens; all that gets factored into the utility function anyway.)
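To make the “not zero-sum” point above concrete, here is a toy expected-value calculation. The outcome set, utilities, and probabilities are all made-up illustrative assumptions, not anything from the original post; the only point is that a plan shifting probability mass from extinction to both AI outcomes is a Pareto improvement.

```python
# Toy expected-value sketch with made-up probabilities and utilities,
# purely to illustrate the "not zero-sum" point.
OUTCOMES = ("friendly_ai", "unaligned_ai", "extinction")

our_utility = {"friendly_ai": 1.0, "unaligned_ai": 0.0, "extinction": 0.0}
ai_utility  = {"friendly_ai": 0.0, "unaligned_ai": 1.0, "extinction": 0.0}

def expected_utility(utility, probs):
    """Expected utility under a probability distribution over outcomes."""
    return sum(utility[o] * probs[o] for o in OUTCOMES)

# Baseline world: high chance of "mundane" extinction.
before = {"friendly_ai": 0.2, "unaligned_ai": 0.3, "extinction": 0.5}
# Hypothetical plan: shifts probability mass from extinction to *both*
# AI outcomes, raising P(Friendly AI) and P(unaligned AI) at once.
after  = {"friendly_ai": 0.3, "unaligned_ai": 0.5, "extinction": 0.2}

for label, probs in (("before", before), ("after", after)):
    print(label,
          "ours:", expected_utility(our_utility, probs),
          "unaligned AI's:", expected_utility(ai_utility, probs))
# Both expected utilities rise (ours 0.2 -> 0.3, the AI's 0.3 -> 0.5),
# so the plan is a Pareto improvement rather than a zero-sum trade.
```

Under these assumptions, neither party has to lose for the other to gain, which is why minimizing the other agent’s existence is not the same objective as maximizing our own values.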
the creation of huge amounts of the other entity’s disvalue
We do not want to live in a universe where agents deliberately spend resources to create disvalue for each other! (As contrasted to “merely” eating each other or competing for resources.) This is the worst thing you could possibly do.
Apparently this was a really horrible idea! I’m glad to have found out now instead of wasting my time and energy thinking further about it.
What I’ve learned is that I am overly biased in favor of my own ideas even now; while writing the post I was trying to convince the reader that my idea was good, rather than dispassionately seeking disproof of the idea, then disproof of the disproof, and so on. If I’d tried hard to prove myself wrong, I probably would never have posted it.
Another thing I’ve learned is that I ought not think about acausal things because they don’t make sense and I am not a Yudkowsky who can intuitively think in timeless decision theory!