What’s supposed to happen if an expanding FAI friendly to civilization X collides with an expanding FAI friendly to civilization Y?
If both FAIs use TDT or a comparable decision theory, then (under plausible assumptions), they will both maximize an aggregate of both civilizations’ welfare.
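A minimal sketch of that argument, with made-up payoff numbers: if both FAIs run the same (or a provably similar) decision procedure, their choices are correlated, so the only reachable outcomes are mutual cooperation and mutual defection, and cooperation wins. This collapses the aggregate-welfare point down to a binary choice, but it shows the mechanism.

```python
# Toy sketch of the superrationality/TDT argument, with made-up payoffs.
# If both FAIs run the same decision procedure, their choices are
# perfectly correlated: the only reachable outcomes are (C, C) and (D, D).

PAYOFF = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}

def tdt_like_choice():
    """Choose assuming the other agent's choice mirrors our own."""
    return max(("C", "D"), key=lambda a: PAYOFF[(a, a)])

print(tdt_like_choice())  # -> 'C': mutual cooperation beats mutual defection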
Each FAI is friendly to its creators, not necessarily to the rest of the universe. Why would a FAI be interested in the welfare of aliens?
You might need a coalition against less tractable aliens, and you also might need a coalition to deal with something the non-living universe is going to throw at you.
If your creators include an interest in novelty in their CEV, then aliens are going to provide more variety than what your creators can make up on their own.
Heh. The situation is symmetric, so humanity is also a novelty for the aliens. And how much value does novelty have? Is it similar to having some exotic pets? X-D
I meant novelty in a broad sense—not just like having an exotic pet. I’d expect different sensoria leading to somewhat different angles on the universe, and better understanding of biology and material science, at least.
It’s not clear that territory that already has a FAI watching over it can be overtaken by another FAI. A FAI might expand to inhabit territory by sending small probes, but I think those probes are unlikely to have any effect in territory already occupied by another FAI.
I’m also not sure to what extent you can call nodes of a FAI of the same origin, separated by millions of light years, the same FAI.
That’s a valid point. An AI can rapidly expand across interstellar distances only by replicating and sending out clones. Assuming the speed of light limit, the clones would be essentially isolated from each other and likely to develop independently. So while we talk about “AI expanding through the light cone”, it’s actually a large set of diverging clones that’s expanding. It’s an interesting question how far they could diverge from one another.
If their ideas of friendliness are incompatible with each other, perhaps a conflict? Superintelligent war? It may be the case that one will be ‘stronger’ than the other, and that there will be a winner-take-all(-of-the-universe?) resolution?
If there is some compatibility, perhaps a merge, a la Three Worlds Collide?
Or maybe they co-operate, try not to interfere with each other? This would be more unlikely if they are in competition for something or other (matter?), but more likely if they have difficulty assessing the risks of not co-operating, or if there is mutually assured destruction?
It’s a fun question, but I mean, Vinge had that event horizon idea, about how fundamentally unpredictable things are for us mere humans when we’re talking about hypothetical intelligences of this caliber, and I think he had a pretty good point on that. This question is taking a few extra steps beyond that, even.
Oh, sure, it’s much more of a flight-of-fantasy question than a realistic one. An invitation to consider the tactical benefits of bombarding galaxies with black holes accelerated to a high fraction of c, maybe X-D
But the original impetus was the curiosity about the status of intelligent aliens for a FAI mathematically proven to be friendly to humans.
Neither defects?
Why do you think it’s going to be a prisoner’s dilemma type of situation?
In the intersection of their future light cones, each FAI can either try to accommodate the other (C) or try to get its own way (D). If one plays C and one plays D, the latter’s values are enforced in the intersection of light cones; if both play C, they’ll enforce some kind of compromise values; if they both play D, they will fight. So the payoff matrix is either PD-like or Chicken-like depending on how bloody the fight would be and how bad their values are by each other’s standards.
Or am I missing something?
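To make the PD-vs-Chicken distinction concrete: which game it is depends only on how the four canonical outcomes are ordered. Here is a toy classifier, with placeholder numbers standing in for each FAI’s valuation of the outcomes (not claims about real payoffs):

```python
# Classify a symmetric 2x2 game by the row player's preference ordering.
# T = temptation (I defect, you cooperate), R = reward (mutual C),
# P = punishment (mutual D), S = sucker (I cooperate, you defect).

def classify(T, R, P, S):
    if T > R > P > S:
        return "Prisoner's Dilemma"  # defection dominates; mutual D is stable
    if T > R > S > P:
        return "Chicken"             # mutual D (all-out war) is the worst outcome
    return "something else"

print(classify(T=5, R=3, P=1, S=0))    # cheap war  -> Prisoner's Dilemma
print(classify(T=5, R=3, P=-10, S=0))  # bloody war -> Chicken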
The contact between the FAIs is not a one-decision-to-fight-or-share deal. It’s a process that will take some time, and each party will have to make many decisions along the way. Besides, the payoff matrix is quite uncertain: if one initially cooperates and the other initially defects, does the defector get more? No one knows. For example, the start of hostilities between Hitler and Stalin was a case where Stalin (initially) cooperated and Hitler (initially) defected. The end result: not so good for Hitler.
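A toy iterated game makes the same point with entirely made-up numbers (reusing the T=5, R=3, P=1, S=0 placeholders from above): an opening defection buys a single good round, after which a retaliating opponent makes the opener worse off than steady cooperation would have.

```python
# Toy iterated version: one side picks its opening move against a
# tit-for-tat opponent; after that, each side echoes the other's last move.
# Payoff numbers are placeholders, not claims about real FAI payoffs.

PAYOFF = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}

def opener_total(opening, rounds=20):
    """Total payoff to the side that plays `opening` on the first round."""
    a, b = opening, "C"
    total = 0
    for _ in range(rounds):
        total += PAYOFF[(a, b)]
        a, b = b, a          # tit-for-tat: each side echoes the other's last move
    return total

print(opener_total("C"))  # 60: stable mutual cooperation
print(opener_total("D"))  # 50: a one-round gain, then an endless cycle of retaliation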
There are many options here—fully cooperate (and potentially merge), fight till death, divide spheres of influence, set up a DMZ with shared control, modify self, etc.
The first interesting question is, I guess, how friendly to aliens will a FAI be? Will it perceive another, alien FAI as an intolerable obstacle in the way of implementing friendliness as it understands it?
More questions go along the lines of how likely it is that one FAI will be stronger (or smarter) than the other one. If they fight, what might it look like (assume interstellar distances and speed-of-light limits)? How might an AI modify itself on meeting another AI, etc. etc.
As much as is reasonable in a given situation. If it is stronger, and if conquering the other AI is a net gain, it will fight. If it is not stronger, or if peace would be more efficient than war, it will try to negotiate.
The costs of peace will depend on the differences between those two AIs. “Let’s both self-modify to become compatible” is one way to make peace, forever. It has some cost, but it also saves some cost. Agreeing to split the universe into two parts, each governed by one AI, also has some cost. Depending on the specific numbers, the utility-maximizing choice could be “winner takes all” or “let’s split the universe” or “let’s merge into one”, or maybe something else I didn’t think about.
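A back-of-the-envelope version of that comparison, from one AI’s point of view. Every number here is invented for illustration: U is the value of controlling the whole universe, p_win the estimated chance of winning a war, war_cost what the fight destroys, and overlap how much of this AI’s values a merged successor would still pursue.

```python
# Expected-utility comparison of the three options, with invented numbers.

U = 1.0          # value of the whole universe, by this AI's lights
p_win = 0.5      # estimated chance of winning an all-out war
war_cost = 0.4   # fraction of value destroyed by fighting
overlap = 0.7    # fraction of this AI's values a merged AI preserves

options = {
    "winner takes all": p_win * (U - war_cost),  # fight, and maybe lose everything
    "split the universe": 0.5 * U,               # keep half, avoid the war
    "merge into one": overlap * U,               # whole universe, diluted values
}

print(options, "->", max(options, key=options.get))  # merging wins here
```

With these numbers merging wins; drop overlap to 0.4 and splitting wins; make the war cheap and easily winnable and fighting can come out ahead. Which is exactly the “depending on specific numbers” point.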
The critical question is, whose utility?
Aumann’s agreement theorem will not help here, since the FAIs will start with different values and different priors.
Each AI tries to maximize its own utility, of course. When they consider merging, each makes an estimate: how much of my original utility can I expect to get after we both self-modify to maximize the new utility function? (A toy version of this estimate is sketched below.)
Then each AI makes its own choice and the two choices might well turn out to be incompatible.
There is also the issue of information exchange—basically, it will be hard for the two AIs to trust each other.
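To make that merge estimate concrete, here is a toy sketch. The outcome space, names, and numbers are all invented: each outcome is scored by both original utility functions, the merged AI maximizes a weighted sum, and each party checks how much of its own utility the merged optimum preserves.

```python
# Toy version of the merge estimate, with an invented outcome space.

outcomes = {
    "pure_X":     (1.0, 0.0),  # (utility to FAI_X, utility to FAI_Y)
    "pure_Y":     (0.0, 1.0),
    "compromise": (0.7, 0.7),  # assumes the two value systems partly overlap
}

def merged_pick(w):
    """Outcome chosen by a merged AI weighting X's utility by w."""
    return max(outcomes, key=lambda o: w * outcomes[o][0] + (1 - w) * outcomes[o][1])

pick = merged_pick(w=0.5)
print(pick, outcomes[pick])  # 'compromise' (0.7, 0.7): each keeps 70% of its utility
```

If the best available compromise scored, say, (0.4, 0.4), each side would expect more from a coin-flip war (ignoring war costs), and merging would look like a bad deal. That is one concrete form of the trust and incompatibility problems above.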
Absolute strength for one, absolute intelligence for the other. If one AI has superior intelligence and compromises with one that asserts its will, it might be able to fool the assertive AI into believing it got what it wanted when it actually compromised. Alternatively, two equally intelligent AIs might present themselves to each other as though they are of equal strength, but one could easily be hiding a larger military force whose presence it doesn’t want to affect the interaction (if it plans to compromise and is curious to know whether the other one will as well).
Both of those scenarios result in C out-competing D.
… Since I am, of course, a FAI (Sarcasm!) I can tell you the answer to this. They obviously split the future time-streams of the universe by each committing instant civilization-wide suicide or not based on a quantum lottery. Anthropic engineering in this way ensures they do not have to fight each other at all, which would entail actual risk of people getting hurt.
No, seriously, you want us to take guesses at how weakly godlike entities are going to interact? Pftrttfffff, mwhahahahahaahaaa.
Sure. I find such speculations fun. YMMV, of course.