Such an entity would have special interest in Earth, not because of special interest in acquiring its resources, but because Earth has intelligent lifeforms which may eventually thwart its ends.
I’d expect that any AGI (originating and interested in our universe) would initiate an exploration/colonization wave in all directions regardless of whether it has information that a given place has intelligent life, so broadcasting that we’re here doesn’t make it worse. Expecting superintelligent AI aliens that require a broadcast to notice us is like expecting poorly hidden aliens on flying saucers, the same mistake made on a different level. Also, light travels only so quickly, so our signals won’t reach very far before we’ve made an AGI of our own (one way or another), and thus had a shot at ensuring that our values obtain significant control.
(1) Quoting myself,
Receiving a signal from us would seem to make the direction that the signal is coming from a preferred direction of exploration/colonization. If space exploration/colonization is sufficiently intrinsically costly then an AGI may be forced to engage in triage with regard to which directions it explores.
(2) Creating an AGI is not sufficient to prevent being destroyed by an alien AGI. Depending on which AGI starts engaging in recursive self improvement first, an alien AGI may be far more powerful than a human-produced AGI.
(3) An AGI may be cautious about exploring so as to avoid encountering more powerful AGIs with differing goals and hence may avoid initiating an indiscriminate exploration/colonization wave in all directions, preferring to hear from other civilizations before exploring too much.
The point about subtle deception made in a comment by dclayh suggests that communication between extraterrestrials may degenerate into a Keynesian beauty contest of second-guessing what the motivations of other extraterrestrials are, how much they know, whether they’re faking helplessness or faking power, etc. This points in the direction of it being impossible for extraterrestrials to credibly communicate anything to one another, which suggests that human attempts to communicate with extraterrestrials have zero expected value rather than the negative expected value I suggest in my main post.
Even so, there may be genuine opportunities for information transmission. At present I think the possibility that communicating with extraterrestrials has large negative expected value deserves further consideration, even if it seems that the probable effect of such consideration is to rule out the possibility.
An AGI is extremely unlikely to be forced to engage in such a triage.
By far the most probable way for an extraterrestrial civilization to become powerful enough to threaten us is for it to learn how to turn ordinary matter, like you might find in an asteroid or in the Oort cloud around an ordinary star, into an AGI (e.g., turn the matter into a powerful computer and load the computer with the right software), as Eliezer is trying to do. And we know with very high confidence that silicon, aluminum, and the other things useful for building powerful computers and spaceships, and uranium and the other things useful for powering them, are distributed evenly throughout the universe (because our understanding of nucleosynthesis is very good).
ADDED. This is not the best explanation, but I’ll leave it alone because it is probably good enough to get the point across. The crux of the matter is that the relativistic limit (the speed of light) keeps the number of solar systems and galaxies an expanding civilization can visit proportional to the cube of elapsed time, whereas the number of new spaceships that can be constructed in the absence of resource limits grows as 2^time. So even if producing new spaceships is very inefficient, the expansion in any particular direction quickly approaches the relativistic limit.
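To make the shape of that argument concrete, here is a minimal numerical sketch (the time units, stellar density, and doubling rate are arbitrary assumptions, chosen only to show how the two growth curves compare):

```python
# Compare how fast an unconstrained self-replicating probe population can
# grow (doubling once per period) with how fast the number of reachable
# star systems grows (proportional to the cube of elapsed time, since the
# frontier can expand no faster than light).

def reachable_systems(t, density=1.0):
    # systems inside a sphere whose radius grows linearly with time
    return density * t ** 3

def probe_population(t):
    # unconstrained self-replication: the population doubles every period
    return 2 ** t

for t in (1, 5, 10, 20, 40, 60):
    print(t, reachable_systems(t), probe_population(t))

# 2**t dwarfs t**3 long before t = 60, so probe supply is never the
# bottleneck; the expansion front in every direction is limited only by
# the speed of light, not by the cost of building new ships.
```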
Your points are fair.
Still, even if an AGI is capable of simultaneously exploring in all directions, it may be inclined to send a disproportionately large amount of its resources (e.g. spaceships) in the direction of Earth with a view toward annihilating intelligent life on the Earth. After all, by the time it arrives at Earth, humans may have constructed their own AGI, so the factor determining whether the hypothetical extraterrestrial AGI can take over Earth may be the amount of resources that it sends toward the human civilization.
Also, maybe an AGI informed of our existence could use advanced technologies which we don’t yet know about to destroy us from afar (e.g. a cosmic ray generator?), and would not be inclined to use such technologies if it did not know of our existence (because such hypothetical technologies could have side effects, like releasing destructive radiation, that detract from the AGI’s mission).
WHAT? It only takes one tiny probe with nanotech (femtotech?) and the right programming. A “colonization” (optimization, really) wave feeds on the resources it encounters, so you only need to initiate it with a little bit of resources; it takes care of itself from then on.
I don’t follow this remark. Again, I would imagine that a battle between two AGIs would be determined by the amount of resources controlled within the proximate area of the battle. It would seem that maximizing the resources present in a given area (with a view toward winning a potential AGI battle) would entail diverting resources from other areas of the galaxy.
Since they can trade globally, what’s locally available must be irrelevant.
(I was talking about what it takes to stop a non-AGI civilization, hence a bit of misunderstanding.)
And if you get an alien AGI, you don’t need to rush towards it; you only need to have had an opportunity to do so. Everyone is better off if, instead of inefficiently rushing to fight the new AGI, you go about your business as usual, and later, at your convenience, the new AGI surrenders, delivering you all the control you could have gained by focusing on fighting it, and a bit more. Everyone wins.
How do the AGIs model each other accurately enough to be able to acausally trade with each other like that? Is just using UDT/TDT enough? Probably. Is every sufficiently intelligent AGI going to switch to that, regardless of the decision theory it started out with, the way a CDT AGI would? Maybe there are possible alien decision theories that don’t converge that way but are still winning enough to be a plausible threat?
Since they can trade globally, what’s locally available must be irrelevant.
An AGI is likely to hit the physical limitations before it gets very far, so all AGIs will be more or less equal, excepting the amount of controlled resources.
“Destruction” is probably not an adequate description of what happens when two AGIs controlling different amounts of resources meet; it’ll be more of a trade. You keep what you control (in the past), but the situation probably makes further unbounded growth (including optimizing the future) impossible. And what you can grab from the start, as an AGI, is the “significant amount of control” that I referred to, even if the growth stops at some point.
Avoiding AGIs with different goals is not optimal, since it hurts you not to use the resources, and when you are discovered later you can pay the correct amount out of what you captured, to everyone’s advantage.
This is a good point.
Why do you say so? I could imagine them engaging in trade. I could also imagine them trying to destroy each other, with the one controlling the greater amount of resources successfully destroying the other. It would seem to depend on the AGIs’ goals, which are presently unknown.
It’s always better for everyone if the loser surrenders before the fight begins. And since that saves the winner some resources, the surrendering loser gets a corresponding bonus. If there is a plan that gets better results, as a rule of thumb you should expect AGIs to do no worse than this plan allows (even if you have no idea how they could coordinate to follow it).
I would like to believe that you’re right.
But what if the two AGIs were a literal paperclip maximizer and a literal staple maximizer? Suppose that the paperclip maximizer controlled 70% of the resources and calculated that it had a 90% chance of winning a fight. Then the paperclip maximizer would maximize the expected number of paperclips by initiating a fight.
Now, obviously I don’t believe that we’ll see a literal paperclip maximizer or a literal staple maximizer, but do we have any reason to believe that the AGIs that arise in practice would act differently? Or that trading would systematically produce higher expected value than fighting?
“Fighting” is a narrow class of strategies, while in “trading” I include a strictly larger class of strategies, hence the expectation that there is a better strategy within “trading”.
But they’ll be even better off without a fight, with the staple maximizer surrendering most of its control outright or, depending on its disposition (preference) towards risk, the two of them deciding the outcome with a random number and then following what the random number decided in an orderly way.
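A toy expected-value comparison may help here; it uses the 90% win probability hypothesized above, and adds an assumed cost of fighting (some fraction of the resources gets destroyed), which is what makes the negotiated split dominate:

```python
# Illustrative numbers only: p_win is the 90% chance hypothesized above,
# fight_cost (resources destroyed by actually fighting) is an added
# assumption; without some such cost the two options would merely tie.

total = 1.0          # all resources in the contested region
p_win = 0.9          # paperclip maximizer's chance of winning a fight
fight_cost = 0.2     # fraction of resources destroyed by fighting (assumed)

ev_fight_paperclipper = p_win * (total - fight_cost)         # 0.72
ev_fight_stapler = (1 - p_win) * (total - fight_cost)        # 0.08

# A negotiated split (or a pre-agreed lottery) that mirrors the odds of a
# fight, but burns nothing:
ev_deal_paperclipper = p_win * total                         # 0.90
ev_deal_stapler = (1 - p_win) * total                        # 0.10

assert ev_deal_paperclipper > ev_fight_paperclipper
assert ev_deal_stapler > ev_fight_stapler
# Both maximizers get strictly more of what they want by dividing (or
# randomizing over) the resources in proportion to their chances than by
# actually fighting.
```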
Okay, I think I finally understand where you’re coming from. Thanks for the interesting conversation! I will spend some time digesting your remarks so as to figure out whether I agree with you and then update my top level post accordingly. You may have convinced me that the negative effects associated with sending signals into space are trivial.
I think (but am not sure) that the one remaining issue in my mind is the question of whether an AGI could somehow destroy human civilization from far away upon learning of our existence.
I think that Vladimir’s points were valid, but that they definitely shouldn’t have convinced you that the negative effects associated with sending signals into space are trivial (except in the trivial sense that no-one is likely to receive them).
Actually, your comment and Vladimir’s comment highlight a potential opportunity for me to improve my rationality.
• I’ve noticed that when I believe A and when somebody presents me with credible evidence against A, I have a tendency to alter my belief to “not A” even when the evidence against A is too small to warrant such a transition.
I think that my thought process is something like “I said that I believe A, and in response person X presented credible evidence against A which I wasn’t aware of. The fact that person X has evidence against A which I wasn’t aware of is evidence that person X is thinking more clearly about the topic than I am. The fact that person X took the time to convey evidence against A is an indication that person X does not believe A. Therefore, I should not believe A either.”
This line of thought is not totally without merit, but I take it too far.
(1) Just because somebody makes a point that didn’t occur to me doesn’t mean that they’re thinking more clearly about the topic than I am.
(2) Just because somebody makes a point that pushes against my current view doesn’t mean that the person disagrees with my current view.
On (2), if Vladimir had prefaced his remarks with the disclaimer “I still think that it’s worthwhile to think about attracting the attention of aliens as an existential risk, but here are some reasons why it might not be as worthwhile as it presently looks to you” then I would not have had such a volatile reaction to his remark—the strength of my reaction was somehow predicated on the idea that he believed that I was wrong to draw attention to “attracting the attention of aliens as an existential risk.”
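One way to put numbers on “evidence too small to warrant such a transition” is a quick Bayes-rule sketch (the prior and the likelihood ratios are made-up figures, purely for illustration):

```python
# Posterior on A after evidence that favors not-A by a given likelihood
# ratio. Flipping a belief from "probably A" to "probably not A" requires
# evidence at least as strong, in odds terms, as the prior itself.

def posterior(p_prior, likelihood_ratio_against):
    prior_odds = p_prior / (1 - p_prior)
    posterior_odds = prior_odds / likelihood_ratio_against
    return posterior_odds / (1 + posterior_odds)

print(posterior(0.8, 2))   # ~0.67: 2:1 counter-evidence weakens an 80%
                           # belief but should not flip it
print(posterior(0.8, 4))   # 0.50: it takes 4:1 evidence just to reach
                           # even odds
```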
If possible, I would like to overcome the issue labeled with a • above. I don’t know whether I can, but I would welcome any suggestions. Do you know of any specific Less Wrong posts that might be relevant?
Changing your mind too often is better than changing your mind too rarely, if on net you manage to be confluent: if you change your mind by mistake, you can change it back later.
(I do believe that it’s not worthwhile to worry about attracting the attention of aliens—if that isn’t clear—though it’s a priori worthwhile to think about whether it’s a risk. I’d guess Eliezer will be more conservative on such an issue and won’t rely on an apparently simple conclusion that it’s safe, declaring it dangerous until an FAI makes a competent decision either way. I agree that it’s a negative-utility action though, just barely negative due to unknown unknowns.)
Actually that is a good heuristic for understanding most people. Only horribly pedantic people like myself tend to volunteer evidence against our own beliefs.
Yes, I think you’re right. The people on LessWrong are unusual. Even so, when speaking to members of the general population, one will sometimes misinterpret the things they say as evidence of certain beliefs. (They may be offering evidence to support their beliefs, but I may misinterpret which of their beliefs they’re offering evidence in support of.)
And in any case, my point (1) above still stands.
Thanks for your remark. I agree that what I said in my last comment is too strong.
I’m not convinced that the negative effects associated with sending signals into space are trivial, but Vladimir’s remarks did meaningfully lower my level of confidence in the notion that a really powerful optimization process would go out of its way to attack Earth in response to receiving a signal from us.
That conclusion didn’t sound right to me either, but we did begin the discussion from that assertion, and there are arguments for it at the beginning of the discussion (not particularly related to where this thread went). Maybe something we cleared up helped with those arguments indirectly.
Isn’t this a Hawk-Dove situation, where pre-committing to fight even if you’ll probably lose could be in some AGIs’ interests, by deterring others from fighting them?
Threats are not made to be carried out. The possibility of actual fighting sets the rules of the game: a worst-case scenario that the actual play will improve on, to an extent that depends, for each player, on the outcome of the bargaining aspect of the game.
For a threat to be significant, it has to be believed. In the case of AGI, this probably means the AGI itself being unable to renege on the threat. If two such met, wouldn’t fighting be inevitable? If so, how do we know it wouldn’t be worthwhile for at least some AGIs to make such a threat, sometimes?
Then again, ‘Maintain control of my current level of resources’ could be a Schelling point that prevents descent into conflict.
But it’s not obvious why an AGI would choose to draw its line in the sand there, though, when ‘current resources plus epsilon% of the commons’ is available. The main use of Schelling points in human games is to create a more plausible threat, whereas an AGI could just show its source code.
An AGI won’t turn itself into a defecting rock, when there is a possibility of a Pareto improvement over that.
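For reference, the Hawk-Dove payoff structure being invoked in this exchange, with illustrative numbers (V and C are assumptions; the only feature that matters is C > V):

```python
# Classic Hawk-Dove payoffs: V = value of the contested resource,
# C = cost of a fight, with C > V.

V, C = 2.0, 6.0

payoffs = {
    ("hawk", "hawk"): ((V - C) / 2, (V - C) / 2),  # both committed to fight: -2, -2
    ("hawk", "dove"): (V, 0.0),                    # the committed threatener takes all
    ("dove", "hawk"): (0.0, V),
    ("dove", "dove"): (V / 2, V / 2),              # split the commons
}

for profile, (p1, p2) in payoffs.items():
    print(profile, p1, p2)

# Committing to "hawk" pays only if it makes the other side back down; if
# both sides have made unbreakable commitments, each gets (V - C) / 2,
# which is worse for both than any negotiated split -- the Pareto
# improvement referred to above.
```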
Or rather, the only thing you can communicate is that you’re capable of producing the message. In our case, this basically means we’re communicating that we exist and little else.
This is true. The extent to which it is significant seems to depend on how quickly AGIs in general can reach ridiculously-diminishing-returns levels of technology. From there, for the most part, a “war” between AGIs would (unless they cooperate with each other to some degree) consist of burning their way to more of the cosmic commons than the other guy.
This is what I have often thought about. I perceive the usual attitude here to be that once we manage to create FAI, i.e. a positive singularity, we’ll be able to enjoy and live our lives ever after. But who says there’ll ever be a period without existential risks? Sure, the FAI will take care of all further issues. That’s an argument. But generally, as long as you don’t want to stay human yourself, is there a real option besides enjoying the present and not caring much about the future, or forever focusing on mere survival?
I mean, what’s the point? The argument here is that working now is worth it because in return we’ll earn utopia. But that argument counts equally well for fighting alien u/FAI and entropy itself.
Not equally well. The tiny period of time that is the coming century is what determines the availability of huge amounts of resources and time in which to use them. When existential risks are far smaller (by a whole bunch of orders of magnitude), the ideal way to use resources will be quite different.
Absolutely, I was just looking for excuses I guess. Thanks.
Robin Hanson wrote a paper wondering whether the first wave might not already have passed by, and whether what we see around us is merely the left-over resources. If that were the case, AI aliens might not find it worthwhile to re-colonise, but would still want to take down any other powerful optimisation systems that arose. Even if it were too late to stop them appearing, the sooner such a system could interrupt the post-singularity growth the better, from its perspective.
Then it would’ve been trivial to leave at least one nanomachine and a radio detector in every solar system, which is all it takes to wipe out any incipient civilizations shortly after their first radio broadcast.
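A back-of-the-envelope sanity check on “trivial” (the star count is the usual order-of-magnitude figure for the Milky Way; the probe mass is an assumption):

```python
stars_in_galaxy = 2e11   # rough order of magnitude for the Milky Way
probe_mass_kg = 1e-3     # assume a gram-scale probe plus radio detector

total_mass_kg = stars_in_galaxy * probe_mass_kg
print(total_mass_kg)     # ~2e8 kg

# For comparison, a single 1 km asteroid (density ~2000 kg/m^3) has a mass
# of roughly 1e12 kg, so seeding every system with a listening probe costs
# a negligible sliver of the material a colonization wave already processes.
```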
It would be trivial to transform all the matter in every solar system reached into something useful to the sender, and not bother with the possible future civilizations there at all.
Wow, one could write a story about a civilization of beings who find coherent radio-frequency radiation extremely painful (for instance), because of precisely this artificial selection.
Yes, you’re right. The only reason it would tolerate life/civilisation for so long is if it was hiding as well.
Re: “Robin Hanson wrote a paper wondering if the first wave might not already have passed by, and what we see around us is merely the left-over resources.”
What, 4 billion years ago?! What happened to the second wave? Why did the aliens not better dissipate the resources to perform experiments and harvest energy, and then beam the results to the front? This hypothesis apparently makes little sense.
The first wave might have burnt too many resources for there to be a second wave, or it might go at a much slower rate.
link
Edit: link formatting
Um, that link is to a string quartet version of an Oasis song. It is quite good but I’m pretty sure that isn’t the link you meant to give.
Thanks, fixed. I’d better check the other link I posted, actually.
It’s the new Rickrolling, except with better music.
There are mountains of untapped resources lying around. If there were intelligent agents in the galaxy 4 billion years ago, where are their advanced descendants? There are no advanced descendants—so there were likely no intelligent agents in the first place.
It might be that what looks like a lot of resources to us is nothing compared to what they need. Imagine some natives living on a Pacific island, concluding that, because there’s loads of trees and a fair bit of sand around, there can’t be any civilisations beyond the sea, or they would want the trees for themselves.
We might be able to test this by working out the distribution of stars, etc. we’d expect from the Big Bang.
If Robin is right, we’d expect their advanced descendants to be hundreds of light years away, heading even further away.
These are space-faring aliens we are talking about. Such creatures would likely use up every resource—and forward energy and information to the front, using lasers, with relays if necessary. There would be practically nothing left behind at all. The idea that they would be unable to utilise some kinds of planetary or solar resource—because they are too small and insignificant—does not seem remotely plausible to me.
Remember that these are advanced aliens we are talking about. They will be able to do practically anything.