A sufficiently advanced AI should already be propagating at near the speed of light, which is why we needn’t fear mere radio signals: If there’s such an entity in the neighborhood, its von Neumann probes will be the first sign we get.
Von Neumann probes don’t allow propagation at near the speed of light. They are self-replicating exploratory probes that send back information to a home system, which limits propagation to one third the speed of light. If the sufficiently advanced AI is already propagating at near the speed of light, then the self-replicating ships that are the first sign we get would have to be closer to seeders.
A sufficiently advanced AI should already be propagating at near the speed of light, which is why we needn’t fear mere radio signals: If there’s such an entity in the neighborhood, its von Neumann probes will be the first sign we get.
The difference between near light speed and actual light speed may be significant when universal dominance is the prize.
Which is a good argument for why a smart AI wouldn’t announce its malicious intentions by sending some sort of universal computer code—which could ultimately announce its intentions, yet have a significant chance of failure—and would just straight send its little optimizing cloud of nanomagic.
The first indication that something’s wrong would be your legs turning into paperclips (The tickets are now diamonds—style).
Agree.
It may also be that a well-designed radio wave front colliding with a planet or a gas cloud can produce some artifacts, so that a SETI-capable civilisation isn’t even necessary.
The optimizer your optimizer could optimize like.
Talking about triple-O, go continue your computational theology blog o.O
I will when I figure out how to solve this problem: I’m trying to accomplish two major objectives.
The more important objective is to explain to people how we can use concepts from mathematical fields, especially algorithmic information theory and reflective decision theory, to elucidate the fundamental nature of justification, especially any fundamental similarities or relations between epistemic and moral justification. (The motivation for this approach comes from formal epistemology; I’m not sure if I’ll have to spend a whole post on the motivations or not.)
The less important objective is to show that theology, or more precisely theological intuitions, are a similar approach to the same problem, and it makes sense and isn’t just syncretism to interpret theology in light of (say) algorithmic information theory and vice versa. But to motivate this would require many posts on hermeneutics; without sufficient justification, readers could reasonably conclude that bringing in “God” (an unfortunately political concept) is at best syncretism and at worst an attempt to force through various connotations. I’m more confident when it comes to explaining the math—even if I can be accused of overreaching with the concepts, at least it’s admitted that the concepts themselves have a very solid foundation. When it comes to hermeneutics, though, I inevitably have to make various qualitative arguments and judgment calls about how to make judgment calls, and I’m afraid of messing it up; also I’m just more likely to be wrong.
So I have to think about whether to try to tackle both problems at once, which I would like to do but would be quite difficult, or to just jump into the mathematics without worrying so much about tying it back to the philosophical tradition. I’d really prefer the former but I haven’t yet figured out how to make the presentation (e.g., the order of ideas to be introduced) work.
So, the fact that in natural languages it’s easy to be ambiguous between epistemic and moral modality (e.g. should in English can mean either ‘had better’ or ‘is most likely to’) may be a Feature Not A Bug? (Well, I think that that is due to a quirk of human psychology¹, but if humans have that quirk, it must have been adaptive (or a by-product of something adaptive), in the EEA at least.)
How common is this among the world’s languages? The more common it is, the more likely my hypothesis, I’d guess.
We should not think of the AI as an omnipotent God—if it were, it could travel even faster than light and even back in time. But we don’t see it around us (assuming we are not in a simulation). So it is not omnipotent. So we should assume that the nanobot wave is slower than the speed of light. Let’s give it 0.8 light speed. The main problem for a nanobot wave is slowing down after it reaches its destination. We could accelerate nanobots in accelerators, but slowing down could be complicated.

So, if the nanobots’ speed is 0.8c, then the volume of the sphere they could reach is only 0.512 of that of the SETI attack. That means the SETI attack is about 2 times more effective as a way to conquer space.

Observer selection is also at work here. All civilizations inside the nanobot wave are probably destroyed, so we could only find ourselves outside it.
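Spelling out the arithmetic behind the 0.512 figure (assuming both the probe wave and the signal expand from the same origin for the same time $t$):

$$\frac{V_{\text{probes}}}{V_{\text{signal}}} = \frac{\tfrac{4}{3}\pi\,(0.8\,ct)^{3}}{\tfrac{4}{3}\pi\,(ct)^{3}} = 0.8^{3} = 0.512, \qquad \frac{1}{0.512} \approx 1.95.$$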
As gwern pointed out, SETI attacks only target worlds with tech-savvy intelligent life (we so far know of one of those), while a von Neumann probe can likely target pretty much every system we’ve observed so far (and we’ve observed a bit more than one).
Calling a SETI attack twice as effective as a von Neumann probe is quite the overstatement (even discounting the fact that the probes might be able to travel at a speed much closer to c).
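To illustrate that parenthetical with an assumed figure (0.95c is just an example, not a number from this thread): at that speed the probes reach

$$0.95^{3} \approx 0.857$$

of the signal’s volume, so the signal’s advantage shrinks to a factor of about 1.17 rather than the factor of two computed above for 0.8c.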
A SETI attack could happen in any medium where only information transfer is possible. If in the future we could contact parallel worlds, it would again be the case. As we do not now know the exact limitations of interstellar travel, we may think that a SETI attack could happen. Otherwise we should conclude that any search for alien radio signals is useless, since the aliens themselves should be approaching us physically at the speed of light.
And again, we could exist only in those regions of the Universe which have not been conquered by alien nanobots. Or they have been conquered but the nanobots lie dormant somewhere, and in this case a SETI attack is still possible.
It seems a bit like you’re grasping at straws to keep the SETI threat viable. I realize you’re attached to it, I saw the website. Still, allow yourself to follow the arguments wherever they may lead.
I know that nano von Neumann probes are the strongest argument against the theory, and I knew it even before I published it here. Moreover, I have a shorter article about possible alien nanobots in the Solar system which I will eventually publish here—if it is not too much off-topic.
But from an epistemic point of view, we can’t close one unknown case with another big unknown with 100 percent certainty.
Anyway, it will not change the conclusion: the SETI search is either useless or dangerous, and should be stopped.
Useless? I don’t think so.
There’s nothing this ragtag horde of competing special interests (humanity) needs more than the uniting force of “we received signals from other civilizations”. To unite us and to usher in a new era of a redefined in-group (“us”) versus the new out-group (“them”—the aliens).
As the old adage goes, me against my brother, my brother and I against our cousins, my cousins and I against strangers.
What we need is an “all of humanity versus some unspecified aliens” to save us. Even if we have to make them up ourselves; there should be an astrophysicists’ conspiracy to fake such signals. I imagine something like “Ok Earth-guys, whoever gets to Epsilon Eridani first owns it! Also, we demand a new season of Firefly.” (This would be troublesome, because it would mean they are very close already.)
[Obligatory Watchmen reference]
That’s not exactly how I remember the movie, but it was still entertaining. I liked that big guy. Klaatu barada nikto!
. . .
(Sorry, just stirring the pot.)
Vg jnf gung jnl va gur pbzvp obbx; Bmlznaqvnf unq n grnz bs fpvragvfgf ovb-ratvarre n uhtr cflpuvp fdhvq gung jbhyq qvr hcba ovegu/npgvingvba naq xvyy n ybg bs crbcyr jvgu vgf cflpuvp “fpernz”. Vg’q znc avpryl gb crbcyr’f rkcrpgngvbaf bs na “nyvra vainqre” naq uhznavgl jbhyq havgr ntnvafg cbgragvny shegure gerngf.
Against someone with an AI, are we really tech-savvy? Is the Carnot engine turning chemical energy into rotary mechanical energy into electromagnetic energy really the best way to listen for radio signals?
You missed the point.
(Agreed.)
The scheme described in the article seems like one of the most efficient ways to propagate near the speed of light. Why bother sending material von Neumann probes if mere radio signals are sufficient?
The scheme requires reception by an advanced civilization during a narrow window of opportunity; the radio waves have no effect on the billions of dead planets all around. A probe, on the other hand, presumably would be able to affect any system.
Since we observe so few life-filled planets or signals out there...
Doesn’t seem very effective to me.
The civilizational window in which a target would be susceptible to such tactics is very small: cavemen don’t notice, superintelligences are thankful for you announcing your hostile intentions. And that’s not even taking into account the small fraction of inhabited planets (via the Drake equation, recalled below) to begin with.
Compare that to a wave of self-replicating probes at near light speed reconfiguring all secured matter into computronium performing the desired operations? Seems like no contest. I’d rather rebuild Jupiter too, for a loss of just a few percent in propagation speed.
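For reference, the standard form of the Drake equation alluded to above (the conventional factor names, nothing specific to this thread):

$$N = R_{*}\, f_{p}\, n_{e}\, f_{l}\, f_{i}\, f_{c}\, L,$$

where $R_{*}$ is the rate of star formation, $f_{p}$ the fraction of stars with planets, $n_{e}$ the number of potentially habitable planets per such system, $f_{l}$, $f_{i}$, $f_{c}$ the fractions of those that develop life, intelligence, and detectable communication, and $L$ the length of time a civilization remains detectable.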
I think the best argument in favour of this SETI virus is that you can really just do both. Nearly all the useful stuff will come from the self-replicating probes, but you might get a little extra out of the virus as well.
Not that it’s an important point of contention, but I don’t think so. If there are any other superintelligences out there besides the sender (even if fewer than there are civilizations in their vulnerable phase), they would still pose a serious threat to the signal-sending agent:
A signal travelling slightly ahead of the cavalry would be like a trumpet call announcing “here come the nanobots!”, giving the adversary time to prepare.
(Interestingly, our position in the outskirts of a galaxy / the less densely populated regions can count as weak evidence that such a cosmic chess game exists, since otherwise, due to the SSA, we’d expect to find our home star cluster somewhere in the more densely packed areas.)
God I hate it when my comments become needlessly verbose, sorry … argh, and isn’t verbosity needless by definition?
Yes, but we had better prepare for nanobots anyway. If they don’t come, it’s just a bonus. It is wise to be prepared for an intergalactic war in any case: for the robots, for small kinetic projectiles at near light speed, for artificial gamma-ray bursts, for SETI attacks, and for many more.
Then we should strike in all directions, in the best tradition of a very benevolent colonist, to end all the space wars even before they really start. As much as we can.
The aliens, who I think are extremely rare, had, have, or will have the same dilemma, which may be another opportunity. Game-theoretically speaking, we must do some calculations right now; it is already late, and the OP’s article is a good one.
I actually don’t mind this length of comments (less is okay, but sometimes too vague, and starting at double that length it definitely feels like too much).
Overall, I see your point, but I think it depends on what kind of strategy the spreading superintelligence is using and on what wars would look like in general. For example, the universe probably mostly doesn’t resist, so it might be sending small “conversion” probes everywhere to expand as fast as possible. In that case, any actual opponent might be able to easily repel them and start getting ready to present a serious defence by the time any dedicated offensive force is sent, so the additional forewarning of having a signal travel slightly further ahead wouldn’t really change anything, and might prevent an opponent from emerging in the first place.
(On the other hand, maybe the conversion probes it sends are smart enough to detect any signal originating from their destination and stop flying/change course if it looks like it might resist. But maybe any superintelligence is on the lookout for extremely fast-travelling objects that behave like this and would notice anyway.)