The message we send goes at the speed of light. If the AI has to send ships to conquer, those ships probably have to travel slower than the speed of light.
James_Miller
Could be a lot of time. The Andromeda galaxy is 2.5 million light years from Earth. Say an AI takes over next year and sends a virus to a civilization in Andromeda, one that would successfully take over if humans hadn’t first issued a warning. Because of the warning, the Earth paperclip maximizer instead has to send a ship to the Andromeda civilization to take over, and say the ship travels at 90% of the speed of light. That gives the Andromeda civilization about 280,000 years between receiving humanity’s warning message and the arrival of the paperclip maximizer’s ship. During that time the Andromeda civilization will hopefully upgrade its defenses to be strong enough to resist the ship, and then thank humanity by avenging us if the paperclip maximizer has exterminated us.
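As a rough check of that figure (a sketch only, assuming the warning travels at exactly c and the ship at a constant 0.9c, ignoring acceleration and deceleration):

\Delta t = \frac{d}{0.9c} - \frac{d}{c} = \frac{d}{c}\left(\frac{1}{0.9} - 1\right) \approx 2.5 \times 10^{6}~\text{yr} \times 0.11 \approx 2.8 \times 10^{5}~\text{yr},

which rounds to the roughly 280,000 years of lead time quoted above.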
Warning Aliens About the Dangerous AI We Might Create
In a big enough universe “you” are being tortured somewhere, so the goal is to reduce the fraction of your copies being tortured. Pulping a brain destroys a copy that isn’t being tortured, which might increase that fraction.
That is a reasonable point about extinction risks motivating some people on climate change. But Republicans detest the Extinction Rebellion movement, and given their control of the US government and likely short AI timelines, influencing them is a top priority; current environmental activism seems to anti-motivate them to act on climate change.
The target audience needs to include the Trump administration, so connections to religion might strengthen the case. Altman told the Senate that AI might kill everyone, and he was misinterpreted as talking about job loss. Something about human extinction causes powerful people to tune out. The students at my college hate Elon, but they are completely unaware that he went on Joe Rogan and said the tech he is helping to build might annihilate everyone. We see concerns about AI using up water getting more play than AI extinction risks.
AI Doomers Should Raise Hell
That is a valid point. I did ask two AIs to point out mistakes in the article, so I got some criticism. One AI wanted me to steelman the position in favor of violence, which I didn’t do because I feared it would be taken out of context, and that some might think I was really advocating violence and had included the anti-violence positions only as cover.
Doomers are claiming that those building AI are threatening the lives of everyone, so that is already an attempt to put a lot of guilt on the builders.
Rejecting Violence as an AI Safety Strategy
Yes. It is running trillions upon trillions of simulations and ignoring almost all of the details from those simulations. Our hope is that writing this letter slightly increases the odds that it learns about the contents of this post. Also, there are multiple acausal trade equilibria, and this version of me talking about them could favorably alter which equilibrium we are in. Finally, agency has value, so writing this letter by itself might slightly increase the expected value of working with us.
Dear Paperclip Maximizer, Please Don’t Turn Off the Simulation
An implicit assumption of the post (which should have been made explicit) is that the cost per simulation is tiny. This is like in WWII, when the US would send a long-range bomber to take photos of Japan: the cost of the flight was small compared to the value of the intelligence gathered. I agree with your last paragraph, and I think it gets at what consciousness is. Is the program’s existence enough to generate consciousness, or does the program have to run to create conscious observers?
Great example!
I’m assuming the cost of this simulation is tiny compared to the value of learning about potential enemies and trading partners.
Yes, I agree that you shouldn’t give much weight to my saying I’m in pain, because from your viewpoint I could be non-conscious. Assuming all humans are conscious and pain is as it appears to be, there seems to be a lot of unnecessary pain, but yes, I could be missing the value of having us experience it.
I’m in low-level chronic pain, including as I write this comment, so while I think the entire Andromeda galaxy might be fake, I think at least some suffering must be real, or at least I have the same confidence in my suffering as I do in my consciousness.
The people running the Karma test deserve to lose a lot of Karma for the suffering in this world.
I agree that the probability that any given message is received at the right time by a civilization that can both decode it and benefit from it is extremely low, but the upside is enormous and the cost of broadcasting is tiny, so a simple expected value calculation may still favor sending many such messages. If this is a simulation, the relevant probabilities may shift because the designers may care about game balance rather than our naive astrophysical prior beliefs. The persistent strangeness of the Fermi paradox should also make us cautious about assigning extremely small probabilities to any particular resolution. Anthropic reasoning should push us toward thinking that the situation humanity is in is more common than we might otherwise expect. Finally, if we are going to send any deliberate interstellar signal at all, then there is a strong argument that it should be the kind of warning this post proposes.
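As a sketch of that expected value argument (with hypothetical symbols introduced here: p for the probability a given message reaches and helps a civilization, B for the benefit if it does, and C for the cost of broadcasting):

\text{send the message when } pB > C, \text{ i.e. when } p > \frac{C}{B},

so even a very small p can justify broadcasting as long as the cost C is negligible relative to the benefit B.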