It seems more likely that the AGI would figure out a way to travel backwards in time to rescue you.
If time travel were possible, a uFAI Boltzmann brain would have already gone back to the Big Bang and eaten the universe.
According to this book, our current models predict that time travel to the past is theoretically possible, but not practical enough to allow for that. You can’t go back to any point before the spacetime configuration that permits backward travel came into existence.
Right, the we’re-not-already-dead argument doesn’t rule out time travel that requires a time machine at the receiving end, like a telephone. But you still can’t save anyone who died before the first time machine was built.
You’re assuming a very large universe.
Yes.
Inherent paradoxes of causal-loop-style time travel aside, isn’t the Big Bang where a Boltzmann brain is most likely to be in the first place?
Not that I have actual relevant technical knowledge or anything, but why would it be in the relatively small amount of space and time shortly following the Big Bang, as opposed to the vastly larger entire rest of the universe?
Not sure, actually; thanks for the question. The rest of the universe might contain proportionally less computation due to lower temperatures, but probably not enough to make up the difference in duration.
This depends on the nature and limits of time travel. But it does seem fair that, if time travel could allow what the parent post wanted, then this is a plausible problem. However, it is slightly misleading to talk about uFAIs and Boltzmann brains in this context: if generic time travel exists, we should simply see the repeated intervention of all sorts of entities.
Only the winner can go back and set things up.
While that might be true in the short run (incidentally, is that a direct quote from Bill and Ted?), if one is dealing with Boltzmann brains then there is no long-term winner, just never-ending fluctuations that become rarer and rarer as the universe cools down.
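The “rarer and rarer” point can be made concrete with the Boltzmann suppression factor: the relative probability of a thermal fluctuation of energy E scales roughly as exp(−E/kT), so as T falls the rate collapses. A toy sketch in Python (the fluctuation energy here is purely illustrative, not an estimate for an actual brain):

```python
import math

K_B = 1.380649e-23  # Boltzmann constant, J/K

def log10_fluctuation_weight(energy_joules: float, temp_kelvin: float) -> float:
    """log10 of the Boltzmann factor exp(-E / (k*T))."""
    return -energy_joules / (K_B * temp_kelvin) / math.log(10)

# Illustrative fluctuation energy (NOT a real estimate for a brain):
E = 1e-15  # joules

# Each factor-of-1000 drop in temperature multiplies the (already huge)
# negative exponent by 1000 -- fluctuations become doubly-exponentially rarer.
for T in (1e3, 1.0, 1e-3):
    print(f"T = {T:g} K: log10(weight) = {log10_fluctuation_weight(E, T):.3g}")
```

The numbers are arbitrary; the shape of the dependence is the point.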
A Boltzmann brain can’t really become powerful if there already exists a Robot God watching for upstarts and squishing them as they arise. And yes, that’s a quote.
I don’t follow your logic. Can you expand?
Suppose an AGI comes into being. (I mentioned Boltzmann brains as a sort of lower bound on what it would take for this to occur: the universe being very large, this is virtually guaranteed to happen at least once at some point somewhere.)
What happens to the newborn godling? This greatly depends on its immediate environment. If it’s born into a region of spacetime where it’s surrounded by dumb matter (any sub-foom intelligence being effectively “dumb” for these purposes), then it quickly takes over its future light cone.
If the space it controls gets large enough (which it will, given time), then it will have to contend with the possibility of contenders emerging, Boltzmann gods like itself spontaneously forming out of the ether (or rather, since it’s eaten its future light cone, out of the computronium). Luckily for it, it has a vast resource and positional advantage over the upstarts, by virtue of having time and space to prepare. The upstarts have minds that can fit into a volume small enough to form by pure chance (and the Law of Large Numbers), whereas the standing god has no such limitations.
So we can anticipate that, in a conflict between two Boltzmann gods, the firstborn would win.
If we further postulate that full-strength time travel is possible, then it follows that a Boltzmann (or other) god would eventually figure this out and travel back to the beginning of the universe, so as to control all of everything. By the previous argument, an AGI with first-mover advantage from the beginning of the universe would be able to easily prevent any serious threats to itself from arising. Thus, only one godling will ever rise to full strength, only one will go back in time and set itself up, and there will be no (successful) revolutions.
From this we may infer that either: (1) the universe is not that big; (2) fully general time travel isn’t possible; (3) we are instrumental to the Robot God’s plans in some way; (4) we are a component of the RG’s mind; (5) we are a side effect of the RG’s computation and it doesn’t notice or care enough to kill us.
I don’t think 1, 3, or 4 is really plausible. As for 5 without 2: if 2 is still false of the sub-reality the side effect created, the argument would probably recurse until we got a non-time-traveling side-effect sub-reality. That leaves 2: you can’t travel back in time to before the invention of the time machine (or some other such restriction).
Ah, ok. So it seems we have different conceptions about how and when Boltzmann brains arise. As I understand it, the vast majority of Boltzmann brains will arise after what we would normally call the heat death of the universe. They won’t in general have the resources to control their future light cone, because it won’t contain enough usable energy. That’s aside from the fact that this assumes Boltzmann brains whose beliefs about the reality around them are in some way correlated with that reality, something which does not necessarily have such a high probability (this is the classic argument for why you might be a Boltzmann brain that has lasted a fraction of a second and will soon dissolve back into chaos).
Certainly the vast majority of Boltzmann brains don’t become gods, in the same way that the vast majority of virtual particles don’t form brains. But it only takes one, ever.
However, it occurs to me that I haven’t actually done the math, and the improbability of an AGI forming out of the ether may well be too great for even the spacetime volume of the universe, from Big Bang to heat death, to overcome.
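For what it’s worth, the comparison can at least be framed in logarithms, since both quantities are far beyond floating point. Every number below is a loudly labeled illustrative guess, not a real estimate; the sketch only shows that the conclusion hinges entirely on which exponent is bigger:

```python
import math

# Work in log10 throughout; the raw numbers overflow any float.

# ASSUMPTION (pure guess, for illustration): log10-probability that an
# AGI-sized structure fluctuates into existence in one Planck-scale 4-volume.
LOG10_P_AGI_PER_CELL = -1e50

# ASSUMPTION (also a guess): log10 of the number of Planck-scale 4-volume
# cells between the Big Bang and heat death.
LOG10_CELLS = 245.0

def log10_expected_count(log10_p: float, log10_cells: float) -> float:
    """log10 of the expected number of spontaneous AGIs: N * p."""
    return log10_p + log10_cells

def prob_at_least_one(log10_expected: float) -> float:
    """P(>=1 event) = 1 - exp(-lambda), with lambda = 10**log10_expected.

    Clamped so huge exponents don't overflow.
    """
    if log10_expected > 2:  # lambda >= 100: certainty to float precision
        return 1.0
    lam = 10.0 ** log10_expected
    return 1.0 - math.exp(-lam)

mu = log10_expected_count(LOG10_P_AGI_PER_CELL, LOG10_CELLS)
print(mu, prob_at_least_one(mu))
```

With these made-up inputs the per-cell improbability swamps the available spacetime volume and “it only takes one, ever” fails; shift the assumed exponents the other way and it becomes a near-certainty. The whole question lives in those two guessed numbers.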
This sounds like a plausible argument for heat death as a hard deadline on the birth of a Boltzmann god. But once one exists, any others that arise in its future light cone are rendered irrelevant.
That seems correct. So the details come down to precisely how many Boltzmann brains one expects to arise, what sort of goals they’ll have, and what resources they’ll command. This seems very tough to estimate.