A person does not need to understand quantum physics to contribute to AI alignment or to contribute to LessWrong.
I understand quantum physics quite well and am aware of all the issues you’ve raised in your comment. I think you massively underestimate how stringent a condition “an event happens in no Everett branches” is when applied to the evolution of a large complex system like the Earth. (I’ll also remark that if your main source of knowledge regarding quantum mechanics is a pop-science book you probably shouldn’t be lecturing people about their lack of understanding :P)
if the differences were relevant, then because competent computerists and the rest of us had no control over which branch we ended up in, we would have noticed an unreliability in the results of calculations
What you’re missing here and in the rest of your comment is that events that happen in Everett branches with microscopically small total probability can appear to never happen. To use your computer example, let’s say that cosmic rays flip a given register with probability 10^-12. And let’s say that a calculation will fail if there are errors in 20 independent registers. And let’s say that the computer uses error-correcting codes such that you actually need to flip 10 physical registers to flip one virtual register. Then there is a probability of 10^-2400 that the calculation will fail. If our civilization does on the order of 10^22 operations per second (roughly the order of magnitude of our current computing power), we should expect to see such an error once every 10^2370 years, vastly greater than the expected lifespan of the universe. Nevertheless, branches in which the calculation fails do exist.
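For anyone who wants to check the arithmetic above, here is a short sketch. All the figures are the ones assumed in the example (not measured data), and the work is done in log10 space because 10^-2400 underflows a float:

```python
import math

# Figures assumed in the example above (not measured data).
exp_flip = -12            # P(cosmic ray flips one physical register) = 10^-12
flips_per_virtual = 10    # physical flips needed per corrupted virtual register
virtual_needed = 20       # independent virtual registers that must all fail

# 20 virtual registers * 10 physical flips each = 200 independent flips,
# each with probability 10^-12:
exp_fail = virtual_needed * flips_per_virtual * exp_flip
print(exp_fail)           # -2400, i.e. P(failure) = 10^-2400

ops_per_sec_exp = 22                      # ~10^22 operations per second
secs_per_year_exp = math.log10(3.15e7)    # ~7.5 (seconds in a year, log10)

# Expected wait = 1 / (P(failure) * operations per year), in log10:
exp_years = -exp_fail - ops_per_sec_exp - secs_per_year_exp
print(math.floor(exp_years))              # ~2370: one failure per ~10^2370 years
```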
And that’s the case of an error-corrected computer performing a simple operation, an isolated system engineered for maximum predictability. In a complex system like the Earth with many chaotic components (which unavoidably create branches due to amplification of tiny fluctuations to macroscopic scales), there are far more opportunities for quantum noise to have huge effects on the trajectory, especially regarding a highly complex event like the development of AGI. That’s where my “horrible certainty” comes from—it is massively overdetermined that there are at least some future Everett branches where humanity is not killed by AGI. Now whether those branches are more like fluctuations in the weather or like a freakish coincidence of cosmic ray accidents is more of an open question.
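A toy model (my own simplification, not the commenter’s math) shows why “no surviving branch at all” is such a stringent condition: if the future contains n independent quantum branch-points, each leading to a survival-relevant difference with some tiny probability p, the chance that no branch at all survives is (1 - p)^n, which collapses toward zero very fast. The particular values of p and n below are arbitrary assumptions chosen for illustration:

```python
import math

# Assumed toy parameters, purely for illustration:
p = 1e-9          # per-branch-point probability of a survival-relevant difference
n = 10**12        # number of independent branch-points

# P(no surviving branch) = (1 - p)^n; compute its log10 to avoid underflow.
# log10((1-p)^n) = n * ln(1-p) / ln(10), and log1p(-p) is accurate for tiny p.
log10_p_none = n * math.log1p(-p) / math.log(10)
print(round(log10_p_none))   # ≈ -434: P(no branch survives) ≈ 10^-434
```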
My probability that humanity will survive AI research is at least 0.01. Is yours at least that high?
If so, then why bring tiny probabilities into it? Oh, I know: you bring in the tiny probabilities in response to my third sentence in this thread, namely, “It is entirely possible—and I am tempted to say probable—under the many-worlds model of physics for every single one of us to be killed by AI research in every single one of the (billions of) branches that will evolve (or descend) from the present moment.”
I still believe that statement (e.g., I was tempted to say “probable”) but as a rhetorical move it was probably a mistake because it tended to cause readers to dig in their heels and resist what I really want out of this conversation: for people on this site to stop conflating the notion of a future Everett branch with the notion of a possible world or a possible future. In order to achieve that end, all I need to do, all I should’ve been trying to do, is to plant in the mind of the reader the possibility that there might be a difference between the two.
I am assuming that we both agree that there is a significant probability—i.e., a lot more than a few Everett-branches worth—that humanity will survive the AGI crisis. Please speak up immediately if that is not true.
Where does this probability, this hope, come from? I predict that you will say that the hope comes from diversity in future Everett branches. We both believe that the Everett branches will be diverse. I was briefly tempted to say that no 2 branches will be exactly the same, but I don’t know quantum physics well enough. But if we choose 2 branches at random, there will almost certainly be differences between them. I am not willing to concede with certainty that there is significant hope in that. The differences might not be relevant: just because an electron has spin up in one branch and spin down in another branch does not mean all the people aren’t dead in both branches; and just because we let the branch-creation process run long enough to accumulate lots of differences in the spins of particles and the positions and momenta of particles does not necessarily change the analysis.
The global situation is so complex that it is impossible for us humans with our puny brains to predict its outcome with certainty: to my eyes, the global situation (caused by AI research) certainly looks dire, but I could easily be overlooking something important. That is where most of my hope for humanity’s future comes from: from the fact that my ability to predict is limited. And that is a different ground for hope than your “it is massively overdetermined that there are at least some future Everett branches where humanity is not killed by AGI”. For one thing, my hope makes no reference to Everett branches and would still be there if reality did not split into Everett branches. But more importantly, the 2 grounds for hope are different because there might be a process (steered by an intelligence, human or artificial) that has already been set in motion that can reliably kill us or reliably save us from AI researchers in essentially all future Everett branches—i.e., in so large a fraction of the future branches that it is not worth putting any hope into the other, exceptional branches, because the probability that we are both fundamentally mistaken about quantum physics or some other important aspect of reality is much higher than the probability that we will end up in one of those exceptional branches.
And that, dear reader, is why you should not conflate in your mind the notion of a possible world (or possible future) with the notion of a “future Everett branch” (which is simply an Everett branch that descends from the present moment in time rather than an Everett branch that split off from our branch in the past with the result that we cannot influence it and it cannot influence us).
I want to stress, for people that might be reading along, that the discussion is not about why we are in danger from AI research or how great the danger is. AI danger is used here as an example (of a complex physical process) in a discussion of the many-worlds interpretation of quantum physics—which as far as I know has no bearing on the questions of why we are in danger or of how great the danger is.
ADDED. The difference between the two grounds for hope is not just pedantic: it might end up mattering.
I predict that you will say that the hope comes from diversity in future Everett branches
Nope. I believe (a) model uncertainty dominates quantum uncertainty regarding the outcome of AGI, but also (b) it is overwhelmingly likely that there are some future Everett branches where humanity survives. (b) certainly does not imply that these highly-likely-to-exist Everett branches comprise the majority of the probability mass I place on AGI going well.
what I really want out of this conversation: for people on this site to stop conflating the notion of a future Everett branch with the notion of a possible world or a possible future
I agree that these things shouldn’t be conflated. I just think “it is entirely possible that AGI will kill every single one of us in every single future Everett branch” is not a good example to illustrate this, since it is almost certainly false.
But there’s a larger issue, an issue that I think matters here: you didn’t realize just how different the claim “this has a high probability of occurring in most worlds” is from the claim “this will happen in every world”. The first claim is much easier to establish than the second, because to show the second you have to consider every case (or have a clever trick), since any single counterexample breaks your claim.
If something is almost certainly false, then it remains entirely possible that it is true—because a tiny probability is still a possibility :)
But, yeah, it was not a good example to illustrate any point I care enough about to defend on this forum.
Most != All is an important distinction here.