That level of certainty about a future that far out is indeed superheatedly ridiculous. Given their publicly knowable proficiencies, it feels unachievable. They would need something like "we have time travel and have direct eyewitness testimony that we are going to be okay in year X" or "we know the world is a simulation and we can hack it if need be" or something equally exotic.
It’s not particularly obvious that the Future is more than, say, a century away, and if AI / nukes / bioterrorism were addressed then I’d probably also be at 97% on no apocalypse?
If 3% is spent just on being able to keep working, the probability only goes down from there for any particular achievement. One could wonder: given infinite working time, what is the chance that humans crack quantum gravity? And it’s not only about the technology being developed, but also about the current patients surviving to benefit from it. No political upheavals or riots that would compromise the bodily safety of millions of patients over centuries or millennia? They think they can figure out resurrection without knowing what principles apply? That is like saying “I will figure out antimatter bombs” without knowing what an atom is.
Riots or upheavals that damage the cryonically suspended seem too unlikely to be an issue for dath ilan? And it’s not as if resurrection is going to take new laws of physics; it’s just a very tricky engineering challenge, which they’re probably planning to put the AGI on?
Upheavals could include things like the rise of a political faction that wants to introduce administered True Death as a new form of punishment. And since they have scrubbed their history before, ever needing a total screening of the past again would be next to impossible to achieve with popsicle relics around. Such a person would probably be a looked-down-upon supervillain, but out of the whole population not one has the explicit goal of sabotaging the cryopatients? The number of murders that don’t utilise the head-removal service is non-zero, after all.
Supernovas or asteroids could make it tricky to keep the lights on. With a couple of centuries passing and no Future having arrived, doubt that it’s going to happen might start creeping in a little differently. At some point the warm population is going to be tiny compared to the popsicles.
These are good reasons to have a probability of 97% rather than 99.9% but I’m really not convinced that anything you’ve described has a substantially higher than 3% chance of happening.
Some of the concerns you’re bringing up are obviously-to-me-and-therefore-to-Keepers nowhere near sufficient to argue the chance is under 97%. The Precipice puts x-risk from asteroids/comets at 1/100K per millennium and from stellar explosions at 1/100M. (The other natural risk, supervolcanoes, it puts at 1/1000. I believe Eliezer has said that dath ilan diligently monitors supervolcanoes.)
One year, the Petrov Day button was pressed because a direct message had the word “admin” in the sending user’s name. The challenge is not that any single threat is especially bad but that there are many kinds of them. Claims like “nothing bad happens to the website” are extremely conjunctive: something bad happening is a huge disjunction of failure modes. If even one of the things I would be worried about had a 3% chance of happening, and its happening would spell doom, that alone would be enough. If there were 1000 such threats, they would on average only need a 0.003% chance each. There are so many more failure modes for a civilization than for a website.
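The arithmetic here can be sketched quickly. This is purely illustrative: the figure of 1000 independent threats is an assumption from the comment above, not a real risk model.

```python
# Illustrative sketch: many tiny independent risks accumulate into the 3% total.
n_threats = 1000            # assumed number of distinct failure modes
p_each = 0.03 / n_threats   # 0.003% per threat, as in the comment above

# Probability that at least one threat fires, assuming independence.
p_any = 1 - (1 - p_each) ** n_threats

print(f"per-threat risk: {p_each:.5%}")   # 0.00300%
print(f"cumulative risk: {p_any:.2%}")    # just under 3%
```

Because the per-threat probabilities are so small, the cumulative risk is almost exactly their sum (about 2.96% here rather than 3.00%, since a few "doom" events overlap).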
This analogy seems kind of silly. Dath ilan isn’t giving people a “destroy the cryonics facilities” button. I agree the 3% probably consists of a lot of small things that add up, but I don’t think there’s obviously >1000 black swans with a >3/1000 probability of happening.
I, for one, remember reading the number 97%, and thinking “Only 97%? Well, I guess there are some things that could still go wrong, outside of anyone’s control”.
But if you think that number is completely ridiculous, you may be severely underestimating what a competent, aligned civilization could do.
Out of curiosity, what number do you think a maximally competent human civilization, living on Earth with a 1980s level of technology, could achieve when it comes to the probability of surviving and successfully aligning AI? 90%? 80%? 50%? 10%? 1%? 0%?
“Maximally competent” there calls for a sense of possibility, and the way I read it contrasts starkly with specifying that the tech level would be 1980s. They would just instantly jump to at least 2020. Part of the speculation with the “Planecrash” continuity is about how putting Keltham in a medieval setting would be starkly transformative, i.e. injecting outside-setting competence is setting-breaking.
Dath ilani are still human: they make errors and need to partly rely on their clever coordination schemes. I think I am using a heuristic about staying within your power level, even if that power level is high. You should be in a situation where you cannot predict the direction of your updates. And there’s the principle that if you can expect to be convinced, you can just be convinced now. So if you do not currently have a technology, you can’t expect to have it. “If I knew what I was doing, it would not be called research.” You don’t know what you don’t know.
On the flip side, if the civilization is competent enough to survive for millennia, then almost any question will be cracked. But then I could apply that logic to predict that they will crack time travel. For genuinely future technologies we are almost by definition ignorant about them. So anything outside the range of 40–60% is counterproductive hubris (on my lax scale; I don’t actually know how to take probabilities that seriously).
There’s a difference between “technology that we don’t know how to build but is fine in theory”, “technology that we don’t even know is possible in principle” and “technology that we believe isn’t possible at all”. Uploading humans is the first; we have a good theoretical model for how to do it, and we know physics allows objects with human-brain-level computing power.
Time travel is the latter.
It’s perfectly reasonable for a civilisation to estimate that problems of the first type will be solved without becoming thereby committed to believing in time travel. Being ignorant of a technology isn’t the same as being ignorant of the limits of physics.