One consideration I see here is that if there’s a delay between when uploads are announced and when they occur, living people retain the option to end their lives.
Seems correct that cryonics patients have a lot less ability to flexibly respond to the situation compared to alive and animate people.[1]
I don’t think that this is a very decisive consideration. I expect that whatever series of events will cause the superintelligence to get the most of what it wants in expectation is the series of events that will play out.
It’s astonishingly weird if the superintelligence prefers to upload Bob, and then takes actions that allow Bob to prevent himself from being uploaded. “Announcing” that you’re going to upload people is an unforced error, if it causes people to kill themselves. (Though I suppose it might not be an error if most people would prefer to be uploaded, and the AI is using it as a bargaining chip?)
A very savvy person might be able to see the writing on the wall, recognize that a misaligned superintelligence is close to inevitable, and, if the balance of fear of personal s-risk vs. personal death comes out in favor of death, commit suicide early. But this will almost definitely be a gamble based on substantial uncertainty. Presumably less uncertainty than a decision to get frozen, or not, at any point before then, but not so much less that you no longer need to weigh the probabilities of different outcomes and make a bet.
Not literally zero flexibility, though. It’s normal to leave a will / instructions with the cryonics org about under what circumstances you want to be revived (eg. upload or bodily resurrection, how good the tech needs to be before you risk it, etc). It’s probably non-standard to leave instructions like “please destroy my brain, if XYZ happens”, but it may be feasible.
Cryonics companies are not enormously competent (this is bad, to be clear). I wouldn’t trust them to execute those instructions unless I had a personal relationship with someone who worked there, I had assessed their competence and trustworthiness as “high”, and they personally told me that they would take responsibility for destroying my brain if XYZ.
But there are some options here.