I think there are three levels of Roko's argument. I subscribe to the first, mild version, and I know another person who independently came to the same conclusion and supports it.
1. Mild. A future AI will reward those who helped to prevent x-risks and create a safer world, but it will not punish anyone. Maybe they will be resurrected first, or they will get 2 million dollars of universal income instead of 1 million, or a street will be named after them. If any resource is limited in the future, they will be first in line to get it (but children first). It is like a soldier at war who expects that if he dies, his family will get a pension. Nobody is punished, but some are rewarded.
2. Roko’s original. You will be punished if you knew about RB but didn’t help to create a safe AI.
3. Strong, ISIS-style RB. All of humanity will be tortured if you don’t invest all your efforts in promoting the idea of RB. ISIS is already using this tactic now: they torture people who didn’t join (and upload videos of it), and the best way for someone to escape future ISIS torture is to join ISIS.
I think that 2 and 3 are not valid because an FAI can’t torture people, period. But aging and a bioweapons catastrophe could.
The only way I could possibly see this being true is if the FAI is a deontologist.
If I believe that an FAI can’t torture people, the strong versions of RB do not work on me.
We can imagine a similar problem: if I kill a person N, I will get 1 billion USD, which I could use to save thousands of lives in Africa, create FAI, and cure aging. So should I kill him? It may look rational to do so from a utilitarian point of view. So will I kill him? No, because I can’t kill.
In the same way, if I know that an AI is going to torture anyone, I don’t think that it is an FAI, and I will not invest a cent in its creation. RB fails.
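To make the structure of this explicit, here is a toy sketch (every name and number below is invented for illustration only, not a real model of FAI or of anyone's values): a hard constraint filters the options before any expected-utility comparison is made, so no payoff can buy the forbidden action.

```python
def utilitarian_choice(options):
    """Pick the option with the highest expected utility, ignoring any constraints."""
    return max(options, key=lambda o: o["expected_utility"])


def constrained_choice(options, forbidden_flag):
    """Pick the best option among those that do not violate the hard constraint."""
    allowed = [o for o in options if not o[forbidden_flag]]
    return max(allowed, key=lambda o: o["expected_utility"])


options = [
    # Raw expected utility favours taking the billion and donating it...
    {"name": "kill N and donate the billion", "expected_utility": 1000.0, "involves_killing": True},
    {"name": "refuse", "expected_utility": 0.0, "involves_killing": False},
]

print(utilitarian_choice(options)["name"])                      # -> kill N and donate the billion
print(constrained_choice(options, "involves_killing")["name"])  # -> refuse
```

The same filter applies to the basilisk: an AI that tortures people is screened out of the “FAI” category before its threats get any weight in the calculation.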
I’m not seeing how you got to “I can’t kill” from this chain of logic. It doesn’t follow from any of the premises.
It is not a conclusion from the previous premises. It is a fact which I know about myself and which I add here.
Relevant here is WHY you can’t kill. Is it because you have a deontological rule against killing? Then you want the AI to have deontological ethics. Is it because you believe you should kill but don’t have the emotional fortitude to do so? The AI will have no such qualms.
It is more like an ultimatum in the territory, which was recently discussed on LW. It is a fact which I know about myself. I think it has both emotional and rational roots, but it is not limited to them. So I also want other people to follow it, and of course the AI too. I also think that an AI is able to find a way out of any trolley-style problems.