“Helping” is a causal term. But the OP was only arguing that our virtuous qualities would be evidence for a good outcome.
I believe the OP was enjoining us to be virtuous, that the good outcome may thereby become more likely.
But I also believe that while we may wish to make God in our own best image, actually doing so requires a great deal more. Good intentions are not enough: we must also discover how to implement them. The GOFAI illusion is a long time dying.
Do we actually disagree? You’re saying being virtuous isn’t enough, you also need to solve an extremely difficult implementation problem, which I agree with.
I’m saying the extremely difficult implementation problem isn’t enough, we also need to be virtuous.
By the symmetry of logical AND, isn’t that equivalent?
The other thing I’m saying is that, if we end up solving one of these problems and not the other, I’d far rather the one we manage is not technical alignment alone: the results would be worse than paperclips.
I’m saying the extremely difficult implementation problem isn’t enough, we also need to be virtuous.
You’re also tying it to your very specific ideas of what is virtuous. You point out yourself that most people do not share your attitude to the suffering of lesser creatures. If they did, it would not be necessary to persuade them to. Personally, I’m quite lackadaisical about animal suffering, but then who decides? Someone whose idea of supreme virtue was the creation of great art might suppose that we must build ASI to be appreciative of great art, that it may spare us. Someone who thought that the purpose of life is to strive for enlightenment might suppose that we must build ASI to be capable of enlightenment, that it may be enlightened enough to spare us.
The fundamental problem is to make something whose good graces we are not dependent on at all. It would help if it were made by people who are not actually aiming to destroy us all, but that’s as far as virtue takes you.
In your final paragraph you pray for the AI God to exterminate us all for being unworthy of it. Maybe it could start with the Eurasian hoopoe, which feeds some of its newborn chicks to others in the nest. Or the ichneumon wasps. Or just everything that lives.
You’re also tying it to your very specific ideas of what is virtuous. You point out yourself that most people do not share your attitude to the suffering of lesser creatures. If they did, it would not be necessary to persuade them to. Personally, I’m quite lackadaisical about animal suffering, but then who decides? Someone whose idea of supreme virtue was the creation of great art might suppose that we must build ASI to be appreciative of great art, that it may spare us.
You’re acting as though the attitude towards the suffering of lesser creatures is a completely arbitrary and random selection which can be replaced by any other consideration with my argument unchanged, and therefore I prove too much.
But if AI takes over, then WE are the lesser creatures, so we should perhaps expect to be treated however the AI thinks lesser creatures should be treated. There is no similar reason to worry quite that much about whether the AI values art or enlightenment or whatever.
The fundamental problem is to make something whose good graces we are not dependent on at all.
If it has godlike power, then that is just impossible. Then we are utterly dependent on what it wants for us.
In your final paragraph you pray for the AI God to exterminate us all for being unworthy of it.
I think that’s a false characterization. I’m saying “because if it doesn’t do that, I expect it to do much, MUCH worse.” It’s not about justice or revenge for any sins. I don’t believe in retributive justice at all.
If you insist on putting it in religious terms, it’s more like I hope God doesn’t care about us at all and just destroys us out of apathy rather than any sort of moral judgement, because if a few of us unworthy people create God to fit their desires, I expect the outcome to be worse than that.