accept the risk of trying to teach an artificial person to be good
It’s not a person, it’s an optimisation process. Don’t anthropomorphise AI. You are right that the risk is large.
or we accept the risk of uploading someone we expect to remain good
Which we know to be nearly impossible, with few ways to improve the chances.
or of letting someone we hope to be good build a helpful psychopath. After all, if that programmer has a faulty conception of the human good then they’ll create a monster,
You are not familiar with the current plan and the reasoning behind it. Go read CEV. Also metaethics, because you seem to take it as possible that a human could have a good enough conception of value to program it.
In every case, we have to rely on the uncertain integrity of the ethical person.
Fallacy of grey. Some methods are much more promising than others, even if all are uncertain.
I certainly agree with you there. I have some familiarity with CEV (though it’s quite technical and I don’t have much background in decision theory), but on the basis of that familiarity, I’m under the impression that creating an artificial person and teaching it to be ethical is the safest and most reliable way to accomplish our goal of surviving the singularity. But I haven’t argued for that, of course.