Somehow this has escaped comment, so I’ll have a go. I write from the perspective of whether it’s suitable as the value system of a superintelligence. If PRISM became the ethical operating system of a posthuman civilization born on Earth, for as long as that civilization managed to survive in the cosmos—would that be a satisfactory outcome?
My immediate thoughts: its multi-perspective design gives it a robustness that lends it some plausibility. At the same time, it's not clear to me where the seven basis worldviews come from. Why those seven, and no others? Is there some argument that these seven form a necessary and sufficient basis for ethical behavior by human-like beings and their descendants?
Digging a little deeper into the paper, the justification is actually in part 2. Specifically, on page 12, six brain regions and their functions are singled out as contributing to human decision-making at increasingly abstract levels (for the hierarchy, see page 15). The seven basis worldviews correspond to increasing levels of mastery of this hierarchy.
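To make that correspondence concrete, here is a minimal sketch of the structure as I understand it. All the names below are placeholders of mine, not the paper's; the only point is the shape of the mapping, on the reading that worldview k means mastery of the first k levels of the hierarchy (so six regions, plus a baseline at k = 0, give seven worldviews):

```python
# A hypothetical sketch, not the paper's actual scheme: all names are placeholders.
# The shape of the idea: six decision-making faculties (one per brain region,
# ordered from most concrete to most abstract), where basis worldview k is
# defined by mastery of the first k faculties. Counting the baseline k = 0,
# six faculties yield seven worldviews.

REGIONS = [
    "reflexive", "affective", "social",      # placeholder labels
    "procedural", "narrative", "reflective",
]

def worldview(k: int) -> list[str]:
    """The set of faculties mastered at worldview level k (0 <= k <= 6)."""
    if not 0 <= k <= len(REGIONS):
        raise ValueError(f"k must be in 0..{len(REGIONS)}")
    return REGIONS[:k]

# The seven basis worldviews: mastery levels 0 through 6.
for k in range(len(REGIONS) + 1):
    mastered = worldview(k) or ["(baseline, none mastered)"]
    print(f"worldview {k}: {', '.join(mastered)}")
```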
I have to say I’m impressed. I figured that the choice of worldviews would just be a product of the author’s intuition, but they are actually grounded in a theory of the brain. One of the old dreams associated with CEV was that the decision procedure for a human-friendly AI would be extrapolated in a principled way from biological facts about human cognition, rather than just from a philosophical system, hallowed tradition, or set of community principles. June Ku’s MetaEthical AI, for example, is an attempt to define an algorithm for doing this. Well, this is a paper written by a human being, but the principles in part 2 are sufficiently specific that one could actually imagine an automated process following them, and producing a form of PRISM as its candidate for CEV! I’d like @Steven Byrnes to have a look at this.