I recently gave a talk at an academic science fiction conference about whether sf is useful for thinking about the ethics of cognitive enhancement. I think some of the conclusions are applicable to point 9 too:
(1) Bioethics can work in a “prophetic” and a “regulatory” mode. The first is big picture, proactive, and open-ended, dealing with the overall aims we ought to have, the possibilities, and the values at stake. It is open to speculation. The regulatory mode is about ethical governance of current or near-term practices. Ethicists formulate guidelines, point out problems, and suggest reforms, but their purpose is generally not to rethink these practices from the ground up or to question the wisdom of the whole enterprise. As the debate about the role of speculative bioethics has shown, mixing the modes can be problematic. Guyer and Moreno (2004) take bioethics to task for using science fiction instead of science to motivate arguments: they point out that this can actually be good if done within the prophetic mode, but much bioethics (like the President’s Council on Bioethics at the time) cannot decide what kind of consideration it is making.
(2) Is it possible to find out things about the world by making stuff up? Elgin (2014) argues that fictions and thought experiments exemplify patterns or properties that they share with phenomena in the real world, and hence we can learn something about the real world from considering fictional worlds (i.e. there is a homeomorphism between them in some domain). This does require the fiction to be imaginative but not lawless: not every fiction or thought experiment has value in telling us something about the real or moral world. This is of course why just picking a good or famous piece of fiction as a source of ideas is risky: it was selected not for how well it reflects patterns in the real world, but for other reasons.
Considering Eliezer’s levels of intelligence in fictional characters is a nice illustration of this: level 1 characters show some patterns that matter (being goal-directed agents), while level 3 characters actually give examples of rational, skilled cognition.
(3) Putting this together: if you want to use fiction in your argument, the argument had better be in the prophetic, open-ended mode (e.g. arguing that there are AI risks of various kinds, what values are at stake, etc.), and the fiction needs to meet high standards not just of internal consistency but of actual mappability to the real world. If the discussion is on the more regulatory side (e.g. thinking about actual safeguards, risk assessment, institutional strategies), then fiction is unlikely to be helpful, and very likely to introduce bias or noise (due to good-story bias, easily inserted political agendas, or differing interpretations of worldview).
There are of course some exceptions. Hannu Rajaniemi provides a neat technical trick for the AI boxing problem in the second novel of his Quantum Thief trilogy (turn a classical computation into a quantum one that will decohere if it interacts with the outside world). But the fictions most people mention in AI safety discussions are unlikely to be helpful, mostly because very few stories succeed with point (2) (and if they are well written, they hide this fact convincingly!).