Is there actually (only) a small number of moral worldviews?
My own moral worldview cares about the journey and has a bunch of preferences for “not going too fast”, “not losing important stuff”, “not cutting lives short”, “not forcing people to grow up much faster than they would like”. But my own moral worldview also cares about not having the destination artificially limited. From my vantage point (which is admittedly just intuitions barely held together with duct tape), it seems plausible that there is a set of intermediate preferences between MM and GG, somewhat well-indexed by a continuous “comfortable speed”. Here are some questions on which I think people might differ according to their “comfortable speed”:
- how far behind the frontier are you entitled to remain and still live a nice life? (i.e., how much should we subsidize people who wait for the second generation of upload tech before uploading?)
- how much risk are you allowed to take in pushing the frontier?
- how much consensus do we require to decide the path forward to greater capabilities? (e.g., choosing which of the following is legitimate: genetic edits, uploading, or artificial intelligence)
- how much control do we want to exert over future generations?
- how much do you value a predictable future?
- how comfortable are you with fast change?
If my “comfortable speed” model is correct, then maybe you would want to assign regions of the lightcone to various preferences according to some gradient.
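To make the gradient idea slightly more concrete, here is a toy sketch (purely illustrative; the proportional weighting, the labels, and the numbers are assumptions of mine, not anything the model commits to) of mapping worldviews indexed by a scalar “comfortable speed” onto adjacent shares of some resource, with slower preferences assigned the inner regions and faster ones the regions nearer the frontier:

```python
# Toy illustration only: "comfortable speed" as a scalar in (0, 1], and a crude
# proportional split of some resource (e.g. a slice of the lightcone) into
# adjacent bands ordered from slowest to fastest. All names/numbers are made up.

def allocate_by_comfortable_speed(worldviews):
    """worldviews: dict mapping a label to its comfortable speed in (0, 1]."""
    # Sort from slowest to fastest so the resulting bands form a gradient.
    ordered = sorted(worldviews.items(), key=lambda kv: kv[1])
    total = sum(speed for _, speed in ordered)
    allocation = []
    lower = 0.0
    for label, speed in ordered:
        share = speed / total  # crude proportional weighting, one of many options
        allocation.append((label, lower, lower + share))
        lower += share
    return allocation  # list of (label, inner edge, outer edge) in [0, 1]

print(allocate_by_comfortable_speed({"cautious": 0.2, "moderate": 0.5, "fast": 1.0}))
```

Obviously the real question is not the arithmetic but which weighting (if any) the different worldviews would actually endorse.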
There can also be preferences over how much the variance that exists within humanity today keeps interacting in the future.