There are two moral worldviews:
Mundane Mandy: ordinary conception of what a “good world” looks like, i.e. your friends and family living flourishing lives in their biological bodies, with respect for “sacred” goods
Galaxy-brain Gavin: transhumanist, longtermist, scope-sensitive, risk-neutral, substrate-indifferent, impartial
I think Mundane Mandy should have the proximal lightcone (anything within 1 billion light years) and Galaxy-brain Gavin should have the distal lightcone (anything 1-45 B ly). This seems like a fair trade.
Not a fair trade, but also present-day “Mundane Mandy” does not want to risk everything she cares about to give “Galaxy-brain Gavin” the small chance of achieving his transhumanist utopia.
If Galaxy-brain Gavin’s theory has seriously counterintuitive results, like ignoring the preference of current humans that humanity not be replaced by AI in the future, then Gavin’s theory is not galaxy-brained enough.
Does Mundane Mandy care about stuff outside the solar system, let alone stuff more than 1 million light years away?
(Separately, I think the distal light cone is more like 10 B ly than 45 B ly as we can only reach a subset of the observable universe.)
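A rough way to see this (my addition, using the standard FRW formulas; the exact figures depend on the cosmological parameters you assume): compare the comoving distance a signal sent today can ever cover with the radius of the observable universe,

$$
d_{\text{reach}} \;=\; a(t_0)\int_{t_0}^{\infty}\frac{c\,dt}{a(t)} \;\approx\; 16\text{–}17 \text{ Gly},
\qquad
d_{\text{obs}} \;=\; a(t_0)\int_{0}^{t_0}\frac{c\,dt}{a(t)} \;\approx\; 46 \text{ Gly},
$$

so whatever the precise cutoff, the affectable region is a strict subset of the observable universe.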
Yep, you might be right about the distal/proximal cut-off. I think that the Galaxy-brained value systems will end up controlling most of the distant future simply because they have a lower time-preference for resources. Not sure where the cut-off will be.
For similar reasons, I don’t think we should do a bunch of galaxy-brained acausal decision theory to achieve our mundane values, because the mundane values don’t care about counterfactual worlds.
Is there actually (only) a small number of moral worldviews?
My own moral worldview cares about the journey and has a bunch of preferences for “not going too fast”, “not losing important stuff”, “not cutting lives short”, “not forcing people to grow up much faster than they would like”. But my own moral worldview also cares about not having the destination artificially limited. From my vantage point (which is admittedly just intuitions barely held together with duct tape), it seems plausible that there is a set of intermediate preferences between MM and GG, somewhat well-indexed by a continuous “comfortable speed”. Here are some questions on which I think people might differ according to their “comfortable speed”:
- how far beyond the frontier are you entitled to remain and still live a nice life? (ie how much should we subsidize people who wait for the second generation of upload tech before uploading?)
- how much risk are you allowed to take in pushing the frontier?
- how much consensus do we require to decide the path forward to greater capabilities? (eg choosing which of the following is legitimate: genetic edits, uploading, or artificial intelligence)
- how much control do we want to exert over future generations?
- how much do you value a predictable future?
- how comfortable are you with fast change?
If my “comfortable speed” model is correct, then maybe you would want to assign regions of the lightcone to various preferences according to some gradient.
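To make the gradient idea concrete, here is a toy sketch (my own illustration, not something proposed in the thread): value systems indexed by a “comfortable speed” get concentric shells of the reachable volume, with slower preferences closer to home and faster preferences further out, in line with the point above that lower-time-preference values plausibly end up controlling the distant future. The equal-width shells and the intermediate “Comfortable-speed Carol” entry are placeholders.

```python
# Toy model: split the reachable lightcone into concentric shells and hand
# them out by "comfortable speed" (0 = maximally mundane, 1 = maximally
# galaxy-brained). Equal-width shells stand in for "some gradient".
from dataclasses import dataclass


@dataclass
class ValueSystem:
    name: str
    comfortable_speed: float  # 0.0 = Mundane Mandy end, 1.0 = Galaxy-brain Gavin end


def assign_shells(systems, inner_ly=0.0, outer_ly=10e9):
    """Order value systems by comfortable speed and give each one an
    equal-width radial shell between inner_ly and outer_ly (in light years)."""
    ordered = sorted(systems, key=lambda s: s.comfortable_speed)
    width = (outer_ly - inner_ly) / len(ordered)
    return {
        s.name: (inner_ly + i * width, inner_ly + (i + 1) * width)
        for i, s in enumerate(ordered)
    }


if __name__ == "__main__":
    systems = [
        ValueSystem("Mundane Mandy", 0.1),
        ValueSystem("Comfortable-speed Carol", 0.5),  # hypothetical intermediate
        ValueSystem("Galaxy-brain Gavin", 0.9),
    ]
    for name, (near, far) in assign_shells(systems).items():
        print(f"{name}: {near / 1e9:.1f}-{far / 1e9:.1f} billion light years")
```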
There can also be preferences over how much the present variance within humanity keeps interacting in the future.