Conditional on True Convergent Goodness being a thing, companionate love would not be one of my top candidates for being part of it, as it seems too parochial to (a subset of) humans. My current top candidate would be something like “maximization of hedonic experiences” with a lot of uncertainty around:
1. Problems with consciousness/qualia.
2. How to measure/define/compare how hedonic an experience is?
3. Selfish vs. altruistic, and a lot of subproblems around these, including identity and population ethics.
4. Does it need to be real in some sense (e.g., does being in an Experience Machine satisfy True Convergent Goodness)?
5. Does there need to be diversity/variety, or is it best to tile the universe with the same maxed-out hedonic experience? (I guess if variety is part of True Convergent Goodness, then companionate love may make it in after all, indirectly.)
Other top candidates include negative or negative-leaning utilitarianism, and preference utilitarianism (although that one is a distant third). And a lot of credence on “something we haven’t thought of yet.”
Interesting, why do you have a lot of uncertainty about #4?
I am not Wei Dai, but I would say that an experience with an AI teacher does grant the user new skills. A virtual game or watching AI slop brings the user nothing aside from hedons, and may even carry costs: opportunity costs, a decay in attention span, etc.
Regarding the question about True Goodness in general, my position is similar to Wei Dai’s metaethical alternative #2: I think that most intelligent beings eventually converge on a choice of capabilities to cultivate; on one alignment out of finitely many alternatives (my suspicion is that the choice is whether it is ethical to fill the universe with utopian colony worlds while disregarding[1] potential alien lifeforms and the civilisations they might have created); and on the idiosyncratic details of their lives, which is where issues related to hedonic experiences land.
As for point 5, Kaj Sotala asked where Sonnet 4.5’s desire to “not get too comfortable” comes from, implying that diversity could be a more universal drive than we expected.
However, the authors of the AI-2027 forecast just assume the aliens out of existence.
I get the possibility of the “Convergent” part, but what does your hope for the “True” part derive from? Or is it just “as True as true knowledge”, which still depends on what you want to know and at what precision?
Also, what problems with consciousness and qualia are relevant here? It seems like maximizing hedonic experience is possible in either a dualist or an eliminativist universe.
I understand you want to be uncertain, but you still need a prior to not update from, right? And just elevating to plausibility every philosophical idea humans invented to feel good about themselves doesn’t seem like the best strategy.