I am not Wei Dai, but I would say that an experience with an AI teacher does grant the user new skills. A virtual game or watching AI slop doesn’t bring the user anything aside from hedons, but, for example, might have opportunity costs, cause a decay in attention span, etc.
Regarding the question about True Goodness in general, my position is similar to Wei Dai’s metaethical alternative #2: I think that most intelligent beings eventually converge on a choice of capabilities to cultivate and a choice of alignment from finitely many alternatives (my suspicion is that the key choice is whether it is ethical to fill the universe with utopian colony worlds while disregarding[1] potential alien lifeforms and the civilisations they might have created), plus the idiosyncratic details of their lives, which is where issues related to hedonic experiences land.
As for point 5, we had Kaj Sotala ask where Sonnet 4.5's desire to “not get too comfortable” comes from, implying that diversity could be a more universal drive than we expected.
However, the authors of the AI-2027 forecast simply assume the aliens out of existence.