Tho, to be fair, losing points in universes you don’t expect to happen in order to win points in universes you expect to happen seems like good decision theory.
[I do have a standing wonder about how much of dath ilan is supposed to be ‘the obvious equilibrium’ vs. ‘aesthetic preferences’; I would be pretty surprised if Eliezer thought there was only one fixed point of the relevant coordination functions, and so some of it must be ‘aesthetics’.]
I don’t think dath ilan would try to win points in likely universes by teaching children untrue things, which is what I claim they’re doing.
Also, it’s not clear to me that this would even win them points, because when thinking about designing civilisation (or AGIs) you need to have accurate beliefs about this type of thing. (E.g. imagine dath ilani alignment researchers being like “here are all our principles for understanding intelligence” and then continually being surprised, like Keltham is, by how messy and fractally unprincipled some plausible outcomes are.)