I think all sufficiently competent/reflective civilizations (including sovereign AIs) may want to do this, because it seems hard to be certain enough in one's philosophical competence to not do this as an additional check. The cost of running thousands or even millions of such simulations seems very small compared to potentially wasting the resources of an entire universe/lightcone due to philosophical mistakes. Also, they may be running such simulations anyway for other purposes, so it may be essentially free to also gather some philosophical ideas from such simulations, to make sure you didn't miss something important or get stuck in some cognitive trap.
It seems like you think the ceiling of philosophical competence is very very high, so that even civilizations that are substantially wiser than ours and presumably much more philosophically competent (?) than ours, would not trust their philosophy very much. [1]
That is, they could be “twice” as philosophically competent as us (on some hypothetical reasonable scale), but that's still not very much in an absolute sense.
Is there a particular reason why you think that the ceiling is so high?
In general, it seems hard to know which problems are just beyond our grasp and which problems far outstrip our abilities. I could imagine that if I were only a little bit smarter than any human to date, it wouldn't be easy for me to solve, with robust and verifiable methods, problems that are currently philosophically fraught for us.
Or alternatively, do you guess that civilizations with a much higher average intelligence, which overall display more wisdom and coordination than ours, are not also more likely to be philosophically competent?
It seems crazy to me that there's not a positive correlation between intelligence and philosophical competence, or between wisdom and philosophical competence.