Anything can be included in rationality after you realize it needs to be.
Or: you can always define your utility function to include everything relevant, but in real-life estimations of utility, some things just don't occur to us (at least until later). So sure, increased accuracy [to social detriment] is not rationality, once you realize it.* But you need to realize it. I think HungryTurtle is helping us realize it.
So I think the real question is: is your current model of rationality, the way you think about it right now and (hopefully) actually use it, suboptimal?*