So there’s this ethos/thought-pattern where one encounters some claim about some thing X which is hard to directly observe/measure, and this triggers an attempt to find some easier-to-observe thing Y which will provide some evidence about X. This ethos is useful on a philosophical level for identifying fake beliefs, which is why it featured heavily in the Sequences. But I claim that, to a rough approximation, this ethos basically does not work in practice for measuring things X, and people keep shooting themselves in the foot by trying to apply it to practical problems.
What actually happens, when people try to apply that ethos in practice, is that they Do Not Measure What They Think They Are Measuring. The person’s model of the situation is just totally missing the main things which are actually going on; their whole understanding of how X relates to Y is wrong; it’s a coinflip whether they’d even update in the correct direction about X based on observing Y. And the actual right way for a human (as opposed to a Solomonoff inductor) to update in that situation is to just ignore Y for purposes of reasoning about X.
The main thing which jumps out at me in your dialogue is your self-insert repeatedly trying to apply this ethos which does not actually work in practice.