Hmmmm.
So when I read this post I initially thought it was good. But on second thought I don’t think I actually get that much from it. If I had to summarise it, I’d say it amounts to:
a few interesting anecdotes about experiments where measurement was misleading or difficult
some general talk about “low bit experiments” and how hard it is to control for confounders
The most interesting claim I found was the Second Law of Experiment Design. To quote: “The Second Law of Experiment Design: if you measure enough different stuff, you might figure out what you’re actually measuring.” But even here I didn’t get much clarity or new information. The argument seemed to boil down to “if you measure more things, you may find the actual underlying important variable”, which is true, I guess, but doesn’t seem particularly novel, and it also introduces other risks: most obviously, the more variables you measure, the higher the chance that at least some of them will correlate with your outcome purely by chance (the multiple comparisons problem). There’s a pointer to a book which the author claims sheds more light on the topic, and on modern statistical methods around experiment design more generally, but that’s it.
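To make the chance-correlation worry concrete, here’s a minimal simulation (my own sketch, not from the post; the sample sizes and variable counts are made up): every “measured” variable below is pure noise, yet at a 0.05 significance threshold roughly 5% of them will still look correlated with the outcome.

```python
# Sketch: measuring many unrelated variables still produces "significant"
# correlations by chance. All numbers here are illustrative, not from the post.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(0)
n_samples, n_variables = 50, 200

outcome = rng.normal(size=n_samples)                      # the thing we care about
measurements = rng.normal(size=(n_variables, n_samples))  # pure noise, unrelated to outcome

false_hits = 0
for m in measurements:
    r, p = pearsonr(m, outcome)
    if p < 0.05:
        false_hits += 1

print(f"{false_hits} of {n_variables} noise variables correlate at p < 0.05")
# Expect roughly 10 (5% of 200) purely by chance.
```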
I think I also have a broader problem here, namely that the article feels a bit fuzzy in a way that makes it hard to pin down what the central claims are.
So yeah, I enjoyed it but on reflection I’m a bit less of a fan than I thought.
Agree on the first part 👍
On this:
My bad for being unclear. What I meant to convey here was:
I tend to think that decision theory should be about what kind of decision-making algorithm an agent should implement
Given this, Newcomb’s problem is still interesting and useful to talk about, even if you remove the “paradox” aspect
Agree that insofar as decision theory is asking two different questions, the answers will probably be different, and looking for a single theory that works for both isn’t wise.