Curated. I think this is a pretty important point. I appreciate Neel’s willingness to use himself as an example.
I do think this leaves us with the important follow-up question of “okay, but, how actually DO we evaluate strategic takes?”. Many of the people who are in a position to have demonstrated some kind of strategic awareness are also some kind of “player” on the gameboard with an agenda, which means you can’t necessarily take their statements at face value as epistemic claims.
okay, but, how actually DO we evaluate strategic takes?
Yeah, I don’t have a great answer to this one. I’m mostly trying to convey the spirit of: we’re all quite confused, and the people who seem competent disagree a lot, so they can’t all actually be that correct. And given that the ground truth is confusion, it is epistemically healthier to be aware of this.
Actually solving these problems is way harder! I haven’t found a much better substitute than looking at people who have a good, non-trivial track record of predictions, and people who have what seem to me like coherent models of the world that make legitimate, correct-seeming predictions. Though the latter is fuzzier and has a lot more false positives. A particularly salient form of a good track record is people who held positions in domains I know well (eg interpretability) that I previously thought were wrong/ridiculous, but that I later decided were right (eg I give Buck decent points here, and also a fair amount of points to Chris Olah).
Thanks!