How to Measure Anything notably won’t help you (much) with aligning AI. It teaches “good decision-making”, but doesn’t teach “research taste in novel domains”. I think there’s a concrete branch of rationality training that’d be relevant for novel research, one that requires pretty different feedbackloops from the “generally be successful at life” style of training. I think some of “research taste rationality” is reasonably alive in academia, but many elements are not, or are not emphasized enough.
I want to keep pushing for people to disambiguate what precisely they mean when they use the word “rationality”. It seems to me that there are a bunch of separate projects that plausibly, but not necessarily, overlap, and that have been lumped together under a common term, which causes people to overestimate how much they do overlap.
In particular, “effective decision-making in the real world” and “how to make progress on natural philosophy when you don’t have traction on the problem” are much more different from each other than one might think from reading the Sequences (which talk about both in the same breath, and under the same label).
Problems where “rational choice under uncertainty” applies are, necessarily, problems where you already have a frame to operate in. If nothing else, you have your decision theory and probability theory frame.
Making progress on research questions about which you are extremely confused is mostly a problem of finding and iterating on a frame for the problem.
And the projects of “raising the sanity waterline” and of “evidence-based self-help” are different still.
I feel especially strongly about the disambiguation point.