This post feels like an important part of what I’ve referred to as The CFAR Development Branch Git Merge. Between 2013ish and 2017ish, a lot of rationality development happened in person, building off the sequences. I think some of that work turned out to be dead ends, or a bit confused, or not as important as we thought at the time. But a lot of it has been quite essential to rationality as a practice. I’m glad it has gotten written up.
The felt sense and focusing have been two surprisingly important tools for me. One use case not quite mentioned here – and I think perhaps the most important one for rationality – is getting a handle on what I actually think. Kaj discusses using it for figuring out how to communicate better: getting a sense of what your interlocutor is trying to understand and how it contrasts with what you’re trying to say. But I think this is also useful in single-player mode. e.g. I say “I think X”, and then I notice “no, there’s a subtle wrongness to my description of what X is”. This is helpful both for clarifying my beliefs about subtle topics and for following fruitful trails of brainstorming.