I don’t have great faith in the epistemics of postrats as they exist today. My somewhat limited experience of post-rattish meetups and TPOT is that it’s a mix of people who are either indistinguishable from rats (and indeed lots are just rats), people who are mostly normie-ish and don’t think about epistemics, and totally woo people who are obviously wrong about lots of things (astrology, karma, UFOs) with no epistemic gain.
My guess is what’s happening is that the rationalist frame is 80% correct, and the best alternative is normie epistemics the remaining 20% of the time. The first type of “postrats” just uses the rationalist frame. The second type swaps in some amount of normie epistemology, but not in a way that correlates with the times they should actually be swapping it in. The third type swaps a woo/religious frame in for the rationalist one, which seems mostly just worse than the rationalist frame.
The second and third groups do have better interpersonal skills than rats, but I think this is mostly just regression to the mean.
I also have pretty limited experience with post-rattish meetups and probably mostly see the parts of TPOT I agree with, but I feel like typical post-rats don’t fit into any of these “types,” except maybe the first one.
The kind of post-rat I think is good, and perhaps even common, is something like what’s described in Gordon’s comment. Basically a rationalist who realizes that it is possible to have and act on non-legibilized knowledge, and who knows when to make decisions based on what feels right rather than what they can convincingly argue will maximize their utility function. I think this mindset is valuable and not so common among vanilla rationalists.
I don’t have great faith in the epistemics of postrats as they exist today.
Yeah, you and me both.
I’ve said this elsewhere before, but in hindsight it was a mistake for us to promote terms like “postrationality” and “metarationality” to the point of fixation. They’re exactly the type of words that invite pre/post confusion and allow pre-rats to masquerade as post-rats if there’s insufficient gatekeeping (and there usually is).
And yet, there’s something in the desire of folks like myself to point to a place that says “hey, I think rationalists are doing a lot of things right, but are screwing up in fundamental ways that are contrary to the vibe of rationality, and it’s useful to give that thing a name so we can easily point to it”.
In my ideal world, people would be trained in rationality-as-it-exists-today first, and then be trained in the limits of those methods so they know how to transcend them safely when they break down. Then post-rat would really mean something: one who fully trained as a rationalist, and then used that as the bedrock on which to learn how to handle the situations the methods of rationality are not good at dealing with.
Some people will argue that’s just rationality, and sure, maybe it is some ideal version of rationality as proposed in The Sequences, but as I see it, actual rationalists screw up in predictable ways, those ways are related to the rationalist vibe, and thus the internal move must be to transcend that vibe, whatever we want to label it.
there’s something in the desire of folks like myself to point to a place that says “hey, I think rationalists are doing a lot of things right, but are screwing up in fundamental ways that are contrary to the vibe of rationality, and it’s useful to give that thing a name so we can easily point to it”.
Have you written about this anywhere? I’d be interested in reading it. Especially if it’s pretty direct.
What I’d like to point you to is my book, Fundamental Uncertainty. Alas, it’s still in revisions and not published yet, though I did put the first draft up on LessWrong to get feedback. It ultimately makes this case, but does so by gently building up to it. If you want to skip ahead, most of the “rationalists are screwing up; here’s how to do better” content is in Chapters 8 and 9 (though because I target a wider audience, rationalists are not explicitly named).
An older post that I no longer fully endorse that’s in the vein you’re looking for is: https://www.lesswrong.com/posts/5LP6Jc8ztwcyb296X/outline-of-metarationality-or-much-less-than-you-wanted-to
But maybe the best take on this is not mine, but David Chapman’s Meaningness. I’d recommend starting with this essay.
Yeah, I’m familiar with Chapman and loosely follow him. His latest post was especially interesting to me, because it was unusually concrete about what people actually get wrong in practice. It’s one thing to say “There’s water in an eggplant” (to which: “Yes, and?”), and quite another to show someone’s life falling apart because of things they’re systematically not seeing, which prompts the much more urgent “Could that happen to me? And if not, what exactly is stopping it?”
I’m probably not your intended audience because I’m already familiar with Chapman’s work and mostly agree with your position in the quoted statement, but that’s the kind of writing I personally would be most interested in reading from you, for whatever that’s worth. I’ll read some more of your stuff though and see what jumps out at me.
Thanks for the feedback. I tend to be an abstractor, as I think of it: gathering data and building abstractions that structure it. This means my output tends to be telling you about cool abstractions I figured out.
I’m less excited about performing the opposite task, which is presenting raw data for others to build abstractions on. I mostly only do it when I feel like, rhetorically, I’m going to lose the reader if I don’t. I freely admit this is a weakness on my part, and I’m trying to get better about providing data in posts, though I’m still not excited about it.
If you want something more like Chapman’s most recent post, Kegan’s In Over Our Heads is really good: it tells a stylized story of a person going through the struggle of the stage 3 to 4 transition and is based on an amalgamation of real stories (aggregated and anonymized for privacy). Alternatively, I’ve gotten a lot of mileage out of both slice-of-life-type stories that follow realistic characters and reading memoirs and other true stories from people. I have to do the work of applying whatever theory I’m trying to work out to the story, but learning more about the experiences of many people is the raw data necessary for making sense of a great many theories (and for formulating your own!).
Oh geez, looking back at my comment I was extremely unclear. Sorry about that.
Probably not useful as feedback, but the specific things I’m most interested in here are your conclusions. Like, “Not gonna justify this yet, but I think rationalists are susceptible to getting seduced by witches in ways that will turn their lives upside down. The abstractions that predict this are after the fold, and you gotta apply it to your own raw data”. I’m mostly curious about this because I’m trying to figure out how similar our perspectives are. The more similar our conclusions, the more it seems like “Water in the eggplant” type stuff is true and important, just not for me. The more dissimilar, the more I have to wonder “Wait, what do you actually mean by that? I must be missing some patterns he’s matching while thinking I get it”.
Separately from that, and what may or may not be useful, is that in general I find it helpful to have more concrete applications spelled out. Not raw data necessarily as the point isn’t to fuel independent abstraction, but a minimal set of simulated/curated data to highlight the connection between your abstractions and concrete use cases. Sounds like you’re mostly on board with this though, at least in theory.
I’m with you on the “I just wanna tell you about the cool abstractions I figured out!” thing by the way, hehe. It’s a lot easier, and more fun, and genuinely worth doing first I think… just also a lot harder to get through to people IME because grounding the abstractions is much harder than holding them in the abstract.