I really like the concept and agree that a lot of Rationalist-community folks are walking around without it. I also think the concept is likely to be mishandled by a lot of people and to make a real mess of community dynamics, because the current presentation isn’t paired with any helpful tips on how to use it.
I’ve seen this play out a couple times since the post’s publication:
Alice bumps into Bob while walking past.
Bob, who dislikes Alice, says “Ouch!” and makes a big fuss.
Bob tells the Meetup Organizer about this.
Alice says she just accidentally bumped into Bob and it’s not a big deal.
Bob tells the Meetup Organizer that, sure, it’s not a big deal, but Alice bumped into him 2 months ago as well, and she’s just Thresholding, which means Doing Bad Things But Not Often Enough To Be A Big Deal, so even if it’s not a big deal, we should make it one, and punish Alice.
The Meetup Organizer, who has always had a hard time with social situations, feels very proud to get a chance to apply their new Thresholding skill, and Alice gets in trouble.
The concept of Thresholding is really useful: it lets you stop rules-lawyering and start using common sense. However, the people who most need to learn about Thresholding often lack social common sense, and can apply the idea in a very rigid way, making things even more exploitable for bad actors.
I don’t have a good enough view of the whole of the community to know whether this second-order cost is small enough that the first-order benefit is a good purchase at that price. I really, really liked Duncan’s essay, but for this reason I’m reluctant to promote it further.
This sure does seem like a failure mode someone could fall into after reading Thresholding.
Cards on the table: I think I’ve got one of the better views of the whole of the community, and I do actually think in-person ACX meetups are, on average, too open. If I had only one dial that said “trust more” or “trust less,” I’d turn it about 5% towards “trust less.” Not 20%, or even 10%! We can be more precise than one big dial, and should use that precision.
I don’t know what to do about the general case: there’s a good tool, properly integrating it makes you more effective overall, but learning to use it is likely to lead to some mistakes, and it’s hard to get good practice in to smooth those out.
For Thresholding in particular, the addendum I’d make is to just start counting at lower thresholds, make small nudges earlier, and be comfortable counting higher? Like, write down the 2.9, give a quick and light “aww, I’d rather you did better” with no other comment, and patiently wait until the number of 2.9s smacks you over the head?
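In case it helps to see that heuristic spelled out, here’s a minimal sketch in Python. Every specific number in it (the 3.0 “big deal” bar, the 2.5 “worth writing down” bar, the count of six before escalating) is made up for illustration; the essay doesn’t prescribe any scale.

```python
from collections import defaultdict

# All thresholds and counts below are made up for illustration;
# the essay doesn't prescribe any particular scale.
BIG_DEAL = 3.0       # severity that warrants a real response on its own
WORTH_NOTING = 2.5   # severity worth writing down and lightly nudging on
ESCALATE_AFTER = 6   # how many sub-threshold incidents before the
                     # accumulated 2.9s "smack you over the head"

incident_log = defaultdict(list)  # person -> list of recorded severities

def handle_incident(person: str, severity: float) -> str:
    """Record an incident and decide how strongly to respond."""
    if severity >= BIG_DEAL:
        return "respond now: this is a big deal on its own"
    if severity >= WORTH_NOTING:
        incident_log[person].append(severity)
        if len(incident_log[person]) >= ESCALATE_AFTER:
            return "escalate: the accumulated pattern is now the big deal"
        return "small nudge: a quick, light 'aww, I'd rather you did better'"
    return "let it go"
```

The shape of the thing is that writing it down and the early nudge are both cheap, and escalation keys off the accumulated count rather than the severity of any single incident.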
Basically: you make a reasonable point, it’s going to be hard to measure where this improves things versus where it makes them worse, but I still think the concept is on net worth circulating.
“I don’t know what to do about the general case: there’s a good tool, properly integrating it makes you more effective overall, but learning to use it is likely to lead to some mistakes, and it’s hard to get good practice in to smooth those out.”
I really like your response, but want to highlight that I’m concerned about a different thing. I don’t think this is a Valley of Bad Rationality per se, but more a tightrope walk. You made it to the other end, but a lot of people fall off before they get there and become paranoid about social interactions.
Because of this, I’m not worried about minimizing risk while people get onboarded, but about mitigating risk that is perpetually ongoing.
Guess who has written extensively about the general case of this failure mode of new conceptual handles 😅
“You see, the problems caused by reading the first essay can be mitigated by the second essay.”
“Ah, so the essays will continue until the problems improve.”
I joke, but “Thresholding is a Sazen” sure is a sentence I’d call at least 20% correct.
I don’t know how to solve this issue either FWIW.