This sure does seem like a failure mode someone could fall into after reading Thresholding.
Cards on the table: I think I’ve got one of the better views of the community as a whole, and I do actually think in-person ACX meetups are, on average, too open. If I had only one dial that said “trust more” or “trust less,” I’d turn it about 5% towards “trust less.” Not 20%, or even 10%! We can be more precise than one big dial, and we should use that precision.
I don’t know what to do about the general case: there’s a good tool, and properly integrating it makes you more effective overall, but learning to use it is likely to lead to some mistakes along the way, and it’s hard to get in the practice that would smooth those out.
For Thresholding in particular, the addendum I’d make is: start counting at lower thresholds, make small nudges earlier, and be comfortable counting higher. Like, write down the 2.9, give a quick and light “aww, I’d rather you did better” with no other comment, and patiently wait until the number of 2.9s smacks you over the head?
Basically, you make a reasonable point: it’s going to be hard to measure where this improves things versus makes them worse, but I still think the concept is on net worth circulating.
Guess who has written extensively about the general case of this failure mode of new conceptual handles 😅
“You see, the problems caused by reading the first essay can be mitigated by the second essay.”
“Ah, so the essays will continue until the problems improve.”
I joke, but “Thresholding is a Sazen” sure is a sentence I’d call at least 20% correct.
I really like your response, but I want to highlight that I’m concerned about a different thing. I don’t think this is a Valley of Bad Rationality per se, but more a tightrope walk. You made it to the other end, but a lot of people fall off before then and become paranoid about social interactions.
Because of this, I’m not worried about minimizing risk while getting people onboarded, but about mitigating risk that is perpetually ongoing.
I don’t know how to solve this issue either FWIW.