I’ve heard some describe my recent posts as “overconfident”.
I think I used to calibrate how confident I sound based on how much I expected my readers and listeners to agree with what I was saying, kinda out of “politeness” toward their beliefs; and I think I also used to calibrate it based on how closely my claims matched the apparent consensus, to avoid seeming strange.
I think I’ve done a good job of learning, over time, to instead report my actual inside view, including how confident I feel about it.
There’s already an immense amount of outside-view double-counting going on in AI discourse (people updating on each other’s stated positions, so the same few opinions get counted again and again); the least I can do is give the people who listen to me my actual inside-view beliefs, rather than just recycling other people’s opinions through me.
Hence the confidence I project when claiming things that don’t match consensus: I actually am that confident in my inside view. I strive to be honest by hedging what I say when I’m in doubt, but that means I also have to sound confident when I am confident.