It’s a good point.
The options are about how you talk to others, rather than how you listen to others. So if you talk with someone who knows more than you, “humble” means that you don’t act overconfidently, because they could call you out on it. It does not mean that you aren’t skeptical of what they have to say.
I definitely agree that you should often begin skeptical. Epistemic learned helplessness seems like a good phrase, thanks for the link.
One specific area I could see this coming up is when you have to debate someone you are sure is wrong but who has far more practice debating. They may know all the arguments and counter-arguments and would destroy you in any formal debate, but that doesn’t mean you should trust them, especially if you know there are better experts on the other side. You could probably find great debaters on both sides of any controversial topic.
I would agree that people are generally reluctant to blatantly deceive themselves. There is some cost to holding incorrect beliefs, though it can vary greatly in magnitude depending on the situation.
For instance, suppose all of your friends go to one church, and you start suspecting your local minister of being less accurate than others. If you actually don’t trust them, you could either pretend you do and live as such, or be honest and possibly have all of your friends dislike you. You clearly have a strong motivation to believe something specific here, and I think incentives generally trump internal honesty.
On the last part, I don’t think “hostile talking up” is how the hostile actors want to be seen :) Rather, they would be trying to make it seem like the people previously above them are really below them. To themselves and their followers, they appear to be at the top of their relevant distribution.
1) There’s been a lot of discussion recently about politics being tribal, and I think it makes a lot of pragmatic sense. link
In response to your last point, I didn’t really get into differences between similar areas of knowledge in this post; it definitely becomes a messy topic. I’d agree that for “making a suspension bridge,” I’d rather look at people who seem to have knowledge of “making suspension bridges” than knowledge of “physics, in general.”
Dangit, fixed. I switched between markdown and the other format a few times, I think that was responsible.
To be a bit more specific: I think there are multiple reasons why you would communicate in different ways to people at different levels of knowledge. One is that you could “get away with more” around people who know less than you. But another is that you would expect people at different parts of the curve to know different things and talk in different ways, so if you just optimized for their true learning, the results would be quite different.
That’s a good point. My communication changes a lot too and it’s one reason why I’m often reluctant to explain ideas in public rather than in private; it’s much harder to adjust the narrative and humility-level.
Perhaps, if you have a broad definition of politicized. To me this applies to many areas where people are overconfident (which happens everywhere): lots of entrepreneurs, academics, “thought leaders,” and all the villains of Expert Political Judgment.
To give you a very different example, take a tour guide in San Francisco. They probably know way more about SF history than the people they teach. If they happen to be overconfident, for whatever reason, no one is necessarily checking them. I would imagine that if they ever gave tours to SF history experts, their stated level of confidence in their statements would be at least somewhat different.
It’s a frequency distribution ordered by amount of knowledge on a topic. The Y-axis of a distribution is frequency, but the units aren’t very useful here; the shape is the important part, since the distribution is normalized to total 1.
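As a minimal sketch (with made-up knowledge scores), normalizing raw frequencies so only the shape carries information:

```python
from collections import Counter

# Hypothetical knowledge scores for a population (arbitrary units)
scores = [1, 1, 2, 2, 2, 3, 3, 4, 5, 5]

counts = Counter(scores)  # raw frequencies
total = sum(counts.values())

# Divide each frequency by the total so the values sum to 1;
# the raw Y-axis units drop out and only the shape remains.
density = {k: v / total for k, v in sorted(counts.items())}

print(density)
```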
Good point, fixed. I think of the terms “AI safety community” and “EA safety community” as referring to the same thing, and sometimes use them interchangeably.
I found the ideas behind Radical Candor to be quite useful. I think they’re similar to ones here.
Apparently, my assumption that the universe would be 6 billion years old was very incorrect. It seems possible that useful computation could be done in 10^2500 years, which is much better.
I’m imagining that the predictors would often fall in-line with the user, especially if the user were reasonable enough to be making decisions using them.
Agreed, it could be gamed in net-negative ways if there were enough incentive in the prediction system. I think that in many practical cases, the incentives are going to be much smaller than the deltas between decisions (otherwise it seems surprisingly costly to have them).
Predictor meddling is also a thing in the other prediction alternatives, like decision markets: individuals could try to sabotage outcomes selectively. I don’t believe any of these approaches are perfectly safe. I’m definitely recommending them for humans only at this point, though perhaps with a lot of testing we could get a better sense of what the exact incentives will be, and use that knowledge for simple AI use.
To be a bit more specific, it’s answering a question by having other people predict which answer you will choose; but yes, it’s very bootstrap-y.
I consider this proposal an alternative to decision markets and prediction-augmented evaluations, so I don’t think this system suffers from the challenge of information more than those two proposals. All are of course limited to a significant extent by information.
One nice point for these systems is that individuals are often predictably biased, even when they are knowledgeable. So in many cases it seems like more ignorant but less biased predictors, armed with a few base rates for a problem, can do better.
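As a rough illustration with made-up numbers (nothing here is from a real dataset): a predictor who knows each event’s true probability but is overconfident can score worse, in expected Brier score, than one who only knows the overall base rate.

```python
# Hypothetical event probabilities for a set of forecasting questions
true_probs = [0.3, 0.4, 0.5, 0.6, 0.7]

def expected_brier(q, p):
    """Expected Brier score of forecast q when the true probability is p.
    Equals (q - p)**2 + p*(1 - p); lower is better."""
    return p * (1 - q) ** 2 + (1 - p) * q ** 2

# "Expert": knows each p exactly, but overconfidently pushes every
# forecast 0.25 toward the nearest extreme.
expert = [min(1, p + 0.25) if p >= 0.5 else max(0, p - 0.25)
          for p in true_probs]

# "Ignorant" predictor: forecasts the overall base rate for every event.
base_rate = sum(true_probs) / len(true_probs)

expert_score = sum(expected_brier(q, p)
                   for q, p in zip(expert, true_probs)) / len(true_probs)
base_score = sum(expected_brier(base_rate, p)
                 for p in true_probs) / len(true_probs)

print(expert_score, base_score)  # 0.2925 vs 0.25: the base rate wins here
```

The point isn’t that base rates always beat expertise; it’s that a sufficiently predictable bias can erase the expert’s informational advantage.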
I imagine that if there were a bunch of forecasters doing this, they would eventually collect and organize public tables of the base rates at which agents make decisions. I expect that public data would be really useful if it were properly organized. After that, agents could, of course, choose to provide additional information.
Interesting. Looks like a book is coming out too:
I’m happy to talk theoretically, though have the suspicion that there are a whole lot of different ways to approach this problem and experimentation really is the most tractable way to make progress on it.
That said, ideally, a prediction system would include ways of predicting the EVs of predictions and predictors, and people could get paid somewhat accordingly; in this world, high-EV predictions would be ones which may influence decisions counterfactually. You may be able to have a mix of judgments from situations that will never happen, and ones that are more precise but only applicable to ones that do.
I’m likewise suspicious that naive decision markets using one or two techniques like that would be enough to really make a system robust, but I could imagine those ideas being usefully integrated with others.