More importantly, BB has approximately no knowledge of the experiences and priors that led to those pessimistic posteriors. In general I think it’s wise to stick to discussing ideas (using probability as a tool for doing so) and to avoid focusing on whether someone has the right posterior probabilities.
I don’t understand this idea at all. If someone told me they thought the probability that they would be murdered within the next year is 62%, I’d probably point out that the per-capita murder rate makes that seem extremely unlikely. I think that would be a reasonable response even if I didn’t fully understand the experiences that led them to hold this belief. Likewise, I think posterior probabilities are relevant to decisions and so should be “on the table” for discussion, and also can’t be so cleanly separated from the “ideas”. If someone is worried about murder because they overestimate its likelihood, that suggests their reasoning is based on different ideas than if they have a good estimate of the likelihood but are worried for another reason, such as extreme risk aversion.
I claim that even in the case of the murder rate, you don’t actually care about posterior probabilities, you care about evidence and likelihood ratios (but I agree that you should care about their likelihoods!). If you are sure that you share priors with someone, like with sane people and murder rates, their posterior probability lets you deduce that they have strong evidence that is surprising to you. But this is a special case, and certainly doesn’t apply here.
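To make that deduction concrete, here’s a minimal sketch in Python (the ~1-in-20,000 annual murder base rate is an assumed round number, and 62% is just the hypothetical figure from above): given a shared prior and someone’s stated posterior, you can back out how strong their evidence would have to be.

```python
def implied_likelihood_ratio(prior, posterior):
    # Bayes' rule in odds form: posterior odds = prior odds * likelihood ratio,
    # so the likelihood ratio is just the ratio of the two odds.
    prior_odds = prior / (1 - prior)
    posterior_odds = posterior / (1 - posterior)
    return posterior_odds / prior_odds

base_rate = 1 / 20_000    # assumed annual chance that a random person is murdered
claimed_posterior = 0.62  # the hypothetical claim from the example above

print(implied_likelihood_ratio(base_rate, claimed_posterior))
# ~33,000: their evidence would have to be extraordinarily strong
```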
Posterior probabilities can be a reasonable tool for getting a handle on where you agree/disagree with someone (though alas, not perfect since you might incidentally agree because your priors mismatch in exactly the opposite way that your evidence does), but once you’ve identified that you disagree you should start double-clicking on object-level claims and trying to get a handle on their evidence and what likelihoods it implies, rather than criticizing them for having the wrong bottom-line number. If Eliezer’s prior is 80% and Bentham’s Bulldog has a prior of 0.2%, it’s fine if they have respective posteriors of 99% and 5% after seeing the same evidence.
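To spell out why those two posteriors are mutually consistent, here’s the same odds-form update applied to both priors; the shared likelihood ratio of 25 is an assumed round number that roughly fits both updates.

```python
def update(prior, likelihood_ratio):
    # Bayes' rule in odds form: posterior odds = prior odds * likelihood ratio.
    posterior_odds = (prior / (1 - prior)) * likelihood_ratio
    return posterior_odds / (1 + posterior_odds)

shared_lr = 25  # assumed strength of the evidence they both saw

print(update(0.80, shared_lr))   # ~0.99  (prior 80%  -> posterior ~99%)
print(update(0.002, shared_lr))  # ~0.048 (prior 0.2% -> posterior ~5%)
```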
One major exception is if you’re trying to figure out how someone will behave. I agree that in that case you want to know their posterior, all-things-considered view. But that basically never applies when we’re sitting around trying to figure things out.
Does that make sense?
I can definitely see the benefits of focusing on likelihoods, but I think in practice, when we are talking about differences like 99% vs 5%, the difference usually has its roots in something highly relevant to the ideas. So to take the murder example: let’s say I talk to someone who tells me that their best friend was murdered, that they have had two best friends, and that an empirical Bayes approach therefore gives them a prior of 50% that they will be murdered. Sure, this is phrased as being about a prior, but functionally it’s about a likelihood: how should the observation that their best friend was murdered influence the estimated risk of murder?
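To make the question concrete, here’s one way the arithmetic could go if you treat the observation as evidence over a base-rate prior rather than as a one-out-of-two frequency estimate; both the base rate and the likelihood ratio below are made-up numbers for illustration.

```python
def update(prior, likelihood_ratio):
    # Bayes' rule in odds form: posterior odds = prior odds * likelihood ratio.
    posterior_odds = (prior / (1 - prior)) * likelihood_ratio
    return posterior_odds / (1 + posterior_odds)

base_rate = 1 / 20_000  # assumed annual murder risk for a random person

# Guess: observing "my best friend was murdered" is 10x more likely if I will
# myself be murdered this year than if I won't (shared environment and so on).
lr_friend_murdered = 10

print(update(base_rate, lr_friend_murdered))
# ~0.0005, i.e. roughly 1 in 2,000 per year: a real update, but nowhere near 50%
```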
I think something like this often explains larger differences in posteriors. As an example, let’s say hypothetically that I think the evolution analogy for AI risk is a good one and essentially correct, but for me it increases my estimated risk only a little, while for someone else it increases their estimated risk a lot. This will cash out as a large difference in posteriors, so addressing differences in posteriors can be a reasonable way of triangulating the most relevant differences in likelihood.
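For instance (made-up numbers again), the same piece of evidence weighed at a likelihood ratio of 2 by one person and 50 by another turns the same prior into very different posteriors:

```python
def update(prior, likelihood_ratio):
    # Bayes' rule in odds form: posterior odds = prior odds * likelihood ratio.
    posterior_odds = (prior / (1 - prior)) * likelihood_ratio
    return posterior_odds / (1 + posterior_odds)

prior = 0.10  # assumed shared prior

print(update(prior, 2))   # ~0.18: "increases my estimated risk a little"
print(update(prior, 50))  # ~0.85: "increases their estimated risk a lot"
```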