I can definitely see the benefits of focusing on likelihoods, but I think that in practice, when we are talking about differences on the order of 99% vs. 5%, the difference usually has its roots in something highly relevant to the ideas themselves. To take the murder example: let's say I talk to someone who tells me their best friend was murdered, that they have had two best friends, and that they use an empirical Bayes approach that gives a prior of 50% that they themselves will be murdered. Sure, this is phrased as being about a prior, but functionally speaking it's about a likelihood: how should the observation that their best friend was murdered influence their estimated risk of being murdered?
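To make that concrete, here is a rough sketch with purely made-up numbers (the base rate and likelihood ratios below are illustrative, not estimates I'm defending). Writing $H$ for "this person will be murdered" and $E$ for "their best friend was murdered", Bayes' theorem in odds form is

$$
\frac{P(H \mid E)}{P(\neg H \mid E)} \;=\; \frac{P(E \mid H)}{P(E \mid \neg H)} \cdot \frac{P(H)}{P(\neg H)}
$$

If the population base rate were, say, $P(H) = 0.005$ (prior odds of about 1:199), then arriving at a 50% estimate after observing $E$ requires a likelihood ratio of roughly 200, whereas a likelihood ratio of 10 only moves the estimate to about 5%. So the "50% prior" framing is really a very strong implicit claim about the likelihood ratio, which is exactly where I'd want the disagreement to be located.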
I think something like this often explains larger differences in posteriors. As an example, let's say hypothetically that I think the evolution analogy for AI risk is a good one and essentially correct, but for me it increases my estimated risk only a little, while for someone else it increases their estimated risk a lot. This will cash out as a large difference in posteriors, and so addressing differences in posteriors can be a reasonable way of triangulating the most relevant differences in likelihood.
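As an illustrative sketch (again with invented numbers): suppose we both start from prior odds of 1:9 (a 10% estimate), but I treat the evolution analogy as a Bayes factor of about 2 while the other person treats it as a factor of about 20. Then

$$
2 \times \tfrac{1}{9} \approx 0.22 \;(\approx 18\%), \qquad 20 \times \tfrac{1}{9} \approx 2.2 \;(\approx 69\%)
$$

The resulting 18% vs. 69% gap in posteriors is driven entirely by the different Bayes factors, which is why working backwards from a large posterior gap can locate the likelihood judgment that is actually doing the work.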
Glad the comment was helpful. I will register my prediction that BB most likely meant the “relative to priors” meaning rather than the one you use in the OP. I also think that, among people who aren’t steeped in the background of AI risk, this would be the significantly more common interpretation of what BB wrote.