Have you heard of the Kullback-Leibler divergence? One way of thinking about it is that it quantifies how much you learn about one random variable when you learn something about another. That is, if your variables are X and Y, then D(p(X|Y=y) || p(X)) is the information gain about X when you learn Y=y. It isn't a metric, since it isn't symmetric: D(p(X|Y=y) || p(X)) != D(p(X) || p(X|Y=y)). Nevertheless, if two people have different probability distributions on some underlying space, it's a good way of representing how much more one knows than the other.
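To make that concrete, here's a minimal sketch in Python for a discrete X. The prior and posterior numbers are made up purely for illustration; the point is just to show the computation and the asymmetry.

```python
import numpy as np

def kl_divergence(p, q):
    """D(p || q) = sum over x of p(x) * log(p(x) / q(x)), in nats.

    Assumes p and q are discrete distributions over the same support,
    with q(x) > 0 wherever p(x) > 0.
    """
    p = np.asarray(p, dtype=float)
    q = np.asarray(q, dtype=float)
    mask = p > 0  # terms with p(x) = 0 contribute 0 by convention
    return float(np.sum(p[mask] * np.log(p[mask] / q[mask])))

# Hypothetical prior belief about X, and the posterior after observing Y = y.
prior = [0.5, 0.3, 0.2]
posterior = [0.8, 0.15, 0.05]

info_gain = kl_divergence(posterior, prior)   # D(p(X|Y=y) || p(X))
reverse = kl_divergence(prior, posterior)     # D(p(X) || p(X|Y=y))
print(f"D(posterior || prior) = {info_gain:.4f} nats")
print(f"D(prior || posterior) = {reverse:.4f} nats")  # different: not symmetric
```

Running it, the two directions give different values, which is exactly the failure of symmetry mentioned above.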
As jimrandomh says, the representation of beliefs that you use isn't very practical. However, your question is a good one, since it applies no matter which representation you use.
Your comment about taking emotional salience into account leaves the realm of probability and epistemic rationality. I'm less familiar with what tools are available to formalize differences in what's valued than I am with tools to formalize differences in what's known.
Ah okay, thanks for the reply. Yes, I’ve heard about the KL divergence, although I haven’t really worked with it before.
“I’m less familiar with what tools are available to formalize differences in what’s valued than I am with tools to formalize differences in what’s known.”
Oh, good point. LessWrong is more concerned with what's known than with what's valued. Although what's valued does matter, since it's relevant when we want to operationalize utility.