That’s an interesting post. Let me throw in some comments.
I am not sure about Cassandra’s world. Here’s why:
Knowing X and being able to do something about X are quite different things. A death-row prisoner might be able to make the correct prediction that he will be hanged tomorrow, but that does not “enable goal-accomplishing actions” for him, even in Bayes’ world. Is Cassandra’s world defined by being powerless?
Heroes in myth defy predictions essentially by taking a wider view—by getting out of the box (or by smashing the box altogether, or by altering the box, etc.). Almost all predictions are conditional, and by messing with the conditions you can affect what will come to pass and what will not. That is not a low-level property of the world; it’s just a function of how wide your framework is. Kobayashi Maru and all that.
As to Buddha’s world, it seems to be mostly about goals and values—things about which Bayes’ world is notably silent.
Knowing X and being able to do something about X are quite different things. A death-row prisoner might be able to make the correct prediction that he will be hanged tomorrow, but that does not “enable goal-accomplishing actions” for him, even in Bayes’ world. Is Cassandra’s world defined by being powerless?
Powerlessness seems like a good way to conceptualize the Cassandra alternative. Perhaps power and well-being are largely random and the best-possible predictions only give you a marginal improvement over the baseline. Or else perhaps the real limit is willpower, and the ability to take decisive action based on prediction is innate and cannot be easily altered. Put in other terms, “the world is divided into players and NPCs and your beliefs are irrelevant to which of those categories you are in.”
I don’t particularly think either of these is likely, but if you believed the world worked in either of those ways, it would follow that optimizing your beliefs was wasted effort for “Cassandra World” reasons.
So then Cassandra’s world is essentially a predetermined world where fate rules and you can’t change anything. None of your choices matter.
Alternatively, in such a world, it could be that improving your predictive capacity necessarily decreases your ability to achieve your goals.
Hence the classical example of Cassandra, who was given the power of foretelling the future, but with the curse that nobody would ever believe her. To paraphrase Aladdin’s genie: “Phenomenal cosmic predictive capacity … itty bitty evidential status.”
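To make the difference concrete, here is a throwaway simulation with made-up payoffs (nothing from the original post, just the simplest version of each hypothesis): in a Bayes-style world payoff tracks prediction accuracy, in a Cassandra-style world payoff is drawn independently of your beliefs, and in a “cursed” variant better predictions actively hurt.

```python
import random

random.seed(0)

def average_payoff(world, accuracy, n_trials=100_000):
    """Toy model with made-up numbers: an agent predicts a coin flip and bets on it.

    "bayes":     payoff comes from acting on the prediction, so accuracy helps.
    "cassandra": payoff is drawn independently of beliefs, so accuracy is irrelevant.
    "cursed":    the better the prediction, the less anyone acts on it, so accuracy hurts.
    """
    total = 0.0
    for _ in range(n_trials):
        predicted_right = random.random() < accuracy
        if world == "bayes":
            total += 1.0 if predicted_right else 0.0
        elif world == "cassandra":
            total += 1.0 if random.random() < 0.5 else 0.0
        else:  # "cursed": the chance of being believed (and paid) falls with accuracy
            total += 1.0 if random.random() < (1.0 - accuracy) else 0.0
    return total / n_trials

for world in ("bayes", "cassandra", "cursed"):
    print(world, [round(average_payoff(world, acc), 2) for acc in (0.5, 0.7, 0.9)])
# Expected shape: "bayes" rises with accuracy, "cassandra" stays flat, "cursed" falls.
```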
Yes, a Zelazny or Smullyan character could find ways to subvert the curse, depending on just how literal-minded Apollo’s “install prophecy” code was. If Cassandra had taken a lesson in lying from Epimenides, she mightn’t have had any problems.
You’re right about the prisoner. (Which also reminds me of Locke’s locked-room example regarding voluntariness.) That particular situation doesn’t distinguish those worlds.
(I should clarify that in each of these “worlds”, I’m talking about situations that happen to humans, specifically. For instance, Bayes math clearly works for abstract agents with predefined goals. What I want to ask is, to what extent does this provide humans with good advice as to how they should explicitly think about their beliefs and goals? What System-2 meta-beliefs should we adopt and what System-1 habits should we cultivate?)
Heroes in myth defy predictions essentially by taking a wider view—by getting out of the box (or by smashing the box altogether, or by altering the box, etc.).
I think we’re thinking about different myths. I’m thinking mostly of tragic heroes and anti-heroes who intentionally attempt to avoid their fate, only to be caught by it anyway — Oedipus, Agamemnon, or Achilles, say; or Macbeth. With hints of Dr. Manhattan and maybe Morpheus from Sandman. If we think we’re in Bayes’ world, we expect to be in situations where getting better predictions gives us more control over outcomes, to drive them towards our goals. If we think we’re in Cassandra’s world, we expect to be in situations where that doesn’t work.
As to Buddha’s world, it seems to be mostly about goals and values—things about which Bayes’ world is notably silent.
That’s pretty much exactly one of my concerns with the Bayes-world view. If you can be misinformed about what your goals are, then you can be doing Bayes really well — optimizing for what you think your goals are — and still end up dissatisfied.
If we think we’re in Bayes’ world, we expect to be in situations where getting better predictions gives us more control over outcomes
No, not really. Bayes gives you information, but doesn’t give you capabilities. A perfect Bayesian will find the optimal place/path within the constraints of his capabilities, but no more. Someone with worse predictions but better abilities might (or might not) do better.
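A toy example of that point, with hypothetical payoffs (nothing from the post): a perfectly calibrated agent with a small action set loses to a sloppier agent who simply has more options.

```python
# Toy example, hypothetical payoffs: information vs. capabilities.
# A perfectly calibrated agent with a small action set can still lose to a
# sloppier agent who simply has more options available.

TRUE_POSTERIOR = {"rain": 0.8, "sun": 0.2}    # what the perfect Bayesian believes
CRUDE_POSTERIOR = {"rain": 0.5, "sun": 0.5}   # the sloppier agent's beliefs

PAYOFFS = {                                   # payoff[action][state of the world]
    "umbrella":  {"rain": 1.0, "sun": 0.3},
    "sunscreen": {"rain": 0.1, "sun": 1.0},
    "car":       {"rain": 2.0, "sun": 2.0},   # a capability only the second agent has
}

def choose(posterior, available):
    """Pick the available action with the highest expected payoff under `posterior`."""
    return max(available, key=lambda a: sum(p * PAYOFFS[a][s] for s, p in posterior.items()))

def true_value(action):
    """Evaluate an action under the true probabilities, whatever the agent believed."""
    return sum(p * PAYOFFS[action][s] for s, p in TRUE_POSTERIOR.items())

bayesian_pick = choose(TRUE_POSTERIOR, ["umbrella", "sunscreen"])
sloppy_pick = choose(CRUDE_POSTERIOR, ["umbrella", "sunscreen", "car"])
print(bayesian_pick, round(true_value(bayesian_pick), 2))   # umbrella, ~0.86
print(sloppy_pick, round(true_value(sloppy_pick), 2))       # car, 2.0
```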
If you can be misinformed about what your goals are, then you can be doing Bayes really well — optimizing for what you think your goals are — and still end up dissatisfied.
Um, Bayes doesn’t give you any promises, never mind guarantees, about your satisfaction. It’s basically like classical logic—it tells you the correct way to manipulate certain kinds of statements. “Satisfaction” is nowhere near its vocabulary.
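For what it’s worth, this is essentially all the machinery does (with made-up numbers): turn a prior and likelihoods into a posterior. Nowhere in the calculation is there a slot for what you want or how satisfied you will be.

```python
# All Bayes' rule itself does: turn a prior and likelihoods into a posterior.
# Hypothetical numbers; note there is no term anywhere for what you *want*.

prior_h = 0.01                 # P(H): prior belief in some hypothesis
p_e_given_h = 0.9              # P(E | H): how likely the evidence is if H is true
p_e_given_not_h = 0.05         # P(E | not H)

p_e = p_e_given_h * prior_h + p_e_given_not_h * (1 - prior_h)   # P(E)
posterior_h = p_e_given_h * prior_h / p_e                        # P(H | E)

print(round(posterior_h, 3))   # ~0.154: a revised degree of belief, nothing more
```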
Um, Bayes doesn’t give you any promises, never mind guarantees, about your satisfaction. It’s basically like classical logic—it tells you the correct way to manipulate certain kinds of statements. “Satisfaction” is nowhere near its vocabulary.
Exactly! That’s why I asked: “To what extent does [Bayes] provide humans with good advice as to how they should explicitly think about their beliefs and goals?”
We clearly do live in a world where Bayes math works. But that’s a different question from whether it represents good advice for human beings’ explicit, trained thinking about their goals.
Edit: I’ve updated the post above to make this more clear.