Everyone is allowed to believe they’re saving the world. Two other things are true, though, both quite obvious. First, we don’t say it out loud if we don’t want to appear kooky. Second, if someone really believes that he is literally saving the world, then he can be sure that he has a minor personality disorder [1], regardless of whether he eventually saves the world or not. Most great scientists are eccentric, so this is not a big deal, provided you manage to incorporate it into your probability estimates while doing your job. I mean, this bias obviously affects your validity estimate for each and every argument you hear against a hard AI takeoff. (I don’t think your debaters so far have done a good job bringing up such counterarguments, but that’s beside the point.)
[1] By the way, in this case (in your case) grandiosity is the correct term, not delusions of grandeur.
Stanislav Petrov had this disorder? In thinking he was making the world a safer place, Gorbachev had this disorder? It seems a stretch to me to diagnose a personality disorder based on an accurate view of the world.
Gorbachev was leading an actual superpower, so his case is not very relevant to a psychological analysis of grandiosity. At the time of the famous incident, Petrov was too busy to think about his status as a world-savior. And what he believed after saving the world is not very relevant here.
I didn’t mean to talk about an accurate view of the world. I meant to talk about a disputed belief about a future outcome. I am not interested in the few minutes during which Petrov may have had the accurate view that he was currently saving the world.
So you’d prohibit someone from holding an accurate belief? I generally regard that as a reductio.
If a billion people buy into a 1-in-a-billion raffle, each believing that he or she will win, then every one of them has a “prohibited” belief, even though that belief is accurate in one case.
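The raffle arithmetic can be made concrete with a quick sketch (Python, purely illustrative; the numbers are just the ones from the example above):

```python
# One raffle with a billion tickets: each player's rational credence
# in "I will win" is one in a billion, yet exactly one belief comes true.
n_players = 10**9
rational_credence = 1 / n_players  # what each player should believe

# Expected number of winners if every player holds this identical credence:
expected_winners = n_players * rational_credence

print(rational_credence)  # one in a billion
print(expected_winners)   # approximately 1: exactly one belief ends up true
```

So every player's confident belief "I will win" is irrational ex ante, even though the aggregate guarantees one of them turns out right.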
I don’t think that analogy holds up.
I wasn’t making an analogy. I am surprised by that interpretation. I was providing a counterexample to the claim that it is absurd to prohibit accurate beliefs. One of my raffle-players has an accurate belief, but that player’s belief is nonetheless prohibited by the norms of rationality.
That’s not true for any reasonable definition of “belief,” least of all a Bayesian one. If all the raffle participants believed “I am likely to win,” or “I am certain to win,” then they are all holding irrational beliefs, regardless of which one of them wins. If all the raffle participants believed “I have a one in a billion chance to win,” then they are all holding rational beliefs, regardless of which one of them wins.
???
Of course. But no English speaker would utter the phrase “I will win this raffle” as a gloss for “I have a one in a billion chance to win”.
I seem to have posed my scenario in a confusing way. To be more explicit: Each of my hypothetical players would assert “I will win this raffle” with the intention of accurately representing his or her beliefs about the world. That doesn’t imply literal 100% certainty under standard English usage. The amount of certainty implied is vague, but there’s no way it’s anywhere close to the rational amount of certainty. That is why the players’ beliefs are prohibited by the norms of rationality, even though one of them is making a true assertion when he or she says “I will win this raffle”.
ETA: Cata deleted his/her comment. I’m leaving my reply here because its clarification of the original scenario might still be necessary.
Yeah, I deleted it because I wasn’t doing a good job of distinguishing between “rational” and “correct”, so my criticism was muddled.
Doesn’t this just indicate that even very low-probability alternate hypotheses are stronger than the focal hypothesis, like a p < .05 result on a telepathy test?
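For concreteness, here is one way to run those numbers (a toy Bayesian update with made-up figures, not anyone’s actual estimates): even a test result that would occur by chance only 5% of the time barely moves a tiny prior on telepathy.

```python
# Toy Bayesian update: a "significant" telepathy result vs. a tiny prior.
# All three numbers below are illustrative assumptions, not measurements.
prior_telepathy = 1e-6          # assumed prior that telepathy is real
p_result_if_telepathy = 1.0     # assume real telepathy always passes the test
p_result_if_chance = 0.05       # probability of passing by luck (the p < .05)

# Bayes' theorem: P(telepathy | result)
posterior = (prior_telepathy * p_result_if_telepathy) / (
    prior_telepathy * p_result_if_telepathy
    + (1 - prior_telepathy) * p_result_if_chance
)
print(posterior)  # still tiny: the "low-probability" chance hypothesis
                  # remains far stronger than the focal hypothesis
```

The posterior rises by a factor of about twenty but stays minuscule, which is the sense in which the alternate hypotheses stay stronger.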
I wouldn’t prohibit anyone from believing anything. See my reply to ciphergoth below.