The illustration that immediately sprang to mind was of the characters Samantha Carter and Jack O’Neill in the television sci-fi show...
Beware fictional evidence! Main characters in serial TV shows really should follow “Pascal’s Goldpan”, because that’s the way their universe (usually) works! The episode wouldn’t have been written if there weren’t a way out of the problem. I suspect that watching just two or three such “insoluble” problems get resolved ought to make a proper rationalist wonder whether they were living in a fictional universe.
But our universe doesn’t seem to have that property. (Or perhaps I’m just not a main character.) What is true seems to be true independent of how much utility I get from believing it.
BTW, that isn’t keeping me from loving your coined expressions!
It’s worked for me many times in the past, but thus far I’ve refused to use it as a prior for future events, simply because I am afraid of jinxing it. Which means yes, I’ve explicitly held an anti-inductive prior because pride comes before the fall, and holding this anti-inductive prior has resulted in continued positive utility. (I would say I was a main character, or at least that some unseen agenty process wanted me to believe I was a main character, but that would be like asking to be deceived. Or does even recognizing the possibility count as asking to be deceived?) Note that if my subconscious inductive biases are non-truth-trackingly schizophrenic, then holding an explicit meta-level anti-inductive interpretation scheme is what saves me from insanity. I would hold many traditionally schizophrenic beliefs if I were using a truly inductive meta-level interpretation scheme, and I’m not actually sure I shouldn’t be using such a scheme. Given my meta-level uncertainty, I refuse to throw away evidence that does not corroborate my anti-schizophrenic/anti-inductive prior. It’s a tricky epistemic and moral situation to be in: “To not forget scenarios consistent with the evidence, even at the cost of overweighting them.”
Error: Conflict between belief module and anticipation module detected!
If the universe really followed Pascal’s Goldpan, it seems like there ought to be some way to reliably turn that into a large amount of money...
Yes, like I said, it’s a tricky epistemic situation to be in.
Utility is more valuable than money. And the universe doesn’t have to follow Pascal’s Goldpan for you or for most people. It happens to do so for me, or so I anticipate but do not believe.
Prior probabilities are a feature of maps, not of territories… or am I missing something?
Malthusian crisis: solved for the foreseeable future.
Cold War: solved (mostly).
Global warming: looked unsolvable, now appears to have feasible solutions.
It would appear that quite a number of problems that seemed unsolvable have been solved in the past. Of course, that could just be the anthropic principle talking.
It certainly does seem to have that property to me. Although I’d guess Eliezer is the main character and I’m (currently the backstory of) the sidekick or villain or something.