Suddenly looking for explanations, versus explanations suddenly beginning to occur to us.
OK, that’s what I first thought. But then I can’t make sense of what you say about these: “One is a very good mathematical explanation” and “the other is why ‘Gods having a laugh’ would actually cross your mind”. From the “actually” in the second, it seems as if you’re endorsing that one, in which case presumably “a very good mathematical explanation” is intended as a criticism. Which doesn’t make any sense to me.
How do you know why I did it?
Because of how humor works.
But your analysis on the basis of “how humor works” doesn’t give any reason to prefer “suddenly start looking for explanations” over “explanations start occurring to us”. It hinges only on the fact that, one way or another, many people in such a weird situation would start considering hypotheses like “gods messing with us”, even if they have previously been very sure that no gods exist.
Any sane person would be questioning -everything- rather than believing it happened by chance.
That may very well be correct. But in so far as we frame that as “I don’t believe X happened because its probability is very low”, all that indicates is that we intuitively think about probabilities (or at least express our thinking about probabilities) wrongly. The thing that triggers such thoughts is the occurrence of a low-probability event that feels like it should have a better explanation, even if the thought we’re then inclined to think doesn’t explicitly have that last bit in it.
(It’s not necessarily that concrete better explanations occur to us. It’s that we have a heuristic that tells us there should be one. What I wrote before kinda equivocates between those, for which I apologize; I am not sure which I had in mind, but what I endorse after further thought is the latter, together with the observation that what makes this heuristic useful is the fact that its telling us “there should be a better explanation” correlates with there actually being one.)
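(The point that “low probability” alone can’t be what triggers the heuristic can be made concrete with a standard coin-flip illustration — this sketch and its hypothesis names are my own, not from the conversation. Every specific sequence of 20 fair coin flips has probability 2^-20, yet only some sequences feel like they demand explanation: the ones for which some rival hypothesis assigns a far higher likelihood.)

```python
def likelihood_fair(seq):
    """P(seq | fair coin): every specific sequence is equally likely."""
    return 0.5 ** len(seq)

def likelihood_rigged(seq):
    """P(seq | two-headed coin): 1 if all heads, else 0."""
    return 1.0 if all(c == "H" for c in seq) else 0.0

boring = "HTHHTTHTHTTHHTHTTHHT"   # an arbitrary-looking sequence
spooky = "HHHHHHHHHHHHHHHHHHHH"  # all heads

for seq in (boring, spooky):
    p_fair = likelihood_fair(seq)
    p_rigged = likelihood_rigged(seq)
    # Bayes factor: how much better the rival hypothesis explains the data.
    bf = p_rigged / p_fair if p_fair else float("inf")
    print(f"{seq}  P(fair)={p_fair:.2e}  Bayes factor={bf:.3g}")
```

Both sequences are exactly equally improbable under “fair coin”, but only the all-heads one has a better explanation on offer (Bayes factor 2^20 in its favor) — which matches the claim above: what the heuristic tracks isn’t low probability per se, but the availability of a hypothesis that would explain the data much better.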
From the “actually” in the second, it seems as if you’re endorsing that one, in which case presumably “a very good mathematical explanation” is intended as a criticism. Which doesn’t make any sense to me.
I was implying that it is a rationalization. Perhaps a fair one—I have no ready counterargument available—but not the real reason for the behavior.
It’s not necessarily that concrete better explanations occur to us. It’s that we have a heuristic that tells us there should be one.
Yes! Exactly. And moreover—that heuristic is, as you say, useful. What is the heuristic measuring, and why?
Skipping ahead a bit: The ability to notice which improbable things require explanations is, perhaps, the heart of scientific progress (think of data mining—why can’t we just run a data mining rig and discover the fundamental equations of the universe? I’d bet all the necessary data already exists to improve our understanding of reality by as much again as the difference between Newtonian and Relativistic understandings of reality). Why does it work, and how can we make it work better?