That’s related to Science Doesn’t Trust Your Rationality.
What I’d say is this:
Personally, I find the lucid-dreaming example rather absurd, because I tend to believe a friend who claims they’ve had a mental experience. I might not agree with their analysis of that experience; for example, if they say they’ve talked to God in a dream, I would suspect them of misinterpreting the experience. But I do tend to believe that they’re honestly trying to convey an experience they had. And it’s plausible (though far from certain) that the steps they took to get that experience will also work for me.
So, I can imagine a skeptic who brushes off a friend’s report of lucid dreaming as “unscientific”, but I have no sympathy for that position. My model of the skeptic is: they hold the strange view that observations made by someone who has a PhD, works at a university, and publishes in an academic journal are of a different kind than observations made by other people. Perhaps the lucid-dreaming studies have some interesting MRI scans showing differences in brain activity (I haven’t read them), but they must still rely on human beings’ descriptions of internal experience in order to establish the basic facts about lucid dreams, right? In no sense is the skeptic’s refusal to go beyond the current state of science “rational”; in fact, it strikes me as rather irrational.
This is an especially easy mistake for non-Bayesian rationalists to make, because they lack a notion of degrees of belief. Without degrees of belief, there must instead be a set of trusted beliefs, plus a process for beliefs to go from untrusted to trusted; and it’s natural for that process to involve the experimental method and peer review. But this kind of naive scientism only makes sense for a consumer of science. If scientists themselves used the kind of “rationality” described in your post, they would never have run the experiments to determine whether lucid dreaming is a real thing, because the argument in your post concludes that you can’t rationally commit time and effort to testing uncertain hypotheses. So this kind of naive scientific-rationalism is somewhat self-contradictory.
As I argued, assigning accurate (perhaps low, perhaps high) probabilities to the truth of such claims (the general category into which lucid dreaming falls) does not make it harder, not even a little harder, to discover the truth about lucid dreaming. What makes it hard is the large number of similar but bogus claims to sift through, as well as the difficulty of lucid dreaming itself. Assigning an appropriate probability based on past experience with these sorts of claims actually helps us, precisely because it allows us to make good decisions about how much of our time to spend investigating such claims.
What you seem to be missing (maybe?) is that we need a general policy which we can be satisfied with in situations of this kind. You’re saying that what we should really do is trust our friend who is telling us about lucid dreaming (and, in fact, I agree with that policy). But if it were rational for us to assign a really low probability (I don’t think it is), that would be because we see a lot of similar claims which turn out to be false. We can still try many of these things, with an experimental attitude, if the payoff of finding a true claim balances well against the number of false claims we expect to sift through in the process. However, we probably don’t have the attention to look into all such cases, which means we may miss lucid dreaming by accident. But this is not a flaw in the strategy; it is just a difficulty of the situation.
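To make that policy concrete, here is a toy expected-value sketch. The function name and all the numbers are my own illustration, not anything from the discussion above: the idea is just that a low prior on claims of a given kind does not forbid investigating them; it feeds into a cost–benefit decision about whether testing such claims is worth our time on average.

```python
def worth_investigating(prior_true: float, payoff_if_true: float,
                        cost_per_trial: float) -> bool:
    """Decide whether testing one claim of this kind is worth it on average.

    prior_true: probability (from past experience) that such a claim is true.
    payoff_if_true: value of discovering a true claim (e.g. lucid dreaming works).
    cost_per_trial: time/effort spent checking one claim, true or bogus.
    """
    expected_payoff = prior_true * payoff_if_true
    return expected_payoff > cost_per_trial

# Even with a low prior, a large enough payoff justifies sifting
# through many bogus claims: expected value 0.05 * 100 = 5.0 > 2.0.
print(worth_investigating(prior_true=0.05, payoff_if_true=100.0,
                          cost_per_trial=2.0))  # prints True
```

Note that under this policy we will still "miss" some true claims whose expected value falls below the investigation cost; as the paragraph above says, that is a difficulty of the situation, not a flaw in the strategy.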
I’m frustrated because it seems like you are misunderstanding a part of the response Kindly and I are making, but you’re doing a pretty good job of engaging with our replies and trying to sift out what you think and where you start disagreeing with our arguments. I’m just not quite sure yet where the gap between our views is.