You can’t really “not think in terms of probability” by refusing to think about probabilities explicitly.
That just mistakes how the human mind and human intelligence work. Our brain is not made to think in terms of probability.
If your beliefs are coherent, they imply an underlying probabilistic model, whether you acknowledge it or not.
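The standard argument behind this claim is the Dutch book theorem: if your credences violate the probability axioms, someone can offer you a set of bets, each of which you accept as fair by your own lights, that together guarantee you a loss. A toy sketch (my own illustrative example, not from this thread):

```python
# If your credences are probabilistically incoherent, a bookie can sell you
# bets you each consider fair that jointly lose money in every outcome.

def bet_payoff(stake, credence, outcome):
    """Payoff of buying, for stake * credence, a ticket that pays `stake`
    if `outcome` is true. By your own credence, this is a fair price."""
    return (stake if outcome else 0.0) - stake * credence

# Incoherent credences: P(rain) + P(no rain) = 1.2 > 1.
p_rain, p_no_rain = 0.6, 0.6

results = {}
for rains in (True, False):
    # You buy both tickets; exactly one pays out.
    total = bet_payoff(1.0, p_rain, rains) + bet_payoff(1.0, p_no_rain, not rains)
    results[rains] = total

print(results)  # ~ -0.2 in both outcomes: a guaranteed loss
```

The point is not that humans consciously run this math, but that any set of betting dispositions immune to such sure losses is mathematically equivalent to some probability assignment.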
I think most people who believe their beliefs are completely coherent are deluding themselves. Assuming that complete coherence of beliefs is the natural state of the human mind gets a lot wrong about what goes on in human minds. For a long time in AI research, there was a belief that AIs would likely have coherent beliefs. With GPT, we see that the best intelligence we can build on computers doesn’t seem to have coherent beliefs either.
Julia Galef writes about how noticing confusion is a key skill of a rationalist. The state of noticing confusion is one where you see that the evidence doesn’t seem to really fit and you don’t have a good idea of the right hypothesis.
Confusion calls for more investigation. It’s normal not to have a clear hypothesis when you are investigating something that confuses you.
Thomas Kuhn writes about how new scientific paradigms always start with someone noticing anomalies and investigating them. If you don’t engage in that investigation because you don’t have a hypothesis with decent probability for how the facts fit together, you are not going to find new paradigms, because finding them involves working for a decent amount of time in a space with a lot of unknowns.
That just mistakes how the human mind and human intelligence work. Our brain is not made to think in terms of probability.
I didn’t intend to claim anything about how the brain or human intelligence works. Rather, I’m saying probability theory points at a correct way to reason for ideal agents, which humans can try to approximate. I expect approximations which involve thinking explicitly in terms of probabilities (not necessarily only in terms of probabilities) will tend to outperform approximations that don’t.
Anyway, back to the object level: I would welcome more evidence on the question of aliens, but I personally don’t feel that confused by current observations, and believe they are well-explained by higher prior probability hypotheses that do not involve aliens.
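The reasoning here is just Bayes’ rule: posterior ∝ prior × likelihood. Even if some observations are more likely under an exotic hypothesis, a sufficiently higher prior on mundane explanations can leave the mundane hypothesis with the larger posterior. A toy computation with made-up numbers (mine, not the commenter’s actual credences):

```python
# Toy Bayesian comparison of two hypotheses for some body of UFO reports.
# All numbers below are illustrative assumptions, not claims about the world.

priors = {"mundane": 0.999, "aliens": 0.001}

# Assumed P(observed reports | hypothesis): the reports are taken to be
# ten times likelier under "aliens", yet that is not enough to overcome priors.
likelihoods = {"mundane": 0.05, "aliens": 0.5}

unnorm = {h: priors[h] * likelihoods[h] for h in priors}
z = sum(unnorm.values())
posteriors = {h: unnorm[h] / z for h in unnorm}

print(posteriors)  # mundane ~ 0.990, aliens ~ 0.010
```

A tenfold likelihood ratio moves the aliens hypothesis from 0.1% to about 1%: updated, but still far from the leading explanation.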
Perhaps this is the reason the post received some downvotes: it reads somewhat like a call for others to do expensive investigatory work and/or deductive thinking. Personally, I feel I’ve already done enough investigation and deduction on my own on this topic, and more (by myself or others) is probably not worth the effort.
Note, there’s sometimes a tradeoff between gathering more facts and thinking longer to deduce more from the facts you already have. In this case, I think there’s already more than enough evidence available for an ideal agent to conclude from a cursory inspection that the observed evidence is not well-explained by actual aliens. But you don’t need to be an ideal agent to draw similar conclusions: you merely need to apply some effort and reasoning skills which are pretty common among LW readers, but not so common outside these circles (some of the skills I have in mind are those described by the bullet points in my reply here.)
Rather, I’m saying probability theory points at a correct way to reason for ideal agents, which humans can try to approximate.
Probability theory does not do that. It does not make your reasoning robust against unknown unknowns.
In this case, I think there’s already more than enough evidence available for an ideal agent to conclude from a cursory inspection that the observed evidence is not well-explained by actual aliens.
From my perspective, it doesn’t look like there is any explanation that accounts well for the available evidence. That goes both for alien-involving explanations and for non-alien-involving ones. That’s what makes the situation confusing.
But you don’t need to be an ideal agent to draw similar conclusions: you merely need to apply some effort and reasoning skills which are pretty common among LW readers, but not so common outside these circles
I’m unsure why you believe that LW readers are that much better at reasoning than highly promoted intelligence analysts.