From the inside it may even look like circular reasoning: after all, you can only learn about the evolution that justifies your learning ability by using the learning ability that evolution created. But this is just a map-territory confusion. The actual causal process in reality that makes our cognition engines work is straightforward and non-paradoxical. The cognition engine works even if it is not certain that it works.
Can you explain what you mean here? If you assume that out in the territory you have a brain with these properties, you can explain how you come to know things. But that’s an assumption. Doesn’t lead you to know things. “From the inside” is what we care about here; your knowledge is in the map too, at least to you.
I’ll be talking about the Münchhausen trilemma in more detail in a separate post. But here are some highlights:
“from the inside” is what we care about here
There are two things that we care about:
1. We have a map representing the territory to a good degree.
2. We are justifiably certain of 1.
If our brains are created by natural selection, and then they discover natural selection, we have a straightforward, non-paradoxical causal history for how our maps came to correlate with the territory:
Natural Selection → Brain → Natural Selection*
Where A* means “a map of A”.
But that’s an assumption. Doesn’t lead you to know things.
In the map it is an assumption. But in the territory it’s either true or false, and it doesn’t matter whether we’ve assumed it or not.
A map can correspond to the territory even if no one knows about it. We can have 1. without 2.
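To make that concrete, here is a toy sketch in Python (the hidden rule, the parameters, and all the numbers are my own illustration, nothing from the post): a blind mutate-and-select loop produces a predictor whose parameters end up tracking a hidden rule, and that correspondence is a plain fact about the simulation whether or not anything inside it represents it.

```python
import random

random.seed(0)

# The territory: a hidden regularity standing in for "the selection pressure".
def territory(x):
    return 3.0 * x + 2.0

# Fitness: brains whose predictions track the territory better score higher.
def fitness(params):
    a, b = params
    xs = [random.uniform(-1.0, 1.0) for _ in range(20)]
    return -sum((territory(x) - (a * x + b)) ** 2 for x in xs)

# "Natural Selection": mutate-and-select, no foresight, no justification.
def evolve(generations=200, pop_size=50):
    population = [(random.gauss(0, 1), random.gauss(0, 1)) for _ in range(pop_size)]
    for _ in range(generations):
        survivors = sorted(population, key=fitness, reverse=True)[: pop_size // 5]
        population = [
            (a + random.gauss(0, 0.1), b + random.gauss(0, 0.1))
            for a, b in survivors
            for _ in range(5)
        ]
    return max(population, key=fitness)

brain = evolve()   # the Brain: a product of selection, carrying a map (a, b)
print(brain)       # roughly (3.0, 2.0): the map matches the hidden rule

# Nothing inside the simulation represents *that* the map is accurate.
# The correspondence (point 1) holds without any certainty about it (point 2);
# it is only visible from outside, by comparing `brain` against `territory`.
```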
your knowledge is in the map too
You can map the map-territory correspondence as well. As soon as we consider it, we get:
(Natural Selection → Brain → Natural Selection*)*
This gives us a map of the map-territory correspondence, then a map of the correspondence of that map, and so on.
So even though we do not have 2., and in fact cannot have 2. in principle (since any reason for certainty would have to go through our mind), we can have:
2′. We are somewhat justifiably somewhat confident in 1
and
3′. We are somewhat justifiably somewhat confident in 2′
and so on as long as we have enough data to construct a meta-model of the next level.
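One way to write this ladder down, in my own notation rather than anything from the post: let M ≈ T stand for claim 1, and let E_k be the (finite) evidence available at meta-level k.

```latex
% My own notation: M \approx T is claim 1; E_k is the evidence at meta-level k.
\begin{align*}
  2' &:\quad P(M \approx T \mid E_1) = c_1, & c_1 &< 1 \\
  3' &:\quad P(\text{$c_1$ is well-calibrated} \mid E_2) = c_2, & c_2 &< 1 \\
     &\quad\;\vdots
\end{align*}
```

Each level is an ordinary model fitted to finite data about the level below it, so no level ever reaches certainty, and the tower extends only as far as the data does.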
No offense. This sounds rather trivial, and like you’re sidestepping rather than answering the question.
1. I have some story for how my mind knows things.
2. I'm justified in believing (1) is true.
Of course, (1) might be true. The problem is you can't know that. Like you said.
And yeah, you can bump it up meta-levels, which also doesn't solve the issue.
The point is to refine the wrong question about certainty, 2., into a better question about probabilistic knowledge, 2′. If you just want an answer to 2., then the answer is ‘no’: we can’t be certain that our knowledge is true. Then again, we don’t need to be.
If your question is how this is different from circular reasoning, consider the difference:
“Evolution produced my brain, which discovered evolution. Having considered this thought, I’m now certain of both, and so the case is closed. No further arguments or evidence can ever persuade me otherwise.”
“I can never be certain of anything, but to the best of my knowledge the situation looks exactly the way it would look if my brain had been produced by evolution, which then allowed my brain to discover evolution. I’ll be on the lookout for counter-evidence, because maybe everything I know is a lie; but for now, on full reflection of my knowledge, including the techniques of rationality and notions of simplicity and computational complexity, this seems to be the most plausible hypothesis.”
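The difference is easy to make concrete with one line of Bayes. A minimal sketch (the numbers are arbitrary, chosen only to show the mechanism): a credence of exactly 1 is immune to all evidence by construction, while a credence of 0.95 can still be dragged down by counter-evidence.

```python
def bayes_update(prior, likelihood_h, likelihood_not_h):
    """Posterior P(H | E) given prior P(H) and likelihoods P(E | H), P(E | ~H)."""
    evidence = likelihood_h * prior + likelihood_not_h * (1 - prior)
    return likelihood_h * prior / evidence

# Circular stance: certainty. "Case closed" is baked into the prior,
# so even strongly disconfirming evidence (P(E|H) = 0.01) changes nothing.
p = bayes_update(1.0, likelihood_h=0.01, likelihood_not_h=0.99)
print(p)  # 1.0

# Fallibilist stance: high but non-unit confidence. The same evidence bites.
q = bayes_update(0.95, likelihood_h=0.01, likelihood_not_h=0.99)
print(round(q, 3))  # 0.161: the hypothesis can still lose to counter-evidence
```

The first stance is circular because its conclusion is insensitive to anything outside itself; the second keeps the same causal story on the table as a hypothesis that evidence could, in principle, remove.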