If you are a philosopher whose daily work is to write papers, criticize other people’s papers, and respond to others’ criticisms of your own papers, then you may look at Occam’s Razor and shrug. Here is an end to justifying, arguing and convincing. You decide to call a truce on writing papers; if your fellow philosophers do not demand justification for your un-arguable beliefs, you will not demand justification for theirs. And as the symbol of your treaty, your white flag, you use the phrase “a priori truth”.
Or the word “intuition”.
But to a Bayesian, in this era of cognitive science and evolutionary biology and Artificial Intelligence, saying “a priori” doesn’t explain why the brain-engine runs. If the brain has an amazing “a priori truth factory” that works to produce accurate beliefs, it makes you wonder why a thirsty hunter-gatherer can’t use the “a priori truth factory” to locate drinkable water. It makes you wonder why eyes evolved in the first place, if there are ways to produce accurate beliefs without looking at things.
The claim that there is some non-inferential a priori truth, or accurate intuition, is not equivalent to the claim that a priori truth is available about everything. Moreover, non-inferential, soundness-style, a priori truth has an evolutionary justification: we might believe X, despite not having seen it with our own eyes, because only those of our ancestors who believed X survived. Innate knowledge, the naturalistic a priori, must be sharply distinguished from the mystical a priori (and both must be distinguished from inference-from-premises).
“James R. Newman said: ‘The fact that one apple added to one apple invariably gives two apples helps in the teaching of arithmetic, but has no bearing on the truth of the proposition that 1 + 1 = 2.’ The Internet Encyclopedia of Philosophy defines “a priori” propositions as those knowable independently of experience. Wikipedia quotes Hume: Relations of ideas are “discoverable by the mere operation of thought, without dependence on what is anywhere existent in the universe.” You can see that 1 + 1 = 2 just by thinking about it, without looking at apples.”
And that is quite uncontentious, provided that it applies to truth as validity (correct inference from possibly arbitrary premises), and not as soundness, or the non-inferential a priori (for instance, the question of whether one's chosen premises are really true).
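The validity/soundness distinction can be put in a minimal sketch (illustrative names only, not any real logic library): the same rule application is valid from true and from false premises alike, so the derivation by itself cannot tell you whether its conclusion is sound.

```python
def modus_ponens(premise, implication):
    """Given P and (P -> Q), return Q. Valid whatever P's actual truth value."""
    antecedent, consequent = implication
    assert premise == antecedent, "rule does not apply to this premise"
    return consequent

# Stand-in for the facts the inference engine cannot see.
world = {"it is raining": False}

conclusion = modus_ponens("it is raining",
                          ("it is raining", "the streets are wet"))
print(conclusion)              # validly derived from the premise
print(world["it is raining"])  # but the premise is false, so soundness fails
```

The engine happily derives the conclusion; checking the premise against the world is a separate, non-inferential step.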
“You could see someone else’s engine operating materially, through material chains of cause and effect, to compute by “pure thought” that 1 + 1 = 2. How is observing this pattern in someone else’s brain any different, as a way of knowing, from observing your own brain doing the same thing? When “pure thought” tells you that 1 + 1 = 2, “independently of any experience or observation”, you are, in effect, observing your own brain as evidence.”
And when your Pure Thought tells you that the principle of non-contradiction is true (something you need in order to infer that 1 + 1 = 2), you may be benefiting from your ancestors' hard-won experience. The problem is that the a priori needs to be defined in terms of isolated systems, and no system is ultimately isolated.
“If this engine works at all, then it should have the same output if it observes (with eyes and retina) a similar brain-engine carrying out a similar collision, and copies into itself the resulting pattern. In other words, for every form of a priori knowledge obtained by “pure thought”, you are learning exactly the same thing you would learn if you saw an outside brain-engine carrying out the same pure flashes of neural activation. The engines are equivalent, the bottom-line outputs are equivalent, the belief-entanglements are the same.”
If something can only be learnt through empiricism, then offloading it to another Engine that doesn't have the appropriate sensors doesn't help. On the other hand, the claim that any a priori inference can be offloaded to another Engine, an isolated external processor, does not disprove the existence of the a priori. The a posteriori is that which cannot be learnt by an isolated (sensorless) system; the inferential a priori is that which can, and it doesn't matter which Engine is doing the processing.
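A toy sketch of that isolated-system definition (hypothetical names, purely illustrative): an a posteriori fact needs a sensor, while an inferential a priori fact can be computed by any engine, sensors or not.

```python
class Engine:
    def __init__(self, sensor=None):
        self.sensor = sensor  # None models a fully isolated system

    def infer_sum(self, a, b):
        # Learnable by pure computation: which Engine runs it is irrelevant.
        return a + b

    def observe_temperature(self):
        # Learnable only empirically: an isolated Engine simply cannot.
        if self.sensor is None:
            raise RuntimeError("no sensors: this fact is not available a priori")
        return self.sensor()

isolated = Engine()
embodied = Engine(sensor=lambda: 21.5)  # stub sensor reading, for illustration

# Both engines agree on the inferential a priori fact.
assert isolated.infer_sum(1, 1) == embodied.infer_sum(1, 1) == 2
print(embodied.observe_temperature())
# isolated.observe_temperature() would raise: offloading to another sensorless
# Engine would not help either.
```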
“There is nothing you can know “a priori”, which you could not know with equal validity by observing the chemical release of neurotransmitters within some outside brain. What do you think you are, dear reader?”
So long as you are talking about inference from premises. But observing their brain is not going to tell me that their premises are true.
“Are the sort of neural flashes that philosophers label “a priori beliefs”, arbitrary?”
Those flashes are non-inferential, soundness-style a priori intuitions, and are not addressed by the foregoing.
You can’t excuse calling a proposition “a priori” by pointing out that other philosophers are having trouble justifying their propositions. If a philosopher fails to explain something, this fact cannot supply electricity to a refrigerator, nor act as a magical factory for accurate beliefs. There’s no truce, no white flag, until you understand why the engine works.
If the engine does nothing but infer conclusions from premises, however computationally or materialistically, you still don't know some important things: whether the premises are true, and whether the conclusions are sound.
If you clear your mind of justification, of argument, then it seems obvious why Occam’s Razor works in practice: we live in a simple world, a low-entropy universe in which there are short explanations to be found.
We do, according to our explanations... which were selected for simplicity in the first place. You don't have an insight into the universe separate from explanations.
Perhaps you cannot argue anything to a hypothetical debater who has not accepted Occam’s Razor, just as you cannot argue anything to a rock. A mind needs a certain amount of dynamic structure to be an argument-acceptor. If a mind doesn’t implement Modus Ponens, it can accept “A” and “A->B” all day long without ever producing “B”. How do you justify Modus Ponens to a mind that hasn’t accepted it? How do you argue a rock into becoming a mind?
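The Modus Ponens point can be sketched directly (a toy model, not a real logic library): a "mind" that stores "A" and "A -> B" but does not implement the rule will never produce "B", no matter how long it runs.

```python
def belief_closure(beliefs, implications, implements_modus_ponens):
    """Return every belief the mind ends up holding."""
    beliefs = set(beliefs)
    if not implements_modus_ponens:
        return beliefs  # accepts premises all day long, derives nothing
    changed = True
    while changed:  # forward-chain until no new belief is produced
        changed = False
        for a, b in implications:
            if a in beliefs and b not in beliefs:
                beliefs.add(b)
                changed = True
    return beliefs

rock_like = belief_closure({"A"}, [("A", "B")], implements_modus_ponens=False)
mind_like = belief_closure({"A"}, [("A", "B")], implements_modus_ponens=True)
assert "B" not in rock_like  # no rule, no conclusion: you can't argue a rock
assert "B" in mind_like      # the dynamic structure does the deriving
```

No argument fed into `belief_closure` can switch the flag on from inside; the rule is a precondition of the derivation, not one of its products.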
Then a priori truths of a non-inferential kind are preconditions of rationality. Which has nothing to do with materialism or computationalism.