I don’t see the advantage of treating states of knowledge as arbitrary complex numbers (quantum amplitudes) rather than real numbers on the closed interval [0,1] (probabilities).
I think that for there to be an advantage one way or the other, you have to have some goal or cost functional in mind. If you’re talking about survival, belief propagation, etc., then it is certainly often advantageous to compress large, unwieldy descriptions of states of knowledge down into probabilities.
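For concreteness, here is a minimal sketch of the formal difference at issue (in Python with NumPy; the two-outcome state is my own toy example, not anything from this thread): amplitudes are complex numbers that can interfere before they are ever turned into probabilities, and the Born rule's squaring step is exactly where the phase information gets thrown away.

```python
import numpy as np

# A two-outcome state of knowledge held as complex amplitudes rather than
# probabilities; normalized so the squared magnitudes sum to 1.
amplitudes = np.array([1 / np.sqrt(2), 1j / np.sqrt(2)])

# Born rule: squared magnitudes collapse amplitudes to probabilities in [0, 1],
# discarding the relative phase in the process.
probabilities = np.abs(amplitudes) ** 2
print(probabilities)  # [0.5 0.5]

# The phase matters when amplitudes combine *before* squaring: two equal-weight
# paths with opposite phase cancel exactly (destructive interference).
path_a, path_b = 1 / np.sqrt(2), -1 / np.sqrt(2)
print(abs(path_a + path_b) ** 2)  # 0.0 -- summing probabilities (0.5 + 0.5) never gives 0
```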
There are different types of knowledge that we categorize things into. What comes to my mind is the difference between conclusions drawn by investigating axioms of logic and conclusions drawn from empirical evidence. When facing the claim that “empirical reasoning cannot play any role in conclusions determined by investigating logical axioms,” I am curious about the rebuttal: “but conclusions determined by investigating logical axioms are themselves in principle experimentally detectable, and thus, in the extreme limit of sensitivity of measuring devices, one could draw conclusions of logic experimentally.”
There is no such thing as a conclusion drawn from logic that differs from that conclusion’s instantiation on some brain hardware somewhere. I guess what I am saying is that either we embrace logically proper names (something philosophy seems to have abandoned), in the sense that we agree that a cognitive object is an ontologically existing entity and that our local instantiation of that object is merely an encoded representation of it… or else what we call “a conclusion from the axioms of logic” is really just the label we attach to a cluster over a subspace in the all-of-physics amplitude distribution.
Just because it’s complicated doesn’t mean it has that particular complicated feature.
You can build a non-yourself machine that does logic, but knowing that the machine’s function corresponds to logical reasoning requires that you can do logic using the same machine that is the referent of “you”.
I guess I don’t really see where you’re going with this. In what circumstances might you need to know the answer to your question? Can you reduce it to an empirical or decision-theoretic question?
I am not trying to assess whether or not it is “good” or “practically useful” to pose questions about knowledge in terms of quantum-mechanical descriptions of brains. I’m trying to find resources on questions about the philosophy of mind and the discovery of logical knowledge.
For example, someone might say that for propositions A and B, (if A → B then ~B → ~A) is a discovered piece of knowledge about the way all truth functions work. Thus, the (if … then …) I just mentioned is “true,” and its truth exists in a wholly separate magisterium from propositions that can be subjected to empirical inquiry, arise as the arg max of some posterior probability distribution, and be thought of as “true” (or “exceedingly probable given current evidence”) in that sense.
My point is that, fundamentally, the knowledge that “(if A → B then ~B → ~A) is a discovered piece of knowledge about the way all truth functions work” is itself subjectable to empirical inquiry, arises as the arg max of some posterior probability distribution, and can be thought of as “true” (or “exceedingly probable given current evidence”) in that same sense (i.e., the empirical evidence would be some examination of amplitudes in a quantum configuration subspace dealing with human minds).
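As an aside tying this to the earlier point about machines that do logic: the contrapositive schema above is exactly the sort of conclusion that can be instantiated on non-brain hardware. A minimal sketch in Lean (my own illustration, not anything proposed in this thread) states it and has the machine check it:

```lean
-- The contrapositive schema from the example, stated for arbitrary
-- propositions A and B and verified by the proof checker:
-- from A → B we may conclude ¬B → ¬A (written ~B → ~A above).
theorem contrapose (A B : Prop) (h : A → B) : ¬B → ¬A :=
  fun hnb ha => hnb (h ha)
```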
I’m specifically trying to get at aspects of the theory of knowledge that whole branches of philosophy claim lie outside the magisterium in which Bayesian decision theory applies, and which therefore (they claim) entitle them to hold certain beliefs on the grounds that those beliefs are “true” in a magisterium Bayes can’t touch. My counterargument is that such knowledge about the alleged other magisteria must itself be (at least in principle) experimentally detectable in brains, at the level of QM.
Whether we can do such detection or have useful, specific models for it is a whole different ball of wax that doesn’t concern me in this specific question.
In general, if you find yourself stuck or confused on a question of philosophy, try the following things in order:
1. Try to reduce it to a decision problem.
2. Walk away and come back to it later with a fresh perspective.
3. Ignore the question; it probably didn’t matter anyway.