Just because it’s complicated doesn’t mean it has that particular complicated feature.
You can build a non-yourself machine that does logic, but knowing that the machine’s function corresponds to logical reasoning requires that you can do logic using the same machine that is the referent of “you”.
I guess I don’t really see where you’re going with this. In what circumstances might you need to know the answer to your question? Can you reduce it to an empirical or decision-theoretic question?
I am not trying to assess whether or not it is “good” or “practically useful” to pose questions about knowledge in terms of quantum mechanical descriptions of brains. I’m trying to find resources for questions about philosophy of mind and discovery of logical knowledge.
For example, someone might say that for propositions A and B, (if A → B then ~B → ~A) is a discovered piece of knowledge about the way all truth functions work. Thus, the (if … then …) I just mentioned is “true” and its truth exists in a wholly separate magisterium from propositions that can be subjected to empirical inquiry, arise as the arg max of some posterior probability distribution, and be thought of as “true” (or “exceedingly probable given current evidence”) in that sense.
My point is that, fundamentally, the knowledge that “(if A → B then ~B → ~A) is a discovered piece of knowledge about the way all truth functions work” is itself subject to empirical inquiry: it arises as the arg max of some posterior probability distribution and can be thought of as “true” (or “exceedingly probable given current evidence”) in exactly that sense (i.e., the empirical evidence would be some examination of amplitudes in a quantum configuration subspace dealing with human minds).
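As a side note, the claim that contraposition holds for every two-valued truth function is itself mechanically checkable: a brute-force pass over all truth assignments confirms the tautology. Here is a minimal sketch in Python (the function name `implies` is my own; any encoding of the material conditional would do):

```python
from itertools import product

def implies(p, q):
    # Material conditional: p -> q is false only when p is true and q is false.
    return (not p) or q

# Verify that (A -> B) -> (~B -> ~A) holds under every assignment of
# truth values to A and B, i.e. contraposition is a tautology.
for a, b in product([True, False], repeat=2):
    assert implies(implies(a, b), implies(not b, not a))

print("contraposition holds for all 4 assignments")
```

Of course, running such a check is itself a physical process in a brain or a computer, which is exactly the point at issue: the "logical" knowledge shows up as the outcome of an empirical procedure.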
I’m specifically trying to get at aspects of the theory of knowledge which whole branches of philosophers claim are outside of the magisterium in which Bayesian decision theory is applicable, and that this therefore entitles them to hold certain beliefs on the basis that they are “true” in that magisterium that can’t be touched by Bayes. My counterargument is that such knowledge about the alleged other magisteria must itself be (at least in principle) experimentally detectable in brains, at the level of QM.
Whether we can do such detection or have useful, specific models for it is a whole different ball of wax that doesn’t concern me in this specific question.
In general, if you find yourself stuck or confused on a question of philosophy, try the following things in order:
1. Try to reduce it to a decision problem.
2. Walk away and come back to it later with a fresh perspective.
3. Ignore the question; it probably didn’t matter anyway.