Hi, I come from a regular science background (university grad student) so I may be biased, but I still have some questions.
Reasoning under uncertainty sounds a lot like Fuzzy Logic. Can you elaborate on how your approach differs?
By contrast, realistic reasoners must operate under logical uncertainty: we often know how a machine works, but not precisely what it will do.
What exactly do you mean by “we know how it works”? Is there a known or estimated probability for what the machine will do?
Logically uncertain reasoning, then, requires the consideration of logically impossible possibilities.
“Impossible possibilities” sounds like a contradiction (an oxymoron?), which I think is unusual for science writing. Does this add something to the paper, or would another term be better?
Why do you consider that the black box implements a “Rube Goldberg machine”? I looked up Rube Goldberg machine on Wikipedia, and to me it sounds more like a joke than something that requires scientific assessment. Is there other literature on that?
Have you considered sending your work to a peer-reviewed conference or journal? This could give you some feedback and add more credibility to what you are doing.
Best regards and I hope this doesn’t sound too critical. Just want to help.
Hmm, you seem to have missed the distinction between environmental uncertainty and logical uncertainty.
Imagine a black box with a Turing machine inside. You don’t know which Turing machine is inside; all you get to see are the inputs and the outputs. Even if you had unlimited deductive capability, you wouldn’t know how the black box behaved: this is because of your environmental uncertainty, of not knowing which Turing machine the box implemented.
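To make the environmental-uncertainty case concrete, here’s a rough sketch (not from the paper; the candidate machines and their names are made up for illustration). Ordinary Bayesian conditioning on observed input/output pairs is exactly the right tool for this kind of uncertainty:

```python
# Hypothetical sketch of environmental uncertainty: the reasoner sees
# only inputs and outputs of a hidden machine drawn from a set of
# candidates. No amount of deduction reveals which one is inside, but
# observations let you condition.
candidates = {
    "successor": lambda x: x + 1,
    "doubler":   lambda x: 2 * x,
    "squarer":   lambda x: x * x,
}
prior = {name: 1 / 3 for name in candidates}  # uniform over candidates

def update(prior, observed_in, observed_out):
    """Condition on seeing the box map observed_in to observed_out."""
    likelihood = {
        name: 1.0 if f(observed_in) == observed_out else 0.0
        for name, f in candidates.items()
    }
    unnorm = {name: prior[name] * likelihood[name] for name in prior}
    total = sum(unnorm.values())
    return {name: p / total for name, p in unnorm.items()}

# Seeing the box map 2 to 4 rules out "successor" but cannot separate
# "doubler" from "squarer" -- the residual uncertainty is environmental.
posterior = update(prior, 2, 4)
```

Note that a perfect deductive reasoner with this same evidence would be stuck in exactly the same place: the remaining uncertainty is about the world, not about logic.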
Now imagine a Python computer program. You might read the program and understand it, but not know what it outputs (for lack of deductive capability). As a simple concrete example, imagine that the program searches for a proof of the Riemann hypothesis using less than a googol symbols: in this case, the program may be simple, but the output is unknown (and very difficult to determine). Your uncertainty in this case is logical uncertainty: you know how the machine works, but not what it will do.
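For flavor, here’s a toy stand-in (mine, not the paper’s): enumerating formal proofs of the Riemann hypothesis is far beyond a snippet, so this uses a similarly shaped question, a finite search for a Goldbach counterexample. The code is short and fully specified, yet its output is not obvious without either running it or doing real deduction:

```python
# Hypothetical stand-in for the proof-search example: a fully specified
# program whose output you cannot read off the source without deduction.

def is_prime(n):
    if n < 2:
        return False
    return all(n % d for d in range(2, int(n ** 0.5) + 1))

def counterexample_below(bound):
    """Return 'found' if some even n in [4, bound) is NOT a sum of two
    primes, else 'not found'. Simple code, nonobvious output."""
    for n in range(4, bound, 2):
        if not any(is_prime(p) and is_prime(n - p) for p in range(2, n)):
            return "found"
    return "not found"

print(counterexample_below(1000))
```

Before running it, your uncertainty about which string it prints is purely logical: you know every rule the program follows.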
Existing methods for reasoning under uncertainty (such as standard Bayesian probability theory) all focus on environmental uncertainty: they assume that you have unlimited deductive capability. A principled theory of reasoning under logical uncertainty does not yet exist.
“impossible possibilities” sounds like a contradiction
Consider the Python program that searches for a proof of the Riemann hypothesis: you can imagine it outputting either “proof found” or “no proof found”, but one of these possibilities is logically impossible. The trouble is, you don’t know which possibility is logically impossible. Thus, when you reason about these two possibilities, you are considering at least one logically impossible possibility.
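Stated as a tiny sketch (my illustration, not a result from the paper):

```python
# Illustration: a logically uncertain reasoner spreads credence over
# both outputs of a deterministic program. The program has exactly one
# actual output, so at least one of these entries is credence in a
# logically impossible possibility -- yet the distribution itself is
# perfectly coherent.
credence = {"proof found": 0.5, "no proof found": 0.5}
assert abs(sum(credence.values()) - 1.0) < 1e-9
```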
I hope this helps answer your other questions, but briefly:
Fuzzy logic is only loosely related. It’s traditionally used in scenarios where the objects themselves can be “partly true”, whereas in most simple models of logical uncertainty we consider cases where the objects are always either true or false but you haven’t been able to deduce which is which yet. That said, probabilistic logics (such as this one) bear some resemblance to fuzzy logics.
When we say that you know how the machine works, we mean that you understand the entire construction of the machine and all the physical rules governing it. That is, we assert that you could write a computer program which would output “0” if the machine drops the ball into the top slot, and “1” if it drops the ball into the bottom slot. (Trouble is, while you could write the program, you may not know what it will output.)
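In other words, something like the following (a made-up toy transition rule standing in for the machine’s physics):

```python
# Hypothetical sketch: "knowing how the machine works" means you could
# write a deterministic simulator of it. The transition rule below is
# an arbitrary stand-in for the machine's physics.

def simulate(initial_state, steps):
    """Step a fully known transition rule; return "0" for the top slot,
    "1" for the bottom slot."""
    state = initial_state
    for _ in range(steps):
        state = (3 * state + 1) % 97  # fully specified rule
    return "0" if state % 2 == 0 else "1"

# Every rule above is known to you, yet predicting the return value
# without actually running (or mentally simulating) the loop requires
# deduction -- that gap is logical uncertainty.
print(simulate(5, 10))
```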
The Rube Goldberg machine thought experiment is used to demonstrate the difference between environmental uncertainty (not knowing which machine is being used) and logical uncertainty (not being able to deduce how the machine acts). I’m sorry if the reference to physical Rube Goldberg machines confused you.
This is an overview document; it mostly just describes the field (which many are ignorant of) and doesn’t introduce any particularly new results. Therefore, I highly doubt we’ll put it through peer review. But rest assured, I really don’t think we’re hurting for credibility in this domain :-)
(The field was by no means started by us. If it’s arguments from authority that you’re looking for, you can trace this topic back to Boole in 1854 and Bernoulli in 1713, picked up in a more recent century by Los, Gaifman, Halpern, Hutter, and many, many more in modern times. See also the intro to this paper, which briefly overviews the history of the field, which is on the same topic, and which is peer reviewed. See also the references in that paper; it contains a pretty extensive list.)
Thanks a lot, that clears up a lot of things. I guess I have to read up on the Riemann hypothesis, etc.
Maybe your Introduction could benefit from discussing the previous work of Hutter, etc. instead of putting all references in one sentence. Then the lay person would know that you are not making all the terms up.
A more precise way to avoid the oxymoron is “logically impossible epistemic possibility”. I think ‘Epistemic possibility’ is used in philosophy in approximately the way you’re using the term.