A bit late, but a related point. Let me start with probability theory. Probability theory is considerably more magic than logic, since only the latter is “extensional” or “compositional”; the former is not. This just means that in logic the truth values of A and B determine the truth value of complex statements like A∧B (“A and B”). The same is not the case for probability theory: the probabilities of A and B do not determine the probability of A∧B; they only constrain it to a certain range of values.
For example, if A and B have probabilities 0.6 and 0.5 respectively, the probability of the conjunction A∧B is merely restricted to lie somewhere between 0.1 and 0.5 (the lower bound is 0.6 + 0.5 - 1, the upper bound is min(0.6, 0.5)). This is why we can’t do much “probabilistic deduction”, in contrast to logical deduction: in propositional logic, all the truth values of complex statements are determined by the truth values of the atomic ones.
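To make that range concrete, here is a minimal sketch (in Python, purely illustrative; the function names are mine, not from the comment) of the contrast: the truth of a conjunction is a function of the truth of its parts, while the probability of a conjunction is only bounded by the probabilities of its parts.

```python
# Logic is compositional: the truth value of "A and B" is fully determined.
def conjunction_truth(a: bool, b: bool) -> bool:
    return a and b

# Probability is not: P(A) and P(B) only bound P(A and B) to a range
# (the so-called Frechet bounds).
def conjunction_bounds(p_a: float, p_b: float) -> tuple[float, float]:
    lower = max(0.0, p_a + p_b - 1.0)  # A and B overlap as little as possible
    upper = min(p_a, p_b)              # one event is contained in the other
    return lower, upper

print(conjunction_bounds(0.6, 0.5))  # roughly (0.1, 0.5), up to float rounding:
                                     # a range, not a single value
```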
In probability theory we need much more given information than in logic: we require a “probability distribution” over all statements, including the complex ones (which grow exponentially with the number of atomic statements), and this distribution is only required not to violate the rather permissive axioms of probability theory. In essence, probability theory requires most inference questions to be already settled in advance. By magic.
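For a sense of what “settled in advance” means here, a small illustrative sketch (the names are mine) of how the space of possible worlds that the distribution has to cover blows up with the number of atomic statements:

```python
from itertools import product

def possible_worlds(n_atoms: int):
    """All truth assignments to n_atoms atomic propositions."""
    return list(product([True, False], repeat=n_atoms))

print(len(possible_worlds(3)))   # 8 worlds for 3 atomic statements
print(len(possible_worlds(10)))  # 1024 worlds for 10
# A full joint distribution has to assign a probability to every one of these
# worlds before any "Bayesian" inference can get off the ground.
```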
This already means a purely “Bayesian” AI can’t be built, as magic doesn’t exist, and some other algorithmic means is required to generate a probability distribution in the first place. After all, probability distributions are not directly given by observation.
(Though logic does allow for inference, it ultimately also fails as an AI solution, partly because purely deductive logical inference is not sufficient for intelligence, or even very important to it, and partly because the real-world inputs and outputs of an AI do not usually come in the form of discrete propositional truth values. Nor as probabilities, for that matter.)
The point about probability theory generalizes to utility theory. Utility functions (utility “distributions”) are not extensional either. Nor are preference orderings extensional in any sense. A preference order between atomic propositions implies hardly anything about preferences between complex propositions. We (as humans) can easily infer that someone who likes lasagna better than pizza, and lasagna better than spaghetti, probably also likes lasagna better than pizza AND spaghetti. Utility theory doesn’t allow for such “inductive” inferences.
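A toy illustration of that last point (the numbers are mine, chosen only to mirror the example): a utility assignment to the atomic outcomes leaves the utility of the complex outcome entirely open, so the theory itself does not license the inference.

```python
# Utilities for the atomic outcomes from the example.
atomic_utility = {"lasagna": 3.0, "pizza": 2.0, "spaghetti": 1.0}

# The utility of the complex outcome "pizza AND spaghetti" is an independent
# degree of freedom: nothing in the atomic values fixes it.
utility_pizza_and_spaghetti = 4.0  # could just as coherently have been 0.5

print(atomic_utility["lasagna"] > utility_pizza_and_spaghetti)  # False here
```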
But while these theories don’t solve the general problem of inductive algorithmic inference (i.e., artificial intelligence), they at least set, for us humans, some weak coherence constraints on rational sets of beliefs and desires. They are useful for the study of rationality, if not for AI.
Great points. I would only add that I’m not sure the “atomic” propositions even exist. The act of breaking a real-world scenario into its “atomic” bits requires magic, meaning in this case a precise truncation of intuited-to-be-irrelevant elements.
Yeah. In logic it is usually assumed that sentences are atomic when they do not contain logical connectives like “and”. And formal (Montague-style) semantics makes this more precise, since logic may be hidden in linguistic form. But of course humans don’t start out with language. We have some sort of mental activity, which we somehow synthesize into language, and similar thoughts/propositions can be expressed alternatively with either an atomic or a complex sentence. So atomic sentences seem definable, but not abstract atomic propositions as objects of belief and desire.