Return on investment in AI seems to become sub-linear beyond a certain point. Because the field still depends on specific breakthroughs, it’s doubtful how effective parallelized research can be. Hence, my guess would be that we don’t scale because we currently can’t scale effectively.
What you need to remember is that all of this applies to probabilistic arguments with probabilistic conclusions; of course deductive reasoning can be run backward. However, when evidence is presented as contributing to a belief, omitting some of it (as you inevitably will when reasoning backward) detaches the resulting belief from the thing it is supposed to track. If some of the evidence doesn’t contribute, the (probabilistic) belief can’t reflect reality. You seem to conceptualize arguments as ones whose conclusion must hold whenever they are valid and their premises are true, which doesn’t describe the vast majority of them.
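To make this concrete, here is a toy Bayesian update with made-up numbers (a hypothesis $H$ and two conditionally independent pieces of evidence $E_1$ and $E_2$; none of these figures come from the original post):

$$P(H)=0.5,\qquad P(E_1\mid H)=0.8,\ P(E_1\mid\neg H)=0.4,\qquad P(E_2\mid H)=0.3,\ P(E_2\mid\neg H)=0.6.$$

Updating on $E_1$ alone gives

$$P(H\mid E_1)=\frac{0.8\cdot 0.5}{0.8\cdot 0.5+0.4\cdot 0.5}=\frac{2}{3},$$

while updating on both gives

$$P(H\mid E_1,E_2)=\frac{0.8\cdot 0.3\cdot 0.5}{0.8\cdot 0.3\cdot 0.5+0.4\cdot 0.6\cdot 0.5}=\frac{0.12}{0.24}=\frac{1}{2}.$$

Dropping $E_2$ because it points away from the conclusion leaves a posterior of 2/3 where the full evidence supports 1/2; the belief no longer tracks its object.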
I’ve just realized that there’s a footnote addressing this. My apologies.
“The conditional probability P(cults|aliens) isn’t less than P(cults|aliens)”
Shouldn’t this be “The conditional probability P(cults|~aliens) isn’t less than P(cults|aliens)”? It seems trivial that a probability is not less than itself, and the preceding text seems to propose the modified version included in this comment.
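For what it’s worth, the corrected inequality is exactly what makes the argument go through. Writing it out (a routine application of the law of total probability and Bayes’ theorem, not anything taken from the original post): if $P(\text{cults}\mid\neg\text{aliens})\ge P(\text{cults}\mid\text{aliens})$, then

$$P(\text{cults})=P(\text{cults}\mid\text{aliens})\,P(\text{aliens})+P(\text{cults}\mid\neg\text{aliens})\,P(\neg\text{aliens})\ \ge\ P(\text{cults}\mid\text{aliens}),$$

so

$$P(\text{aliens}\mid\text{cults})=\frac{P(\text{cults}\mid\text{aliens})\,P(\text{aliens})}{P(\text{cults})}\ \le\ P(\text{aliens}),$$

i.e., observing cults is not evidence for aliens.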
I haven’t quite observed this; even extremely broad patterns of behavior frequently seem to deviate from any effective strategy (where “effective” means built around a reasonable utility function). In the other direction, how would this model be falsified? Retrospective validation might be possible (though I personally can’t produce it), but making predictions with this dichotomy seems ambiguous.
This is quite a charming allegory, though my notion of truth was already simple and absolute. It’s certainly an argument worth reading.
That’s very much true. However, it appears to me that the object of frustration is the gesture’s sentiment (as evidenced by the girlfriend’s focus on the gesture specifically). Thus, I find it dubious that her primary concern was a change in her own beliefs about her cooking.
I don’t quite see how, in the hot sauce example, the girlfriend is “treating [the OP] like he retroactively made her cooking worse.” Hot sauce tends to improve the taste of food, so it appears that she perceives his adding the condiment (to increase his enjoyment of the food) as implying that her food isn’t of sufficient quality to be palatable on its own.
Neither has grounds to believe in the other’s existence, so rationally each ought to say that the other doesn’t exist.
The irony is palpable.