The manner in which explanations are knocked down seems under-specified if you’re not doing Bayesian updating.
Criticism enables us to see flaws in explanations. What is under-specified about finding a flaw?
In your way, you need to come up with criticisms and also with probabilities associated with those criticisms. Criticisms of real-world theories can be involved and complex. Isn’t it enough to expose a flaw in an explanatory theory? Must one also go to the trouble of calculating probabilities—a task that is surely fraught with difficulty for any realistic idea of criticism? You’re adding a huge amount of auxiliary theory, and your evaluation is then also dependent on the truth of all this auxiliary theory.
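The probability bookkeeping this comment objects to can be made concrete with a toy sketch. Treating a criticism as evidence E against a hypothesis H, the Bayesian route needs three extra numbers (all invented here purely for illustration), whereas the Popperian route needs none:

```python
# Toy contrast between the two approaches discussed above.
# Popperian: a single exposed flaw refutes the explanation outright.
# Bayesian: the criticism is evidence E that merely lowers P(H).

def bayes_update(prior, p_e_given_h, p_e_given_not_h):
    """Return P(H | E) via Bayes' theorem."""
    p_e = p_e_given_h * prior + p_e_given_not_h * (1 - prior)
    return p_e_given_h * prior / p_e

# Popperian verdict: the flaw is decisive, no probabilities needed.
refuted = True

# Bayesian verdict: requires a prior and two likelihoods -- the extra
# "auxiliary theory" the comment complains about.
posterior = bayes_update(prior=0.7, p_e_given_h=0.1, p_e_given_not_h=0.8)
print(round(posterior, 3))  # → 0.226: the hypothesis survives, just less plausible
```

The point of the sketch is only that the Bayesian verdict depends on three assumed quantities whose own justification is open to question, while the Popperian verdict does not.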
I just don’t know what in particular you mean by ‘explanation’. I know what the word means
in general, but not your specific conception.
You don’t seem to be saying very much, then; is LW really short on explanations, in the conventional sense? Explanation seems well evidenced by the last couple of top-level posts. Similarly, do we really fail to criticise one another? A large number of the comments seem to be criticisms. If you’re essentially criticising us for not having learnt rationality 101, the sort of rationality you learn as a child of 12 arguing against god, then obviously it would be a problem if we didn’t bear that stuff in mind. But without providing evidence that we succumb to these faults, it’s hard to see what the problem is.
Your other points, however, are substantive. If humans could solve any problem, or if it were impossible to design an agent that could learn some but not all things, or if confirmation didn’t increase subjective plausibility, these would be important claims.
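The last claim, that confirmation increases subjective plausibility, is just the observation that evidence a hypothesis predicts well raises its posterior. A minimal sketch, with purely illustrative numbers:

```python
# Confirmation as probability-raising: if H predicts E better than not-H does,
# then observing E raises P(H) above its prior.

def posterior(prior, p_e_given_h, p_e_given_not_h):
    """Return P(H | E) via Bayes' theorem."""
    p_e = p_e_given_h * prior + p_e_given_not_h * (1 - prior)
    return p_e_given_h * prior / p_e

prior = 0.5
p = posterior(prior, p_e_given_h=0.9, p_e_given_not_h=0.3)
assert p > prior  # confirming evidence increased subjective plausibility
print(round(p, 2))  # → 0.75
```

Whenever the likelihood ratio P(E|H)/P(E|not-H) exceeds 1, the posterior exceeds the prior; that is the sense of "confirmation" at stake here.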
My conception is the same as the general one.