> Well, it seems a little tautological to me: only in hindsight can you be sure that your optimism was rational.
Wrong. When you act under uncertainty, the outcome is not the judge of the propriety of your reason, although it may point out a probable problem.
> What you think of the matter intellectually is of relatively little account, just as one’s intellectual disbelief in ghosts has relatively little to do with whether you’ll be able to sleep soundly in a “haunted” house.
I understand that the connection isn’t direct, and in some cases may be hard to establish at all, but you are always better off bringing all sides of yourself to agreement.
> I therefore don’t care which models or beliefs are true, only which ones are useful. To the extent that you care about the “truth” of a model, you will find conversing with me frustrating, or at least uninformative.
Yet you can’t help but care which claims about models being useful are true.
> I understand that the connection isn’t direct, and in some cases may be hard to establish at all, but you are always better off bringing all sides of yourself to agreement.
Perhaps. My point was that your intellectual conclusion doesn’t have much direct impact on your behavior, so the emotional belief has more practical relevance.
> Yet you can’t help but care which claims about models being useful are true.
No, I care which ones are useful to me, which is only incidentally correlated with which claims about the models are true.
> Yet you can’t help but care which claims about models being useful are true.

> No, I care which ones are useful to me, which is only incidentally correlated with which claims about the models are true.
You misunderstood Vladimir Nesov. His point was that “which model is (really, truly) useful” is itself a truth claim. You care which models are in fact useful to you—and that means that on a meta-level, you are concerned with true predictions (specifically, with true predictions as to which instrumental models will or won’t be useful to you).
It’s an awkward claim to word; I’m not sure if my rephrases helped.
> His point was that “which model is (really, truly) useful” is itself a truth claim. You care which models are in fact useful to you—and that means that on a meta-level, you are concerned with true predictions (specifically, with true predictions as to which instrumental models will or won’t be useful to you).
That may be true, but I don’t see how it’s useful. ;-)
Actually, I don’t even see that it’s always true. I only need accurate predictions of which models will be useful when the cost of testing them is high compared to their expected utility. If the cost of testing is low, I’m better off testing them myself than worrying in advance about whether they’re in fact going to be useful.
In fact, excessive pre-prediction of what models are likely to be useful is probably a bad idea; I could’ve made more progress in improving myself, a lot sooner, if I hadn’t been so quick to assume that I could predict the usefulness of a method without having first experienced it.
By way of historical example, Benjamin Franklin concluded that hypnosis was nonsense because Mesmer’s (incorrect) model of how it worked was nonsense… and so he passed up the opportunity to learn something useful.
More recently, I’ve tried to learn from his example by ignoring the often-nonsensical models that people put forth for their methods, focusing instead on whether the method itself produces the claimed results, when approached with an open mind.
Then, if possible, I try to construct a simpler, saner, more rigorous model for the method—though still without any claim of absolute truth.
Less-wrongness is often useful; rejecting apparent wrongness, much less so.
Sigh.