Usefulness can come in a great many ways besides greater predictive accuracy.
I think that’s something that people will generally agree on. As a trivial example, a car is in many ways more useful than a horse-drawn carriage, but it’s not more useful due to greater predictive accuracy. The more common objection to a pragmatist theory of truth is that truth has an essentially moral value. If believing something that is false will give you a net benefit (perhaps believing in religion makes you happier?), should you believe it, even though it is false?
But do you agree that beliefs can have more uses than predictive accuracy?
If believing something that is false will give you a net benefit, should you believe it, even though it is false?
Without equivocating, net benefit should mean net benefit, so the answer is “of course”. Saying no is making predictive accuracy override net benefit to all your values—making it into a fetish, a fixed idea.
Did you really mean “in some way,” rather than “net”?
I didn’t. Fixed.