believing something is basically the same as treating it as a fact, and I know how to treat something as a fact
Not quite. The whole point here is the rider-elephant distinction, and no, your conscious mind explicitly deciding to accept something as a fact does not automatically imply that you (the whole you) now believe it.
Your comment history contains many flat-out factual claims without any such qualification.
Correct. The distinction between what you (internally) believe and what you (externally) express is rather large. Not in the sense of lying, but in the sense that internal beliefs contain non-verbal parts and are generally much more complex than their representations in any given conversation.
you should also admit that there is no binary division of people who care about truth and people who don’t.
Sure, I’ll admit this :-)
It assigns a utility of 1 to believing a truth
Fair point, I forgot about this.
I think my main claim still stands: if what you (sincerely) accept as true is a function of your utility function, appropriate manipulation of incentives can make you (sincerely) believe anything at all, hence Big Brother.
your conscious mind explicitly deciding to accept something as a fact does not automatically imply that you (the whole you) now believe this.
Belief is a vague generalization, not a binary bit in reality that you could determinately check for. The question is what is the best way to describe that vague generalization. I say it is “the person treats this claim as a fact.” It is true that you could try to make yourself treat something as a fact, and do it once or twice, but then on a bunch of other occasions not treat it as a fact, in which case you failed to make yourself believe it—but not because the algorithm is unknown. Or you might treat it as a fact publicly, and treat it as not a fact privately, in which case you do not believe it, but are lying. And so on. But if you consistently treat it as a fact in every way that you can (e.g. you bet that it will turn out true if it is tested, you act in ways that will have good results if it is true, you say it is true and defend that by arguments, you think up reasons in its favor, and so on) then it is unreasonable not to describe that as you believing the thing.
Correct. The distinction between what you (internally) believe and what you (externally) express is rather large. Not in the sense of lying, but in the sense that internal beliefs contain non-verbal parts and are generally much more complex than their representations in any given conversation.
I already agreed that the fact that you treat some things as facts would not necessarily prevent you from assigning them probabilities and admitting that you might be wrong about them.
I think my main claim still stands: if what you (sincerely) accept as true is a function of your utility function, appropriate manipulation of incentives can make you (sincerely) believe anything at all, hence Big Brother.
That depends on the details of the utility function, and does not necessarily follow. In real life, people tend to act like this: rather than deciding not to believe something that has a probability of 80%, a person first decides to believe that it has a probability of 20%, or whatever. Then he decides not to believe it, and says that he simply decided not to believe something that was probably false. My utility function would assign an extreme negative value to allowing my assessment of the probability of something to be manipulated in that way.