Detailed commentary, as promised:
Rationality can be about Winning, or it can be about The Truth, but it can’t be about both. Sooner or later, your The Truth will demand you shoot yourself in the foot, while Winning will offer you a pretty girl with a country-sized dowry. The only price will be presenting various facts about yourself in the most seductive order instead of the most informative one.
If your highest value isn’t Winning, you do not get to be surprised when you lose. You do not even get to be disappointed. By revealed preference, you have to have a mad grin across your face, that you were able to hold fast to your highest-value-that-isn’t-winning all the way to the bitter end.
What if my highest value is getting a pretty girl with a country-sized dowry, while having not betrayed the Truth?
Then, if I only get one of those things, it’s worse than getting both of those things (and possibly so much worse that I don’t even consider them worthwhile to pursue individually; but this part is optional). Is there a law against having values like this? There is not. Do you get to insist that nevertheless I have to choose, and can’t get both? Nope, because the utility function is not up for grabs[1]. Now, maybe I will get both and maybe I won’t, but there’s no reason I can’t have “actually, both, please” as my highest value.
The fallacy here is simply that you want to force me to accept your definition of Winning, which you construct so as not to include The Truth. But why should I do that? The only person who gets to define what counts as Winning for me is me.
In short, no, Rationality absolutely can be about both Winning and about The Truth. This is no more paradoxical than the fact that Rationality can be about saving Alice’s life and also about saving Bob’s life. You may at some point end up having to choose between saving Alice and saving Bob, and that would be sad; and you would end up making some choice, in some manner, as fits the circumstance. The existence of this possibility is not particularly interesting, and has no deep implications. The goal is “both”.
(That said, the bit about the two armies was A++, and I strong-upvoted the post just for that.)
Technical Truth is as Bad as Lying
I wholly agree with this section…
… except for the last paragraph—specifically, this:
The purpose of a word is to carve reality at a joint useful for the discussion taking place, and we should pause here to note that the joint in question isn’t “emits true statements”, it’s “emits statements that the other party is better off for listening to”.
No, it’s not.
I’m not sure where this meme comes from[2], but it’s just wrong. Unless you are, like, literally my mother, “is the other party, specifically, better off for listening to this thing that I am saying” constitutes part of my motivation for saying things approximately zero percent of the time. It’s just not a relevant consideration at all—and I don’t think I’m even slightly unusual in this.
I say and write things[3] because I consider those things to be true, relevant, and at least somewhat important. That by itself is very often (possibly usually) sufficient for a thing to be useful in a general sense (i.e., I think that the world is better for me having said it, which necessarily involves the world being better for the people in it). Whether the specific person to whom the thing is nominally or factually addressed will be better off as a result of what I said or wrote is not my concern in any way other than that.
Sometimes I am additionally motivated by some specific usefulness of some specific utterance, but even in the edge case where the expected usefulness is exclusive to the single person to whom the utterance is addressed, I don’t consider whether that person will be better off for having listened to the thing in question. Maybe they won’t be! Maybe they will somehow be harmed. That’s not my business; they have the relevant information, which is true (and which I violated no moral precepts by conveying to them)—the rest is up to them.
Therefore if someone says “but if you lie, they’ll be better off”, my response is “weird thing to bring up; what’s the relevance?”.
Biting the Bullet
Basically correct. I will add that not only is it not always morally obligatory to tell the truth, but in fact it is sometimes morally obligatory to lie. Sometimes, telling the truth is wrong, and doing so makes you a bad person. Therefore the one who resolves to always tell the truth, no matter what, can in fact end up predictably doing evil as a direct result of that resolution.
There is no royal road to moral perfection. There is no way to get around the fact that you will always need to apply all of your faculties, the entirety of your reason and your conscience and everything else that is part of you, in order to be maximally sure (but never perfectly sure!) that you are doing the right thing. The moment you replace your brain with an algorithm, you’ve gone wrong. This fact does not become any less true even if the algorithm is “always tell the truth”. You can and should make rules, and you can and should follow them (rule consequentialism is superior to act consequentialism for all finite agents), and yet even this offers no escape from that final responsibility, which is always yours and cannot be offloaded to anyone or anything, ever.
Lie by default whenever you think it passes an Expected Value Calculation to do so, just as for any other action.
No, this is a terrible idea. Do not do this. Act consequentialism does not work. It doesn’t work no matter how much we say “yeah but just make up numbers” or “yeah you can’t actually do the calculation, but let’s pretend we can”. The numbers are fake and meaningless and we can’t do the calculation.
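(A minimal sketch of the problem, with parameter values that are entirely invented, because invented is the only kind anyone has for this sort of calculation:)
```python
# Toy "expected value calculation" for whether to lie, using made-up numbers.
# The "decision" is driven entirely by parameters nobody can actually measure,
# so equally defensible guesses flip the answer.

def ev_of_lying(p_caught, cost_if_caught, benefit_if_not_caught):
    """Naive expected value of telling a lie, given invented parameters."""
    return (1 - p_caught) * benefit_if_not_caught - p_caught * cost_if_caught

print(ev_of_lying(p_caught=0.10, cost_if_caught=50, benefit_if_not_caught=10))  #  4.0 -> "lie"
print(ev_of_lying(p_caught=0.20, cost_if_caught=50, benefit_if_not_caught=10))  # -2.0 -> "don't lie"
```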
It’s still a better policy than just trusting people.
Definitely don’t just trust people. Trust, but verify. (See also.)
When your friends ask you about how trustworthy you are, make no implications that you are abnormally honest. Tell them truthfully (if it is safe to do so) about all the various bad incentives, broken social systems, and ordinary praxis that compel dishonesty from you and any other person, even among friends, and give them sincere advice about how to navigate these issues.
This, I agree with.
Cooperative Epistemics
I agree with all of this section. What I’ll note here is that there are people who will campaign very hard against the sort of thing you are advocating for here (“Do them the charity of not pretending they wouldn’t be making a terrible mistake by imagining they can take you or anyone else at their word. Build your Cooperative Epistemics on distrust instead.”), and for the whole “trying to start a communist utopia on the expectation that everybody just” thing. I agree that this actually has the opposite result to what anyone sensible would want.
Saying words is just an action, like any other action. You judge actions by their consequences. Are people made worse off or not? Most of the time, you’re not poisoning a shared epistemic well. The well was already poisoned when you got here. It’s more of a communal dumping ground at this point. Mostly you’d just be doing the sensible thing like everybody else does, except that you lack the instinct and intuition and need to learn to do it by rote.
When it makes sense to do so, when the consequences are beneficial, when society is such that you have to, when nobody wants the truth, when nobody is expecting the truth, when nobody is incentivising the truth: just lie to people.
This, on the other hand, is once again a terrible idea.
Look, this is going to sound fatuous, but there really isn’t any better general rule than this: you should only lie when doing so is the right thing to do.
“When the consequences are beneficial”—no, you can’t tell when the consequences will be beneficial, and anyhow act consequentialism does not and cannot work, so instead you should be a rule consequentialist and adopt rules about when lying is right, and when lying is wrong, and only lie in the first case and not the second case. (And you should have meta-rules about when you make your rules known to other people—hint, the answer is “almost always”—because much of the value of rules like this comes from them being public knowledge. And so on, applying all the usual ethical and meta-ethical considerations.)
“When society is such that you have to”—too general; furthermore, people, and by “people” I mean “dirty stinking liars who lie all the time, the bastards”, use this sort of excuse habitually, so you should be extremely wary of it. However, sometimes it actually is true. Once again you cannot avoid having to actually think about this sort of thing in much more detail than the OP.
“When nobody wants the truth”—situations like this are often the ones where telling the truth is exceptionally important and the right thing to do. But sometimes, the opposite of that.
“When nobody expects the truth”—ditto.
“When nobody is incentivizing the truth”—ditto.
The well was already poisoned when you got here. It’s more of a communal dumping ground at this point.
Wells can be cleaned, and new wells can be dug. (The latter is often a prerequisite for the former.)
[1] The metaphorical utility function, that is.
[2] Although I have certain suspicions.
[3] That is, things which should be construed as in some sense possibly being true or false. In other words, I do not include here things like jokes, roleplaying, congratulatory remarks, flirting, requests, exclamations, etc.
I consider you to be basically agreeing with me for 90% of what I intended and your disagreements for the other 10% to be the best written of any so far, and basically valid in all the places I’m not replying to it. I still have a few objections:
What if my highest value is getting a pretty girl with a country-sized dowry, while having not betrayed the Truth? … In short, no, Rationality absolutely can be about both Winning and about The Truth.
I agree the utility function isn’t up for grabs and that that is a coherent set of values to have, but I have this criticism that I want to make that I feel I don’t have the right language to make. Maybe you can help me. I want to call that utility function perverse. The kind of utility function that an entity is probably mistaken to imagine itself as having.
For any particular situation you might find yourself in, for any particular sequence of actions you might do in that situation, there is a possible utility function you could be said to have such that the sequence of actions is the rational behaviour of a perfect omniscient utility maximiser. If nothing else, pick the exact sequence of events that will result, declare that your utility function is +100 for that sequence of events and 0 for anything else, and then declare yourself a supremely efficient rationalist.
Actually doing that would be a mistake. It wouldn’t be making you better. This is not a way to succeed at your goals, this is a way to observe what you’re inclined to do anyway and paint the target around it. Your utility function (fake or otherwise) is supposed to describe stuff you actually want. Why would you want specifically that in particular?
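(Here is a minimal sketch of that “paint the target around it” construction; the actions and numbers are hypothetical, just to make the degeneracy visible:)
```python
# A "utility function" painted around whatever you were going to do anyway:
# +100 for the exact sequence of events that actually results, 0 for anything else.

def make_post_hoc_utility(observed_history):
    def utility(history):
        return 100 if history == observed_history else 0
    return utility

what_i_did = ("hit snooze", "skipped the gym", "argued on the internet")  # hypothetical
u = make_post_hoc_utility(what_i_did)

# Whatever you in fact did is now, trivially, the unique maximiser of "your" utility.
candidates = [what_i_did, ("got up early", "went to the gym", "wrote the paper")]
print(max(candidates, key=u) == what_i_did)  # True: a "supremely efficient rationalist"
```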
I think the stronger version of Rationality is the version that phrases it as being about getting the things you want, whatever those things might be. In that sense, if The Truth is merely a value, you should carefully segment it out, in your brain, from your practice of rationality: your rationality is about mirroring the mathematical structure best suited for obtaining goals, and whatever you value The Truth at above its normal instrumental value is something you buy where it’s cheapest, like all your other values. Mixing the two makes both worse: you pollute your concept of rational behaviour with a love of the truth (and are therefore, for example, biased towards imagining that other people who display rationality are probably honest, or that other people who display honesty are probably rational), and you damage your ability to pursue the truth by not putting it in the values category, where it belongs and where it would lead you to try to buy more of it cheaply.
Of course maybe you’re just the kind of guy who really loves mixing his value for The Truth in with his rationality into a weird soup. That’d explain your actions without making you a walking violation of any kind of mathematical law; it’d just be a really weird thing for you to innately want.
I am still trying to find a better way to phrase this argument such that someone might find it persuasive of something, because I don’t expect this phrasing to work.
I say and write things[3] because I consider those things to be true, relevant, and at least somewhat important. That by itself is very often (possibly usually) sufficient for a thing to be useful in a general sense (i.e., I think that the world is better for me having said it, which necessarily involves the world being better for the people in it). Whether the specific person to whom the thing is nominally or factually addressed will be better off as a result of what I said or wrote is not my concern in any way other than that.
I think I meant something subtly different from what you’ve taken that part to mean. I think you understand that, if other people noticed a pattern that everything you said was false, irrelevant, or unimportant, they would eventually stop bothering to listen when you talk, and this would mean you’d lose the ability to get other people to know things, which is a useful ability to have. This is basically my position! Whether the specific person you address is better off in each specific case isn’t material because you aren’t trying to always make them better off, you’re just trying to avoid being seen as someone who predictably doesn’t make them better off. I agree that calculating the full expected consequences to every person of every thing you say isn’t necessary for this purpose.
No, this is a terrible idea. Do not do this. Act consequentialism does not work. … Look, this is going to sound fatuous, but there really isn’t any better general rule than this: you should only lie when doing so is the right thing to do.
I agree that Act Consequentialism doesn’t really work. I was trying to be a Rule consequentialist instead when I wrote the above rule. I agree that that sounds fatuous, but I think the immediate feeling is pointing at a valid retort: You haven’t operationalized this position into a decision process that a person can actually do (or even pretend to do).
I took great effort to try to write down my policy as something explicit in terms a person could try to do (even though I am willing to admit it is not really correct, mostly because of finite-agent problems), because a person can’t be a real Rule Consequentialist without actually having a Rule. What is the rule for “Only lie when doing so is the right thing to do”? It sounds like an instruction to pass the act to my rightness calculator, but if I program that rule into my rightness calculator, and then give it any input, it gets into an infinite loop. I have an Act Consequentialist rightness calculator as a backup, but if I pass the rule “only lie when doing so is the right thing to do” into that as a backup I’m just right back at doing act consequentialism.
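(A hypothetical sketch of that loop, with function names that are purely illustrative:)
```python
# The proposed rule, taken literally, as the sole content of a "rightness calculator".

def is_right(act, situation):
    if act == "lie":
        # "Only lie when doing so is the right thing to do" just asks the
        # calculator about itself again; there is no base case.
        return is_right("lie", situation)
    return True  # (non-lying acts aren't at issue here)

# is_right("lie", "any input at all")  # -> RecursionError: infinite loop

# The backup: hand the question to an act-consequentialist calculator instead,
# at which point the "rule" has dissolved back into plain act consequentialism.
def is_right_backup(act, situation, expected_value):
    return expected_value(act, situation) > 0
```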
If you can write down a better rule for when to lie than what I’ve put above (that is also better than the “never” or “only by coming up with galaxy-brained ways it technically isn’t lying” or Eliezer’s meta-honesty idea that I’ve read before) I’d consider you to have (possibly) won this issue, but that’s the real price of entry. It’s not enough to point out the flaws where all my rules don’t work, you have to produce rules that work better.
Well, there’s a couple of things to say in response to this… one is that wanting to get the girl / dowry / happiness / love / whatever tangible or intangible goals as such, and also wanting to be virtuous, doesn’t seem to me to be a weird or perverse set of values. In a sense, isn’t this sort of thing the core of the project of living a human life, when you put it like this? “I want to embody all the true virtues, and also I want to have all the good things.” Seems pretty natural to me! Of course, it’s also a rather tall order (uh, to put it mildly…), but that just means that it provides a challenge worthy of one who does not fear setting high goals for himself.
Somewhat orthogonally to this, there is also the fact that—well, I wrote the footnote about the utility function being metaphorical for a reason. I don’t actually think that humans (with perhaps very rare exceptions) have utility functions; that is, I don’t think that our preferences satisfy the VNM axioms—and nor should they. (And indeed I am aware of so-called “coherence theorems” and I don’t believe in them.)
With that constraint (which I consider an artificial and misguided one) out of the way, I think that we can reason about things like this in ways that make more sense. For instance, trying to fit truth and honesty into a utility framework makes for some rather unnatural formulations and approaches, like talking about buying more of it, or buying it more cheaply, etc. I just don’t think that this makes sense. If the question is “is this person honest, trustworthy, does he have integrity, is he committed to truth”, then the answer can be “yes”, and it can be “no”, and it could perhaps be some version of “ehhh”, but if it’s already “yes” then you basically can’t buy any more of it than that. And if it’s not “yes” and you’re talking about how cheaply you can buy more of it, then it’s still not “yes” even after you complete your purchase.
(This is related to the notion that while consequentialism may be the proper philosophical grounding for morality, and deontology the proper way to formulate and implement your morality so that it’s tractable for a finite mind, nevertheless virtue ethics is “descriptively correct as an account of how human minds implement morality, and (as a result) prescriptively valid as a recommendation of how to implement your morality in your own mind, once you’ve decided on your object-level moral views”. Thus you can embody the virtue of honesty, or fail to do so. You can’t buy more of embodying some virtue by trading away some other virtue; that’s just not how it works.)
I think you understand that, if other people noticed a pattern that everything you said was false, irrelevant, or unimportant, they would eventually stop bothering to listen when you talk, and this would mean you’d lose the ability to get other people to know things, which is a useful ability to have.
Yes, of course; but…
Whether the specific person you address is better off in each specific case isn’t material because you aren’t trying to always make them better off, you’re just trying to avoid being seen as someone who predictably doesn’t make them better off.
… but the preceding fact just doesn’t really have much to do with this business of “do you make people better off by what you say”.
My claim is that people (other than “rationalists”, and not even all or maybe even most “rationalists” but only some) just do not think of things in this way. They don’t think of whether their words will make their audience better off when they speak, and they don’t think of whether the words of other people are making them better off when they listen. This entire framing is just alien to how most people do, and should, think about communication in most circumstances. Yeah, if you lie all the time, people will stop believing you. That’s just directly the causation here, it doesn’t go through another node where people compute the expected value of your words and find it to be negative.
(Maybe this point isn’t particularly important to the main discussion. I can’t tell, honestly!)
I took great effort to try to write down my policy as something explicit in terms a person could try to do (even though I am willing to admit it is not really correct, mostly because of finite-agent problems), because a person can’t be a real Rule Consequentialist without actually having a Rule. What is the rule for “Only lie when doing so is the right thing to do”? It sounds like an instruction to pass the act to my rightness calculator, but if I program that rule into my rightness calculator, and then give it any input, it gets into an infinite loop. I have an Act Consequentialist rightness calculator as a backup, but if I pass the rule “only lie when doing so is the right thing to do” into that as a backup I’m just right back at doing act consequentialism.
If you can write down a better rule for when to lie than what I’ve put above (that is also better than the “never” or “only by coming up with galaxy-brained ways it technically isn’t lying” or Eliezer’s meta-honesty idea that I’ve read before) I’d consider you to have (possibly) won this issue, but that’s the real price of entry. It’s not enough to point out the flaws where all my rules don’t work, you have to produce rules that work better.
Well… let’s start with the last bit, actually. No, it totally is enough to point out the flaws. I mean, we should do better if we can, of course; if we can think of a working solution, great. But no, pointing out the flaws in a proffered solution is valuable and good all by itself. (“What should we do?” “Well, not that.” “How come?” “Because it fails to solve the problem we’re trying to solve.” “Ok, yeah, that’s a good reason.”) In other words: “any solution that solves the problem is acceptable; any solution that does not solve the problem is not acceptable”. Act consequentialism does not solve the problem.
But as far as my own actual solution goes… I consider Robin Hanson’s curve-fitting approach (outlined in sections II and III of his paper “Why Health is Not Special: Errors in Evolved Bioethics Intuitions”) to be the most obviously correct approach to (meta)ethics. In brief: sometimes we have very strong moral intuitions (when people speak of listening to their conscience, this is essentially what they are referring to), and as those intuitions are the ultimate grounding for any morality we might construct, if the intuitions are sufficiently strong and consistent, we can refer to them directly. Sometimes we are more uncertain. But we also value consistency in our moral judgments (for various good reasons). So we try to “fit a curve” to our moral intuitions—that is, we construct a moral system that tries to capture those intuitions. Sometimes the intuitions are quite strong, and we adjust the curve to fit them; sometimes we find weak intuitions which are “outliers”, and we judge them to be “errors”; sometimes we have no data points at all for some region of the graph, and we just take the output of the system we’ve constructed. This is necessarily an iterative process.
If the police arrest your best friend for murder, but you know that said friend spent the whole night of the alleged crime with you (i.e. you’re his only alibi and your testimony would completely clear him of suspicion), should you tell the truth to the police when they question you, or should you betray your friend and lie, for no reason at all other than that it would mildly inconvenience you to have to go down to the police station and give a statement? Pretty much nobody needs any kind of moral system to answer this question. It’s extremely obvious what you should do. What does act and/or rule consequentialism tell us about this? What about deontology, etc.? Doesn’t matter, who cares, anyone who isn’t a sociopath (and probably even most sociopaths who aren’t also very stupid) can see the answer here, it’s absurdly easy and requires no thought at all.
What if you’re in Germany in 1938 and the Gestapo show up at your door to ask whether you’re hiding any Jews in your attic (which you totally are)—what should you do? Once again the answer is easy, pretty much any normal person gets this one right without hesitation (in order to get it wrong, you need to be smart enough to confuse yourself with weird philosophy).
So here we’ve got two situations where you can ask “is it right to lie here, or to tell the truth?” and the answer is just obvious. Well, we start with cases like this, we think about other cases where the answer is obvious, and yet other cases where the answer is less obvious, and still other cases where the answer is not obvious at all, and we iteratively build a curve that fits them as well as possible. This curve should pass right through the obvious-answer points, and the other data points should be captured with an accuracy that befits their certainty (so to speak). The resulting curve will necessarily have at least a few terms, possibly many, definitely not just one or two. In other words, there will be many Rules.
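(For what it’s worth, the analogy can be made nearly literal. Here is a minimal, entirely hypothetical sketch, with cases encoded as numbers and weights standing in for how strong each intuition is, just to show what “fit the curve through the obvious points, and treat the rest with accuracy befitting their certainty” might look like:)
```python
import numpy as np

# Hypothetical encoding: each case is a point (x = some feature of the situation,
# y = +1 "tell the truth" ... -1 "lie"), weighted by how strong the intuition is.
cases = [
    (0.0, +1.0, 10.0),  # alibi for an innocent friend: obviously tell the truth
    (1.0, -1.0, 10.0),  # Gestapo at the door: obviously lie
    (0.4, +0.5,  2.0),  # murkier case: weaker intuition, lower weight
    (0.6, -0.2,  1.0),  # weak "outlier" intuition; the fit may overrule it
]
x, y, w = map(np.array, zip(*cases))

# Fit a curve that passes (nearly) through the strongly weighted points and
# lets the weakly weighted ones bend it only a little.
coeffs = np.polyfit(x, y, deg=2, w=w)
moral_curve = np.poly1d(coeffs)

print(moral_curve(0.0), moral_curve(1.0))  # close to +1 and -1
print(moral_curve(0.8))                    # a region with little data: take the curve's output
```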
(How to evaluate these rules? With great care and attention. We must be on the lookout for complexity, we must continually question whether we are in fact satisfying our values / embodying our chosen virtues, etc.)
Here’s an example rule, which concerns situations of a sort of which I have written before: if you voluntarily agree to keep a secret, then, when someone who isn’t in on the secret asks you about the secret, you should behave as you would if you didn’t know the secret. If this involves lying (that is, saying things which you know to be false, but which you would believe to be true if you were not in possession of this secret which you have agreed, of your own free will, to keep), then you should lie. Lying in this case is right. Telling the truth in this case is wrong. (And, yes, trying to tell some technical truth that technically doesn’t reveal anything is also wrong.)
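(If one wanted to write this particular rule down in the explicit, operationalized style asked for above, a hypothetical sketch might look something like this; the function and its inputs are illustrative, not a claim about anyone’s actual procedure:)
```python
# Sketch: when asked about a secret you freely agreed to keep, answer as the
# counterfactual you, who was never told it, would answer, even if that is false.

def answer(question, true_beliefs, kept_secrets, asker_in_on_secret):
    """kept_secrets maps a question to what you would believe had you not been told."""
    if question in kept_secrets and not asker_in_on_secret:
        return kept_secrets[question]                    # lie, as the rule requires
    return true_beliefs.get(question, "I don't know.")   # otherwise, the truth

true_beliefs = {"Is there a surprise party on Friday?": "Yes."}
kept_secrets = {"Is there a surprise party on Friday?": "Not that I know of."}

print(answer("Is there a surprise party on Friday?",
             true_beliefs, kept_secrets, asker_in_on_secret=False))
# -> "Not that I know of."  (No technical-truth dodges, either.)
```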
Is that an obvious rule? Certainly not as obvious as the rules you’d formulate to cover the two previous example scenarios. Is it correct? Well, I’m certainly prepared to defend it (indeed, I have done so, though I can’t find the link right now; it’s somewhere in my comment history). Is a person who follows a rule like this an honest and trustworthy person, or a dishonest and untrustworthy liar? (Assuming, naturally, that they also follow all the other rules about when it is right to tell the truth.) I say it’s the former, and I am very confident about this.
I’m not going to even try to enumerate all the rules that apply to when lying is wrong and when it’s right. Frankly, I think that it’s not as hard as some people make it out to be, to tell when it is necessary to tell the truth and when one should instead lie. Mostly, the right answer is obvious to everyone, and the debates, such as they are, mostly boil down to people trying to justify things that they know perfectly well cannot be justified.
Indeed, there is a useful heuristic that comes out of that. In these discussions, I have often made this point (as I did in my top-level comment) that it is sometimes obligatory to lie, and wrong to tell the truth. The reason I keep emphasizing this is that there’s a pattern one sees: the arguments most often concern whether it’s permissible to lie. Note: not, “is it obligatory to tell the truth, or is it obligatory to lie”—but “is it obligatory to tell the truth, or do I have no obligation here and can I just lie”.
I think that this is very telling. And what it tells us (with imperfect but nevertheless non-trivial certainty) is that the person asking the question, or making the argument against the obligation, knows perfectly well what the real—which is to say, moral—answer is. Yes, the right thing to do is to tell the truth. Yes, you already know this. You have reasons for not wanting to tell the truth. Well, nobody promised you that doing the right thing will always be personally convenient! Nevertheless, very often, there is no actual moral uncertainty in anyone’s mind, it’s just “… ok, but do I really have to do the right thing, though”.
This heuristic is not infallible. For example, it does not apply to the case of “lying to someone who has no right to ask the question that they’re asking”: there, it is indeed permissible to lie[1], but no particular obligation either to lie or to tell the truth. (Although one can make the case for the obligation to lie even in some subset of such cases, having to do with the establishment and maintenance of certain communicative norms.) But it applies to all of these, for instance.
The bottom line is that if you want to be honest, to be trustworthy, to have integrity, you will end up constructing a bunch of rules to aid you in epitomizing these virtues. If you want to try to put together a complete list of such rules, that’s certainly a project, and I may even contribute to it, but there’s not much point in expecting this to be a definitively completable task. We’re fitting a curve to the data provided by our values, which cannot be losslessly compressed.
[1] Assuming that certain conditions are met—but they usually are.
(Maybe this point isn’t particularly important to the main discussion. I can’t tell, honestly!)
Yeah I think it’s an irrelevant tangent where we’re describing the same underlying process a bit differently, not really disagreeing.
Frankly, I think that it’s not as hard as some people make it out to be, to tell when it is necessary to tell the truth and when one should instead lie. Mostly, the right answer is obvious to everyone, and the debates, such as they are, mostly boil down to people trying to justify things that they know perfectly well cannot be justified.
… the arguments most often concern whether it’s permissible to lie. Note: not, “is it obligatory to tell the truth, or is it obligatory to lie”—but “is it obligatory to tell the truth, or do I have no obligation here and can I just lie”. I think that this is very telling. And what it tells us (with imperfect but nevertheless non-trivial certainty) is that the person asking the question, or making the argument against the obligation, knows perfectly well what the real—which is to say, moral—answer is. Yes, the right thing to do is to tell the truth.
I think I disagree with this framing. In my model of the sort of person who asks that, they’re sometimes selfish-but-honourable people who have noticed that telling the truth ends badly for them, and who will do it if it is an obligation but would prefer to help themselves otherwise; but they are just as often altruistic-and-honourable people who have noticed that telling the truth ends badly for everyone and are trying to convince themselves it’s okay to do the thing that will actually help. There are also selfish-but-cowardly people who just care whether they’ll be socially punished for lying, or selfish-and-cruel people champing at the bit to punish someone else for it, and similar, but moral arguments don’t move them either way, so it doesn’t matter.
More strongly, I disagree because I think a lot of people have harmed themselves or their altruistic causes by failing to correctly determine where the line is, either lying when they shouldn’t or not lying when they should, and it is to the community’s shame that we haven’t been more help in illuminating how to tell those cases apart. If smart, hardworking people are getting it wrong so often, you can’t just say the task is easy.
If you want to try to put together a complete list of such rules, that’s certainly a project, and I may even contribute to it, but there’s not much point in expecting this to be a definitively completable task. We’re fitting a curve to the data provided by our values, which cannot be losslessly compressed.
This is, in total, a fair response. I am not sure I can say that you have changed my mind without more detail, and I’m not going to take down my original post (as long as there isn’t a better post to take its place) because I still think it’s directionally correct, but thank you for your words.