I consider you to be basically agreeing with me on 90% of what I intended, and your disagreements on the other 10% to be the best written of any so far, and basically valid in all the places I’m not replying to them. I still have a few objections:
What if my highest value is getting a pretty girl with a country-sized dowry, while having not betrayed the Truth? … In short, no, Rationality absolutely can be about both Winning and about The Truth.
I agree the utility function isn’t up for grabs and that that is a coherent set of values to have, but I have a criticism I want to make that I feel I don’t have the right language for. Maybe you can help me. I want to call that utility function perverse: the kind of utility function that an entity is probably mistaken to imagine itself as having.
For any particular situation you might find yourself in, and for any particular sequence of actions you might take in that situation, there is a possible utility function you could be said to have such that the sequence of actions is the rational behaviour of a perfect omniscient utility maximiser. If nothing else, pick the exact sequence of events that will result, declare that your utility function is +100 for that sequence of events and 0 for anything else, and then declare yourself a supremely efficient rationalist.
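To spell that construction out in symbols (my notation, not anything from your comment): let $\omega^{*}$ be the exact history of events your actions will in fact produce, and define

$$u(\omega) = \begin{cases} 100 & \text{if } \omega = \omega^{*} \\ 0 & \text{otherwise.} \end{cases}$$

Whatever you were already going to do is then trivially an optimal policy under $u$, so the “perfect utility maximiser” label comes for free and tells you nothing.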
Actually doing that would be a mistake. It wouldn’t be making you any better. This is not a way to succeed at your goals; it’s a way to observe what you’re inclined to do anyway and paint the target around it. Your utility function (fake or otherwise) is supposed to describe stuff you actually want. Why would you want that in particular?
I think the stronger version of Rationality is the version that phrases it as being about getting the things you want, whatever those things might be. In that sense, if The Truth is merely a value, you should carefully segment it off in your brain from your practice of rationality: your rationality is about mirroring the mathematical structure best suited for obtaining goals, and whatever you value The Truth at above its normal instrumental value is something you buy where it’s cheapest, like all your other values. Mixing the two makes both worse: you pollute your concept of rational behaviour with a love of the truth (and so, for example, are biased towards imagining that other people who display rationality are probably honest, or that other people who display honesty are probably rational), and you damage your ability to pursue the truth by not putting it in the values category where it belongs, where it would lead you to try to buy more of it cheaply.
Of course, maybe you’re just the kind of guy who really loves mixing his value for The Truth in with his rationality into a weird soup. That’d explain your actions without making you a walking violation of any kind of mathematical law; it’d just be a really weird thing for you to innately want.
I am still trying to find a better way to phrase this argument such that someone might find it persuasive of something, because I don’t expect this phrasing to work.
I say and write things[3] because I consider those things to be true, relevant, and at least somewhat important. That by itself is very often (possibly usually) sufficient for a thing to be useful in a general sense (i.e., I think that the world is better for me having said it, which necessarily involves the world being better for the people in it). Whether the specific person to whom the thing is nominally or factually addressed will be better off as a result of what I said or wrote is not my concern in any way other than that.
I think I meant something subtly different than what you’ve taken that part to mean. I think you understand that, if other people noticed a pattern that everything you said was false, irrelevant, or unimportant, they would eventually stop bothering to listen when you talk, and this would mean you’d lose the ability to get other people to know things, which is a useful ability to have. This is basically my position! Whether the specific person you address is better off in each specific case isn’t material, because you aren’t trying to always make them better off; you’re just trying to avoid being seen as someone who predictably doesn’t make them better off. I agree that calculating the full expected consequences to every person of every thing you say isn’t necessary for this purpose.
No, this is a terrible idea. Do not do this. Act consequentialism does not work. … Look, this is going to sound fatuous, but there really isn’t any better general rule than this: you should only lie when doing so is the right thing to do.
I agree that Act Consequentialism doesn’t really work. I was trying to be a Rule Consequentialist instead when I wrote the above rule. I agree that that sounds fatuous, but I think the immediate feeling is pointing at a valid retort: you haven’t operationalized this position into a decision process that a person can actually do (or even pretend to do).
I took great effort to try to write down my policy as something explicit, in terms a person could actually try to follow (even though I am willing to admit it is not really correct, mostly because of finite-agent problems), because a person can’t be a real Rule Consequentialist without actually having a Rule. What is the rule for “Only lie when doing so is the right thing to do”? It sounds like an instruction to pass the act to my rightness calculator, but if I program that rule into my rightness calculator and then give it any input, it gets into an infinite loop. I have an Act Consequentialist rightness calculator as a backup, but if I pass the rule “only lie when doing so is the right thing to do” into that instead, I’m right back at doing act consequentialism.
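To make the infinite loop concrete, here is a toy sketch (my own illustration, with made-up names; not anyone’s actual decision procedure):

```python
def is_right(action, situation):
    """Toy 'rightness calculator' for the rule as literally stated."""
    if action == "lie":
        # The rule says: only lie when doing so is the right thing to do.
        # But checking whether lying is right means calling the rightness
        # calculator again -- this same function -- so the call never
        # bottoms out.
        return is_right(action, situation)
    return True  # stand-in for whatever handles everything other than lying


# is_right("lie", "any situation") recurses until Python raises
# RecursionError; that is the infinite loop described above.
```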
If you can write down a better rule for when to lie than what I’ve put above (one that is also better than “never”, or “only by coming up with galaxy-brained ways it technically isn’t lying”, or Eliezer’s meta-honesty idea, all of which I’ve read before), I’d consider you to have (possibly) won this issue, but that’s the real price of entry. It’s not enough to point out all the places where my rules don’t work; you have to produce rules that work better.
Yeah I think it’s an irrelevant tangent where we’re describing the same underlying process a bit differently, not really disagreeing.
I think I disagree with this framing. In my model of the sort of person who asks that, they’re sometimes selfish-but-honourable people who have noticed that telling the truth ends badly for them, and who will do it if it is an obligation but would prefer to help themselves otherwise; but they are just as often altruistic-and-honourable people who have noticed that telling the truth ends badly for everyone and are trying to convince themselves it’s okay to do the thing that will actually help. There are also selfish-but-cowardly people who just care whether they’ll be socially punished for lying, selfish-and-cruel people champing at the bit to punish someone else for it, and similar, but moral arguments don’t move them either way, so it doesn’t matter.
More strongly, I disagree because I think a lot of people have harmed themselves or their altruistic causes by failing to correctly determine where the line is, either lying when they shouldn’t or not lying when they should, and it is to the community’s shame that we haven’t been more help in illuminating how to tell those cases apart. If smart, hardworking people are getting it wrong so often, you can’t just say the task is easy.
This is, on the whole, a fair response. I am not sure I can say that you have changed my mind without more detail, and I’m not going to take down my original post (as long as there isn’t a better post to take its place) because I think it’s still directionally correct, but thank you for your words.