“The monster will get me if I make a mistake” can be a deep, concrete belief. One looks at it rationally and thinks, “that is ridiculous”, but getting rid of it can be hard work.
abigailgem
The reasons for punishment are deterrence, retribution, rehabilitation and prevention. Criminal law balances these. English Law and Scots Law do not award punitive damages in civil actions, and it is hard to see why a Claimant should receive money which is more than his/her financial loss, in order to punish the Respondent. Should not that money go to the State?
Should punishment be allocated “rationally”? Perhaps, but I think human reactions to a wrongful act should be part of what is rationally assessed.
I do not have rational control of my feelings of anger. I can attempt to soothe my own feelings, or suppress and deny them.
If I dwell on an incident with the intention of making myself more angry about it, this seems to me to damage my own emotional responses.
A friend of mine recommends writing with the non-dominant hand to access alternative brain functions. I have done this, and found myself disagreeing with myself.
In addition, there are issues where it is not possible to be rational. In choosing goals, one cannot always be rational: the emotional response decides the goal. One can be rational in choosing ways of achieving that goal, or in making the map fit the territory.
EDIT: As I have been voted down, I will provide an example. I am transsexual. I decided it was “rational” to attempt to live as a man, and arguably it is: and yet I could not, and the most important thing for me was to change my presentation. I cannot assess that goal “rationally”: it means I cannot reproduce, it makes it more likely for me to be seen as a weirdo, it has been terribly difficult to achieve. And yet it was the most important thing in my life.
I think he meant it is Not rational to do something which will observably make you lose, in Newcomb’s problem. The process the two-boxer goes through is not “rational”; it is simply a process which ignores the evidence that one-boxers get more money. That process ignored evidence, and so is irrational. It looked at the map, not the territory: on the map an Omega which can predict what you will do is impossible, yet here there really is one. From the map, that is impossible, so we Two-box. From the territory, that is what is, so we One-box.
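The arithmetic behind “one-boxers get more money” can be made explicit. Here is a minimal sketch of my own framing, assuming the standard $1,000 / $1,000,000 payoffs and treating Omega as a predictor with some accuracy p:

```python
# Expected payoffs in Newcomb's problem, as a function of Omega's
# predictive accuracy p. Box A always contains $1,000; box B contains
# $1,000,000 only if Omega predicted the player would take box B alone.
def expected_payoff(one_box: bool, p: float) -> float:
    if one_box:
        # With probability p, Omega correctly foresaw one-boxing,
        # so box B contains the million.
        return p * 1_000_000
    # Two-boxing: box A is guaranteed; box B contains the million
    # only in the (1 - p) cases where Omega mispredicted.
    return 1_000 + (1 - p) * 1_000_000

for p in (0.5, 0.9, 0.99):
    print(f"p={p}: one-box {expected_payoff(True, p):,.0f}, "
          f"two-box {expected_payoff(False, p):,.0f}")
```

On these assumptions, one-boxing has the higher expected payoff whenever p exceeds about 0.5005; for a near-perfect predictor the comparison is roughly $990,000 against $11,000.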
I do not base my life on the fiction of Newcomb’s problem, but I do take lessons from it. Not the lesson that an amazingly powerful creature is going to offer me a million dollars, but the lesson that it is possible to try and fail to be rational, by missing a step, or that I may jump too soon to the conclusion that something is “impossible”, or that trying hard to learn more rationality tricks will profit me, even if not as much as that million dollars.
It might be useful to teach the Santa Claus myth in order to teach fantasising. It is necessary to know the difference between reality and fantasy, but fantasy is where one can explore how one might Like the world to be, and then begin to plan a way towards it; and fantasy can lead into lateral thinking.
What can be done about it? We can fight.
The Master can argue for Creationism, and try to defeat the pupil’s refutation of it. We can argue for or against One-boxing on Newcomb’s problem. Or pretend to be the AI arguing that the Gatekeeper should free it. The Master is only Master for as long as s/he is undefeated.
I am not sure I can be rational about this at all, because I find suicide repulsive. Yet my society admires the bravery of a soldier who, say, throws himself on a grenade so that it will not kill the others in his dugout. I might see a tincture of dishonesty in the man’s actions, and yet he enters a contract, with a free contracting party, and performs his part of the contract.
So. Something to practice Rationality on. To consider the value of an emotional response. Thank you. I am afraid I still have the emotional response that it is shameful; I cannot, now, see it as admirable.
Eliezer: “Similarly, if you find yourself saying ‘The rational thing to do is X, but the right thing to do is Y’ then you are almost certainly using one of the words ‘rational’ or ‘right’ in a way that a huge chunk of readers won’t agree with. In this case—or in any other case where controversy threatens—you should substitute more specific language: ‘The self-benefiting thing to do is to run away, but I hope I would at least try to drag the girl off the railroad tracks.’”
Yes. Rational does not equal “sensible” or “putting self first”.
So can we be rational in arguing about morality? If I decide that human life has value, I can argue from that prior, rationally, that it is Right to try and drag the girl off the railroad tracks.
I believe that human life has value, even though that is not a completely rigorous, defined statement of my belief about human life. I doubt I have the words to fully express my beliefs about the value of human life.
It is possible that I generalise “human life has value” from my own selfish needs: I do not like being alone for too long, and I would have to adjust and learn a great deal before I could survive without Society.
So I believe that for me to believe “human life has value” is Right, or at least permissible, but not necessarily Rational (epistemic or instrumental) in itself, though I can take it as an axiom, and argue rationally based upon it.
Or if my belief that “human life has value” derives rationally from “I will base my values on my own selfish needs” which derives from “I want to survive”: in “I want to survive” there is a Want, which is not derived rationally from anything.
A psychic medium.
My colleague, let’s call her Sally, tells me she is a psychic medium. She tells me she first spoke to a dead person when she was three: she was talking to a woman on the stairs, and when she went to tell her mother about it, her mother was concerned. Now, she tends not to see people; she realises they are not physically present in the way that a living person is present, but she senses them.
She reports three ways in which the Dead communicate. Normally, it is as if she hears them speaking, and relays the message to the living. At her meetings she will give a reading for about fifteen minutes, and it is as if the dead person speaks alongside her: control of what is said is shared equally between her and the dead person. She tends not to do Trance mediumship, where the dead person takes over her body and speaks through her mouth, but has experience of it.
What am I to do with this information she gives me? There is a non-trivial possibility that she is a conscious shyster, a charlatan, a fraud, but that is not my experience of her in my working life. She tells me her beliefs have ruined one marriage. I do not consider it likely that she is deliberately lying.
She tells me that there are false mediums, and she hates them, because they bring the calling into disrepute. She can tell someone is a false medium because what they say is so non-specific. I intend to go to one of her meetings, because I am interested enough in the phenomenon, though I doubt I will be converted to believing she talks to the Dead.
I consider it a very small possibility that she is talking to the spirits of the dead. It is slightly more possible that she is inspired by some sort of Jungian “collective unconscious”. I think it most likely that she is unconsciously using the same cold reading techniques that a debunker of “psychics” such as Derren Brown or James Randi uses consciously. However, her experience of the phenomenon is such that she believes herself inspired.
I have written verse, like most people. Sometimes it comes so easily, one could almost believe in a Muse of poetry, as if something external was moving one to hear and write the words. You have probably heard of a dream of a snake eating its tail, leading to theorising about benzene rings. I think she has a genuine human experience, which she falsely ascribes to the words of the Dead.
I had vaguely thought of doing this as a post, but an open thread comment may be a better place for it.
Try to wean yourself off the need for warm fuzzies instead.
EDIT: No, don’t try to wean yourself off the warm fuzzies, but get the warm fuzzies from friends and family, not from people in distress in need of charity. Feel good about yourself because you are achieving your goals, including altruistic ones. (end of edit)
Carl Rogers, founder of person-centred counselling, theorised that there is an “organismic self”, with all the attributes and abilities of the human organism within its own skin, and a “self-concept” built up from what the individual saw it was desirable to be. The conscious part of the human being builds up a map of that human being’s unconscious motivations and desires. Part of this map is mere falsehood, lies told to make the person feel better about himself, because he has introjected the idea that this is the way he ought to be. Disparity between the map and the territory causes cognitive dissonance, and may create the need for warm fuzzies: cognitive dissonance is painful, and pretending to be your own self-concept gives a warm fuzzy.
If you can make your self-concept, your map of yourself, match your organismic self, the actual territory which you may be strongly motivated to deny, then your need for warm fuzzies may reduce.
You will be more efficient if instead of buying warm fuzzies, you spend energy on utilons or signaling.
I am strongly motivated to altruism. I have decided to stop asking whether this is selfish or not. Yes, it is selfish, it fulfils My goals. No, it is not selfish, it fulfils the goals of others too. Is it “good” or “bad”? Don’t know that either. I have decided that does not matter. It is what I want, perhaps merely for signalling purposes.
As Michael’s comment has been upvoted, I will respond. I have deluded myself a great deal, and decided some years ago to try to ferret out the lies I tell myself, and the motivation for these.
The main motivation was, “I lie to myself because I want to see myself as a Good person”.
In May 2008 I decided, “I am a human being”. I have the value of a human being. One among seven billion of us; but one evolved over four billion years, fitting beautifully into my environment, fitting into society with the attributes needed to live in society. Or some of the attributes. Or attributes needed to live in society in one way. Or something like that.
I am Good Enough.
So I want to stop morally judging myself. I am good enough. Does akrasia make me Bad? Am I not fulfilling my obligations to others? Am I Good? I have a neurotic flaw of taking such things too seriously, which makes me withdraw from action rather than taking the action I need to take.
Also, I am seeking to develop skills which reduce the effect of akrasia, build better and deeper relationships, and achieve goals. Life is Difficult. I have decided to stop beating myself up because I am not perfect at it.
I come at the problem with certain disordered personality traits.
Newport, South Wales (Casnewydd, De Cymru). Rarely in London, willing to travel to Bristol or Swansea.
I think you need evidence about what effect non-tug of war voting has.
Suppose I support the free ownership of weapons, but think a seven day waiting period is better than none.
If I vote for that waiting period, am I demoralising my fellow gun supporters, and invigorating the gun control types, who will therefore struggle harder for more restrictions? Or invigorating my side, which will make sure it does not get defeated next time? Too little evidence to make a prediction.
Or what if I say, well, seven days is OK, but if they win this the gun control types will then demand gun licencing, involving gun holders needing annual psychiatrist’s reports. So I have to tug against seven days, in case something worse comes along.
I would vote for the policy I supported. This has little enough effect on whether that policy gets made into law. I would think the effect on future changes is more negligible.
As a British citizen, I have never been eligible to vote in a referendum. It seems that American propositions are much more common.
Less Wrong SF quote: “The right to bear weapons is the right to be free”- The Weapon Shops of Isher.
Suggestion: “Rationalists seek to Win, not to be rational”.
Suggestion: “If what you think is rational appears less likely to Win than what you think is irrational, then you need to reassess probabilities and your understanding of what is rational and what is irrational”.
Suggestion: “It is not rational to do anything other than the thing which has the best chance of winning”.
If I have a choice between what I define as the “Rational” course of action, and a course of action which I describe as “irrational” but which I predict has a better chance of winning, I am either predicting badly or wrongly defining what is Rational.
I am not sure my suggestions are Better, but I am groping towards understanding and hope my gropings help.
EDIT: and the warning is that we may deceive ourselves into thinking that we are being rational, when we are missing something, using the wrong map, arguing fallaciously. So what about:
Suggestion: “If you are not Winning, consider whether you are really being rational”.
“If you are not Winning more than people you believe to be irrational, this may be evidence that you are not really being rational”.
On a different tack, “Rationalists win wherever rationality is an aid to winning”. I am not going to win millions on the Lottery, because I do not play it.
Yes, but...
Of course there is random noise and there are different starting points, but there is also some evidence of whether one is really rational. It is a question of epistemic rationality which Wins should accrue to Rational people, and which wins (e.g., parentage, the lottery) do not.
James, when you say, “be rational”, I think this shows a misunderstanding.
It may be really important to impress people with a certain kind of reckless courage. Then it is Rational to play chicken as bravely as you can. This Wins in the sense of being better than the alternative open to you.
Normally, I do not want to take the risk of being knocked down by a car. In that case it is not rational to play chicken, because not playing achieves what I want.
I do not see why a rationalist should be less courageous, less able to estimate distances and speeds, and so less likely to win at Chicken.
I find Newcomb’s problem interesting. Omega predicts accurately. This is impossible in my experience. We are not discussing a problem any of us is likely to face. However I still find discussing counter-factuals interesting.
To make Newcomb’s problem more concrete we need a workable model of Omega.
I do not think that is the case. Whether Omega predicts by time travel, mind-reading, or even removes money from the box by teleportation when it observes the subject taking two boxes is a separate discussion, considering laws of physics, SF, whatever. This might be quite fun, but is wholly separate from discussing Newcomb’s problem itself.
I think an ability to discuss a counter-factual without having some way of relating it to Reality is a useful skill. Playing around with the problem, I think, has increased my understanding of the real World. Then the “need” to explain how a real Omega might do what Omega is described as being able to do just gets in the way.
“I have the potential to be the sort of person who continues even in the face of adversity”, or “it is more in my interests to pass up that cookie”, or “I really do have a choice whether or not to pass up that cookie”. That is what I would recommend.
bill, below, has mentioned “Act as if”: “I choose to Act as If I can continue even in the face of adversity, and I intend in this precise moment to continue acting, even if I may just fall down again in two minutes’ time”.
These have the advantages of being more likely to be true.
Rambling on a little, to be the sort of person who continues in the face of adversity is Difficult, and requires practice, and that practice is very worthwhile. Stating that it is True might make you fail to do the practice, and instead beat yourself up when it appears not to be true.