Upvote here if your answer is ‘yes’.
(And downvote the downvote post to neutralize karma.)
Qualia are physical phenomena.
Yes, qualia are physical. But what does physical mean??
Physical means ‘interacting with us in the simulation’.
To us, the simulated Jupiters are not physical—they do not exert a real gravitational force—because we are not there with them in the simulation. However, if you add a moon to your simulation, and simulate its motion towards the spheres, the simulated moon would experience the real, physical gravity of the spheres.
For a moment, my intuition argued that it isn’t ‘real’ gravity because the steps of the algorithm are so arbitrary—there are so many ways to model the motion of the moon towards the spheres, so why should any one chosen way be privileged as ‘real’? But then, think of it from the point of view of the moon. However the moon’s position is encoded, it must move toward the spheres, because this is hard-coded into the algorithm. From the point of view of the moon (and the spheres, incidentally) this path and this interaction is entirely immutable. This is what ‘real’, and what ‘physical’, feels like.
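To make the thought experiment concrete, here is a minimal sketch of the kind of simulation described: a “moon” falling toward two fixed spheres under Newtonian point-mass gravity, advanced with simple Euler steps. All names, masses, and positions are illustrative assumptions, not anything from the original comment.

```python
# Minimal sketch: a "moon" falling toward two fixed spheres under
# Newtonian gravity, integrated with simple Euler steps.
# All names and values are illustrative.

G = 1.0  # gravitational constant in arbitrary simulation units

# Fixed spheres: (x, y, mass)
spheres = [(-1.0, 0.0, 50.0), (1.0, 0.0, 50.0)]

def step(pos, vel, dt=0.001):
    """Advance the moon one Euler step under the spheres' gravity."""
    ax = ay = 0.0
    for sx, sy, m in spheres:
        dx, dy = sx - pos[0], sy - pos[1]
        r2 = dx * dx + dy * dy
        r = r2 ** 0.5
        a = G * m / r2           # acceleration magnitude toward this sphere
        ax += a * dx / r
        ay += a * dy / r
    vel = (vel[0] + ax * dt, vel[1] + ay * dt)
    pos = (pos[0] + vel[0] * dt, pos[1] + vel[1] * dt)
    return pos, vel

# The moon starts at rest above the spheres. However its position is
# encoded, each step moves it toward them: that motion is hard-coded.
pos, vel = (0.0, 5.0), (0.0, 0.0)
for _ in range(1000):
    pos, vel = step(pos, vel)
```

Many other integrators (and encodings of position) would do, which is exactly the point of the arbitrariness worry; but from inside the simulation, whichever rule is chosen, the moon’s approach toward the spheres is immutable.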
Belief structures do not necessarily have to be internally logically consistent. However, consistent systems are better, for the following reason: belief systems are used for deriving actions to take, and an inconsistent system can derive contradictory actions.
I have a working hypothesis that most evil (from otherwise well-intentioned people) comes from forcing a very complex, context-dependent moral system into one that is “consistent” (i.e., defined by necessarily overly simplified rules that are global rather than context-dependent) and then committing to that system even in doubtful cases since it seems better that it be consistent.
(There’s no problem with looking for consistent rules or wanting consistent rules; the problem is settling on a system too early and acting on inadequate rules.)
Eliezer has written that religion can be an ‘off-switch’ for intuitively knowing what is moral … religion is the common example of any ideology that a person can allow to trump their intuition in deciding how to act. My pet example: while I generally approve of the values of the religion I was brought up with, you can always find specific contexts (it’s not too difficult, actually) where its decided rules of implementation are entirely contrary to the values it is supposed to espouse.
By the way, this comment has had nothing to say about your friend’s comment. To relate to that, since I understand you were upset, my positive spin would be that (a) your friend’s belief about the relationship between ‘math’ and social justice is not strong evidence on the actual relationship (though regardless your emotional reaction is an indication that this is an area where you need to start gathering evidence, as you are doing with this post) and (b) if your friend thought about it more, or thought about it more in the way you do (Aumann’s theorem), I think they would agree that a consistent system would be “nicest”.
I observed an ugh field today: making sandwiches.
Around lunch time, I announced I was going out to pick up some lunch.
‘Why don’t you just make a sandwich?’
I thought about touching slices of deli meat (are they fresh enough? how would I know?) and placing them on the bread (how many slices? how to arrange them?), a quick flurry of negative associations all jostled together, and the definite outcome of the decision was, “No, I’ll just go pick something up.”
“How about if I make the sandwiches?”
A roast beef sandwich toasted in the oven with melted cheese? Delicious! “Yes, please.”
“You are so lazy!”
Lazy? I was willing to get in the car and drive somewhere and pick up food. Certainly making sandwiches would be less effort. But for whatever reasons, I’ve conditioned myself through rather irrational negative associations to avoid even thinking about making them.
Our oldest utility monster is eight years old. (Did you have this example specifically in mind? Seems to fit the description very well.)
I woke up this morning with a set of goals. After reading this post, my goals abruptly pivoted: I had a strong desire to compose a reply. I like this post and think it is an excellent and appropriate reply to Lionhearted’s (also a nice post), and would have liked to proffer some different perspectives. Realizing that this was an exciting but transient passion, I didn’t allow my goals to be updated and persisted in my previous plans. An hour or two into my morning’s work, I finally recalled the motivation behind my original goals and was grateful. It took some time, though, before I felt emotionally that I had chosen the right set of goals for my morning. Working through those transient periods of no-emotional-reward is tough. You need to have faith in the goal decisions of previous selves, but not too much.
I’ll share this anecdote, on the chance that it is relevant.
At a rate of about once every two years, I am jolted awake in a peculiar mental state in which I feel very convinced that I have discovered something profound, and all experience till then has been an illusion. The next morning I would feel normal and unable to recall what I was thinking. So I resolved to write down my thoughts the next time it happened in order to analyze the experience.
It happened again about 3 months ago. I rushed to my desk and began writing. To my astonishment, what my hand was writing (in this case, my dominant hand) was completely independent of what I was thinking. It looked like gibberish to me.
The next morning I inspected the sheet and found I had scribbled vague tautologies like “If A then A” and “Also B. Then A + B”. (That morning I also remembered what the “profound” realization was: it was that causality was perfectly bi-directional.) These experiences tend to happen when I am deeply involved in a math problem that is foreign.
Later edit I wrote this comment in response to the parent by abigailgem, having not yet read Yvain’s post. I just now read the post and find that my anecdote fits Dr. Ramachandran’s model in a couple ways:
The hypothesis that my right brain is “turning on” to revise models is consistent with the fact that these experiences occur when I am working on a new math problem. Perhaps at night my right brain is sifting through hypotheses, and then my brain (which isn’t very discriminating while asleep) wakes me up because it thinks it’s discovered a much better model for my whole life.
It is consistent that in the morning I have no recollection of what I was thinking.
Obviously, my left brain is working here, trying to fit the data into the theory. I suppose I should symmetrically consider what does not fit.
But I only found another thing that did fit:
When I tried to write down what I was thinking, I was unable to do so. This is consistent with my right brain being unable to communicate. When I instructed my hand to write, my left brain took over the task, but, without any context, just babbled some harmless tautologies.
So, for my own use, I add to the theory (for which I have some evidence) that:
I personally am unable to identify counter-evidence to things. I can only generate reasons why something would fit, can only confabulate, so I would do better comparing two different models than evaluating one. I’ve suspected this for a while anyway. The only exception is if I can find a logical inconsistency, which is why I have only ever trusted my reasoning in a mathematical context.
The left brain is just a logical computer, based on my right hand scribbles (and the banana observation, below), and the right brain is what generates new but indiscriminately crazy ideas.
At this point, I can accurately be accused of babbling, but this is the single moment where I have learned the most on Less Wrong.
The idea that my everyday reasoning and interactions afford only logical inference, and that I am unable to decrease confidence in assumptions unless there is a logical inconsistency, is extremely powerful. It explains why people rarely update their ideas, even in the face of contradicting evidence, and why upon coming to Less Wrong I felt convinced that I need only ascertain the consistency of a model. I felt (and still feel) that if belief in God is consistent, then there is no reason to update it. I suppose my left brain could suggest at any moment there is no God, and provide an alternate explanation for what God is currently explaining, but presumably it would need a reason to do so? Since theism is off-topic in this post, I’ve transplanted this question to the open forum here.
When you think about it, the period when parenthood conflicts directly with work is a very small proportion of your working life, unless you have lots of kids.
What’s disappointing to me about this overlapping proportion of your life is that the kids are small (and most demanding) exactly over the same period of time when your work is most demanding (when you’re trying to get tenure). I’m disappointed because I’m not the best parent or the best scientist that I could have been if they were staggered even by just 5 years.
At the moment, I feel more critical of the tenure system and—to be honest—am jealous that I am juggling parenting and trying to get tenure while my single colleagues have potentially an extra 20 hours a week to work on their research. While I know that having children is a choice that I made, the biology is such that I should have kids now … and the tenure system, which requires your most productive work in your thirties, is not sympathetic to this biological fact.
I only recently began feeling dissatisfied. Until recently, I instead felt somewhat guilty and greedy about trying to have it ‘all’—a family and a career. This is because I see that many women in academia chose not to have children. But lately, my self-esteem has been more vigorous and I feel that choosing between a family and a career is not a sensible choice for society to insist upon.
I also recently read the following sentence in Psychology Today, which catalyzed my stance:
Americans tend to blame their struggles to balance family and career on themselves [instead of the lack of social institutions and support], and feel like independent failures. (paraphrased from this article)
Incidentally, I went to a similar panel of female scientists about 10 years ago and I felt they were overly negative about balancing the demands of small children and research. I’m glad that your panel was more supportive. The balancing act makes me grouchy sometimes, but I think it’s OK. For psychological support, I rely a lot on my female colleagues that did have children as role models. (I do not have the psychological makeup to have been a pioneer with this, so I am grateful to them.)
In some cultures, like that of my mother’s, it is extremely rude to press a person to capitulation. It is expected that people should parry in such a way that neither person loses face. In such contexts, talking in circles, softening the argument and changing the line of the argument—by either party—can be signs that one person has already conceded. It’s not only polite to save the face of the person ‘losing’ the argument, it is polite to spare the ‘winner’ from the embarrassment of causing any loss of face. So much so that if someone ever abruptly concedes an argument in a face-to-face encounter, I assume that they belong to this culture, and I will rewind the argument to see how I offended them—usually by pressing my argument too hard or too directly.
My father, on the other hand, thought that a touch-down dance must be done on the corpse of every argument, to make sure that it is never resurrected. To not do so would weaken the argument. And I think this is a common American view—that if you are difficult to throw down and hold down, then your opponent’s argument needs to be stronger.
The member of your e-mail list had a third view, which I think is defensible in its contrast to these two extremes.
Eliezer uses the word ‘vulnerability’. I think this is close to what they are trying to signal, which is ‘harmless’. It is a good strategy to have a very disciplined dress code, and build a brand as having a ‘dorky’, squeaky-clean manner, so that people feel comfortable allowing the missionaries in their home. In my home town anyway, they went door to door and I had no qualms about inviting them in, knowing that any odd behavior would be newsworthy, and quickly become widely known, exactly because the branding is so strong.
No matter how hard I try and kick it, there it is: Honesty.
Where in particular do you perceive the lie in the situation you described? Would her donation not have sponsored the child? Would the donation not have made her feel better? Or is that you do not believe she should feel better for sponsoring a child; that in your mind it would be dishonest for her to displace her current grief with this balm?
Of course if the donation would not have sponsored the child, then it would be dishonest to claim that it would.
I can also imagine the donation not actually making her feel better. It would be possible to simply overwhelm her (e.g., intimidate her with a whirlwind of emotional stimuli) into doing something she didn’t want to do. This is in the direction of being mean/bullyish. People can push boundaries and this is being pushy...but we don’t often describe it explicitly, perhaps because it requires such high emotional intelligence to identify and name it.
Finally, the third case is that you don’t believe sponsoring children is a feel-good thing. Which would be a strong indication that that wasn’t the right job for you, but wouldn’t mean that sponsoring a child wasn’t the right thing for her.
Me too:
I would bet against Many Worlds. I am not a consequentialist. I am not really interested in cryonics. I think the flavor of decision theory practiced here is just cool math without foreseeable applications. I give very low probability to FOOM. I think FAI as a goal is unfeasible, for more than one reason.
I used to be very active on Less Wrong, posting one or two comments every day, and a large fraction of my comments (especially at first) expressed disagreement with the consensus. I very much enjoyed the training in arguing more effectively (I wanted to learn to be more comfortable with confrontation) and I even more enjoyed assimilating the new ideas and perspectives of Less Wrong that I came to agree with.
But after a long while (about two years), I got really, really bored. I visit from time to time just to confirm that, yes, indeed, there is nothing of interest for me here. Well, I’m sure that’s no big deal: people have different interests and they are free to come and go.
This is the first post that has interested me in a while, because it gives me a reason to analyze why I find Less Wrong so boring. I would consider myself the type of “reasonable contrarian” the author of this post seems to be looking for—I am motivated to argue if I disagree, and have the correct attitude in that I’m quite willing to think counter-arguments through and change my position when persuaded. If only, alas, I disagreed about anything.
On all the topics that I used to enjoy being contrary about, I’ve either been assimilated into Less Wrong (for example, I’m no longer a theist) or I have identified that either (a) the reason for the difference in opinion was a difference in values or (b) the argument in question had no immediate material meaning, and, so arguing about either was completely pointless. My disinterest in cryonics is an example of (a), and belief or disbelief in many worlds is an example of (b).
I do wish Less Wrong was more interesting, because I used to enjoy spending time here. I realize this is a completely self-centered perspective, because presumably many do continue to find Less Wrong entertaining. But I want to learn things, and be challenged and stretched as much as possible, and now that I’m already atheist that challenge isn’t there. I’d like to understand how the “world works” and now that I’ve got materialism under my belt, what’s next? I wish Less Wrong would try to tackle taboo topics like politics, because this is an area where I observe I’m completely clueless. On the other hand, I also understand that these questions are probably just too difficult to tackle, and such a conversation would have a large probability of being fruitless.
Still, I agree with prase, currently the top comment, that Less Wrong topics tend to be too narrow. My secondary criticism would be that for me (just my opinion) the posts are kind of bland. Maybe people are too reasonable (!?), but there doesn’t seem to be anything to argue with.
I would recommend a science encyclopedia, a single but large book with approximately 1-2 pages on a huge variety of topics. The reason I recommend this is because a person can develop a relationship with a hard copy book they can’t develop with an internet encyclopedia (my daughter’s favorite page is the one on the sun, and she can rattle off, ‘a ball of burning hot gases...’ from memory) and one can flip through the pages looking for something that looks interesting to them—this is self-guided education at its best.
Another advantage of the encyclopedia is that it is much more likely you will read about a topic you wouldn’t have guessed you were interested in, due to a particularly catching photograph (for example, about spiders), and it feels far safer to surreptitiously or casually look up topics one might be uncomfortable about—that is, without making a very strong commitment that that is something you want to read about. While looking up certain topics online requires a definitive decision (you don’t accidentally end up at a site about the onset of puberty) and is (unfortunately) likely to encourage Google to give discomfiting or indiscreet ads, turning pages in your own encyclopedia is entirely innocent. (It’s your book, after all.)
My daughter’s book has a page on religion (and it’s on the surface a perfectly reasonable and inoffensive description, but I expect it will inevitably sink in that each culture and time in history has its own religion...) and while our particular encyclopedia doesn’t have a page on evolution, there are plenty of good pages on biology and the different kingdoms. I feel that each page is interesting enough that as my daughter spends time with her encyclopedia, she would be developing a fairly broad—and occasionally detailed—education in science.
I think the question is: why do you really need to get there?
The first year I spent time reading Less Wrong, I had to deliberately pull back and carefully moderate my time on Less Wrong because I saw the signs that it was affecting my mental stability. A large component of this was the new ideas; another was culture shock; and another large component was getting used to the strange social interaction. The drawn-out timescale and the feel of an anonymous, infinite audience in comment threads are quite different from anything I’d been used to.
When I first started writing comments, I wanted to train myself to speak more bravely, but I actually grew more sensitive before growing more brave. Now, probably a good 2-3 years later, my interaction with Less Wrong feels more or less ‘normal’ and the probability of instability is much lower. I got over my culture shock …
I hope he is eventually encouraged to go into medical research, rather than physics. (Not because it’s so terrible to be wrong in a theory about the Big Bang, but because who cares and we need the smartest people working on life extension, please.)
Help from LW readers is welcome.
I’ll chime in that Eliezer provided me with the single, most personally powerful argument that I have against religion. (I’m not as convinced by razor and low-prior arguments, perhaps because I don’t understand them.)
The argument not only pummels religion, it identifies it: religion is the pattern matching that results when you feel around for the best (most satisfying) answer. To paraphrase Eliezer’s argument (if someone knows the post, I’ll link to it; there’s at least this): while you’re in the process of inventing things, there’s nothing preventing you from making your theory as grand as you want. Once you have your maybe-they’re-believing-this-because-that-would-be-a-cool-thing-to-believe lenses on, it all seems very transparent. Especially the vigorous head-nodding in the congregation.
I don’t have so much against pattern matching. I think it has its uses, and religion provides many of them (to feel connected and integrated and purposeful, etc). But it’s an absurd means of epistemology. I think it’s amazing that religions go from ‘whoever made us must love us and want us to love the world’—which is a very natural pattern for humans to match—to this great detailed web of fabrication. In my opinion, the religions hang themselves with the details. We might speculate about what our creator would be like, but religions make up way too much stuff in way too much detail and then make it dogma. (I already knew the details were wrong, but I learned to recognize the made-up details as the symptom of lacking epistemology to begin with.)
Now that I recognize this pattern (the pattern of finding patterns that feel right, but which have no reason to be true) I see it other places too. It seems pattern matching will occur wherever there is a vacuum of the scientific method. Whenever we don’t know, we guess. I think it takes a lot of discipline to not feel compelled by guesses that resonate with your brain. (It seems it would help if your brain was wired a little differently so that the pattern didn’t resonate as well—but this is just a theory that sounds good.)
I’m not thrilled about the societal emphasis on gifts to make children happy, but otherwise, as a parent of two young kids, I am grateful for Santa Claus.
Santa Claus is a perfect, uncomplicated person that also loves your children. I think it’s a good thing to give children the impression that love for them extends beyond the family; someone out there with power, resources and magic also loves them, personally. The gift they get from Santa is the ‘evidence’ of this love and this gift is usually the best gift they receive—more carefully chosen and grander than even the gifts their parents give them. I view the phenomenon of Santa Claus as an outlet for society to express its views and hopes about generosity. Santa Claus is a model of what it means to be generous, and we all feel more generous when we channel his personality to pretend that he is real. Possibly, we teach our kids to be generous for later. (Sometimes people are generous when they’ve been generously treated, and sometimes they feel entitled instead; I don’t know why.)
I don’t see it as a deception about whether Santa Claus ‘exists’—for the first time on the other side of the conspiracy, I’m amazed by how extensively society supports the realization of Santa Claus. A culture that does this, especially at such a grand scale, really does want him to exist to love the children. The collusion at all levels, and especially the way parents reserve the best present to be from Santa, shows that Santa Claus is fitting some set of societal and parental needs. I’m sure a well-researched, thought-out social science essay could write a lot of things that I am only half-aware of, but without fully understanding why, I personally feel that ‘Santa Claus’ is one of the most spectacular ways that society provides support to parents.
The deception of Santa Claus isn’t that he isn’t a real man. (He’s more real than I ever thought he was.) The myth is that all children are loved and cared for. In my opinion, the more disillusioned my kids feel when they find out about that myth, the better. As a society, we think ‘Santa Claus’ should visit every child (which is why there are toy drives) and we hope that every parent wants special and good moments for their children. If you’re a parent that doesn’t go along with Santa Claus because you don’t want to lie to them, then you are caring about their well-being. The bogey-men here, if they exist, are parents that couldn’t be bothered to make a special time for their kids.
(By the way, this comment was indubitably strongly influenced by this thread.)
I don’t know if the things that bother this feminist would also bother me, but I’ve been reading Less Wrong for several years and I’ll say that with some delicate issues, Less Wrong is like a bull in a china shop. In some investigations, it’s like trying to determine if there is life on a planet by bombing it. I just avoid these topics entirely.