Has anyone had the opposite experience, where a rational realization has an immediate emotional impact? For example, as a child I was quite afraid of the dark and would have to switch lights off in a particular order to ensure I was never subjected to too much darkness. I vividly remember the exact moment I overcame this fear. I was in the bathroom at the sink, trying to avoid looking in the mirror because I had just watched a horror movie involving mirrors. It suddenly occurred to me that all my life I had been looking in the mirror without fear and that nothing had changed except my own disposition. This epiphany rushed through me. I suddenly realized that all such “supernatural” things were my own superstitions and not “out there” in the world. The world was concrete and could not change in inexplicable, “supernatural” ways (a concept which, for me, was almost completely associated with camera trickery in movies—i.e., if it was dark something might happen; if I looked away and looked back something might be there; etc.). I immediately lost my fear of the dark and it never returned. It could be, of course, that this loss of fear had been building over time, and that in this moment I merely managed to dissociate the rituals I had built around it, rather than the rational epiphany itself causing the loss of fear.
scientism
I think one of the useful things about being able to identify people by name is that you frequently end up entertaining a contrary opinion you would have otherwise dismissed because it’s held by someone you respect. Such events are probably more significant to personal development than discovering a gem in the rough you would have otherwise overlooked. Hiding karma probably has more utility than hiding names.
This is one of my favorite topics, more so than the usual topics of rationalism perhaps, and I’ve thought about it a lot. How can we best believe things we accept? The other day I was out running and the moon was large and visible in the daylight. I was looking up at it and thinking to myself, “If people really understood what the moon was, what the stars are, what Earth is, could they go on living the way they do? If I really, genuinely knew these were other places, could I go on living the way I do?” This is, perhaps, too romantic a view of things. But it illustrates my point: we really do accept very profound things without ever truly making them part of our person. It’s not just the absence of ghosts and other supernatural entities we have difficulty with but the presence of many phenomena outside our usual experience.
Paul Churchland’s early work, Scientific Realism and the Plasticity of Mind, has a great illustration of this. Churchland’s mentor was the American philosopher Wilfrid Sellars, who developed a distinction between the “scientific image” of the world and its common-sense “manifest image.” Churchland’s approach is to give the scientific image preeminence: he wants science to replace common sense. This larger project was the background to his more familiar eliminative materialism (which seeks to replace folk psychology with a peculiar connectionist account of the brain). While most of the work is quite technical, there are some excellent passages on how we could achieve this replacement. He discusses, for example, the way we still talk of the sun rising and setting, and uses a particular diagram to show how one can reorient oneself to really appreciate the fact that we’re a planet orbiting a star.
I’ve tried to post the diagram here:
http://s5.tinypic.com/zinh38.jpg (before we reorient ourselves)
http://s5.tinypic.com/2akfoll.jpg (after we reorient ourselves)
I don’t have the descriptive text at hand but for me what the diagrams illustrate is a particular approach to science that I think Eliezer shares, more or less, which is that we should try to incorporate science deeply into the way we inhabit the world. I suppose that, to me, is a major part of what rationality is or should be: How can we best live that which we have until now merely accepted as fact?
What about ghosts, the supernatural, and everything else of that kind they’ll encounter in movies, television, cartoons, etc.? The only difference with Santa is that most people grow out of it, whereas many adults continue to believe in ghosts; but that’s because Santa is a childhood-specific myth. If the same adults who think Santa is absurd were supposed to believe in Santa, I’m sure they’d have no problem rationalizing it.
I don’t think people have (ethical) value simply because they exist. I think they should have to do a lot more than that before I should have to care whether they live or die.
I wouldn’t personally object, no. This is happening every day and, like most people, I do nothing. The difference is I don’t think I’m supposed to be doing anything either. That isn’t to say we should live in a society without laws or moral strictures; you need a certain amount of protection for society to function at all. You can’t condone random violence. But this is a pragmatic rather than altruistic concern.
This shouldn’t be surprising. Medicine has a longer history than empirical science. For thousands of years it flourished without a second thought for outcome. Clearly whatever medicine is, socially speaking, it isn’t reliant on the effectiveness of its methods for its survival. The same is true of education. Schools and universities existed long before there was anything to teach. Whatever social role they may play, imparting skill is a recent development, and clearly not the most central concern.
What you describe is similar to my own position. I made a short note of it in the “closet survey” thread: I don’t think any life has inherent value. However, there’s another problem with morality I want to draw attention to, and that’s the idea that people could somehow straightforwardly accumulate value by increasing virtue or reducing vice.
I find utilitarian ethical notions such as “alleviating suffering”, “increasing happiness” and even “increasing rationality” incoherent. These aren’t things you can pour into a bucket. Pain and happiness are not cumulative. Experiencing 40 years of uninterrupted happiness will not lead to Nirvana and will most likely not be particularly different, at the end of the 40 years, from having experienced a mixed life. (The degree to which suffering does have a lasting effect is probably due to long-term consequences to health and behavior rather than the accumulation of the negative experiences themselves.)
To me, what is accumulated has to be something genuinely cumulative, which I believe can only be the gross empirical knowledge of human society as a whole (I think it can be argued that science is the only truly cumulative human activity; everything else is fad). Everybody who is contributing to the advancement of knowledge, whether directly or indirectly, has value. Their value would be a function of how important they are to a society focused on the pursuit of empirical knowledge. Everybody else has negative value. (I don’t believe it’s possible to be neutral; human beings require a lot of resources merely to exist.)
This is a very bizarre situation and difficult to think about, but I think there’s a chance I would press the button. My main issue is that children require some kind of protection because they’re our only source of valuable adults. Childhood is probably the worst time to torture people in terms of long-term side effects. But in terms of merely causing the experience of suffering (which I think is what you’re getting at) I think torture is value-neutral.
This is a slightly different matter from the one I initially posted about; I don’t think the experience of pain (or happiness) is cumulative. Consider a situation where I could choose to be tortured for a year to receive a reward. If you could strip this scenario of long-term side effects, which would probably require erasing my memory afterwards, then I would willingly undergo the torture for a reward. The reward would have to compensate for the loss of time, the discomfort and the impracticality of the scenario. If I really liked pie I’d probably be willing to undergo 5 minutes of torture without long-term side effects for pie. Actually, I’d probably be willing to do it purely out of curiosity.
Now, the child in question, assuming he or she has no value and comes from a community where he or she would not become a valuable adult, could not suffer long-term side effects that matter in these terms. He or she would surely be changed by the situation but, not being a value-contributor, could not be changed for the worse; any change would be value-neutral in terms of benefit to the cumulative wealth of society. (There is a possibility that the child would become a greater strain on society, and acquire greater negative value, but let’s put this aside and say there are no major long-term side effects of the torture, such as loss of function.)
A complication here is that the value I place on pie in your scenario is unlikely given how I determine value generally. As I said, I do not consider the experience of pain or pleasure cumulative, and consider them value-neutral in general. I would not place a high value on the consumption of pie. But let us say that my love of pie is part of my general need to stay healthy and happy in order to be a value-contributor. In this case, whether I push the button would be some function of the probability that the child might be a child of value, or from a community that produces adults of value, weighed against the value of pie to me as a value-contributor; so there’s a non-zero probability I would push the button.
I think empirical knowledge has intrinsic value. This is not because I’m a rationalist; I’m not a rationalist in the traditional sense (I don’t think norms of instrumental reasoning are basic). Empirical knowledge has intrinsic value because it’s cumulative. I consider this essentially an issue of identity—i.e., something that is cumulative is valuable. That’s my definition of value. It’s quite easy to show that something that is not cumulative has no value (most people agree that fads and repetitions are inherently without value) and that misattributions of value usually involve the misidentification of an endeavor as cumulative. It’s harder to demonstrate an identity between being cumulative and having value though. There’s also the issue of defining “cumulative” more clearly: collecting stones is cumulative in a sense, we can collect more and more stones, but is obviously not cumulative in the same way that science is. Science makes progress and this progress, I think, must sit apart from any supposed instrumental value—i.e., there’s a sense in which science is not like collecting stones that doesn’t involve reference to what science can do for us (doesn’t make reference to any outside source of value).
As I’ve argued elsewhere, it’s common to misidentify happiness and suffering as cumulative, and to then misattribute value to the alleviation of suffering or the promotion of happiness. This is a basic misconception of what a mental or emotional state is; if you have 100 happy people you can’t say you’ve accumulated a lot of happiness any more than you can say you’ve accumulated a lot of red if you have 100 red balls. What you have is 100 happy people and not 100 times the happiness of a single person. Likewise, a person cannot accumulate happiness over their lifetime; a very happy life doesn’t cause greater happiness at the nth instant than a moderately happy life. Emotional states are not cumulative and cannot (on my account) have value.
This is true of art too. There is, of course, a technological side of art that is cumulative: the development of perspective, of materials and pigments, and the development of photography and optics are all good examples in the visual arts. The development of music led to developments in acoustics. Art may be cumulative to a small degree: artists avoid copying other artists. This, in fact, is probably what drives creativity in art: niche creation. The artist wants to strike out on his own and find a place for himself in the art world. But I think it’s obvious this isn’t cumulative in the same way science is; it’s more like collecting stones than doing science. I do not, therefore, believe that art has any inherent value. (It may have indirect value by entertaining us, and thus creating an environment in which we can flourish in our cumulative endeavors, and by inspiring us. I think the latter is quite important. I think, for example, that fictive, fantastical and even erroneous concepts can be as important in inspiring us to real-world discovery as logical or rational concepts, perhaps more so, and this is one of the reasons I do not consider myself a traditional rationalist.)
I should note that I don’t believe I personally need to be involved in making scientific advances (although I have chosen to be). A person who is convinced of my ideas might take up political advocacy, or wealth creation with the goal of increasing the efficiency of others who are involved in the creation of knowledge, or might become an artist for the reasons I have given, or might just decide to become the best damn barista Starbucks has ever known. Knowledge creation requires an entire functioning, flourishing human society.
Can you offer any examples of generalists (and/or rationalists) who have produced significant insights besides Eliezer? When I look at history, I see subject specialists successfully branching out into new areas and making significant progress, whereas generalists/rationalists have failed to produce any significant work (look at philosophy).
I think philosophy is a good example. Philosophers are supposed to be more logical/rational than other people and were generalists until recently (many still are). They have also failed to produce a single significant piece of work on par with anything found in science. Now, some people might disagree with that assessment, but I suspect their counterexamples would be chiefly in specialist sub-disciplines: formal logic, for example. I think to the degree that there has been “good philosophy” it’s found under the model of specialists working within the kind of robust institutional framework Robin alludes to rather than individual theorists taking a global perspective (philosophy as martial arts). I can’t think of any systematizers I’d credit with discovering truth. I do not think Socrates, Plato, Aristotle and Descartes discovered any substantial truths (Descartes’s mathematical work aside), so we probably differ there. Regardless, I think there’s a good argument to be made that historically truth has come from robust institutions involving many specialists (such as science) rather than brilliant lone thinkers taking a global perspective.
There’s a huge difference between being considered historically important and having discovered substantial truth. The Bible is historically important. It helped lay the foundations of Western culture. This is hardly disputable. It does not, however, contain much in the way of truth. Nor do the works of Plato and Aristotle.
There’s probably a few in there. I won’t try to dispute them on a case by case basis. There are, on the other hand, literally thousands of specialists who have achieved more impressive feats in their fields than many of the people you cite. (I take straightforward exception to Chomsky who founded a school of linguistics that’s explicitly anti-empirical.)
It’s worth remembering that what we’re looking for is not just people who contributed to multiple fields but generalists/rationalists: people who took a “big picture” view. (I’m willing to set aside the matter of whether their specific achievements were related to their “big picture” view of things since it will probably just lead to argument without resolution.) Leibniz would definitely fall into that category, for example, but I’m not sure Newton would. He had interests outside of physics (religion/mysticism) but they weren’t really related to one another.
If personhood resides in brain structure then a brain-in-a-vat would be a person. Presumably its personhood would be postulated on the grounds of it having some sort of subjective experience. But that’s not an empirical fact so I don’t think personhood residing in brain structure can be classed as an empirical fact either.
It’s not an empirical fact that a brain-in-a-vat has subjective experience. It’s a thought experiment. Thought experiments don’t establish empirical facts.
I believe there are studies of crime that come to similar conclusions—i.e., criminality tends not to be profitable and it’s more about social networks. I think a lot of irrational behavior has similar explanations. We need to cast our net wider. Why do people become musicians? Why do they become artists and entertainers? In these cases there’s the complication of the audience but all of the activities are very strange indeed if you take a step back and look at them. Playing one of the many odd instruments available or putting paint to canvas are strange behaviors (putting aside talk of creativity, expression, etc, which offer little insight IMO and just serve to obscure what’s genuinely interesting about these activities). It’s all about niche building. A set of historically contingent social and technological factors have coalesced on the possibility of finding a place in society doing something as odd as playing the violin. Nobody just woke up one morning and thought “let’s blow into a hollowed out piece of wood” or “let’s get a group of people together and pretend to be other people while a larger group of people watch.” There’s a long, strange history to these things. The factors involved are super-personal.
Religion is another excellent example. Some people have managed to find a place in the world as celibate monks. It’s not a matter of personal irrationality but rather of a society that, through a sequence of strange and historically contingent machinations, has settled on a state where one can indeed “have a living” as a celibate monk. Given this, it’s little wonder we find people who choose to be celibate monks in our society; such a choice is not irrational on the personal scale on which most people live their lives. Terrorism is the same: we have terrorists because society, for whatever reason, has coalesced on a situation where one can find satisfaction through being a member of a terrorist organization. One can have one’s human needs satisfied, including social relationships, status and a sense of worth. Ideologies don’t physically exist; groups have ideologies. For an ideology to exist there must first be a set of people, a tightly knit social group, to espouse it. Much like religion, I doubt the content of the ideology matters much; the form of the ideology probably has more to do with how it fits the daily activities of group members than with being something outsiders can understand (as is probably the case with religion). The concepts probably form a social exchange for in-group cohesion and should be analyzed as such.
This, I think, is the correct level to study these things. Don’t look at the ideology; look at the actual material embodiment of that ideology, the group that espouses it, and ask yourself not “How do people believe this nonsense?” or “Why do people believe something so irrational?” but “How does this group of people sustain itself?” and “What role does this way of speaking and way of interpreting events play in sustaining in-group cohesion?”
I recall reading somewhere that one of the reasons Harvard has so much money is that the majority of donations it receives are earmarked for a narrow range of projects, and it receives more money than it can spend in those areas (while other areas remain underfunded). I can’t find the article, but maybe someone else remembers it. Regardless, I wouldn’t be so quick to assume that many people are funding universities without restrictions on how the funds must be used.
I think this is kind of a backwards way of looking at things. All scientists go through a period of apprenticeship, and very little of what it is to “do science” is written down. Textbooks contain descriptions of phenomena and experiments. There are protocols for performing common tasks. But there really isn’t an extensive literature on “how to be a scientist,” and I don’t see why we should expect it to be communicable anyway. Why should we be able to provide a casual description of what people do? I think this expectation relies on the fallacy that explicit language is a mere translation of some internal “mentalese.” Yet there’s no reason to expect that language can capture thought or even behavior on anything but a technical level (i.e., a level not immediately useful to communicating practice). And even if we could express thought and behavior appropriately, there’s no reason to expect us to be adept at turning verbal descriptions back into thought and behavior. If the cognitive and behavioral sciences do manage to inform pedagogy, I expect it will be in the form of providing better hands-on experiences and better apprenticeships rather than finding ways to express these ideas in textbooks.