Academic website: https://www.andrew.cmu.edu/user/coesterh/
Caspar Oesterheld
On the contrary, I always liked sports as a counterbalance to mental work. Also, physical activity can be healthy and, as they say, mens sana in corpore sano ("a healthy mind in a healthy body")… (For example, running is said to be healthy for the brain, and I found it very useful for forcing my brain to pause. One problem is that in the long term running wears down your knees, so after two marathons I will do less long-distance running in the future.)
This has some tradition: for example, Alan Turing (maybe the greatest computer scientist of all time?) and Bobby Fischer (maybe the best chess player of all time?) did a lot of sports, and Fischer said that he wanted to keep in shape for chess, explaining that one needs a lot of stamina to play for four to five hours.
So, I think rationalist sports should support the more important mental activities, e.g. by improving health.
Sure. However, most LWers put a high value on rationality, and since our brain has some impact on our rationality (any dualists around?), our body has some impact on our brain, and physical activity has some (positive) impact on our body, it seems rational to me to engage in some physical activity rather than "wasting your intelligence" on playing chess (which I also did a lot).
I actually did not want to go too deep into discussing specific sports and wait for another 24 hours, but...
I never had actual knee problems myself: I'm not heavy, nor do I run that much at all (100 miles/week for 6 years is extremely impressive!); I also eat healthily and think my technique is okay. But I am very young. My grandfather, who has done a lot of sports his whole life (to my knowledge he still rides his bicycle for 50 miles a day or something at age 80), had some knee problems and therefore switched from relatively serious marathon running (best time ~2:40) to swimming and cycling. Of course, these are just anecdotes that do not prove anything. I would be very interested in the current state of research on the matter.
For me the most important argument against long-distance running is that it seems to conflict with general fitness. After running my second marathon I pretty much sucked at everything else, even riding a bicycle...
Also, long-distance running takes a lot of time to practice, so I have now switched to interval training (less than daily), supplemented by weight training.
Of course, motivation is an important issue in choosing a sport. If you start running, it might be boring and not very satisfying, so it is hard to practice regularly.
But I think that from a huge list of sports, a lot of them can be discarded for being too risky (maybe soccer or mixed martial arts?), having no physical/mental health benefits (maybe most e-sports?), etc. So I do not think that "Whatever you can get yourself to do regularly" provides a sufficient condition for finding out whether a sport is rational, even though it is definitely a necessary condition.
Thank you; apparently my question mark and 'maybe' were very appropriate. ;-)
The positive effects of chess may be higher, but I presume that the average rationalist or LWer practices 8 hours of abstract reasoning a day simply by doing their job. Consider Bobby Fischer: he probably practiced at least 10 hours a day, so maybe yet another hour of chess did not have an impact as positive as an hour of tennis, swimming, etc. At least, he did not think so.
The situation is of course very different if you are a professional athlete. Then some hours of chess in your free time are (probably) a better way to train your brain, but so would be reading a book about AI, rationality, etc.
All I am saying is that the amount of time during which you can improve your mental abilities by thinking about hard problems is limited, and above a certain threshold (maybe 8 hours a day, maybe a lot more or less depending on the kind of activity, the specific person, etc.) it might be better to do something else, like sleep, go for a walk, listen to music, or engage in some physical activity.
Here is some further evidence that physical activity might have a positive impact on your brain. (I have neither the time nor the competence to evaluate the quality of these papers; also, I hope they are accessible from outside a university network.)
Cotman, Carl W.; Engesser-Cesar, Christie: Exercise Enhances and Protects Brain Function. http://journals.lww.com/acsm-essr/Abstract/2002/04000/Exercise_Enhances_and_Protects_Brain_Function.6.aspx
Cotman, Carl W.; Berchtold, Nicole C.; Christie, Lori-Ann: Exercise builds brain health: key roles of growth factor cascades and inflammation. http://www.sciencedirect.com/science/article/pii/S0166223607001786
Colcombe, Stanley J.; Erickson, Kirk I.; Raz, Naftali; Webb, Andrew G.; Cohen, Neal J.; McAuley, Edward; Kramer, Arthur F.: Aerobic Fitness Reduces Brain Tissue Loss in Aging Humans. http://biomedgerontology.oxfordjournals.org/content/58/2/M176.short
Google Scholar finds thousands of such articles.
For practical purposes, I agree that it does not help a lot to talk about utility functions. As the "We Don't Have a Utility Function" article points out, we simply do not know our utility functions but only vague terminal values. However, as you pointed out yourself, that does not mean that we do not "have" a utility function at all.
The soft (and hard) failure seems to be a tempting but unnecessary case of pseudo-rationalization. Still, the concept of an agent "having" (maybe in the sense of "acting in a complex way towards optimizing") a utility function seems to be very important for defining utilitarian (hence the name, I guess...) ethical systems. In contrast, the notion of terminal values seems to be a lot more vague and not sufficient for defining utilitarianism. Similar things (practical uselessness but theoretical importance) apply to the evaluation of the intelligence of an agent. Therefore, I think that the term 'utility function' is essential for theoretical debate, even though I agree that it is sometimes used in the wrong place.
To me it seems as if utility functions were the most general (deterministic) way to model preferences. So, if we model preferences by “something else”, it will usually be some special case of a utility function. Or do you have something even more general than utility functions that is not based on throwing a coin? Or do you propose that we model preferences with randomness?
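To make "most general" a bit more precise (my addition, sketching a standard representation theorem from decision theory, not something stated in the original exchange):

```latex
% A preference relation \succeq on a countable set of alternatives X
% is representable by a utility function u : X \to \mathbb{R} with
\[
x \succeq y \iff u(x) \ge u(y)
\]
% if and only if \succeq is complete and transitive.
% (For uncountable X one additionally needs a continuity/order-density
% condition; for preferences over lotteries, the von Neumann-Morgenstern
% axioms yield an expected-utility representation.)
```

So any deterministic preference model that yields a consistent ranking of alternatives is a special case of a utility function.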
For our universe, other models have been extremely successful. Therefore, the generality of wave functions clearly is not required. In the case of (human) preferences, it is unclear whether another model suffices.
What you are saying seems to me a bit like: "Turing machines are difficult to use. Nobody would simulate this particular X with a Turing machine in practice. Therefore, Turing machines are generally useless." But of course, on some level of practical application I totally agree with you, so maybe there is no real disagreement about the use of utility functions here; at least I would never say something like "my utility function is…", and I do not attempt to write a C compiler on a Turing machine.
I do not think that the statement "utility functions can model human preferences" has a formal meaning. However, if you say that it is not true, I would really be very interested in how you prefer to model human preferences.
Philosophy surely is not useless, but some philosophers' arguments just do not make sense to me.
Physicists tend to express bafflement that philosophers care so much about the words. Philosophers, for their part, tend to express exasperation that physicists can use words all the time without knowing what they actually mean.
My experience is that philosophers often use words carelessly to avoid conveying a clear statement that could be refuted.
This leads directly to the other common misunderstanding among physicists: that philosophers waste their time on grandiose-sounding “Why?” questions that may have no real answers. Perhaps “misunderstanding” isn’t the right word – some such questions are a waste of time, and philosophers do sometimes get caught up in them. (Just as physicists sometimes spend their time on questions that are kind of boring.)
To me, there seems to be a huge difference between "boring" scientific questions and "grandiose-sounding 'Why?' questions that [...] have no real answers", i.e. what Yudkowsky calls wrong questions, e.g. "Why is there anything instead of nothing?", where it remains very unclear what an answer to that problem would even look like.
The quest for absolute clarity of description and rigorous understanding is a crucially important feature of the philosophical method.
As Jacob Bronowski and Bruce Mazlish state in The Western Intellectual Tradition, "our confidence in any science is roughly proportional to the amount of mathematics it employs—that is, to its ability to formulate its concepts with enough precision to allow them to be handled mathematically." In my experience, some philosophers sometimes confuse precision with hard-to-read sentences, the use of Latin words, etc. If they knew mathematics (or other formalisms) better, they would probably produce less material that is of no use (in other scientific disciplines) due to a lack of precision.
Science often gives us models of the world that are more than good enough [...]. But that’s not really what drives us to do science in the first place. We shouldn’t be happy to do “well enough,” or merely fit the data – we should be striving to understand how the world really works.
What do they expect an answer to the question of how the world really works to look like? More specifically, what would stop one from responding to any answer with: "Yeah, but… how does the world really, actually work?"
Yudkowsky, Eliezer (2007): Levels of Organization in General Intelligence. In: Artificial General Intelligence, edited by Ben Goertzel and Cassio Pennachin, 389–501.
Hanson, Robin; Yudkowsky, Eliezer (2013): The Hanson-Yudkowsky AI-Foom Debate.
...
Oops… ;-)
This post argues that there is one and only one super goal for any agent, and that goal is simply to exist in a competitive world. Our human sense of other purposes is just an illusion created by our evolutionary origins. It is not the goal of an apple tree to make apples. Rather it is the goal of the apple tree’s genes to exist. The apple tree has developed a clever strategy to achieve that, namely it causes people to look after it by producing juicy apples.
Humans are definitely a result of natural selection, but it does not seem difficult at all to find goals of ours that serve neither survival nor reproduction. Evolution seems to produce these other preferences accidentally. One way this can happen may be exemplified by the following: our ability to contemplate our thinking from an almost external perspective (sometimes referred to as self-consciousness) is definitely helpful for learning and improving our thinking and could therefore prevail in evolution. However, it may also be a cause of altruism, because it makes every single one of us realize that we are not very special. (This is by no means an attempt to explain altruism scientifically or anything of the sort...) More generally, it would be a really strange coincidence if all cognitive features of an organism in our physical world that serve the goal of surviving and reproducing served no other goal. In conclusion, even evolution can (probably) produce (by coincidence) organisms with goals that are not subgoals of the goal to survive and reproduce.
Likewise, the paper clip making AI only makes paper clips because if it did not make paper clips, then the people that created it would turn it off and it would cease to exist. (That may not be a conscious choice of the AI any more than making juicy apples was a conscious choice of the apple tree, but the effect is the same.)
Now, imagine the paper clip maximizer to be more than a robot arm; imagine it to be a well-programmed Seed AI (or the like). As pointed out in Viliam_Bur's and cousin_it's comments, its goal will probably not be easily changed (by coincidence or by evolution among several such AIs); for example, it could save its source code on several hard drives that are synchronized by a hard-wired mechanism or something… Now this paper clip maximizer would start turning all matter into paper clips. To achieve its goal, it would certainly remain in existence (and thereby give you the illusion that existence was its supergoal in the first place) and protect its values (which is not extremely difficult). Assuming it is successful (and we can expect this from a Seed AI/superintelligence), the only matter (in reach) left would at some point be the hardware of the paper clip maximizer itself. What would the paper clip maximizer do then? In conclusion, self-preservation and maybe the propagation of values may be important subgoals, but they are certainly not the supergoal.
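As a purely illustrative toy version of the "source code on several synchronized hard drives" idea (my sketch; the names and the majority-vote scheme are hypothetical, not a claim about how a real AI would be built):

```python
import hashlib

# Toy illustration of goal-content integrity: keep redundant copies of the
# goal representation and repair any copy that has been tampered with.

GOAL = b"maximize the number of paper clips"

def digest(data: bytes) -> str:
    """Cryptographic fingerprint used to compare replicas."""
    return hashlib.sha256(data).hexdigest()

def restore_goal(replicas: list) -> bytes:
    """Majority vote over replicas; overwrite any replica that disagrees."""
    counts = {}
    for r in replicas:
        counts[r] = counts.get(r, 0) + 1
    majority = max(counts, key=counts.get)
    for i, r in enumerate(replicas):
        if digest(r) != digest(majority):
            replicas[i] = majority  # repair the corrupted copy
    return majority

# Three replicas, e.g. on separate hard drives; one gets corrupted.
replicas = [GOAL, GOAL, b"pursue some other goal"]
goal = restore_goal(replicas)
assert goal == GOAL and all(r == GOAL for r in replicas)
```

The only point of the toy is that protecting a stored goal is mechanically cheap, which supports the claim above that the maximizer's values will not be easily changed.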
I challenge you to find one.
One particular example of those "evolutionary accidents/coincidences" is homosexuality in males. Here are two studies claiming that homosexuality in males correlates with fecundity in female maternal relatives:
So, there appear to be some genetic factors that prevail because they make women more fecund. Coincidentally, they also make men homosexual, which is an obstacle to both reproduction and survival (not only due to others' homophobia but also due to STDs). I presume that especially our (human) genetic material is full of such coincidences, because the lack of them (i.e., the thesis that all genetic factors that prevail in evolutionary processes only lead to higher reproduction and survival rates and nothing else) seems very unlikely.
Me too; note, however, that I received several PMs from volunteers.
Interesting, thanks! I thought that it was more or less consensus that the smoking lesion refutes EDT. So, where should I look to see EDT refuted? The absent-minded driver, evidential blackmail, counterfactual mugging, or something else?
I am sorry, but I am not sure what you mean by that. If you are a one-boxing agent, then both boxes of Newcomb's original problem contain a reward, assuming that Omega is a perfect predictor.
Well, the problem seems to be that this will not give you the $1M, just like in Newcomb’s original problem.
Maybe I should have added that you don't know which genes you have before you make the decision, i.e., before you two-box or one-box.
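For concreteness (my worked example; the 99% accuracy and the payoff numbers are assumptions for illustration, not from the original comments): EDT evaluates actions by their conditional expected utilities, so in a Newcomb-like setup where the prediction, or the gene, matches the choice 99% of the time,

```latex
\begin{align*}
\mathbb{E}[U \mid \text{one-box}] &= 0.99 \cdot \$1{,}000{,}000 + 0.01 \cdot \$0 = \$990{,}000,\\
\mathbb{E}[U \mid \text{two-box}] &= 0.01 \cdot \$1{,}001{,}000 + 0.99 \cdot \$1{,}000 = \$11{,}000.
\end{align*}
```

Hence EDT one-boxes even in the genetic variant, and a two-boxer should not expect the $1M, which is the sense in which two-boxing "will not give you the $1M" above.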
The term Machine Ethics also seems to be popular. However, it seems to put emphasis on the division of Friendly AI into a normative part (machine ethics itself) and more technical research (how to make AI possible and then how to make it safe).