I find that there is often a conflict between the motivation to speak only the truth and the motivation to communicate the closest approximations to the most relevant truths that constraints of time, intelligence, and cultural conversational conventions allow.
Since the indexical uncertainty in the example just comes down to not knowing whether you are going first or second, you can run the example with someone else rather than a past / future self with amnesia as long as you don’t know whether you or the other person goes first.
Some people have times when they are suicidally depressed. I think it’s quite defensible to tell those people that their life is worth more than they personally value it at.
More generally, I don’t see any strong reason to expect people to be less mistaken about the worth of their own lives than about any other sort of value judgment.
Also, I don’t see any case yet for interpreting CronoDAS as doing anything more than simply asking a community that may have some insight into a given field (rationality) whether his reasoning or conclusions check out.
“The correlation between liking of the date and evaluation of the date’s physical attractiveness is .78 for male subjects and .69 for female subjects. . . Sheer physical attractiveness appears to be the overriding determinant of liking.”
Also interesting: “The correlation between how much the man says he likes his partner and how much she likes him is virtually zero: r = .03.”
“Male’s MSAT scores correlate .04 with both the woman’s liking for him and her desire to date him.” (For females the equivalent figure was around -.06.)
“Importance of Physical Attractiveness in Dating Behavior”, Elaine Walster et al., pp. 514-515. http://www2.hawaii.edu/~elaineh/13.pdf
I trust this data more than folk psychology or self-reports, but I would be interested if anyone knows of any subsequent studies confirming or disconfirming these figures, or assessing the study’s generalizability beyond 18-year-olds on blind dates.
The correlations with independent ratings of attractiveness were still .44 and .39. Compared to .04 and -.06 for intelligence, that still supports the conclusion that “sheer physical attractiveness appears to be the overriding determinant of liking.”
They also used various personality measures assessing such things as social skills, maturity, masculinity/femininity, introversion/extroversion and self-acceptance. They found predominantly negative correlations (from -.18 to -.08) and only two comparatively small positive correlations, .14 and .03.
Thanks for clarifying what factors you think are relevant. I agree that those have not been tested.
Voted Down. Sorry, Roko.
I don’t find Greene’s arguments to be valuable or convincing. I won’t defend those claims here but merely point out that this post makes it extremely inconvenient to do so properly.
I would prefer concise reconstructions of important arguments over a link to a 377 page document and some lengthy quotes, many of which simply presuppose that certain important conclusions have already been established elsewhere in the dissertation.
As an exercise for the reader demonstrating my complaint, consider what it would take to work out whether Joshua Greene has any argument against this analysis of morality.
I agree that this is an important discussion to have but I don’t think this post helps us to engage in a productive discussion. Rather, it merely seems to handicap those who disagree with Greene on multiple points when they wish to participate in the discussion and does so without adequate justification.
Would you bet on resource depletion?
You seem to think an FAI researcher is someone who does not engage in any AGI research. That would certainly be a rather foolish researcher.
Perhaps you are being fooled by the fact that a decent FAI researcher would tend not to publicly announce any advancements in AGI research.
I like some of the imagery, but I wouldn’t say that whatever the outcome is, it is by definition good.
To continue with the analogy, sometimes our inner book of morals really says one thing while a momentary upset prevents what is written in that book from successfully governing.
“Science is what we understand well enough to explain to a computer. Art is everything else we do. … Science advances whenever an Art becomes a Science. And the state of the Art advances too because people always leap into new territory once they have understood more about the old.”
-- Donald Knuth
“Muad’Dib learned rapidly because his first training was in how to learn. And the first lesson of all was the basic trust that he could learn. It’s shocking to find how many people do not believe they can learn, and how many more believe learning to be difficult. Muad’Dib knew that every experience carries its lesson.”
-- Frank Herbert, Dune
“One can measure the importance of a scientific work by the number of earlier publications rendered superfluous by it.”
-- David Hilbert
I’m not sure intelligence enhancement alone is sufficient. It’d be better to first do rationality enhancement and then intelligence enhancement. Of course that’s also much harder to implement but who said it would be easy?
It sounds like you think intelligence enhancement would result in rationality enhancement. I’m inclined to agree that there is a modest correlation but doubt that it’s enough to warrant your conclusion.
I suspect you aren’t sufficiently taking into account the magnitude of people’s irrationality and the non-monotonicity of rationality’s rewards. I agree that intelligence enhancement would have greater overall effects than rationality enhancement, but rationality’s effects will be more careful and targeted—and therefore more likely to work as existential risk mitigation.
Increases in rationality can, with some regularity, lead to decreases in knowledge or utility (hopefully only temporarily and in limited domains).
I think many of the most pressing existential risks (e.g. nanotech, biotech and AI accidents) come from the likely actions of moderately intelligent, well-intentioned, and rational humans (compared to the very low baseline). If that is right then increasing the number of such people will increase rather than decrease risk.
And I will suggest in turn that you are guilty of the catchy fallacy name fallacy. The giant cheesecake fallacy was originally introduced as applying to those who anthropomorphize minds in general, often slipping from capability to motivation because a given motivation is common in humans.
I’m talking about a certain class of humans and not suggesting that they are actually motivated to bring about bad effects. Rather, all it takes is for there to be problems where it is significantly easier to mess things up than to get them right.
You seem to be assuming that the relation between IQ and risk must be monotonic.
I think existential risk mitigation is better pursued by helping the most intelligent and rational efforts than by trying to raise the average intelligence or rationality.
Alison Gopnik (The Scientist in the Crib, The Philosophical Baby, Causal Learning: Psychology, Philosophy and Computation). She’s done a diavlog with Joshua Knobe.