As a trained musician with a vivid aural imagination, I find this idea to be hilarious. Totally. Risky? Really? What could possibly be risky about practicing a skill that others possess in much greater quantities, due to the same sort of practice?
pdf23ds
Well, it certainly left me good and confused. You could at least append a ‘,” he continued.’ or something.
You are the walking dead, and this is a dead world spinning, and many other worlds like this one are already destroyed.”
“But this world is going to live anyway. I have decided it.”
“That is my own world’s heroism.”
I think your quoting is messed up here. All three of these lines are the hero’s, correct? You should remove the end quote from the first two lines.
I think it’s more like “never praise a child for being intelligent”. You can tell them they’re smart if they are, just don’t do it often or put any importance on it.
There’s another aspect of the shortcomings of IQ tests that people might not be aware of. Cognition is quite flexible, and abstract problem-solving ability can be met by many combinations of underlying, modular capacities. A person lacking in certain respects can make up for the lack, at the price, perhaps, of thinking a little more slowly.
Take me for an example. On the WISC-III IQ test, my combined score is 145. There are two composite scores that the combined score is made up of: the verbal score (I got 155, the maximum possible on that test) and the performance score (I got 125). There are also a number of different individual capacity scores. On most, I scored above the 95th percentile. On two or three, I scored right in the middle, and in one (visual short-term memory) I scored in the first percentile.
Let me repeat that. I scored in the first percentile for the capacity to keep visual information in my short-term memory. (I scored in the 97th for aural short-term memory, and 99.9th for linguistic.) How does that change how I solve problems, how I think about the world? Well, I perform many tasks about twice as slowly (but just as accurately) as others with my composite IQ. I have to use other circuits than most people do to solve the same problems, circuits that aren’t as efficient. Circuits that may even work slightly differently, giving me a different perspective on problems, which may be superior or inferior, I don’t know (likely depending on the individual problem). I strongly suspect that this is a large part of the cause of my intense dislike of school.
(BTW, people with a large difference between performance and verbal IQ are classified as having non-verbal learning disorder. That’s right, even really smart people can have learning disorders.)
IQ is not a single number. Even IQ testing recognizes a large part of the complexity of human intelligence. It’s not the psychologists who make the mistake of reducing it to a single number.
Why did I write this long comment on a dead thread? Dunno.
I live with this awareness.
Hmm. I’ve never had this problem. On the other hand, I have had the problem of my sense of self-worth being based in being naturally talented at things, and so when I don’t pick up some new pursuit easily, I tend to get discouraged. Thus, I’m bad at math and don’t read enough science papers. It’s a hard cost/benefit analysis to choose whether to improve your skill at something you’re naturally talented at (and already better than most people at), or some other equally valued skill that you’re not very talented at (and well below average in skill). And the pressure of competition, and the psychological problems caused by perfectionism, have to be dealt with.
TGGP, I think we have to define “deserve” relative to social consensus—a person deserves something if we aren’t outraged when they get it for one reason or another. (Most people define this based on the consensus of a subset of society—people who share certain values, for instance.) Differences in the concept of “deserve” are one of the fundamental differences (if not the primary difference) between conservatism and liberalism.
My problem with CEV is that who you would be if you were smarter and better-informed is extremely path-dependent. Intelligence isn’t a single number, so one can increase different parts of it in different orders. The order people learn things in, how fully they integrate that knowledge, and what incidental declarative/affective associations they form with the knowledge can all send the extrapolated person off in different directions. Assuming a CEV-executor would be taking all that into account, and summing over all possible orders (and assuming that this could somehow be made computationally tractable), the extrapolation would get almost nowhere before fanning out uselessly.
OTOH, I suppose that there would be a few well-defined areas of agreement. At the very least, the AI could see current areas of agreement between people. And if implemented correctly, it at least wouldn’t do any harm.
Intuitions about personal identity are probably incoherent under an increased understanding of the mind, just like free will is.
Kevembuangga,
If you take a Bayesian view of the scientific process as opposed to a Popperian one, then theories are never disproved either, just shown to be very unlikely.
But though science can never prove anything conclusively, it doesn’t follow that science is not a pursuit of truth. There are no processes that produce certain truths. The ones that claim to are mainly fundamentalist religions. But something doesn’t have to be certain to be a truth, if you’re a Bayesian, and not a fundamentalist.
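The “never disproved, just shown to be very unlikely” point can be sketched with a toy Bayesian update (all the probabilities below are made up for illustration):

```python
# Sketch: under Bayes' rule, repeated disconfirming evidence drives a
# theory's probability toward zero without ever reaching exactly zero.
def update(prior, p_evidence_given_theory, p_evidence_given_not_theory):
    # Bayes' rule: P(T|E) = P(E|T) * P(T) / P(E)
    numerator = p_evidence_given_theory * prior
    p_evidence = numerator + p_evidence_given_not_theory * (1 - prior)
    return numerator / p_evidence

p = 0.5  # start agnostic about the theory
for _ in range(5):  # observe five pieces of strongly disconfirming evidence
    p = update(p, 0.1, 0.9)
print(p)  # tiny, but still strictly greater than zero
```

No finite run of evidence takes the probability to 0 or 1, which is the Bayesian version of “no proof, no disproof, only degrees of likelihood.”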
Insanity: doing the same thing over and over again and expecting different results.
That’s a stupid quote. The fact that it’s often attributed to Ben Franklin is even more ridiculous. Insanity (psychological problems) rarely includes that as a symptom, and even when it does it’s only a small part of the problem. (OCD doesn’t count, because the compulsion doesn’t include a belief that this time will be any different.)
Replace “insanity” with “stupidity” and the quote isn’t quite as stupid.
Hmm. I don’t think this really works, because Eric brought up neither sex nor marriage. I think Ilyssa (the female protagonist) does pose an interesting question, but I have a hard time believing that she really feels repulsed or thinks she ought to. I’m not sure how to answer the question, but I’m pretty sure that it would be answered in the negative, and I have a strong feeling Ilyssa would agree.
The point of the passage seems simply to be that she has a tendency to say whatever pops into her head (and that those thoughts tend to be interesting and very intelligent), without any thought to the potentially negative social consequences. It’s meant to elicit a desire in the (presumably male) reader to be in Eric’s place and able to see past Ilyssa’s awkwardness and to her great mind, perhaps to handle the situation with more rationalist grace, perhaps even winning the interest of the girl. I think the rest of the story supports this interpretation.
In short: DECIDE to work. Just do it.
For those lacking the requisite self-discipline, this is like asking someone to lift a box that’s too heavy for them. It’s exactly the same as telling an obese person to just stop eating so much. If someone has the self-discipline, it can be helpful advice in certain circumstances. If they don’t, all the advice leads to is cycles of guilt and frustration.
(I don’t mean to say Eliezer has no self-discipline. What he’s trying to do requires huge reserves of it.)
Some people with scientific accomplishments have been positively crazy, in fact. E.g. Kary Mullis, who developed the polymerase chain reaction, winning a Nobel Prize. In 1992, Mullis founded a business with the intent to sell pieces of jewelry containing the amplified DNA of deceased famous people like Elvis Presley and Marilyn Monroe. He’s also an AIDS denier and a global warming skeptic.
I think lots of people are misunderstanding the “1-place function” bit. It even took me a bit to understand, and I’m familiar with the functional programming roots of the analogy. The idea is that the “1-place morality” is a closure over (i.e. reference to) the 2-place function with arguments “person, situation” that implicitly includes the “person” argument. The 1-place function that you use references yourself. So the “1-place function” is one’s subjective morality, and not some objective version. I think that could have been a lot clearer in the post. Not everyone has studied Lisp, Scheme, or Haskell.
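For those who haven’t studied a functional language, the closure idea can be sketched in Python (the function names here are mine, purely for illustration):

```python
# A hypothetical 2-place "morality" function: (person, situation) -> judgment.
def morality(person, situation):
    judgments = {
        ("Alice", "lying"): "wrong",
        ("Bob", "lying"): "sometimes okay",
    }
    return judgments.get((person, situation), "unsure")

# Closing over the `person` argument yields a 1-place function:
# that person's subjective morality, with "who is asking" baked in.
def my_morality(person):
    return lambda situation: morality(person, situation)

alice = my_morality("Alice")
print(alice("lying"))  # -> wrong
```

The point of the analogy is that `alice` takes only a situation, yet it is still implicitly relative to one particular person; it is not a person-independent, objective function.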
Overall I’m a bit disappointed. I thought I was going to learn something. Although you did resolve some confusion I had about the metacircular parts of the reasoning, my conclusions are all the same. Perhaps if I were programming an FAI the explicitness of the argument would be impressive.
As other commenters have brought up, your argument doesn’t address how your moral function interacts with others’ functions, or how we can go about creating a social, shared morality. Granted, it’s a topic for another post (or several) but you could at least acknowledge the issue.
OTOH, saying you “believe” in some mostly vacuous statement that you were raised to believe, while not really believing anymore in most of the more obviously false beliefs in the same package, doesn’t reflect very poorly on your rationality. (I’m not sure to what extent this applies to MrHen.)
ETA: I view belief in god in a growing rationalist as sort of a vestigial thing. It’ll eventually just wither and fall off.
A similar site, which you posted about last year, is Wrong Tomorrow, which tracks pundit predictions. There’s also this thing called PunditWatch, though it only tracks a small number of pundits.
I think the biggest reason that most dating advice sucks is that good advice is only possible if you actually view the real-time performance of the person and thus get an idea of what kind of mistakes they’re making. Once you see what mistakes they’re making, giving good advice becomes orders of magnitude easier. Then it would be called “teaching” instead of “advice-giving”.
Those must be pretty big paperclips.