I’ve frequently thought that an ethics based around doing/praising those things that are “awesome” rather than those things that are “good” might work out well. (Of course, one might argue that that’s basically what Objectivism is...)
dclayh
Myself, I was thinking it’s a good thing Eliezer restricted himself to text interaction during his “AI in the box” games; otherwise, anyone would have let him out after enduring five minutes of his motionless staring.
Eliezer can in fact tile the Universe with himself, simply by slicing himself into finitely many pieces. The only reason the rest of us are here is quantum immortality.
This problem seems conceptually identical to Kavka’s toxin puzzle; we have merely replaced intending to drink the poison/pay $100 with being the sort of person whom Omega would predict would do it.
Since, as has been pointed out, one needn’t be a perfect predictor for the game to work, I think I’ll actually try this on some of my friends.
I would put the cutoff at ~1 week after birth rather than 2 years, simply for a comfortable margin of safety, but yes.
However, as I’ve written about before elsewhere, this kind of thinking does lead to the amusing conclusion that cutting off a baby’s limb is more wrong than killing it (because in the former case there’s a full-human who’s directly harmed, which is not true in the latter case).
Personally I’d prefer an eternity of being tortured by an unFriendly AI to simple death. Is that controversial?
a private institution which pays the unintelligent to undergo voluntary sterilization.
I’d donate to that.
Bystanders may well identify themselves emotionally with one debater or the other, so being “nice” to one’s opponent would reduce the defensiveness of the audience as well.
There was a study I won’t bother to look up now which showed that while wine experts could discriminate between cheap and expensive wines, and got much more enjoyment from the expensive ones (or at least claimed to), people who were new to wine reported no differences between the two groups of wines in either objective quality or subjective enjoyment.
Religious pluralism and state-sponsored religions both harm society. Many competing religions contribute to social fragmentation and create useless discord (as opposed to the useful discord of, say, a free market). A single state-sponsored religion inevitably intrudes into citizens’ privacy and the political process.
Competing religions may not be good for society, but they do seem to be good for religion. I read a fascinating article once (which I unfortunately cannot locate) which argued that the U.S.’s policy of free religion led to a competitive marketplace of different churches, which functioned as markets do to keep religion responsive to what people want, which kept religiosity strong down to the present day. Whereas in Europe, state religions were stagnant and stultified, and so the people gradually drifted away from them.
No, because the baby (by assumption) has no moral weight. The entity with moral weight is the adult which that baby will become. Preventing that adult from existing at all is not immoral (if it were, we’d essentially have to accept the repugnant conclusion), whereas causing harm to that adult, by harming the baby nonfatally, is.
Personally, I would say that neither of those is wrong (per se, anyway), and I don’t think the situations are very analogous. But I certainly agree with your last sentence (both that we apply different standards, and that we shouldn’t).
The main problem, I think, is getting them to believe that I’m a reliable predictor (i.e. that I predict as well as I claim I do).
Actually, I don’t know whether doing this will show anything relevant to the problem under consideration. But I think it will show something. It has in fact already shown that I believe that 59% of them would agree to give me the money, either because they are sufficiently similar to Eliezer, or because they enjoy random acts of silliness (and the amount of money involved will be pretty trivial).
And if the group is made up of 1000 people? Then your hunter-gatherer instincts will underestimate the inertia of a group so large, and demand an unrealistically high price (in strategic shifts) for you to join.
I would say an equally if not more important issue is that your hunter-gatherer instincts underestimate the benefits (to you) of joining the thousand-person organization, presuming that they can’t be much greater than the benefits of joining a forty-person one.
I am now getting this error also. Only my own user page, only when I’m logged in.
I note without much surprise that of the 54 members who joined before I did, all seem to be male (as I am also). Perhaps this would make a good topic for a future post? Or would that bring down the wrath of Mr. Munroe?
Anyone who said “Dr. A should not be learning spelling from Yvain, Yvain should be learning science from Dr. A” would be missing the point. If Dr. A wants to learn spelling, he might as well learn it from me. And best of all if we both learn from each other!
I believe that formally, the best of all is if you and Dr. A employ comparative advantage, with him doing your science and you doing his spelling.
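For concreteness, here is a minimal sketch of the comparative-advantage arithmetic. The productivity numbers are entirely hypothetical (nothing in the example above specifies them): Dr. A is assumed better at both tasks in absolute terms, but Yvain has the lower opportunity cost in spelling, so shifting Yvain fully into spelling and Dr. A mostly into science yields more of both goods than each splitting their time.

```python
# Hypothetical productivities (units per hour); illustrative numbers only.
DR_A = {"science": 10.0, "spelling": 8.0}   # better at both (absolute advantage)
YVAIN = {"science": 2.0, "spelling": 6.0}   # lower opportunity cost in spelling

def output(dr_a_spelling_frac, yvain_spelling_frac):
    """Total joint output when each person splits one hour between the tasks."""
    science = (DR_A["science"] * (1 - dr_a_spelling_frac)
               + YVAIN["science"] * (1 - yvain_spelling_frac))
    spelling = (DR_A["spelling"] * dr_a_spelling_frac
                + YVAIN["spelling"] * yvain_spelling_frac)
    return science, spelling

# Self-sufficiency: both split their time evenly between the two tasks.
print(output(0.5, 0.5))    # (6.0, 7.0)

# Comparative advantage: Yvain specializes fully in spelling,
# Dr. A shifts mostly (not entirely) into science.
print(output(0.25, 1.0))   # (7.5, 8.0) -- more of BOTH goods
```

Note that with these numbers Dr. A should specialize only partially; full specialization by both parties would actually lower total spelling output, which is why the formal statement is about opportunity costs rather than just "each does what they're best at."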
Your use of “parasitic” is also Dark: it serves no purpose other than to trigger the negative emotional associations of the word.
This effect is interesting from the rationalism perspective because it has three separate effects: (1) making us believe false things about our past mental states (or even false things about the world); (2) creating a disconnect between why we claim/believe we are saying or doing something, and why we actually are; and (3) the change in our behaviors/desires themselves. While (1) and (2) clearly represent decreases in rationality/sanity, what can we say about (3)? Don’t we all believe Hume around here?