I’m reminded of some “advice” I read about making money in the stock market:
Buy a stock, wait until it goes up, and then sell it. If it doesn’t go up, then don’t have bought it.
The Patrician took a sip of his beer. “I have told this to few people, gentlemen, and I suspect I never will again, but one day when I was a young boy on holiday in Uberwald I was walking along the bank of a stream when I saw a mother otter with her cubs. A very endearing sight, I’m sure you will agree, and even as I watched, the mother otter dived into the water and came up with a plump salmon, which she subdued and dragged onto a half-submerged log. As she ate it, while of course it was still alive, the body split and I remember to this day the sweet pinkness of its roes as they spilled out, much to the delight of the baby otters who scrambled over themselves to feed on the delicacy. One of nature’s wonders, gentlemen: mother and children dining upon mother and children. And that’s when I first learned about evil. It is built into the very nature of the universe. Every world spins in pain. If there is any kind of supreme being, I told myself, it is up to all of us to become his moral superior.”
-- Terry Pratchett, Unseen Academicals
Here’s something else I can’t normally say in public:
Infants are not people because they do not have significant mental capacities. They should be given the same moral status as, say, dogs. It’s acceptable to euthanize one’s pet dog for many reasons, so it should be okay to kill a newborn for similar reasons.
In other words, the right to an abortion shouldn’t end after the baby is born. Infants probably become more like people than like dogs some time around two years of age, so it should be acceptable to euthanize any infant less than two years old under any circumstances in which it would be acceptable to euthanize a dog.
Consider the case of a hungry rat that sees food on the other side of an electrified floor. The rat wants to minimize its discomfort. It wants to not get shocked, and also wants not to be hungry.
A moderately stupid rat will compare the pain of its current hunger to the pain of crossing the floor. When its pain from hunger becomes as strong as the pain of crossing the floor, it’ll decide to cross the floor.
A smarter rat will realize that it’ll have to cross the floor eventually, and so will minimize its total pain by crossing immediately, so its hunger doesn’t have a chance to build to a painful level.
A really stupid rat will notice that, when it steps onto the electrified floor, its current pain equals the sum of its pain from hunger and the pain from the shock. As this total is always greater than the pain from hunger alone, it’ll never step on the electrified floor and it will starve to death.
When it comes to homework, my decision-making algorithm seems to act like the first rat...
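The three rats' decision rules can be sketched as a toy simulation. All the specific numbers here (shock pain, hunger growth rate, starvation cutoff) are made up purely for illustration; only the comparison logic comes from the story above.

```python
# Toy model of the three rats. Hunger pain grows each time step;
# crossing the floor costs a one-time shock but ends the hunger.

SHOCK_PAIN = 10    # one-time pain of crossing the electrified floor (arbitrary)
HUNGER_RATE = 1    # hunger pain grows by this much per time step (arbitrary)
MAX_STEPS = 100    # the rat starves if it hasn't crossed by then

def total_pain(should_cross):
    """Accumulated pain until the rat crosses, or None if it starves."""
    hunger = 0
    pain = 0
    for _ in range(MAX_STEPS):
        if should_cross(hunger):
            return pain + SHOCK_PAIN  # pay the shock once, stop being hungry
        hunger += HUNGER_RATE
        pain += hunger                # suffer this step's hunger
    return None                       # never crossed: starved

# Smart rat: knows it must cross eventually, so crosses immediately.
smart = lambda hunger: True

# Moderately stupid rat: waits until hunger hurts as much as the shock would.
moderate = lambda hunger: hunger >= SHOCK_PAIN

# Really stupid rat: compares (hunger + shock) against hunger alone.
# Since hunger + SHOCK_PAIN > hunger always holds, it never crosses.
stupid = lambda hunger: hunger + SHOCK_PAIN <= hunger

print(total_pain(smart))     # just the shock: 10
print(total_pain(moderate))  # the shock plus all the hunger endured while waiting
print(total_pain(stupid))    # None -- it starves
```

The smart rat's total is the shock alone; the moderate rat pays the same shock plus every unit of hunger it suffered while waiting, which is why crossing immediately minimizes total pain.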
Annoyed Girl: No.
Me: Will you go out on a date?
Annoyed Girl: Hell no!
When I procrastinate over a task, it’s usually because I’m in a situation like this:
1) I want something to have been done and 2) I don’t want to experience doing it.
To use the classic example, I want to have done my homework but I don’t want to be doing my homework.
I’ve never seen anything from Eliezer that proves that he’s done anything at all of value except be a rationality teacher. I know of two general criteria by which to judge someone’s output in a field that I am not a part of:
1) Academic prestige (degrees, publications, etc.) and 2) Economic output (making things that people will pay money for).
Eliezer’s institution doesn’t sell anything, so he’s a loss on part 2. He doesn’t have a Ph.D or any academic papers I can find, so he’s a loss on part 1, as well. Can SIAI demonstrate that it’s done anything except beg for money, put up a nice-looking website, organize some symposiums, and write some very good essays?
To be honest, I’d say that his output matches the job description of “philosopher” more than “engineer” or “scientist”. Not that there’s anything wrong with that. Many works that fall broadly under the heading of philosophy have been tremendously influential. For example, Adam Smith was a philosopher.
Eliezer seems to have talents both for seeing through confusion (and its cousin, bullshit) and for being able to explain complicated things in ways that people can understand. In other words, he’d be an amazing university professor. I just haven’t seen him prove that he can do anything else.
Is it just me, or is Voldemort also using Hermione as a test subject for things he’d like to do to himself but never tried before? (In other words, he learned his lesson after Harry told him he should have tested Horcrux 2.0 on someone else first.)
“War, Nobby. Huh! What is it good for?” he said.
“Dunno, sarge. Freeing slaves, maybe?”
“Absol—Well, okay.”
“Defending yourself from a totalitarian aggressor?”
“All right, I’ll grant you that, but—”
“Saving civilization against a horde of—”
“It doesn’t do any good in the long run is what I’m saying, Nobby, if you’d listen for five seconds together,” said Fred Colon sharply.
“Yeah, but in the long run what does, sarge?”
-- Terry Pratchett, Thud!
From a forum signature:
The fool says in his heart, “There is no God.”—Psalm 14:1
It is a fool’s prerogative to utter truths that no one else will speak. -- Neil Gaiman, Sandman 3:3:6
I’d suggest Slate Star Codex and The GiveWell Blog.
Feynman once talked about this specific issue during a larger speech:
We have learned a lot from experience about how to handle some of the ways we fool ourselves. One example: Millikan measured the charge on an electron by an experiment with falling oil drops, and got an answer which we now know not to be quite right. It’s a little bit off, because he had the incorrect value for the viscosity of air. It’s interesting to look at the history of measurements of the charge of the electron, after Millikan. If you plot them as a function of time, you find that one is a little bigger than Millikan’s, and the next one’s a little bit bigger than that, and the next one’s a little bit bigger than that, until finally they settle down to a number which is higher.
Why didn’t they discover that the new number was higher right away? It’s a thing that scientists are ashamed of—this history—because it’s apparent that people did things like this: When they got a number that was too high above Millikan’s, they thought something must be wrong—and they would look for and find a reason why something might be wrong. When they got a number closer to Millikan’s value they didn’t look so hard. And so they eliminated the numbers that were too far off, and did other things like that. We’ve learned those tricks nowadays, and now we don’t have that kind of a disease.
Would MIRI be interested in hiring a full-time staff writer/editor? I feel like I could have produced a good chunk of this if I had thought I should try to, just from having hung around LessWrong since it was just Eliezer Yudkowsky and Robin Hanson blogging on Overcoming Bias, but I thought the basic “no, really, AI is going to kill us” arguments were already written up in other places, like Arbital and the book Superintelligence.
I donated $20, roughly the price of a cheap hardcover novel.
when billions of people are extinguished and replaced by slightly different versions of themselves.
This happens in the ordinary passage of time anyway. (Stephen King’s story “The Langoliers” plays this for horror—the reason the past no longer exists is because monsters are eating it.)
“What was the Sherlock Holmes principle? ‘Once you have discounted the impossible, then whatever remains, however improbable, must be the truth.’”
“I reject that entirely,” said Dirk sharply. “The impossible often has a kind of integrity to it which the merely improbable lacks. How often have you been presented with an apparently rational explanation of something that works in all respects other than one, which is just that it is hopelessly improbable? Your instinct is to say, ‘Yes, but he or she simply wouldn’t do that.’”
“Well, it happened to me today, in fact,” replied Kate.
“Ah, yes,” said Dirk, slapping the table and making the glasses jump. “Your girl in the wheelchair—a perfect example. The idea that she is somehow receiving yesterday’s stock market prices apparently out of thin air is merely impossible, and therefore must be the case, because the idea that she is maintaining an immensely complex and laborious hoax of no benefit to herself is hopelessly improbable. The first idea merely supposes that there is something we don’t know about, and God knows there are enough of those. The second, however, runs contrary to something fundamental and human which we do know about. We should therefore be very suspicious of it and all its specious rationality.”
-- Douglas Adams, The Long Dark Tea-Time of the Soul (1988), p. 169
Umeshism: “If nobody ever says no to you, you’re not asking for enough.”
Do (incremental) advances in military technology actually change the number of people who die in wars? They might change which people die, or how rapidly, but it seems to me that groups of people who are determined to fight each other are going to do it regardless of what the “best” weapons currently available happen to be. The Mongols wreaked havoc on a scale surpassing World War I with only 13th century technology, and the Rwandan genocide was mostly carried out with machetes. World War I brought about a horror of poison gas, but bullets and explosions don’t make people any less dead than poison gas does.
(Although the World War I-era gases did have one thing that set them apart from other weapons: nonlethal levels of exposure often left survivors with permanent debilitating injuries. Dead is dead, but different types of weapons can be more or less cruel to those who survive the fighting.)
(In which I solve the wrong problem)
“Obviously”, you have the dark elves attack the farm while the adventurers are trying to help get the field plowed. ;)
One takeaway I got from this when combined with some other stuff I’ve read:
Don’t do psychedelics. Seriously, they can fuck up your head pretty bad and people who take them and organizations that encourage taking them often end up drifting further and further away from normality and reasonableness until they end up in Cloudcuckooland.