Apartheid based on age that replaces the previous versions based on race and sex.
The morally indefensible and insanely self-destructive attempt to mitigate drug addiction by banning drugs.
Human thought is by default compartmentalized for the same good reason warships are compartmentalized: it limits the spread of damage.
A decade or thereabouts ago, I read a book called Darwin’s Black Box, whose thesis was that while gradual evolution could work for macroscopic features of organisms, it could not explain biochemistry, because the intricate molecular machinery of life did not have viable intermediate stages. The author is a professional biochemist, and it shows; he’s really done his homework, and he describes many specific cases in great detail and carefully sets out his reasons for claiming gradual evolution could not have worked.
Oh, and I was able to demolish every one of his arguments in five minutes of armchair thought.
How did that happen? How does a professional put so much into such carefully constructed arguments that end up being so flimsy a layman can trivially demolish them? Well, I didn’t know anything else about the guy until I ran a Google search just now, but it confirms what I found, and what most Less Wrong readers will find, to be the obvious explanation.
If he had only done what most scientists in his position do, and said “I have faith in God,” and kept that compartmentalized from his work, he would have avoided a gross professional error.
Of course that particular error could have been avoided by being an atheist, but that is not a general solution, because we are not infallible. We are going to end up taking on some mistaken ideas; that’s part of life. You cite the Singularity as your primary example, and it is a good one, for it is a mistaken idea, and one that is immensely harmful if not compartmentalized. But really, it seems unlikely there is a single human being of significant intellect who does not hold at least one bad idea that would cause damage if taken seriously.
We should think long and hard before we throw away safety mechanisms, and compartmentalization is one of the most important ones.
I think a more complete translation would be something like “never assume malice when stupidity will suffice; never assume stupidity when ignorance will suffice; never assume ignorance when forgivable error will suffice; never assume error when information you hadn’t adequately accounted for will suffice.”
Aside from various other issues on which people have remarked, one thing that hasn’t been pointed out so far is the problem with college projects: they’re too damn small.
For every project, there is a good range of team sizes: large enough that many hands make light work, but not so large that too many cooks spoil the broth (where you start having to split the work up so finely that coordination overhead swamps the actual project work; see Fred Brooks, The Mythical Man-Month, for a classic discussion of this).
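Here’s a back-of-the-envelope sketch of why there’s a sweet spot; the work and overhead numbers below are invented purely for illustration, but the n(n-1)/2 communication paths are the mechanism Brooks describes.

```python
# Toy model of delivery time vs. team size; all numbers are made up.

def delivery_time(team_size: int,
                  total_work_hours: float = 400.0,
                  hours_per_comm_path: float = 2.0) -> float:
    """Parallelizable work split across the team, plus coordination
    overhead growing with the n*(n-1)/2 communication paths."""
    comm_paths = team_size * (team_size - 1) / 2
    return total_work_hours / team_size + hours_per_comm_path * comm_paths

for n in range(1, 11):
    print(f"{n:2d} people: {delivery_time(n):6.1f} hours")
# With these made-up numbers the minimum falls at about six people;
# past that, the "too many cooks" overhead dominates.
```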
I remember one college project we were supposed to do in teams of four. The problem was that, like other college projects, it was comfortably sized for one person and a moderately tight squeeze for two; four people was an insane glut. Solution: the two best programmers in our team did the project, and the other two paid for our beer for an evening.
Now you say “all of a sudden, foom!, my part of the project is done because one of the girls was bored on the weekend and had nothing better to do. (Huh? When does this ever happen?)”
Well, the answer is that it doesn’t happen often in real life, because it’s a symptom of a project much too small for the team size, or equivalently a massive glut of manpower for the project size. In real life, you have a to-do list stretching to somewhere around the middle of the next century, one that grows by two items for every item you knock off, and a desperate shortage of skilled people to do the work.
So while teamwork is important in real life, and many issues thereof do arise, that particular problem is mostly endemic to college projects.
But I’ve also seen that people don’t have tails. My point is, if we assume that observation is a hallucination, we should be even more ready to assume the other is a hallucination.
Not only is intellectual property law in its current form destructive, but the entire concept of intellectual property is fundamentally wrong. Creating an X does not give the creator the right to point a gun at everyone else in the universe who tries to arrange matter under their control into something similar to X. In programming terminology, property law should use reference semantics, not value semantics. Of course it is true that society needs to reward people who do intellectual work, just as much as people who do physical work, but there are better justified and less harmful ways to accomplish this than intellectual property law.
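To make the programming analogy concrete, here’s a minimal Python sketch (my own illustration, not part of the original argument): identity-based ownership tracks a particular object, while value-based ownership would claim every object that merely matches the same pattern.

```python
# Illustration of the reference-vs-value-semantics analogy (hypothetical example).

my_song = ["C", "E", "G", "C"]    # a particular arrangement I own
your_song = ["C", "E", "G", "C"]  # someone else's independent, identical arrangement

print(my_song == your_song)  # True: equal by value (same pattern)
print(my_song is your_song)  # False: distinct objects (different identity)

# Ordinary property law is identity-based: only taking *this* object deprives me
# of anything. A value-based claim would assert ownership over every object that
# merely compares equal -- which is the analogy drawn for intellectual property.
```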
I was about to explain why nobody has an answer to the question you asked, when it turned out you already figured it out :) As for what you should actually do, here’s my suggestion:
1. Explain your actual situation and ask for advice.
2. For each piece of advice given, notice that you have immediately come up with at least one reason why you can’t follow it.
3. Your natural reaction will be to post those reasons, thereby getting into an argument with the advice givers. You will win this argument, thereby establishing that there is indeed nothing you can do.
4. This is the important bit: don’t do step 3! Instead, work on defeating or bypassing those reasons. If you can’t do this by yourself, go ahead and post the reasons, but always in a frame of “I know this reason can be defeated or bypassed, help me figure out how,” which aligns you with, rather than against, the advice givers.
5. You are allowed to reject some of the given advice, as long as you don’t reject all of it.
A problem with Pascal’s Mugging arguments is that once you commit yourself to taking seriously very unlikely events (because they are multiplied by huge potential utilities), if you want to be consistent, you must take into account all potentially relevant unlikely events, not just the ones that point in your desired direction.
To be sure, you can come up with a story in which SIAI with probability epsilon makes a key positive difference, for bignum expected lives saved. But by the same token you can come up with stories in which SIAI with probability epsilon makes a key negative difference (e.g. by convincing people to abandon fruitful lines of research for fruitless ones), for bignum expected lives lost. Similarly, you can come up with stories in which even a small amount of resources spent elsewhere, with probability epsilon makes a key positive difference (e.g. a child saved from death by potentially curable disease, may grow up to make a critical scientific breakthrough or play a role in preserving world peace), for bignum expected lives saved.
Intuition would have us reject Pascal’s Mugging, but when you think it through in full detail, the logical conclusion is that we should… reject Pascal’s Mugging. It does actually reduce to normality.
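As a toy illustration of the point (every number below is invented), once all the epsilon-probability, bignum-utility stories are admitted, they pull in both directions, and the sign of the total hinges on probabilities nobody can estimate to within orders of magnitude:

```python
# Toy illustration only: every number here is invented.
BIGNUM = 1e15   # hypothetical lives at stake in each far-fetched story

# (description, sign of the effect, guessed probability)
stories = [
    ("project makes the key positive difference",        +1, 1.0e-12),
    ("project makes the key negative difference",        -1, 1.1e-12),
    ("same resources elsewhere make the key difference", -1, 0.9e-12),
]

net = sum(sign * prob * BIGNUM for _, sign, prob in stories)
print(net)
# The sign of the total flips with tiny changes to probabilities nobody can
# actually estimate, so the huge-utility terms give no stable guidance and
# ordinary reasoning carries the decision -- it reduces to normality.
```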
Cast in consequentialist terms, the reason we shouldn’t push the fat man in the second trolley problem is that we are fallible, and when we believe committing an unethical act will serve the greater good, we are probably wrong.
Thought experiments aside, supposing that scenario came up in real life, and I tried actually pushing the fat man, what would happen? Answer: either I’d end up in a tussle with an angry fat man demanding to know why I just tried to kill him, while whatever chance I might have had of shouting a warning to the people in the path of the trolley was lost, or I’d succeed a second too late and then I’d have committed murder for nothing. And when the media got hold of the story and spread it far and wide—which they probably would, it’s exactly the kind of ghoulish crap they love—it might help spread the idea that in a disaster, you can’t afford to devote all your attention to helping your neighbors, because you need to spare some of it for watching out for somebody trying to kill you for the greater good. That could easily cost more than five lives.
If some future generation ever builds a machine whose domain and capabilities are such that it is called on to make ethical decisions, these considerations will apply far more strongly. The machine will initially be far more fallible than humans in dealing with unexpected real-world situations, through simple lack of experience, and the media will apply a double standard: errors of commission by an intelligent machine will be punished orders of magnitude more strongly than either machine errors of omission or human errors of either variety. I think it’s not an exaggeration to say that the media reaction to a single instance of a machine pushing the fat man could be enough to tip the balance between continued progress and global snuff.
So yes, I’m with the authors on this one.
It’s true that the basilisk in question is a wild fantasy even by Singularitarian standards, and that the fact that people took it seriously enough to get upset about it could well be considered cause for alarm.
But that’s not why people are telling waitingforgodel they’d rather he left. People are telling him that because he took action he sincerely (perhaps wrongly, but sincerely) believed would reduce humanity’s chances of survival. That’s a lot crazier than believing in basilisks!
And the pity is, it’s not true he couldn’t effect change. The right thing to do in a scenario like this is to propose reasonable compromises (like the idea of rot13'ing posts on topics people find upsetting) and, if those fail, then, with the moral high ground under your feet, find or create an alternative site for discussion of the banned topics. Not only would that be morally better than this nutty blackmail scheme, it would also be more effective.
This is a great example of the general rule that if you think you need to do something crazy or evil for the greater good, you are probably wrong—keep looking for a better solution instead.
A good reference, but it’s worth remembering that if I tried the radio sabotage trick in real life, either I’d accidentally break the transmit capability as well as receive, or I’d be there until the deadline had come and gone happily blabbering about how I’m on the hill that looks like a pointy hat, while you were 20 miles away on a different hill that also looked like a pointy hat, cursing me, my radio and my inadequate directions.
In other words, like most things that are counterintuitive, these findings are counterintuitive precisely because their applicability in real life is the exception rather than the rule; by all means let’s recognize the exceptions, but without forgetting what they are.
I’d rather skip a middle four. It’s necessary for everyone to learn basic things like literacy and arithmetic, but remember that the idea of setting school leaving age at 18 was supposed to be that you would then have finished your education. If most people are going to be going to college, then school leaving age should be set at 14, so that you can spend those next four years learning something useful instead of just marking time while you memorize the dates of Napoleon’s battles and the agricultural products of Denmark.
I do know what was censored and why, and I think Eliezer was wrong to delete the material in question.
That’s a separate issue from whether waitingforgodel’s method of expressing his (correct) disagreement with the censorship is sane or reasonable—of course it isn’t.
I’ve had to consciously adjust my reactions on this sort of thing a few times, by reminding myself that the amount I should care about saving 1 euro on a product should not depend on the total price—but only and specifically on how frequently I will buy the product.
Put another way: it helps to have the right formula to replace the wrong one.
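Concretely, here’s a minimal sketch of the replacement formula (my own illustration): what the saving is worth per year is the per-purchase saving times the purchase frequency, and the item’s price doesn’t appear in it at all.

```python
# Yearly value of a per-purchase saving: it scales with how often you buy,
# not with the item's price.

def yearly_value(saving_per_purchase: float, purchases_per_year: float) -> float:
    return saving_per_purchase * purchases_per_year

# Saving 1 euro on a 3-euro coffee bought 200 times a year...
print(yearly_value(1.0, 200))    # 200.0 euros/year
# ...beats saving 1 euro on a 1000-euro laptop bought once every 3 years.
print(yearly_value(1.0, 1 / 3))  # ~0.33 euros/year
```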
Because you couldn’t. In the ancestral environment, there weren’t any scientific journals where you could look up the original research. The only sources of knowledge were what you personally saw and what somebody told you. In the latter case, the informant could be bullshitting, but saying so might make enemies, so the optimal strategy would be to profess belief in what people told you unless they were already declared enemies, but to base your actions primarily on your own experience, which is roughly what people actually do.
The purpose of thought experiments and other forms of simulation is to teach us to do better in real life. Obviously, no simulation can be perfectly faithful to real life. But if a given simulation is not merely imperfect but actively misleading, such that training in the simulation will make your real performance worse, then rejecting the simulation is a perfectly rational thing to do.
In real life, if you think the greater good requires you to do evil, you are probably wrong. Therefore, given a thought experiment in which the greater good really does require you to do evil, rejecting the thought experiment on the grounds that it is worse than useless for training purposes is a correct answer.
Consider the two possible explanations in the first scenario you describe:
1. Humans really all have tails.
2. The AI is just a glorified chat bot that takes in English sentences, jumbles them around at random and spits the result out. Admittedly it doesn’t have code for self-deception, but it doesn’t have any significant intelligence either. All I did to get the supposed 99% success rate was to basically feed in the answers to the test problems along with the questions. Having dedicated X years of my life to working on AI, I have strong motive for deceiving myself about these things.
If I were in the scenario you describe, and inclined to look at the matter objectively, I would have to admit the second explanation is much more likely than the first. Wouldn’t you agree?
If I observe that I did read the thread to which you refer, and that I still think your current course of action is stupid and crazy (and that’s coming from someone who agrees with you about the censorship in question being wrong!), will that change your opinion even slightly?
Here’s my shot at what it would look like:
“Hey guys, I understand contemporary politics is not considered appropriate for this site, so I’ve started writing a series of posts on my own blog; here’s the link if anyone wants to check it out.”
It so happens that, as a libertarian, I sympathize with your agenda (and would happily follow the link if you did the above), but at the same time I don’t think you can write a political post while leaving out the politics. (Eliezer managed to write some good posts about politics while leaving out the politics, which was a non-trivial feat in itself; I think that’s about the best you can do.) And there are good reasons why we try to avoid politics on Less Wrong. So the best I can suggest is to write what you want to write on a blog where it’s appropriate.
Last time I ran into that argument, Fidel Castro was the example given. My reply was this:
If you heard a proposal to kill Fidel Castro, would you approve? Maybe. (Though even that’s not quite as simple as it sounds, when you consider things like precedent and ethical prohibitions.)
If the proposal involved dropping a hydrogen bomb on Havana, would you still approve? Of course not!
This, I claimed, sufficiently refutes the idea that getting rid of a handful of bad apples justifies the death of everyone.