Re: “information that appears forbidden or secret, seems more important and trustworthy”: Michael Scheuer says the same thing about how the CIA analyzes data. He claims that public sources are often ignored in favor of confidential ones, even when it’s irrational to do so.
Grant
I love most of your posts, but I think you might be off on this one.
Why would successful cultures reward people who try to discover and understand knowledge that was already known? I don’t think they would; I think they’d reward people who go after secrets, mysteries, etc. People who make use of knowledge (preexisting or otherwise) for their own gain are of course rewarded anyway, but for the most part I don’t think that describes science very well, does it?
What would be the purpose of rewarding people (who cannot make use of the knowledge) to ‘discover’ things already in the public domain? Wouldn’t we rather have them striving to solve puzzles? I know it’s enjoyable for many of us to ‘discover’ science on our own, but for your average person I think that’s a waste of time. In other words, I think scarcity in knowledge is beneficial for society, though maybe not for science nuts.
Can’t you usually audit courses in most universities for free?
Isn’t the state-space of problems like this known to exceed the number of atoms in the Universe? There is a term for problems which are rendered unsolvable because there just isn’t enough possible state-storing matter to represent them, but I can’t think of it now.
Pardon me if this is a stupid question; my experience with AI is limited. Funny that Eliezer should mention Haskell, I’ve got to get back to trying to wrap my brain around ‘monads’.
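To put the state-space point in rough numbers, here’s a back-of-the-envelope sketch (Go is my own stand-in example, and ~10^80 is just the usual order-of-magnitude estimate for the atom count):

```python
import math

# Rough comparison of a combinatorial state space to the number of atoms
# in the observable Universe (commonly estimated at around 10^80).
# Go as an illustrative stand-in: 361 points, each empty, black, or white.
go_positions_upper_bound = 3 ** 361   # loose upper bound on board configurations
atoms_in_universe = 10 ** 80          # common order-of-magnitude estimate

print(f"Go positions (upper bound): ~10^{int(math.log10(go_positions_upper_bound))}")
print("Atoms in Universe:          ~10^80")
print(go_positions_upper_bound > atoms_in_universe)  # True, by ~92 orders of magnitude
```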
I call BS. I’ve been to open-source conferences, and I’ve never seen attractive women at them, zombie nurses or otherwise!
Good story, though.
I was more or less surrounded by people of average sanity when I grew up, but they still seemed pretty nuts to me. (Completely off-topic, but I really wonder why people tell children known fantasies such as Santa Claus and the Easter Bunny.)
I don’t think it’s really accurate to say most people are insane. Clearly they need to be sane for the world to keep on running. IMO, they are insane when they can afford to be—which is pretty common in politics, religion and untestable hypotheses, but a LOT less common in the workplace. Most people just aren’t interested in truth because truth doesn’t pay out in a lot of circumstances. I wonder if science might change in your direction (and how quickly?) if betting markets were more commonly accepted.
I think I’ve finally put my finger on the icky feeling I get when reading gender topics on this blog. From my point of view, many comments seem to (inadvertently?) assume that misandry and misogyny are mutually exclusive. I don’t think they are. Both men and women have deeply-rooted flaws, worthy of criticism from anyone attempting self-improvement. My culture (American nerd) contains strong anti-male and anti-female elements, some of which I think are totally irrational.
As an educated, upper-class geek, my own personal experience is that misandry is currently ahead of misogyny by a good margin, but I realize this probably isn’t true of all subcultures. Also, since I’ve only been male, I cannot see things from the perspective of a female.
If I borrow from another topic and say something like “women are stupid when it comes to dating, always going out with the jerks instead of nice guys like me” (which I don’t believe, by the way), this doesn’t indicate that I feel male dating habits are more rational. The problems and irrationalities of modern male attraction are so much more obvious than those of female attraction that they have been made fun of for centuries.
Historically, we’re told gender roles (formal and informal) have primarily been anti-female. I’m sure that’s true, but they’ve also been anti-male. Women weren’t drafted to fight in WWI or WWII (not that I’d want them to be!), and the enforced role of the wife and homemaker isn’t necessarily worse than the father’s role as the sole breadwinner.
Laura, I don’t think being called “one of the guys” is in any way an insult to women (provided it comes from men; each gender seems to have an inflated view of itself relative to the other). Men have positive qualities lacking in women, and vice versa; if you combine the best of both worlds, good for you. Calling a guy “sensitive” isn’t an insult when it comes from women, is it?
In computer engineering, I’ve observed the opposite effect from what Laura describes. Women are so rare (approximately 1 in 30 among American undergrads, probably much higher among foreign grad students) they seem to be treated very well. A few times it even interfered with my classwork, due to the ease with which a lone female student was able to monopolize the TA’s time (not that I’m blaming the student for this).
I think men both irrationally favor and disfavor women based on their gender. I think attractive women are a lot more likely to be favored. I’m not sure if women do the same, though I’d guess they do, just to lesser degrees.
If we are taking a “social engineering” viewpoint towards increasing male confidence, I would think the best thing would be for domestic women to just be more understanding towards under-confident men. Learning to be good in bed really isn’t rocket science; it’s just that it’s hard to acquire the sort of experience and honest feedback needed in order to develop those skills.
I await the proper timing and forum in which to elaborate my skepticism that we should focus on trying to design a God to rule us all. Sure, have a contingency plan in case we actually face that problem, but it seems not the most likely or important case to consider.
I find the idea of an AI God rather scary. However, unless private AIs are made illegal or heavily regulated, is there much danger of one AI ruling all the lesser intelligences?
This very much reminds me of people’s attitude towards cute, furry animals:
- Some like to make furry animals happy by preserving their native habitats.
- Some like to forcibly keep them as pets so they can make them even happier.
- Some like to tear off their skin and wear it, because their fur is cute and feels nice.
I’m hoping we’d all defect on this one. Defecting isn’t always a bad thing anyway; many parts of our society depend on defection in prisoner’s dilemmas (such as competition between firms).
When I first studied game theory and prisoner’s dilemmas (on my own, not in a classroom) I had no problem imagining the payoffs in completely subjective “utils”. I never thought of a paperclip maximizer, though.
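For concreteness, here’s a minimal sketch of the kind of payoff matrix I had in mind (the specific numbers are my own arbitrary ‘utils’, not anything from the post):

```python
# One-shot prisoner's dilemma with payoffs in arbitrary "utils".
# Keys are (my_move, their_move); values are (my_utils, their_utils).
# The specific numbers are illustrative only.
PAYOFFS = {
    ("C", "C"): (3, 3),   # mutual cooperation
    ("C", "D"): (0, 5),   # I'm exploited
    ("D", "C"): (5, 0),   # I exploit
    ("D", "D"): (1, 1),   # mutual defection
}

# Defection strictly dominates: whatever the opponent does, "D" pays me more.
for their_move in ("C", "D"):
    assert PAYOFFS[("D", their_move)][0] > PAYOFFS[("C", their_move)][0]

print("D dominates, yet mutual cooperation (3,3) beats mutual defection (1,1).")
```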
I know this is quite a bit off-topic, but in response to:
We’re born with a sense of fairness, honor, empathy, sympathy, and even altruism—the result of our ancestors adapting to play the iterated Prisoner’s Dilemma.
Most of us are, but there is a small minority of the population (1-3%) that is specifically born without a conscience (or much of one). We call them sociopaths or psychopaths. This is seemingly advantageous because it allows those people to prey on the rest of us (i.e., defect where possible), provided they can avoid detection. While I’m sure Eliezer knows this (and likely knows more about the subject than I do), its omission from his post IMO highlights a widespread and costly bias: pretending these people don’t exist, or pretending they can be “cured”.
If “rational” actors always defect and only “irrational” actors can establish cooperation and increase their returns, this makes me question the definition of “rational”.
However, it seems like the priors of a true prisoner’s dilemma are hard to come by (absolutely zero knowledge of the other player and zero communication). Don’t we already know more about the paperclip maximizer than the scenario allows? Any superintelligence would understand tit-for-tat play, and know that other intelligences should understand it as well. Knowing this, it seems like it would first try a tit-for-tat strategy when playing against an opponent of some intelligence.
If the intelligence knew the other player was stupid, it wouldn’t bother. Humans don’t try to cooperate with non-domesticated wolves or hawks when they hunt, after all.
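A toy simulation of what I mean (payoffs and round count are my own illustrative choices):

```python
# Iterated prisoner's dilemma: tit-for-tat opens with cooperation, then
# mirrors the opponent's previous move. Per-round payoffs are my payoff
# given (my_move, their_move); the numbers are illustrative.
PAYOFF = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}

def tit_for_tat(opponent_history):
    return "C" if not opponent_history else opponent_history[-1]

def always_defect(opponent_history):
    return "D"

def play(strat_a, strat_b, rounds=10):
    moves_a, moves_b = [], []
    score_a = score_b = 0
    for _ in range(rounds):
        ma = strat_a(moves_b)  # each strategy sees only the opponent's past moves
        mb = strat_b(moves_a)
        moves_a.append(ma)
        moves_b.append(mb)
        score_a += PAYOFF[(ma, mb)]
        score_b += PAYOFF[(mb, ma)]
    return score_a, score_b

print(play(tit_for_tat, tit_for_tat))    # (30, 30): stable mutual cooperation
print(play(tit_for_tat, always_defect))  # (9, 14): exploited once, then mirrored
```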
Eliezer,
As someone who rejects defection as the inevitable rational solution to both the one-shot PD and the iterated PD, I’m interested in the inconsistency of those who accept defection as the rational equilibrium in the one-shot PD, but find excuses to reject it in the finitely iterated known-horizon PD.
I am guilty of the above. In the one-shot PD there is no communication, and no chance for cooperation to help. In the iterated PD, there is a chance the other player will be playing tit-for-tat as well.
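To make that chance concrete: suppose with some probability p the other player is running tit-for-tat, and otherwise always defects. A quick sketch (the payoffs, horizon, and function names are my own illustrative choices):

```python
# If my opponent plays tit-for-tat with probability p (and always defects
# otherwise), when does opening with cooperation beat always defecting?
# Stage payoffs T=5, R=3, P=1, S=0 over n rounds; all numbers illustrative.
def ev_tit_for_tat(p, n):   # I play tit-for-tat myself
    return p * (3 * n) + (1 - p) * (0 + (n - 1) * 1)

def ev_always_defect(p, n):
    return p * (5 + (n - 1) * 1) + (1 - p) * (1 * n)

n = 10
for p in (0.01, 0.05, 0.10, 0.50):
    better = "cooperate" if ev_tit_for_tat(p, n) > ev_always_defect(p, n) else "defect"
    print(f"p = {p:.2f}: {better} first")
# Cooperation wins whenever p > 1/(2n - 3); for n = 10 that's under 6%.
```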
If one wants to walk to work, one can live close to one’s workplace. I know quite a few people who walk or bike to work. Most people don’t adopt new technology because they are coerced into doing so; they do it because it makes their lives better. In the zero-sum world of status-seeking, I could see how some people might feel coerced into adopting new technology, or lose their status. But feeling coerced and being coerced seem to be two very different things.
Eliezer,
In my experience, smart people have many original theories. They likely hold these theories because they know they are smarter than most people, and so don’t see any reason to trust common knowledge. Also, holding original and complex theories makes them seem more intelligent. Most original theories are of course incorrect, even when they come from smart people. Intelligent, charismatic people are very good at convincing themselves and others that they are correct.
IMO, this is one of the main reasons those smart, competent people in charge screw up so often. They don’t do it because they aren’t smart or competent; they do it because they have a bias in favor of their own ideas and theories, just like everyone else.
I don’t think the second theory is any less “predictive” than the first. It could have been proposed at the same time as or before the first, but it wasn’t. Why should the predictive ability of a theory vary depending on the point in time at which it was created? David Friedman seems to prefer the first because it demonstrates more ability on the part of the scientist who created it (i.e., he got it after only 10 tries).
Unless we are given more information on the problem, I think I agree with David.
If all people, including yourself, become corrupt when given power, then why shouldn’t you seize power for yourself? On average, you’d be no worse than anyone else, and probably at least somewhat better; there should be some correlation between knowing that power corrupts and not being corrupted.
Knowing power corrupts, a self-interested individual would seek to surround himself with those who did not seek power (or at least did not seek power that could be wielded against him). These individuals would not be likely to put up with people who sought power with the potential to do serious harm, so it would be advantageous for this self-interested individual not to seek power at all (or to do it clandestinely).
I’m not sure I understand why you say it can’t be group selection. It seems perfectly possible to me, albeit much rarer than individual selection.
Suppose all the tribes of humans (or monkeys, for that matter) on earth were populated by perfectly rational sociopaths. Then suppose an individual mutant developed a conscience. If this mutant gets lucky and passes his or her genes on a good number of times, you might end up with a tribe of people with consciences. This tribe would have an enormous advantage over the other sociopathic tribes, and would almost certainly out-perform them if other variables were roughly equal.
I think the same argument can be made for memes and religion. If people believe some god in the sky is watching them, they are less likely to engage in socially destructive behavior (like theft or violence when they can get away with it). Thus, societies that practiced this sort of self-deception would be more successful than ones which did not. Yes, it would be rare for an entire tribe to adopt these beliefs (for individuals it’s a prisoner’s dilemma), but once it happened, that tribe would have a huge advantage over tribes of sociopaths.
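Here’s a crude toy model of that dynamic, with every parameter invented purely for illustration:

```python
# Toy group-selection model: within any mixed tribe, defectors out-reproduce
# cooperators (free-riding), yet tribes with more cooperators grow faster
# overall. All numbers are invented for illustration.
def step(tribes):
    new_tribes = []
    for cooperators, defectors in tribes:
        total = cooperators + defectors
        growth = 1.0 + 0.5 * cooperators / total  # cooperation lifts the whole tribe
        new_tribes.append((
            cooperators * growth,
            defectors * growth * 1.1,             # defectors free-ride a bit harder
        ))
    return new_tribes

tribes = [(100.0, 0.0), (50.0, 50.0), (0.0, 100.0)]
for _ in range(20):
    tribes = step(tribes)

for (c, d), label in zip(tribes, ("all cooperators", "mixed", "all defectors")):
    print(f"{label}: population ~{c + d:,.0f}")
# The all-cooperator tribe ends up orders of magnitude larger than the
# all-defector tribe, even though defectors take over within the mixed tribe.
```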
I’m not sure if this has been discussed here before, but how isn’t atheism a religion? It has to be accepted on faith, because we can’t prove there isn’t a magical space god that created everything. I happen to have more faith in atheism than in Christianity, but ultimately it’s still faith.
...we never pry open the black box and scale the brain bigger or redesign its software or even just speed up the damn thing.
How would this be done? In our current economy, humans all run similar software on similar hardware. Yet we still have difficulty understanding each other, and even two tradesmen of the same culture and gender, who grew up in the same neighborhood, have knowledge of their trade the other likely cannot understand (or may not even know that they have). We’re far from being able to tinker around in each other’s heads. Even if we had the physical ability to alter others’ thought processes, it’s not clear that doing so (outside of increasing connectivity; I would love to have my brain wired up to a PC and the Internet) would produce good results. Even cognitive biases which are so obviously irrational have purposes which are often beneficial (even if only individually), and I don’t think we could predict the effects of eliminating them en masse.

AI could presumably be much more flexible than human minds. An AI which specialized in designing new computer processors probably wouldn’t have any concept of the wind, sunshine, biological reproduction or the solar system. Who would improve upon it? Being that it specialized in computer hardware, it would likely be able to make other AIs (and itself) faster by upgrading their hardware, but could it improve upon their logical processes? Beyond improving speed, could it do anything to an AI which designed solar panels?
In short, I expect the amount of “Hayekian” knowledge in an AI society to be far, far greater than in a human one, due to the flexibility of hardware and software that AI would allow over the single blueprint of a human mind. AIs would have to agree on a general set of norms in order to work together, norms most might not have any understanding of beyond the need to follow them. I think this could produce a society where humans are protected.
Though I don’t know anything about the plausibility of a single self-improving AGI being able to compete with (or conquer?) a myriad of specialized AIs. I can’t see how the AGI would be able to make other AIs smarter, but I could see how it might manipulate or control them.
The problems that I see with friendly AGI are:
1) It’s not well understood outside of AI researchers, so the scientists who create it will build what they think is the most friendly AI possible. I understand what Eliezer is saying about not using his personal values, so instead he uses his personal interpretation of something else. Eliezer says that making a world which works by “better rules” and then fading away would not be a “god to rule us all”, but who decides on those rules (or on the processes by which the AI decides on those rules)? Ultimately it’s the coders who design the thing. It’s a very small group of people with specialized knowledge changing the fate of the entire human race.
2) Do we have any reason to believe that a single foom will drastically increase an AI’s intelligence, as opposed to making it just a bit smarter? Typically, recursive self-improvement does make significant headway, until the marginal return on investment in more improvement is eclipsed by other (generally newer) projects.
3) If an AGI could become so powerful as to rule the world in a short time span, any group which disagrees with how an AGI project is going will try to create their own before the first one is finished. This is a prisoner’s dilemma arms-race scenario. Considerations about its future friendliness could be put on hold in order to get it out “before those damn commies do”.
4) In order to create an AGI before the opposition, vast resources would be required. The process would almost certainly be undertaken by governments. I’m imagining the cast of characters from Dr. Strangelove sitting in the War Room and telling the programmers and scientists how to design their AI.
In short, I think the biggest hurdles are political, and so I’m not very optimistic they’ll be solved. Trying to create a friendly AI in response to someone else creating a perceived unfriendly AI is a rational thing to do, but starting the first friendly AI project may not be rational.
I don’t see what’s so bad about a race of machines wiping us out, though; we’re all going to die and be replaced by our children in one way or another anyway.
To me, the decision is very easy. Omega obviously possesses more prescience about my box-taking decision than I do myself. He’s been able to guess correctly in the past, so I’d see no reason to doubt him in my own case. With that in mind, the obvious choice is to take box B.
If Omega is so nearly always correct, then determinism is shown to exist (at least to some extent). That being the case, causality would be nothing but an illusion. So I’d see no problem with it working in “reverse”.
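In expected-value terms, using the standard Newcomb amounts and an assumed predictor accuracy p:

```python
# Expected value of one-boxing vs. two-boxing, given a predictor with
# accuracy p. Standard Newcomb amounts: box A holds $1,000; box B holds
# $1,000,000 iff Omega predicted you would take only box B.
def ev_one_box(p):
    return p * 1_000_000                     # predicted correctly -> B is full

def ev_two_box(p):
    return p * 1_000 + (1 - p) * 1_001_000   # predicted correctly -> B is empty

for p in (0.50, 0.90, 0.99):
    print(f"p = {p:.2f}: one-box ${ev_one_box(p):,.0f} vs two-box ${ev_two_box(p):,.0f}")
# One-boxing wins for any accuracy above 50.05%, which Omega's track record
# easily clears.
```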