Sick babies are often too weak to suck much, and this is true even if the baby isn't sick enough to require a NICU stay. If a baby has to be in the hospital, it can be difficult logistically to breastfeed them, and of course if women aren't dedicated to it, they won't maintain milk. My son was required to stay in the NICU for 4 days (for ridiculous reasons; he was fine). I was only allowed to stay in the hospital 2 nights, and I was exhausted and needed to sleep. I ended up allowing them to feed him formula since my milk was slow to come in; no one strongly encouraged me to stay there and breastfeed in the night. Pumping was briefly suggested, and I got a 5-minute tutorial on how to use the pump. It's great that some hospitals are encouraging breastfeeding and providing donor milk to premature babies, but I don't know how universal this is. I know other women who have complained of problems similar to the ones I faced.
Ozy, sibling studies have a major problem: they don't take into account the reasons why a mother would breastfeed one child but not the other. If you ask moms about this, they always have an answer, and it is usually something like, 'Josh was very sleepy and just wouldn't suck. We had to give him a bottle to get him to eat at all.' My mother gives basically this exact story for why I was breastfed and my brother was not. And my brother had developmental problems and I did not. I don't think this is because he was fed formula. Remember, weaker/sicker babies are more likely to get formula, and sicker/older/more tired/more depressed mothers are more likely to formula feed. In order to breastfeed, everything has to go right. One thing goes wrong, and it's on to formula.
It's a mess. In general, poor people are more likely to use formula since they have to go back to work and don't get the same level of indoctrination (oops, education) about the benefits of breastfeeding, and breastfeeding is a lot of work. Then there's the issue that sicker babies often have to be formula fed, because they have weaker sucking reflexes and/or require special high-calorie formula. Multiples are more likely to be formula fed, for obvious reasons. Babies of older mothers are more likely to be formula fed, since older moms produce less milk, etc., etc. More obsessive and more highly educated mothers are more likely to breastfeed, for obvious reasons. In general, my conclusion from the (noncomprehensive) reading I've done is that breastfeeding clearly reduces early respiratory and GI infections as well as colic and GI distress (while breastfeeding), but has an unclear impact on long-term psychological, physical, and cognitive health. Overall those things look better in breastfed babies, but attempts to control for other factors often negate the effects, leading to yo-yoing articles about the supremacy of breast milk depending on the fashion of the day. However, going back to theory, it would be very strange if breast milk weren't better, given humans' past experience with making food substitutes. That being said, the healthiest baby is a fed baby, and the impact of formula vs. breastfeeding is unlikely to outweigh many other factors in a person's life, such as milk production, the need to earn money to support the family, and the mental health of the mother (depression in mothers is very highly correlated with poor long-term outcomes).
There is another interpretation, which is that strong property rights *are* moral. I am currently 80% through Atlas Shrugged, which makes a very strong case for this interpretation. Basically, when you take away property rights, whether the material kind, the product of one's labor, or the spiritual kind, you give power to those who are best at taking. Ayn Rand presents the results of this kind of thinking, the actions that follow, and the society it creates. I strongly recommend you read it.
Excellent post with good food for thought. I’m interested to hear more about how people on this blog avoid superstitions.
I agree with Ray: the chapter was too long and spent too many words saying what it was trying to say. I read it in several sittings due to the lack of an adequate time block and couldn't find my place, which led to me losing time, rereading portions, and feeling generally frustrated. I think the impact would be improved by cutting it down by a considerable margin.
I agree that this is an important issue we may have to deal with. I think it will be important to separate doing things for the community from doing things for individual members of the community. For example, encouraging people to bring food to a potluck or volunteer at Solstice is different from setting the expectation that you help someone with their webpage for work or help out members of the community who are facing financial difficulties. I've been surprised by how many times I've had to explain that expecting the community to financially support people is terrible on every level and should be actively discouraged as a community activity. This is not an organized enough community, with high enough bars to membership, to do things like collections. I do worry that people will hear a vague 'Hufflepuff!' call to arms and assume it means doing stuff for everyone else whenever you feasibly can. It shouldn't. It should be a message about what you do in the context of the public community space. What you choose to do for individuals is your own affair.
I understand the anxiety issues of 'Do I have what it takes to accomplish this...'
I don't understand why the existence of someone else who can would damage Eliezer's ego. I can observe that many other people's sense of self is violated if they find out that someone else is better at something they thought they were the best at (the high school football champion losing their position in college, etc.). However, in order for this to occur, the person needs to 1) in fact misjudge their relative superiority to others, and 2) value the superiority for its own sake.
Now, Eliezer might take the discovery of a better rationalist/fAI designer as proof that he misjudged his relative superiority, but unless he thinks his superiority is itself valuable, he should not be bothered by it. His own actual intelligence, after all, will not have changed; only the state of his knowledge of others' intelligence relative to his own.
Eliezer must enjoy thinking he is superior for the loss of this status to bother his 'ego'.
Though I suppose one could argue that this is a natural human quality, and Eliezer would need to be superhuman or lying to say otherwise.
Again, I have difficulty understanding why so many people place such a high value on 'intelligence' for its own sake, as opposed to as a means to an end. If Eliezer is worried that he does not have enough mathematical intelligence to save the universe from someone else's misdesigned AI, then this is indeed a problem for him, but only because the universe will not be saved. If someone else saves the universe instead, Eliezer should not mind, and should go back to writing sci-fi novels. Why should Eliezer's ego cry at the thought of being upstaged? He should want that to happen if he's such an altruist.
I don’t really give a damn where my ‘intelligence’ falls on some scale, so long as I have enough of it to accomplish those things I find satisfying and important TO ME. And if I don’t, well, hopefully I have enough savvy to get others who do to help me out of a difficult situation. Hopefully Eliezer can get the help he needs with fAI (if such help even exists and such a problem is solvable).
Also, to those who care about intelligence for its own sake, does the absolute horsepower matter to you, or only your abilities relative to others? Ie, would you be satisfied if you were considered the smartest person in the world by whatever scale, or would that still not be enough because you were not omniscient?
Scott: “You have a separate source of self-worth, and it may be too late that you realize that source isn’t enough.”
Interesting theory of why intelligence might have a negative correlation with interpersonal skills, though it seems like a ‘just so story’ to me, and I would want more evidence. Here are some alternatives: ‘Intelligent children find the games and small-talk of others their own age boring and thus do not engage with them.’ ‘Stupid children do not understand what intelligent children are trying to tell them or play with them, and thus ignore or shun them.’ In both of these circumstances, the solution is to socialize intelligent children with each other or with an older group in general. I had a horrible time in grade school, but I socialized with older children and adults and I turned out alright (well, I think so). I suppose without any socialization, a child will not learn how to interpret facial expressions, intonations, and general emotional posturing of others. I’m not certain that this can’t be learned with some effort later in life, though it might not come as naturally. Still, it would seem worth the effort.
I'm uncertain whether Eliezer-1995 was equating intelligence with the ability to self-optimize for utility (ie intelligence = optimization power) or equating intelligence with utility (intelligence is great in and of itself). I would agree with Crowly that intelligence is just one of many factors influencing the utility an individual gets from his/her existence. There are also multiple kinds of intelligence. Someone with very high interpersonal intelligence and many deep relationships but abysmal math skills may not want to trade places with the 200-IQ math whiz who's never had a girlfriend and is still trying to compute the ultimate 'girlfriend-maximizing utility equation.' Just saying...
Anyone want to provide links to studies correlating IQ, ability, and intelligences in various areas with life-satisfaction? I’d hypothesize that people with slightly above average math/verbal IQs and very above average interpersonal skills probably rank highest on life-satisfaction scales.
Unless, of course, Eliezer-1995 didn't think utility could really be measured by life satisfaction, and by his methods of utility calculation, intelligence beats out all else. I'd be interested in knowing what utility meant to him under this circumstance.
Oh, come on, Eliezer, of course you thought of it. ;) However, it might not have been something that bothered you, as in:
A) You didn’t believe actually having autonomy mattered as long as people feel like they do (ie a Matrix/Nexus situation). I have heard this argued. Would it matter to you if you found out your whole life was a simulation? Some say no. I say yes. Matter of taste perhaps?
B) OR You find it self evident that ‘real’ autonomy would be extrapolated by the AI as something essential to human happiness, such that an intelligence observing people and maximizing our utility wouldn’t need to be told ‘allow autonomy.’ This I would disagree with.
C) OR You recognize that this is a problem with a non-obvious solution for an AI, and thus intend to deal with it somehow in code ahead of time, before starting the volition-extrapolating AI. Your response indicates you feel this way. However, I am concerned even beyond setting an axiomatic function for 'allow autonomy' in a program. There are probably an infinite number of ways that an AI can find to carry out its stated function that will somehow 'game' our own system and lead to suboptimal or outright repugnant results (ie everyone being trapped in a permanent quest: maybe the AI avoids the problem of 'it has to be real' by actually creating a magic ring that needs to be thrown into a volcano every 6 years or so). You don't need me telling you that! Maximizing utility while deluding us about reality is only one. It seems impossible that we could axiomatically safeguard against all possibilities. Asimov was a pretty smart cookie, and his '3 laws' are certainly not sufficient. 'Eliezer's million lines of code' might cover a much larger range of AI failures, but how could you ever be sure? The whole project just seems insanely dangerous. Or are you going to address safety concerns in another post in this series?
Ah! I just thought of a great scenario! The Real God Delusion. Talk about wireheading…
So the fAI has succeeded and it actually understands human psychology and our deepest desires and it actually wants to maximize our positive feelings in a balanced way, etc. It has studied humans intently and determines that the best way to make all humans feel best is to create a system of God and heaven: humans are prone to religiosity, it gives them a deep sense of meaning, etc. So our friendly neighborhood AI reads all religious texts, observes all rituals, and determines the best type of god(s) and heaven(s) (it might make more than one for different people)… So the fAI creates God, gives us divine tasks that we feel very proud to accomplish when we can (religiosity), gives us rules to balance our conflicting internal biological desires, and uploads us after death into some fashion of paradise where we can feel eternal love...
Hey, just saying that even IF the fAI really understood human psychology, that doesn't mean we will like its answer… We might NOT like what most other people do.
I was completely awed by how just totally-mind-blowing-amazing this stuff was the once and only time I tried it. Now, I knew the euphoric-orgasmic state I was in had been induced by a drug, and this knowledge would make me classify it as 'not real happiness,' but if someone had secretly dosed me after saving a life or having sex, I probably would have interpreted it as happiness proper. Sex and love make people happy in a very similar way to cocaine, and don't seem to have the same negative effects as cocaine, but this is probably a dosage issue. There are sex/porn addicts whose metabolism or brain chemistry might be off. I'm sure that if you carefully monitored the pharmacokinetics of cocaine in a system, you could maximize cocaine utility by optimizing dosage and frequency such that you didn't sensitize to it or burn out endogenous serotonin.
Would it be wrong for humans to maximize drug-induced euphoria? Then why not for an AI to?
What about rewarding with cocaine after accomplishing desired goals? Another million in the fAI fund… AHHH… Maybe Eliezer should become a sugar-daddy to his cronies to get more funds out of them. (Do this secretly so they think the high is natural and not that they can buy it on the street for $30)
The main problem as I see it is that humans DON’T KNOW what they want. How can you ask a superintelligence to help you accomplish something if you don’t know what it is? The programmers want it to tell them what they want. And then they get mad when it turns up the morphine drip…
Maybe another way to think about it is we want the superintelligence to think like a human and share human goals, but be smarter and take them to the next level through extrapolation.
But how do we even know that human goals are indefinitely extrapolatable? Maybe taking human algorithms to an extreme DOES lead to everyone being wire-headed in one way or another. If you say, 'I can't just feel good without doing anything… here are the goals that make me feel good, and it CAN'T be a simulation,' then maybe the superintelligence will just set up a series of scenarios in which people can live out their fantasies for real… but they will still all be staged fantasies.
Excuse my entrance into this discussion so late (I have been away), but I am wondering if you have answered the following questions in previous posts, and if so, which ones.
1) Why do you believe a superintelligence will be necessary for uploading?
2) Why do you believe there possibly ever could be a safe superintelligence of any sort? The more I read about the difficulties of friendly AI, the more hopeless the problem seems, especially considering the large amount of human thought and collaboration that will be necessary. You yourself said there are no non-technical solutions, but I can't imagine you could possibly believe in a magic bullet that some individual super-genius will have a eureka epiphany about by himself in his basement. And this won't be like the cosmology conference to determine how the universe began, where everyone's testosterone-riddled ego battled for a victory of no consequence. It won't even be a Manhattan Project, with nuclear weapons tests in barren wastelands… Basically, if we're not right the first time, we're fucked. And how do you expect you'll get that many minds to be so certain that they'll agree it's worth making and starting the… the… whateverthefuck it ends up being? Or do you think it'll just take one maverick with a cult of loving followers to get it right?
3) But really, why don’t you just focus all your efforts on preventing any superintelligence from being created? Do you really believe it’ll come down to us (the righteously unbiased) versus them (the thoughtlessly fame-hungry computer scientists)? If so, who are they? Who are we for that matter?
4) If fAI will be that great, why should this problem be dealt with immediately by flesh, blood, and flawed humans instead of by improved, uploaded copies in the future?
Ok, Eliezer, you are just a human and therefore prone to anger and reaction to said anger, but you, in particular, have a professional responsibility not to come across as excluding people who disagree with you from the discussion and presenting yourself as the final destination of the proverbial buck. We are all in this together. I have only met you in person once, have only had a handful of conversations about you with people who actually know you, and have only been reading this blog for a few months, and yet I get a distinct impression that you have some sort of narcissistic Hero-God complex. I mean, what's with dressing up in a robe and presenting yourself as the keeper of clandestine knowledge? Now, whether or not you actually feel this way, it is something you project and should endeavor not to, so that people (like sophiesdad) take your work more seriously. 'Pyramid head,' 'Pirate King,' and 'Emperor with no clothes' are NOT terms of endearment, and this might seem like a ridiculous admonition coming from a person who has self-presented as a 'pretentious slut,' but I'm trying to be provocative, not leaderly. YOU are asking all of these people to trust YOUR MIND with the dangers of fAI and the fate of the world, and to give you money for it! Sorry to hold you to such high standards, but if you present with a personality disorder any competent psychologist can identify, then this will be very hard for you… unless of course you want to go the 'I'm the Messiah, abandon all and follow me!' route, set up the Church of Eliezer, and start a religious movement with which to get funding… It might work, but it will be hard to recruit serious scientists to work with you under those circumstances...
Oh… I should have read these comments to the end; I somehow missed what you said to sophiesdad.
Eliezer… I am very disappointed. This is quite sad.
I should also add:
6) Where do you place the odds of you/your institute creating an unfriendly AI in an attempt to create a friendly one?
7) Do you have any external validation (ie, unassociated with your institute and not currently worshiping you) for this estimate, or does it come exclusively from calculations you made?
I have a few practical questions for you. If you don't want to answer them in this thread, that's fine, but I am curious:
1) Do you believe humans have a chance of achieving uploading without the use of a strong AI? If so, where do you place the odds?
2) Do you believe that uploaded human minds might be capable of improving themselves/increasing their own intelligence within the framework of human preference? If so, where do you place the odds?
3) Do you believe that increased-intelligence-uploaded humans might be able to create an fAI with more success than us meat-men? If so, where do you place the odds?
4) Where do you place the odds of you/your institute creating an fAI faster than 1-3 occurring?
5) Where do you place the odds of someone else creating an unfriendly AI faster than 1-3 occurring?