The description of behaviorists does seem a bit cartoonish, but still it’s a great post and an interesting, thought-provoking read. Good to see a commenter of the calibre of Richard Kennaway in the thread, too.
Hopefully_Anonymous
Who cares if Caledonian is banned from here? Hopefully he’ll post more on my blog as a result. I’ve never edited or deleted a post from Caledonian or anyone else (except to protect my anonymity). Neither has TGGP to my knowledge. As I’ve posted before on TGGP’s blog, I think there’s a hierarchy of blogs, and blogs that ban and delete for something other than stuff that’s illegal, can bring liability, or is botspam aren’t at the top of the hierarchy.
If no post of Caledonian’s was ever edited or deleted from here (except perhaps for excessive length), this blog would be just as good. Maybe even better.
Post what you want to post most. The advice that you should go against your own instincts and pander is bad, in my opinion. The only things you should force yourself to do are: (1) try to post something every day, and (2) try to edit and delete comments as little as possible. I believe the result will be an excellent and authentic blog with the types of readers you want most (and that are most useful to you).
great post.
I don’t think promoting truth (or “truth”) will serve an aim of a better understanding of the world as much as promoting transparency. There seems to me to be something more naturally subversive to anti-rationality about promoting transparency than promoting “truth”.
In the last similar thread someone pointed out that we’re just talking about increasing existential risk in the tiny zone where we observe (or reasonably extrapolate) each other existing, not the entire universe. It confuses the issue to talk about destruction of the universe.
Really this is all recursive to Joy’s “Grey goo” argument. I think what needs to be made explicit is weighing our existential risk if we do or don’t engage in a particular activity. And since we’re not constrained to binary choices, there’s no reason for that to be a starting point, unless it’s nontransparent propaganda to encourage selection of a particular unnuanced choice.
A ban on the production of all novel physics situations seems more extreme than necessary (although the best arguments for that should probably be heard and analyzed). But unregulated, unreviewed freedom to produce novel physics situations also seems like it would be a bit extreme. At the least, I’d like to see more analysis of the risks of not engaging in such experimentation. This stuff is probably very hard to get right, and at some point we’ll probably get it fatally wrong in one way or another and all die. But let’s play the long odds with all the strategy we can, because the alternative seems like a recursive end state (almost) no matter what we do.
Oxford is a little different from the Wailing Wall: it’s one of the world’s earliest universities, and it’s been one of the world’s great universities for centuries. Eliezer, you would love Florence. In England and in other old countries, I’m most impressed by ancient pubs. One can see how an important church or castle can remain for centuries. But for a little old pub to eke it out for that long, there’s something special about that, IMO.
pdf, no I don’t mean the FAI project. I mean the things Eliezer discussed specifically in the OP and follow-up comments. He gives a long catalog of recommended actions to solve individual unhappiness. I’m pointing out that in many instances pharmaceutical or other solutions might be cheaper.
“No Hopefully, just think about it as math instead of anthropomorphizing here. This is kids stuff in terms of understanding intelligence.”
I disagree. It seems to me that you’re imagining closed systems that don’t seem to exist in the reality we live in.
Caledonian, yes, I simplified to the point of inaccuracy, but thanks for providing the footnote.
The interaction between brain and environment is complex, but reactions are variable enough that I think it’s difficult to say X environmental stimulus (or lack thereof) produces Y emotional state in a human brain. That goes for obesity, poverty, the whole range (setting aside some extreme and developmental examples many people here could conjure up). As for a “rational fear of death” or “existential angst”, it’s entirely possible to go through life happy and excited while at least conceptually experiencing stuff like this. It’s also possible to go through life without thinking much about these things at all.
Carl Shulman first clued me into this line of thinking (in a comment on my blog) when he pointed out that a lot of the stress of being subordinated in status hierarchies could be eliminated pharmaceutically. I thought it was a brilliant point which potentially solved the situational need for hierarchical relationships between groups of humans without putting an inevitable health cost on those not at the top of them.
I bring this up because Eliezer seems to be proposing large social undertakings (although I suppose they could be done at the individual level, they’d still be large in aggregate) which would seem to me to come at an economic cost. If a pill or a treatment is cheaper and accomplishes the same outcome (happiness), then that would be an argument to go with the more efficient outcome in that case.
Michael, life doesn’t have to be “meaningful” for people to be happy. Nor do “genuinely loving relationships” seem to be necessary. It seems to me to be just a neurochemical state that can probably be induced by a variety of methods, not all of them social.
Like you, I noticed the cryonics throw-in. I thought it was problematic for a different reason. It’s a bit of a tell, IMO, that cryonics serves at least (if not only) as an opiate for Eliezer. I look at cryonics as just a persistence-maximizing hedge against information-theoretic death, and probably a weak and unsuccessful one at that (for probabilistic reasons, not because the science is unsound).
I lived for a time with someone who was probably depressed due to genetic factors. They would always have rationalizations about their depression that had to do with social events and factors. It seemed pretty clear to me that their depression was independent of those factors and was rooted in their biology. But for some reason, they were very resistant to the idea that they were sad simply because their brain produced a lot of the sad chemicals, with little correlation to social factors or life circumstance. I see a similar reluctance to acknowledge what I think is this common phenomenon in a lot of the commenters here.
“and it doesn’t have the dynamics that make “are my desires correct?” seem like a sensible thought.” Sounds like overconfidence to me.
This may be the rare case where I’m more of a materialist reductionist than you, Eliezer. I think unhappiness is just brain structure/chemistry. I’d go further than Ferriss and say excitement is too. The flip side of this is that you may be giving a lot of people bad advice and unrealistic expectations in this post. For a lot of people their unhappiness is a complicated, unsolved challenge of bioengineering. With better technology, perhaps we’ll be able to solve it. Until then, they may spend a period of time being unhappy, and not because they failed to follow the fuzzy advice you give in the last paragraph. And not because of anything about “morality”.
Phillip and Robomoon deserve reputation points for putting this much thought-work into the topics of existential risk and optimizing our future reality (Phillip in particular).
Since I was a kid, disappointed that Agatha Christie mysteries were written so as not to be predictable in advance, I’ve wanted to read a mystery series where at the end of each chapter, there are always enough clues to predict who the murderer is. But it’s very, very hard after the 1st chapter, a little bit easier after the 2nd chapter, etc. That’s good mystery writing, in my opinion. Not creating a solution in the last few paragraphs that’s impossible to predict in advance, because it’s not grounded in any of the prior clues.
Mike Blume, I’d never heard of it, but Fleep is a truly fantastic recommendation. I read about 5 minutes’ worth of it and it keeps getting better.
Eliezer, it wasn’t a serious post. My serious view is that we’re probably all going to die, but the question is whether any of us can beat information-theoretic death and buy that long-odds lottery ticket for reanimation by more technologically advanced unknown parties. I sure hope I can.
Steven, actually you recover the argument a bit with that tactic. Perhaps, in a Kasparovian irony, the machines were harnessing the “deep creativity” (part of a Kahneman System 1 intelligence?) of a massively parallel network of human brains to maximize their fuel efficiency and new fuel location innovations. If machines do become more powerful than us before they become as creative as us, a Matrix-type situation becomes somewhat more plausible. Thanks for inadvertently making the Matrix a tiny bit more plausible to me, and thus more enjoyable.
“I think it’s cleaner to build the foundation around the basic laws of physics.” I think it’s more honest to build the foundation on the observer, over the course of their life arc, encountering different experiences.
hmm, there’s this thing called language I’m speaking and this thing called thoughts I’m thinking.
hmm, there are these things called scientists, and this body of thought called science. This is what they’ve purported to discover about reality.
hmm, these are the tiny handfuls of experiments I’ve done myself over the course of my life.
which apparently has, for many, led to
… … [death/permanent lack of conscious experience].
That’s my interaction with science, and I think it gives the scope of most or all of our interactions with science, and with determining the nature of reality generally. Reality seems big and messy, internally and externally. It seems humans past and present have figured some (even many) things out about reality, but as individuals passing through we have limited opportunities to confirm these discoveries. There seems to be a strong motivation to express undue certainty about reality to increase one’s status, as well as, perhaps, to hold strong beliefs just to ease one’s mind or as an aesthetic, but that doesn’t make the strong beliefs justified.
But that, not physics, seems to me to be the honest starting point. For Einstein, for Witten, for Eliezer, and for me.
Eliezer, I think you’ve given ample proof that Watson wrote some things as cartoonish as your OP suggests. I don’t think this has been shown to generalize across all of the behaviorist scientists of his era. Ian Maxwell’s description of behaviorists sounds like a reasonable way for science to be done pre-MRIs, etc. But your criticism, in your OP, of Watson’s approach (or at least his rhetoric) hits the bull’s-eye and is a perfect contribution to the mission of this blog.