That’s the right first question to consider, and it’s something I was thinking about while writing that comment.
I don’t think it’s quite the right question to answer though. What I’m doing to generate these explanations is very different from “Go back to the EEA, and predict forward based on first principles”, and my point is more about why that’s not the thing to be doing in the first place than about the specific explanation for the popularity of ice cream over bear fat.
It can sound nitpicky, but I think it’s important to make hypotheticals concrete because a lot of the time the concrete details you notice upon implementation change which abstractions it makes sense to use. Or, to continue the metaphor, picking little nits when found is generally how you avoid major lice infestations.
In order to “predict” ice cream I have to pretend I don’t already know things I already know. Which things? Why? How are we making these choices? It will get much harder if you take away my knowledge of domestication, but are we to believe these aliens haven’t figured that out? That even if they don’t have domestication on their home planet, they traveled all this way and watched us with bears without noticing what we did to wolves? “Domestication” is hindsight in that it would take me much longer than five minutes as a caveman to figure out, but it’s a thing we did figure out as cavemen before we had any reason to think about ice cream. And it’s sight that I do have, and that the aliens likely would too.
Similarly, I didn’t come up with the emulsification/digestion hypothesis until after learning from experience what happens when you consume a lot of pure oils by themselves. I’m sure a digestion expert could have predicted the result in advance, but I didn’t have to learn a new field of expertise because I could just run the experiment and then the obvious answer becomes obvious. A lot of times, explanations are a lot easier to verify once they’ve been identified than they are to generate in the first place, and the fact that the right explanations come to mind vastly more easily when you run the experiment is not a minor detail to gloss over. I mean, it’s possible that Zorgax is just musing idly and comes up with a dumb answer like “bear fat”, but if he came all this way to get the prediction right you bet your ass he’s abducting a few of us and running some experiments on how we handle eating pure fat.
As a general rule, in real life, fast feedback loops and half decent control laws dominate a priori reasoning. If I’m driving in the fog and can’t see but 10 feet ahead, I’m really uninterested in the question “What kind of rocks are at the bottom of the cliff 100 feet beyond the fog barrier?” and much more interested in making sure I notice the road swerving in time to keep on a track that points up the mountain. Or, in other words, I don’t care to predict which exact flavor of superstimuli I might be on track to overconsume, from the EEA. I care to notice before I get there, which is well in advance given how long ago we figured out domestication. I only need to keep my tastes tethered to reality so that when I get there ice cream and opioids don’t ruin my life—and I get to use all my current tools to do it.
I think this is the right focus for AI alignment too.
The way I see it, Eliezer has been making a critically important argument that if you keep driving in a straight line without checking the results, you inevitably end up driving off a cliff. And people really are this stupid, a lot of times. I’m very much on board with the whole “Holy fuck, guys, we can’t be driving with a stopping distance longer than our perceptual distance!” thing. The general lack of respect and terror is itself terrifying, because plenty of people have tried to fly too close to the sun and lost their wings because they were too stupid to notice the wax melting and descend.
And maybe he’s not actually saying this, but the connotation I associate with his framing, and more importantly the interpretation that seems widespread in the community, is that “We can’t proceed forward until we can predict vanilla ice cream specifically, from before observing domestication”. And that’s like saying “I can’t see the road all the way to the top of the mountain because of fog, so I will wisely stay here at the bottom”. And then feeling terror build from the pressure of people wanting to push forward. Quite reasonably, given that there actually aren’t any cliffs in view, and you can take at least the next step safely. And then reorient from there, with one more step down the road in view.
I don’t think this strategy is going to work, because I don’t think you can see that far ahead, no matter how hard you try. And I don’t think you can persuade people to stop completely, because I think they’re actually right not to.
I don’t think you have to see the whole road in advance, because there are a lot of years between livestock and widespread ice cream. Lots of chances to empirically notice the difference between cream and rendered fats. There’s still time to see it millennia in advance.
What’s important is making sure that’s enough.
It’s not a coincidence that I didn’t get to these explanations by doing EEA thinking at all. Ice cream is more popular than bear fat because it is cheaper to produce now. It’s easier to digest now. Aggliu was concerned with parasites this week. These aren’t things we need to refer to the EEA to understand, because they apply today. The only reason I could come up with these explanations, and trivially, is because I’m not throwing away most of what I know, declining to run cheap experiments, and then noticing how hard it is to reason a million years in advance when I don’t have to.
The thread I followed to get there isn’t “What would people who knew less want, if they suddenly found themselves blasted with a firehose of new possibilities, and no ability to learn?”. The thread I followed is “What do I want, and why?”. What have I learned, and what have we all learned, or can we all learn—and what does this suggest going forward? This framing of people as agents fumbling through figuring out what’s good for them pays rent a lot more easily than the framing of “Our desires are set by the EEA”. No. Our priors are set by the EEA. But new evidence can overwhelm that pretty quickly—if you let it.
So for example, EEA thinking says “Well, I guess it makes sense that I eat too much sugar, because it’s energy which was probably scarce in the EEA”. Hard to do the experiment, not much you can do with that information if it proves true. On the other hand, if you let yourself engage with the question “Is a bunch of sugar actually good?”, you can run the experiment and learn “Ew, actually no. That’s gross”—and then watch your desires align with reality. This pays rent in fewer cavities and diabetes, and all sorts of good stuff.
Similarly, “NaCl was hard to get in the EEA, so therefore everyone is programmed to want lots of NaCl!”. I mean, maybe. But good luck testing that, and I actually don’t care. What I care about is knowing which salts I need in this environment, which will stop these damn cramps. And I can run that test by setting out a few glasses of water with different salts mixed in, and seeing what happens. The result of that experiment was that I already knew which I needed by taste, and it wasn’t NaCl that I found myself chugging the moment it touched my lips.
Or with opioids. I took opioids once at a dose that was prescribed to me, and by watching the effects learned from that one dose “Ooh, this feels amazing” and “I don’t have any desire to do that again”. It took a month or so for it to sink in, but one dose. I talked to a man the other day who had learned the same thing much deeper into that attractor—yet still in time to make all the difference.
Yes, “In the EEA those are endogenous signaling chemicals” or whatever, but we can also learn what they are now. Warning against the dangers of superstimuli is important, but “Woooah man! Don’t EVER try drugs, because you’re hard coded by the EEA to destroy your life if you do that!” is untrue and counterproductive. You can try opioids if you want, just pay real close attention, because the road may be slicker than you think and there are definitely cliffs ahead. Go on, try it. Are you sure you want to? A lot less tempting when framed like that, you know? How careful are you going to be if you do try it, compared to the guy responding “You’re not the boss of me Dad!” to the type of dad who evokes it?
So yes, lots of predictions and lots of rent paid. Just not those predictions.
Predictions about how I’ll feel if I eat a bowl full of bear fat the way one might with ice cream, despite never having eaten pure bear fat. Predictions about people’s abilities to align their desires to reality, and rent paid in actually aligning them. And in developing the skill of alignment so that I’m more capable of detecting and correcting alignment failures in the future, as they may arise.
I predict that this will be crucial for aligning the behavior of AI as well. Eliezer used to talk about how a mind that can hold religion must fundamentally be too broken to see reality clearly. So too, I predict, a mind that can hold a desire for overconsumption of sugar must necessarily lack the understanding needed to align even more sophisticated minds.
Though that’s one I’d prefer to heed in advance of experimental confirmation.