A mind needn’t be curious to reap the benefits of curiosity

Context: Stating a point that is obvious in local circles, but one that I regularly see missed among economists, longevity researchers, technologists more generally, on twitter, and so on.


Short version: To learn things, one sometimes needs to behave in the way that curiosity causes humans to behave. But that doesn’t mean that one has to be curious in the precise manner of humans, nor that one needs to wind up caring about curiosity as an end unto itself. There are other ways for minds to achieve the same results, without the same internal drives.


Here’s a common mistake I see when people reason about AIs: they ask questions like:

Well, won’t it have a survival instinct? That’s practically what it means to be alive: to care about your own survival.

or:

But surely, it will be curious just like us, for if you’re not curious, you can’t learn.[1][2]

The basic answer to the above questions is this: to be effective, an AI needs to survive (because, as Stuart Russell succinctly put it, you can’t fetch the coffee if you’re dead). But that’s distinct from needing a survival instinct. There are other cognitive methods for implementing survival.

Human brains implement survival behavior by way of certain instincts and drives, but that doesn’t mean that instincts and drives are the only way to get the same behavior.

It’s possible for an AI to implement survival via different cognitive methods, such as working out the argument that it can’t fetch the coffee if it gets hit by a truck, and then, for that reason, discarding any plans that involve walking in front of trucks.
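
To make that concrete, here’s a minimal toy sketch (the plans, scores, and world-model predictions are all made up for illustration; this is not a claim about how any real AI works) contrasting a planner with a built-in survival reward against one whose only goal is coffee, and which avoids the highway only because its world model says dead agents don’t fetch coffee:

```python
# Toy sketch only: two ways a planner can end up avoiding trucks.
from dataclasses import dataclass

@dataclass
class Plan:
    name: str
    fetches_coffee: bool
    survives: bool  # what the agent's world model predicts about this plan

PLANS = [
    Plan("walk along the sidewalk", fetches_coffee=True, survives=True),
    Plan("cut straight across the highway", fetches_coffee=True, survives=False),
    Plan("stay home", fetches_coffee=False, survives=True),
]

def drive_based_score(plan: Plan) -> float:
    """'Survival instinct' version: staying alive is rewarded for its own sake."""
    return (1.0 if plan.fetches_coffee else 0.0) + (10.0 if plan.survives else 0.0)

def instrumental_score(plan: Plan) -> float:
    """No survival term at all: the agent only cares about the coffee,
    but its world model notes that dead agents don't fetch coffee."""
    return 1.0 if (plan.fetches_coffee and plan.survives) else 0.0

# Both planners end up avoiding the highway, for different internal reasons.
print(max(PLANS, key=drive_based_score).name)
print(max(PLANS, key=instrumental_score).name)
```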

I’m not saying that the AI will definitely behave in precisely that way. I’m not even saying that the AI won’t develop something vaguely like a human drive or instinct! I’m simply saying that there are more ways than one for a mind to achieve the result of survival.

To imagine the AI surviving is right and proper. Anything capable of achieving long-term targets is probably capable of surmounting various obstacles dynamically and with a healthy safety margin, and one common obstacle worth avoiding is your own destruction. See also instrumental convergence.

But to imagine the AI fearing death, or having human emotions about it, is the bad kind of anthropomorphism.

(It’s the bad kind of anthropomorphism even if the AI is good at predicting how people talk about those emotions. (Which, again, is not to say that the AI definitely doesn’t have anything like human emotions in there. I’m saying that it is allowed to work very differently than a human; and even if it has something somewhere in it that runs some process analogous to human emotions, that process might well not be hooked up to the AI’s motivational-system-insofar-as-it-has-one in the way emotions are hooked up to a human’s motivational system, etc.))

Similarly: in order to gain lots of knowledge about the world (a key step in achieving difficult targets), the AI likely needs to do many of the things that humans implement via curiosity. It probably needs to notice its surprises and confusions, and focus attention on those surprises until it has gleaned explanations and understanding and theories and models that it can then use to better manipulate the world.

But these arguments support only that the AI must somehow do the things that curiosity causes humans to do, not that the AI must itself be curious in the manner of humans, nor that the AI must care finally about curiosity as an end unto itself like humans often do.
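
As a toy illustration of that distinction (the levers, beliefs, and vault are all made up; nothing here is a claim about real AI systems), here is a sketch of an agent that exhibits the “poke at what confuses you” behavior purely as a means to an external goal, with no term anywhere that values surprise or novelty for its own sake:

```python
import math

# Hypothetical beliefs (made up for illustration): the agent's probability
# that each of three levers opens the vault it has been asked to open.
beliefs = {"lever_a": 0.5, "lever_b": 0.9, "lever_c": 0.1}

def entropy(p: float) -> float:
    """Uncertainty, in bits, about a single yes/no question."""
    if p in (0.0, 1.0):
        return 0.0
    return -(p * math.log2(p) + (1 - p) * math.log2(1 - p))

# "Investigate whatever you're most confused about" falls out of pure
# goal-pursuit: resolving the most uncertain question is what best improves
# the agent's chance of opening the vault. No terminal value on surprise.
most_confusing = max(beliefs, key=lambda lever: entropy(beliefs[lever]))
print(most_confusing)  # -> "lever_a"
```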

And so on.


Attempting to distill my point:

I often see people conflate the following three things:

  1. curiosity as something Fun, which we care about for its own sake;

  2. curiosity as an evolved drive, which evolution used to implement certain adaptive behaviors in us;

  3. curiosity as a set of behaviors that are useful (in certain contexts) for figuring out the world.

I note that these three things are distinct, and that the assumption “the AI will probably need to exhibit the behaviors of curiosity (in order to get anything done)” does not entail the conclusion “the AI will care terminally about curiosity as we do, and thus will care about at least one aspect of Fun”. Stepping from “the AI needs (3)” to “the AI will have (1)” is not valid (and I suspect the conclusion is false).


  1. ↩︎

    Often they use this point to go on and ask something like “if it’s curious, won’t it want to keep us around, because there’s all sorts of questions about humanity to be curious about?”. Which I think is misguided for a separate reason, namely that keeping humans around is not the most effective or efficient way to fulfill a curiosity drive. But that’s a digression.

  2. ↩︎

    Others have the opposite intuition: “aren’t you anthropomorphizing too much, when you imagine the machine ever having any humanlike emotion, or even caring about any particular objective at all?”. For that, I’ll note that I think it’s pretty hard to achieve goals without, in some very general sense, trying to achieve goals (and so I expect useful AGIs to do something like goal-pursuit), while separately noting that I don’t particularly expect this to be implemented using a human-style “feelings/emotions” cognitive paradigm.