I’m going to settle on a semi-formal definition of emergence which I believe is consistent with everything above, and run through some examples, because I think your post misrepresents them and the emergence in these cases is interesting.
Preliminary definition: a “property” is a function mapping some things to some axis.
Definition: a property is called emergent if a group of things is in its domain while the individual things in the group are not in its domain.
This isn’t the usual use of “property” but I don’t want to make up a nonsense word when a familiar one works just as well. In this case, “weighs >1kg” either isn’t a property, or everything is in its domain; I’d prefer to say weight is the only relevant property. Either way this is clearly not emergent because the question always makes sense.
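The definition above can be sketched in code as a partial function whose domain is made explicit. Everything here (the part names, the `weight_kg` and `runs_windows` properties, the way groups are encoded as tuples) is a hypothetical toy model I’m inventing for illustration, not anything from the discussion itself:

```python
# Toy encoding of the proposed definition: a "property" is a partial
# function (it raises outside its domain), and it is *emergent* if a
# group of things lies in the domain while its individual members do not.

def in_domain(prop, thing):
    """A thing is in the property's domain if asking the question makes sense."""
    try:
        prop(thing)
        return True
    except (KeyError, TypeError):
        return False

def is_emergent(prop, group):
    """Emergent: the group is in the domain, none of its members are."""
    return in_domain(prop, group) and all(
        not in_domain(prop, part) for part in group)

# Weight makes sense for anything, so it cannot be emergent.
def weight_kg(thing):
    if isinstance(thing, tuple):  # a group weighs the sum of its parts
        return sum(weight_kg(p) for p in thing)
    return thing["kg"]

# "Can it run Windows?" only makes sense for a complete assembly.
def runs_windows(thing):
    if isinstance(thing, tuple) and {p["kind"] for p in thing} >= {"cpu", "board", "disk"}:
        return True
    raise TypeError("question doesn't apply to a bare part")

cpu   = {"kind": "cpu",   "kg": 0.1}
board = {"kind": "board", "kg": 1.0}
disk  = {"kind": "disk",  "kg": 0.3}
pc = (cpu, board, disk)

print(is_emergent(weight_kg, pc))     # False: each part has a weight too
print(is_emergent(runs_windows, pc))  # True: only the assembly is in the domain
```

The point of the sketch is just that emergence, under this definition, is a fact about where the question stops making sense, not about how the values change.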
Being suitable for living in is a complicated example, but in general it is not an emergent property. In particular, you can still ask how suitable for living in half a house is; it’s still in the domain of the “livability” property even if it has a much lower value. This is true all the way down to a single piece of wood sticking out of the ground, which can maybe be leaned against or used to hide from wind. If you break the house down into pieces like “planks of wood” and “positions on the ground” then I think it’s true, if trivial, that livability is an emergent property of a location with a structure in some sense—it’s the interactions between those that make something a house. And this gives useful predictions, such as “just increasing quality of materials doesn’t make a house good” and “just changing location doesn’t make a house good” even though both of these are normal actions to take to make a house better.
Being able to run Microsoft Windows is an emergent property of a computer, in a way that is very interesting to me when I want to build a computer from parts on NewEgg, which I’ve done many times. It has often failed for silly reasons, like “I don’t have one of the pieces needed for the property to emerge.” Like the end of the housing example, I think this is a simple case where we understand the interactions, but it is still emergent, and that emergence again gives concrete predictions, like “improving the CPU’s ability to run Windows doesn’t help if it stops it interacting the right way,” which with domain knowledge becomes “improving the CPU’s ability to run Windows doesn’t help if you get a CPU with a socket that doesn’t fit your motherboard.”
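That concrete prediction can be sketched as a toy compatibility check. The socket names and quality scores below are made up for the example; the point is only that the interaction gates the property independently of part quality:

```python
# Toy illustration: improving one part in isolation doesn't help
# if the change breaks its interaction with the other parts.

def can_boot(cpu, motherboard):
    # The *interaction* (socket match) gates the emergent property,
    # regardless of how good the CPU is on its own.
    return cpu["socket"] == motherboard["socket"]

old_cpu    = {"socket": "AM4",     "score": 10}
better_cpu = {"socket": "LGA1700", "score": 50}  # faster, but wrong socket
board      = {"socket": "AM4"}

print(can_boot(old_cpu, board))     # True
print(can_boot(better_cpu, board))  # False: the "upgrade" made things worse
```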
I think this is useful, and I think it’s very relevant to a lot of LW-relevant subjects.
If intelligence is an emergent property, then just scaling up or down processing power of an emergent system may scale intelligence up and down directly, or it might not—depending on the other parts of the system.
If competence is an emergent property, then it may rely on the combination of behavior, context, and task. Understanding what happens to competence when you change the task, even in practical ways such as transfer learning, is the same understanding that would help prevent a paperclip maximizer.
If the ability to talk your way out of a box is an emergent property, then the ability to do it in private chat channels may depend on many things: who is talking, the platform being used to communicate, etc. In particular, it predicts quite clearly that reading the transcripts might not at all convince the reader that the AI role-player could talk their way out of the box. It also suggests that the framing and specific setup of the exercise might matter, and that if you want to argue that a specific successful instance is convincing, there is a substantial amount of work remaining to do.
This is getting a bit rambly so I’m going to try to wrap it up.
With a definition like this, saying something has an emergent property relies on and predicts two statements:
The thing has component parts
Those component parts connect
These statements give us different frameworks for looking at the thing and the property, by looking at each individual part or at the interactions between sets of parts. Being able to break a problem down like this is useful.
It also says that answering individual questions like “what is this one component and how does it work?” and “how do these two components interact?” is not sufficient to give us an understanding of the full system.
Perhaps I’m misunderstanding something, but it seems to me that (1) one can adjust the domain on which any “property” is defined ad lib, (2) in many cases there’s a wide range of reasonable domains, (3) whether some property is “emergent” according to your definition is strongly dependent on the choice of domain, and (4) most of the examples discussed in this thread are like that.
Human consciousness is pretty much a paradigmatic example of an emergent property. Is there a function that tells us how conscious something is? Maaaybe, but if so I don’t see any particular reason why its domain shouldn’t include things that aren’t conscious at all (which get mapped to zero, or something extremely close to zero). Like, say, neurons. But if you do that then consciousness is no longer “emergent” by your definition, because then the brain’s constituent parts are no longer outside the domain of the function. (Is it silly to ask “Is a neuron conscious”? Surely not; indeed at least part of the point here is that we’re pretty sure neurons aren’t conscious. And in order to contrast conscious humans with other not-so-conscious things, we surely want to ask the question about chimpanzees, rhesus monkeys, dogs, rats, flies, amoebas—and if amoebas, why exactly not neurons?)
Size is pretty much a paradigmatic example of a not-interestingly-emergent property. (I hesitate to call anything flatly not-emergent.) Well, OK. So what’s the size of a carbon atom? An electron? There are various measures of size we can somewhat-arbitrarily decide to use, but they don’t all give the same answer for these small objects and I think it’s clearly defensible to claim that there is no such thing as “the size” of an electron. In which case, boom, “size” is an emergent property in your sense.
I don’t see how being able to run Windows is emergent in your sense. Can my laptop run Windows? Yes. Can its CPU on its own run Windows? No. Can the “H” key on its keyboard run Windows? No. The natural domain of the function seems to be broad enough to include the components of my laptop. Hence, no emergence.
Maybe I am misunderstanding what you mean by “domain”?
I do agree that the fact that a computer may fail to be able to do something we want because of a not-entirely-obvious interaction between its parts is an important fact and has something to do with some notion of “emergence”. But I don’t see what it has to do with the definition you’re proposing here. The relevant fact isn’t that ability-to-run-Windows isn’t defined for the components of the machine, it’s that ability-to-run-Windows depends on interactions between the components. Which is true, and somewhat interesting, but an entirely different proposition. Likewise for your other examples where some-sort-of-emergence is an important fact—in all of which I agree that the interactions and context-dependencies you’re drawing attention to are worth paying attention to, but I don’t see that they have anything much to do with the specific definition of emergence you proposed, and more importantly I don’t see what the notion of emergence actually adds here. Again, I’m not denying that the behaviour of a system may be more than a naive sum of the behaviours of its components, and I’m not denying that this fact is often important—I just think it’s not exactly news, that generally when people talk about something like consciousness being “emergent” they mean something more, and that if we define “emergence” broadly enough to include all these things then it looks to me like too broad a category for it to be useful (e.g.) to think of it as a single coherent field that merits study, rather than an umbrella description for lots of diverse phenomena with little in common.
Honestly I think that comment got away from me, and looking back on it I’m not sure that I’d endorse anything except the wrap up. I do think “from a quantum perspective, size is emergent” is true and interesting. I also think people use emergence as a magical stopword. But people also use all kinds of technical terms as magical stopwords, so dismissing something just on those grounds isn’t quite enough—but maybe there is enough reason to say that this specific word is more confusing than helpful.