I write and talk about game theory, moral philosophy, ethical economics and artificial intelligence—focused on non-zero-sum games and their importance in solving the world’s problems.
I have admitted I am wrong at least 10 times on the internet.
Sorry about my lack of clarity: by “complex” I mean “intricately ordered”, rather than the simple disorder generally expected of an entropic process. To taboo both this and “alignment” (replacing the latter with “following the same pattern as”):
I’d like to make the case that emergent complexity is where…
a whole system is more intricately ordered than the sum of its parts
a system follows more closely the pattern of a macroscopic phenomenon than it follows the pattern of any of its component parts.
By a macroscopic phenomenon, I mean any (or all) of the following:
1. Another physical feature of the world which it fits to, like roads aligning with a map and its terrain (and obstacles).
2. Another instance of something that appears to fulfil a similar purpose despite arriving there by entirely different paths or materials (as with convergence)
3. A conceptual feature of the world, like a purpose or function.
So, we can more readily understand an emergent phenomenon in relation to some other macroscopic phenomenon than we could by merely inspecting the cells in isolation. In other words, there is usefulness in identifying the 20+ varieties of eyes as “eyes” (2) even though they are not the same at all on a cellular level. It is also meaningful to understand that they perform a function or purpose (3), and that they fit the physical world (by reflecting it relatively accurately) (1).
This is an error I see people making over and over: that different theory may be a useful new development! But that is what it is, a new development, not a defence of the original theory.
I think this is the crux of our disagreement. Yudkowsky was denying the usefulness of a term entirely because some people use it vaguely. I am trying to provide a less vague and more useful definition of the term—not to say Yudkowsky is unjustified in criticising the use of the term, but that he is unjustified in writing it off completely because of some superficial flaws in presentation, or some unrefined aspects of the concept.
An error that I see happening often is throwing the baby out with the bathwater. I’ve read people on Less Wrong (even Yudkowsky, I think, though I can’t remember where, sorry) write in support of ideas like “Error Correction” as a virtue, and of Bayesian updating, whereby we take criticisms as an opportunity to refine a concept rather than writing it off completely.
I am trying to take part in that process, and I think Yudkowsky would have been better served had he done the same: suggested a better, more useful definition.
Thanks for your comment, but I think it misses the mark somewhat.
While googling to find someone who expresses a straw-man position in the real world is a form of straw-manning itself, this comment goes further, misrepresenting a colloquial use of the word “magical” as a claim of literal (supernatural) magic.
While I haven’t read the book referenced, the quotes provided do not give enough context to claim that the author doesn’t mean what he obviously means (to me at least): that the development of an emergent phenomenon seems magical… does it not seem magical? Seeming magical is not a claim that something is irreducible to its component parts; it just means it’s not immediately reducible without some thorough investigation into the mechanisms at work. Part and parcel of the definition of emergence is that it is a non-magical (bottom-up) way of understanding phenomena that seem remarkable (magical), which is why he uses a clearly non-supernatural system like an anthill to illustrate it.
Despite all this, the purpose of the post was to give a clear definition of emergence that doesn’t fall into Yudkowsky’s straw-man, not to claim that no one has ever used the word loosely in the past. As conceded in the preamble (paraphrasing), I don’t expect something written 18 years ago to perfectly reflect the conceptual landscape of today.
Thanks, and yes, I did scan over the comments when I first read the article and noted many good points, but when I decided to write I wanted to focus on this particular angle and not get lost in an encyclopaedia of defences. I’m very much in the same camp as the first comment you quote.
I appreciate your take on Yudkowsky’s overreach, and the historical context. That helps me understand his position better.
The semantic stop-sign is interesting; I do appreciate Yudkowsky coming up with these handy handles for ideas that often crop up in discussion. Your two examples make me think of the fallacy of composition, in that emergence seems to be a key feature of reality that, at least in part, makes the fallacy of composition a fallacy.
Thanks for your well-considered comment.
Could you explain what exactly you mean by “complex” here?
So, here I’m just stating the requirement that the system adds complexity, and that it is not merely categorically different. Heat, for instance, could be seen as categorically different to the process it “emerged” from, but it would not qualify as “emergent” because it is clearly entropic, reducing complexity. An immune system, by contrast, is built on top of an organism’s complexity; it is a more complex system because it includes all the complexity of the system it emerged from plus its own complexity (or, to use your code example, all the base code plus the new branch).
The second part is more important to my particular way of understanding emergence.
What does “aligned” mean in this context?
I think I could potentially make this clearer, as it seems “alignment” comes with a lot of baggage and has potentially been worn out by general (vague) usage, making its correct usage seem obscure and difficult to place. By “aligned with” I mean not merely “related to” but “following the same pattern as”; that pattern might be a function it plays, or a physical or conceptual shape that is similar. So, the slime mold and the Tokyo rail system share a similar shape: they have converged on a similar outcome because they are aligned with a similar pattern (efficiency of transport given a particular map).
Cells that a toe consists of are different than cells that a testicle or an eye consist of.
I think we’re in agreement here. My point is that the eye or testicle performs a (macroscopic) function, and the cells they are made of are less important than that function. Of the 20+ different varieties of eyes, none are made of the same cells, but it still makes sense to call them eyes, because they align with the function. Eyes are essentially cell-agnostic, as long as they converge on a function.
Again, thanks for the response, I’ll try to think of some edits that help make these aspects clearer in the text.
Thanks Jonas, that’s really nice of you to say, and a great suggestion. I’ve had a look at doing sequences here. Now that I have more content, I’ll take your request and run with it.
For now, over on the site I have the posts broken up into curated categories that work as rudimentary sequences, if you’d like to check them out. Appreciate your feedback!
Thanks? (Does that mean it’s well structured?) You’re the second person to have said this. The illustrations are original, as is all the writing.
As I mentioned to the other person who raised this concern, the blog I write (the source) is an outlet for my own ideas; using chat would sort of defeat the purpose.
I can assure you that the words and images are all original, and I’m quite capable of vagueifying something myself. I don’t have a content quota to meet; I’m just trying to present ideas I’ve had, so it would be quite antithetical to the project to get chat to write it.
By “aligned” I don’t mean “related to”; I mean “maps to the same conceptual shape”, “correlated with”, or “analogous to”. So the nutrient pathways of slime molds are aligned with the Tokyo rail system, but they are not related (other than by sharing an alignment with a pattern), whereas peanut butter is related to toast but not aligned with it.
But I appreciate the feedback; if you’re able to point to something specific that’s vague, I’ll definitely get in there and tighten it up.
The “Soldier Mindset” flag is a fair enough call; I guess this could be seen as persuasion (a no-no). I would rather frame it as bypassing emotions (that are acting as barriers to understanding) in order to connect. To correctly understand the other person’s position, or core beliefs, you actually have to let go of your own biases, and in the process you might actually become more open to their position.
An idea I’m workshopping that occurred to me while developing the Contagious Beliefs Simulation.
Cognitive Bias Is a Feature, Not a Bug:
Understanding that cognitive bias is a feature, not a bug, is key to negotiation and changing minds. I find that in arguments I only really convince someone by relating my case to the values they find important. Sometimes those are the same as mine, which makes it easy; if they are clearly different, I try to understand their core values. Sometimes people will reject this approach, posturing as an objective rational agent; at this point I treat “rationality” as their cognitive bias, because we are not rational agents. We are irrational agents driven by desires over which we have no control, and for whom the goal is not truth but the reduction of mental tension and uncertainty, social acceptance, and cognitive coherence, a measure of how well new information aligns with our current knowledge and views (the opposite of cognitive dissonance).
In this way bias is deleterious to truth-seeking, so why has it survived natural selection? Because it is highly adaptive and cognitively efficient (cheap). And when we think about it, if we discount the existence of a designer, it’s also logically impossible for it to be otherwise: unless knowledge were hardwired (like imprinting instincts in animals), how else would we get it? Hardwiring would be entirely inflexible, making us capable but not intelligent, like a dog’s supremely powerful nose that it uses to sniff other dogs’ butts. It is our ability to use previous knowledge to assess, and adopt or reject, incoming information that is the core mechanism of intelligence.
So, when faced with someone you are trying to convince of something, if they don’t already agree with you, they might have some important previous knowledge you need to help them square with this new info.
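For concreteness, here’s a minimal toy sketch of that coherence-driven updating, in Python. Everything in it is invented for illustration (the topics, the numbers, and the simple dot-product coherence measure); it is not code from the Contagious Beliefs Simulation. The point it demonstrates is that the agent adopts or rejects a claim based on how well it fits existing beliefs, not on whether it is true:

```python
import random

def coherence(beliefs, claim):
    """Average agreement between a claim and existing beliefs.

    Beliefs and claims are dicts mapping topics to positions in [-1, 1];
    1.0 means a perfect fit, -1.0 means maximal cognitive dissonance.
    """
    shared = [t for t in claim if t in beliefs]
    if not shared:
        return 0.0  # no prior knowledge: neither coherent nor dissonant
    return sum(beliefs[t] * claim[t] for t in shared) / len(shared)

def consider(beliefs, claim, openness=0.3):
    """Adopt or reject a claim based on coherence, not truth."""
    if coherence(beliefs, claim) + random.uniform(0.0, openness) > 0:
        beliefs.update(claim)  # adopt: fold the claim into the worldview
        return True
    return False  # reject: too dissonant to square with prior knowledge

agent = {"fairness": 0.9, "markets": -0.4}
print(consider(agent, {"fairness": 0.8, "taxes": 0.5}))  # coherent: adopted
print(consider(agent, {"markets": 0.9}))                 # dissonant: rejected
```

Note that truth appears nowhere in the model; persuasion only works through topics the agent already holds positions on, which is exactly why relating your case to the other person’s values is the move that works.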
I’m trying out podcasting as a format for the ideas I share here and on the blog. Keen to hear whether people think it translates well or needs more tweaking. Do you need to be more verbose in a spoken format, to allow more time for absorption? Any ideas on how to clearly describe payoff matrices in an audio format? Tear it apart, guys.
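On the payoff-matrix question, one approach I’m considering is to linearise the matrix into sentences, one cell at a time, always naming both players’ payoffs. A quick Python sketch of the idea (the numbers are the textbook Prisoner’s Dilemma ordering, chosen as an example rather than taken from the episode):

```python
# A 2x2 payoff matrix as nested data; each cell holds
# (row player's payoff, column player's payoff).
payoffs = {
    ("cooperate", "cooperate"): (3, 3),
    ("cooperate", "defect"):    (0, 5),
    ("defect",    "cooperate"): (5, 0),
    ("defect",    "defect"):    (1, 1),
}

# Read the matrix aloud cell by cell, as complete sentences.
for (mine, yours), (me, you) in payoffs.items():
    print(f"If I {mine} and you {yours}, I get {me} and you get {you}.")
```

Four short sentences seem easier to hold in your head while listening than a grid described by rows and columns.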
Yes, I would see falling valuations as an additional solution rather than a problem. Paine’s proposal would absolutely affect housing prices, correcting their inflation rate to that of other goods (rather than being 5-10x higher). Housing would cease to be a threshold for runaway wealth accumulation (and inequality), and would become affordable for ordinary people.
I say this as a person who owns two houses; this is not technically in my individual interest right now, but it’s a fairer system. As you say, I’ll still be able to afford a haircut.
I agree measures would need to be taken to protect people who are over-leveraged, and they would have to be implemented gradually so as not to cause massive instability. Paine benefitted from the fact that his economy was only just beginning (well, at least amidst a revolution), while ours is in full swing.
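To put rough numbers on the “runaway” point (the rates here are illustrative assumptions, not data from Paine or the post), even a 5x gap in growth rates compounds into an enormous gap in outcomes over a working life:

```python
# Illustrative only: general goods inflating at ~2%/yr versus house
# prices growing at 5x that rate (~10%/yr), compounded over 30 years.
goods_rate, housing_rate, years = 0.02, 0.10, 30

goods = (1 + goods_rate) ** years     # ~1.8x: goods prices nearly double
houses = (1 + housing_rate) ** years  # ~17.4x: a runaway threshold

print(f"goods {goods:.1f}x, housing {houses:.1f}x, "
      f"gap {houses / goods:.1f}x over {years} years")
```

A roughly tenfold divergence in one generation is what makes housing a threshold for wealth accumulation rather than just another good.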
Good catch, and thanks for introducing me to the ‘conceptual rounding error’.
I think there would be a lot of overlap with a memeplex, with the ‘remainder’ (in my conception of the term) being one of loose personification, as in the case of Moloch or Trump, where the dividual entity seems to have some central motivation but is otherwise entirely multifarious.
I guess my inclination is that individuals have always been permeable to some extent, but exposure to many varied memeplexes (like religions, political ideologies or algorithms) can make that permeability pathological. The difficulty of defining which side of the equation ‘the dividual’ is on, the cause (the memeplexes) or the resulting hyper-permeable individual, is reflective of the dividual’s own paradoxical nature.
Hi Olli (sorry about the 10 month late reply, somehow missed this),
in a 45 min class, have half of your classes begin with a 15 minute well-made educational video explaining the topic, with the rest being essentially the status quo
I appreciate all the points you’ve made here, and when you clarify that you’re talking about a supplement to traditional teaching, I can picture that as a very effective situation. I’d hold to the point that this will be costly, for the reasons given above, but I have no problem with increasing education funding dramatically; I think we should.
As well as my experience making educational resources, my daughter and I get a lot of value out of the freely available Khan Academy videos (we are working through calculus together), and I can see that a more professional outfit might be able to take those as a scaffold to build something even more engaging for students.
school is critical infrastructure that we run professionally
I totally agree with this, and with your point that improvised amateur hours should be spent outside the school environment (where they can thrive in a freer market of ideas). At present I notice (with my daughter’s schooling) that media is often used in a scattergun way, drawing bits and pieces from YouTube, mixed in with ads and other unhelpful messaging, because it’s not purpose-built for schools. So, your perspective of seeing it as “critical infrastructure that we run professionally” is key.
Thanks for your comment. I appreciate your points, and I see that Yudkowsky allows some use of higher-level abstractions as a pragmatic tool that is not erased by reductionism. But I still feel you’re being a bit too charitable. I re-read the “it’s okay to use ‘emerge’” parts several times, and as I understand it, he’s not referring to a higher-level abstraction; he’s using it in the general sense of “whatever byproduct comes from this”, in which case it would be just as meaningful to say “heat emerges from the body”, which does not reflect any definition of emergence as a higher-level abstraction. I think the issue comes into focus with your final point:
But it is not correct to say that acknowledging intelligence as emergent doesn’t help us predict anything. If emergence can be described as a pattern that recurs across different realms, then it can help us predict things through the use of analogy. If, for instance, we can see that neurones are selected and strengthened based on use, we can transfer some of our knowledge about natural selection in biological evolution to provide fruitful questions to ask, and research to do, on neural evolution. If we understand that an emergent system has reached equilibrium, it can help us ask useful questions about what new systems might emerge on top of that system, questions we might not otherwise ask if we did not recognise the shared pattern.
A question I often ask myself is: “If the world itself is to become increasingly organised, at some point do we cease to be autonomous entities on a floating rock and become instead like automatic cells within a new vector of autonomy (the planet as super-organism)?” This question only arises if we acknowledge that the world itself is subject to the same sorts of emergent processes that humans and other animals are (though not exactly: a planet doesn’t have much of a social life, and that could be essential to autonomy). I find these predictions based on principles of emergence interesting and potentially consequential.