Mesa-optimization occurs when a learned model (such as a neural network) is itself an optimizer: in such cases, the base optimizer that trained the model has produced a mesa-optimizer. Earlier work referred to this concept as Inner Optimizer or Optimization Daemons.
Natural selection is an optimization process (it optimizes for reproductive fitness) that produced humans (who are capable of pursuing goals that no longer correlate reliably with reproductive fitness). In this sense, humans are optimization daemons of natural selection. In the context of AI alignment, the concern is that an artificial general intelligence exerting optimization pressure may produce mesa-optimizers that break alignment.
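The base/mesa relationship above can be illustrated with a minimal toy sketch in Python (hypothetical, not from the source; all function names and objectives below are illustrative assumptions). A base optimizer selects among candidate policies by their score on a base objective, and the selected policy happens to be one that itself searches over actions for an internal proxy objective that merely correlates with the base objective.

```python
# Toy sketch (hypothetical): a base optimizer selects the policy scoring
# best on a base objective, and one candidate policy is itself an
# optimizer over an internal ("mesa") proxy objective.

def base_objective(action):
    # What the base optimizer scores policies on: prefer actions near 10.
    return -abs(action - 10)

def mesa_objective(action):
    # The policy's internal proxy goal: prefer actions near 9. It
    # correlates with the base objective here but is not identical to it.
    return -abs(action - 9)

def mesa_optimizer_policy():
    # This policy is itself an optimizer: at run time it searches over
    # actions to maximize its internal mesa-objective.
    return max(range(20), key=mesa_objective)

def lookup_table_policy():
    # A non-optimizer policy for comparison: a fixed action, no search.
    return 3

def base_optimizer(candidate_policies):
    # "Training" as naive search: keep whichever candidate policy
    # scores best on the base objective.
    return max(candidate_policies, key=lambda policy: base_objective(policy()))

selected = base_optimizer([mesa_optimizer_policy, lookup_table_policy])
print(selected.__name__, "-> action", selected())
# Prints: mesa_optimizer_policy -> action 9
# The base optimizer favors the mesa-optimizer because its proxy goal
# scores well on the base objective -- even though the two objectives
# can diverge outside the situations the base optimizer evaluated.
```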
See also “Are daemons a problem for ideal agents?” (2017-02-11).
Some posts that reference optimization daemons:
“Cause prioritization for downside-focused value systems”: “Alternatively, perhaps goal preservation becomes more difficult the more capable AI systems become, in which case the future might be controlled by unstable goal functions taking turns over the steering wheel.”
“Techniques for optimizing worst-case performance”: “The difficulty of optimizing worst-case performance is one of the most likely reasons that I think prosaic AI alignment might turn out to be impossible (if combined with an unlucky empirical situation).” (the phrase “unlucky empirical situation” links to the optimization daemons page on Arbital)