If our technological dominance is instead due to our ability to process culture, however, then the case for a discontinuous jump in capabilities is weaker. Our AI systems can already process culture somewhat efficiently right now (see GPT-2), and there doesn’t seem to be a hard separation between “being able to process culture inefficiently” and “being able to process culture efficiently,” other than the initial jump from not being able to do it at all, which we have already passed.
I keep hearing people say this (the part “and there doesn’t seem to be a hard separation”), but I don’t intuitively agree! I’ve spelled out my position here. My intuition is that there’s a basin of attraction for good reasoning (“making use of culture to improve how you reason”) that can generate a discontinuity. You can observe this among humans. Many people, including many EAs, don’t seem to “get it” when it comes to forming internal world models and reasoning from them in novel and informative ways. If someone doesn’t do this, or does it in a fashion that doesn’t sufficiently correspond to reality’s structure, they predictably won’t make original and groundbreaking intellectual contributions. By contrast, other people do “get it,” and their internal models are at least somewhat self-correcting, so if you ran uploaded copies of their brains for millennia, the results would be staggeringly different.