Something to keep in mind: in many models, AGI will be surprising. To everyone outside the successful team, it will probably seem impossible right up until it is done. If you think this is true, then the Outside View recommends assigning a small probability to AGI arriving in the next few years, even if that seems impossible, because “seeming impossible” is not reliable evidence that it is not imminent.
What I would say ML researchers are missing is that we don’t have a good enough model of AGI to know, with high confidence, how far we have left to go. We know we’re missing something, but not how much we’re missing.