As far as I am concerned, AGI should be able to do any intellectual task that a human can do. I think that inventing important new ideas tends to take at least a month, and possibly as long as a PhD thesis. So a reasonable interpretation is that we might see human-level AI somewhere between the mid-2030s and 2040, which happens to be about my personal median.
There is an argument to be made that at longer time horizons, cognitive tasks become cleanly factored. In other words, it’s more accurate to model completing something like a PhD as different instantiations of yourself coordinating across time over low-bandwidth channels than as you doing very high-dimensional inference for a very long time. If that’s the case, then one would expect AI to roughly match human performance on indefinite-horizon tasks once that scale has been reached.
I don’t think I fully buy this, but I don’t outright reject it.
I agree, but my own experience of doing a PhD felt more like an integrated project of discovery and creation.