When I’m working on a project, I’ve noticed a tendency in myself to correctly estimate the difficulty of my current subtask (where I’m almost always stuck on something that sounds dumb to be stuck on, rather than making “real” progress on the project), but then to assume that once I’ve resolved the current dumb thing, the rest of the project will be smooth sailing.
Anyway, I was just reading AI 2027, and it strikes me that our current task is to build an AI capable of doing AI research, and we’re currently stuck on impediments that feel dumb and non-central, but once we finish that task, we expect the rest of the path to the singularity to be smooth sailing.
Edit: s/the path the the singularity/the path to the singularity/
I mean, the whole premise of the Singularity is that once we solve the last few dumb impediments, the beings who’d have to deal with the subsequent dumb impediments would not be us, but the increasingly-superhuman AIs able to work through the dumb impediments at a much faster pace. Indeed, that’s just the standard Singularity narrative? (Flipping the definition: if there are still any dumb impediments left that are up to us to resolve, at our pathetic human speeds, then the Singularity hasn’t yet happened.)
I, personally, am inclined to agree that the AGI labs are underestimating just how many seemingly dumb impediments there still are on the way to the Singularity. But once the Singularity is underway, the dumb-impediment problem is no longer our problem; it’s the problem of entities much more capable of handling it. And the process of them working through those impediments at an inhuman speed is what the Singularity is.
I agree that that’s the premise. I just think that our historical track record of accuracy is poor when we say “surely we’ll have handled all the dumb impediments once we reach this milestone”. I don’t expect automated ML research to be an exception.
I wonder if your apparent disagreement here is actually because the OP wrote “the the” instead of “to the”?
(Final sentence)
With that typo fixed, I think they’re probably right.