My guess is that it will not be so much milestones as seemingly unrelated puzzle pieces suddenly and unexpectedly coming together. This is usually how things happen. From VCRs to USB storage to app markets, you name it. Some invisible threshold gets crossed, and different technologies come together to create a killer product. Chances are, some equivalent of an AGI will sneak up on us in a form no one will have foreseen.
I agree that the eventual creation of AGI is likely to come from seemingly unrelated puzzle pieces coming together. On the other hand, anything that qualifies as an AGI is necessarily going to have the capabilities of a chatbot, a natural language parser, and so on. So those capabilities existing makes the puzzle of how to fit the pieces together easier.
My point is simply that if you predict AGI in the next [time frame], you would expect to see [components of AGI] in [time frame]. So I listed some things I would expect if I expected AGI soon, and I still expect all of those things soon. This makes it (in my mind) significantly different from just a “wild guess”.
If the “seed AI” idea is right, this claim can’t be taken for granted, especially if there’s no optimization for Friendliness.
I would make the case that anything that qualifies as an AGI would need some ability to interact with other agents, which would require an analogue of natural language processing. But I certainly agree that natural language processing isn’t strictly necessary for an AI to come about. I do still think of it as (weak) positive evidence, though.
Two things. First, a seed AI could present an existential risk without ever requiring natural language processing, for example by engineering nanotech.
Second, the absence of good natural language processing isn’t strong evidence that AI is far off: even if it’s a required component of the full AGI, the seed AI might start without it and then add that functionality after a few iterations of other self-improvements.
I don’t think we disagree here very much, but we are talking past each other a little bit.
I definitely agree with your first point; I simply wouldn’t call such an AI fully general. It could easily destroy the world though.
I also agree with your second point, but in terms of a practical plan for people working on AI, natural language processing would be a place to start. Having that technology would mean such a project is likely closer, and it would demonstrate that the technical capabilities aren’t extremely far off. I don’t think any state of natural language processing would count as strong evidence, but I do think it counts as weak evidence and something of a small milestone.
I agree.
Yay!