I would make the case that anything that qualifies as an AGI would need to have some ability to interact with other agents, which would require an analogue of natural language processing, but I certainly agree that it isn’t strictly necessary for an AI to come about. I do still think of it as (weak) positive evidence though.
Two things. First, a seed AI could present an existential risk without ever requiring natural language processing, for example by engineering nanotech.
Second, the absence of good natural language processing isn’t great evidence that AI is far off, since even if it’s a required component of the full AGI, the seed AI might start without it and then add that functionality after a few iterations of other self-improvements.
I don’t think that we disagree here very much but we are talking past each other a little bit.
I definitely agree with your first point; I simply wouldn’t call such an AI fully general. It could easily destroy the world though.
I also agree with your second point, but in terms of a practical plan for people working on AI, natural language processing would be a place to start. Having that technology would mean such a project is likely closer, as well as demonstrating that the technical capabilities aren't extremely far off. I don't think any state of natural language processing would count as strong evidence, but I do think it counts as weak evidence and something of a small milestone.
If the “seed AI” idea is right, this claim can’t be taken for granted, especially if there’s no optimization for Friendliness.
I agree.
Yay!