But I think that my disagreement with this first class of alarmist is not very fundamental; we can probably agree on a few things, such as:
1. In principle, the kind of intelligence needed for AGI is a solved problem; all we are doing now is trying to optimize for various cases.
2. The increase in computational resources is enough to get us closer and closer to AGI, even without any more research effort being allocated to the subject.
This is definitely not something you will find agreement on. Thinking that this is something that alarmists would agree with you on suggests you are using a different definition of AGI than they are, and may have other significant misunderstandings of what they’re saying.
Would you care to go into more detail?
If there are different definitions of AGI, then that's quite a barrier to understanding generally, never mind my confusions as a curious newbie.
This feels like a really good time to jump in and ask for a working definition of AGI. (Simple words, no links to essays, please.)