I think the claim at the start doesn’t come close to covering the reasons we want AGI. As I see it, the main reason is that there’s a lot of stuff we can already do, but we want it done faster and more cheaply without sacrificing much flexibility or reliability.
The trouble is that we’ve picked most of the low-hanging fruit, and many of the tasks that remain need something approximating human intelligence. They sometimes also need the ability to work within human social and legal contexts, and to be fine-tuned without an (expensive!) army of programmers specifying every rule.
There’s some possibility that creating an entity that thinks like a human may tie into our drive to reproduce.
It’s also just an extremely interesting problem from a technical point of view.
But sure, structure and understanding do appear to be major factors in human-like intelligence, and it’s interesting to try to classify and define such things.