Titotal, can you please add or link which definition of “AGI” you are using?
Stating that it is decades away immediately undermines the rest of your post because it makes you sound non-credible, and you have written a series of excellent posts here.
Definitions for AGI:
1. Extending the Turing test to simply 'as conversationally fluent as the median human'. This is months away, if not already satisfied. Expecting it to be impossible to suss out the AGI, when various artifacts remain despite the model being competent, was unreasonable.
2. AGI has as broad a skillbase as the median human and is as skillful at those skills as the median human; it only needs to be expert level in a few things. This is months to a few years away; mostly what is needed is a minimum set of modalities: vision, which GPT-4 has; some robotics control, so the machine can do the basic things a median human can do, which several models have demonstrated works pretty well; speech I/O, which seems to be a solved problem; and so on. Note that it is fine if the model is completely incapable of some things, as long as it makes up for it with expert-level performance in others, which is how humans near the median are.
3. Like (2), but it can learn any skill to a competent human level if given structured feedback on the errors it makes. Needing many times as much feedback as a human is fine.
4. Like (3), but expert level at tasks in the domain of machines. By the point of (4) we are talking about self-replication being possible and humans no longer being necessary at all. The AGI never needs to learn human-domain tasks like "how to charm other humans", "how to make good art", or "how to use robot fingers as well as a human does"; it has to be able to code, manufacture, design to meet requirements, and mine in the real world.
5. Like (4), but able to learn any task a human can do to expert level, if given human amounts of feedback.
6. Like (5), but now at expert human level at everything humans can do in the world.
7. Better than humans at any task. This is arguably an ASI, but I have seen people throw an AGI tag on this.
8. Various forms of 'self-reflection' and emotional affect are required. For some people, it matters not only what the machine can do but how it accomplishes it. I don't know how to test for this.
I do not think you have an empirical basis to claim that (1), (2), or (3) is "decades away". (1) and (2) are very close, and (3) is highly likely this decade because of the enormous recent increase in investment.
You're a computational physicist, so you are aware of the idea of criticality. Knowing of criticality, and assuming (3) is achieved, how does AGI remain "decades away" in any non-world-catastrophe timeline? Because if (3) is achieved, the AGI can be self-improved to at least (5), limited only by compute, data, time, etc.