I agree with all of that. My definition isn’t crisp enough; doing crappy general thinking and learning isn’t good enough. A system probably needs to be roughly human-level or above at those things before it’s takeover-capable and therefore really dangerous.
I didn’t intend to add the alignment definitions to the definition of AGI.
I’d argue that LLMs actually can’t think about anything outside of their training set, and it’s just that everything humans have thought about so far is inside their training set. But I don’t think that discussion matters here.
I agree that Claude isn’t an ASI by that definition. Even if it had longer-term goal-directed agency and self-directed online learning added, it would still be far subhuman in some important areas, arguably including the general reasoning that’s critical for complex novel tasks like taking over the world or the economy. ASI needs to mean superhuman in every important way. And of course “important” is vague.
I guess a more reasonable goal is working toward the minimum-length description that gets across all of those considerations. And a big problem is that timeline predictions for important/dangerous AI are mixed in with theories about what will make it important/dangerous. One terminological move I’ve been trying is the word “competent” to invoke intuitions about getting useful (and therefore potentially dangerous) stuff done.