Hmm. I think you missed my point…
There are two different activities:
ACTIVITY A: Think about how an AI will form a model of what a human wants and is trying to do.
ACTIVITY B: Think about the gears underlying human intelligence and motivation.
You’re doing Activity A every day. I’m doing Activity B every day.
My comment was trying to say: “The people like you, doing Activity A, may talk about there being multiple models which tend to agree in-distribution but not OOD. Meanwhile, the people like me, doing Activity B, may talk about subagents. There’s a conceptual parallel between these two different discussions.”
And I think you thought I was saying: “We both agree that the real ultimate goal right now is Activity A. I’m leaving a comment that I think will help you engage in Activity A, because Activity A is the thing to do. And my comment is: (something about humans having subagents).”
Does that help?
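For concreteness, here is a minimal sketch of the "agree in-distribution, diverge OOD" phenomenon referenced above. It assumes scikit-learn and NumPy; the two models, the data, and every number are illustrative stand-ins, not anything from either comment.

```python
# Toy illustration (not from the original comments): two models trained
# on the same data agree on in-distribution inputs but diverge far
# out of distribution. All model choices and numbers are illustrative.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures

rng = np.random.default_rng(0)

# Training data: x in [0, 2], y = sin(x) + noise.
X_train = rng.uniform(0.0, 2.0, size=(200, 1))
y_train = np.sin(X_train).ravel() + 0.05 * rng.normal(size=200)

# Two models with very different inductive biases.
forest = RandomForestRegressor(n_estimators=100, random_state=0).fit(X_train, y_train)
poly = make_pipeline(PolynomialFeatures(degree=5), LinearRegression()).fit(X_train, y_train)

# In-distribution: fresh points from the same range as the training data.
X_in = rng.uniform(0.0, 2.0, size=(50, 1))
# Out-of-distribution: points far outside the training range.
X_out = np.linspace(5.0, 10.0, 50).reshape(-1, 1)

in_gap = np.mean(np.abs(forest.predict(X_in) - poly.predict(X_in)))
out_gap = np.mean(np.abs(forest.predict(X_out) - poly.predict(X_out)))

print(f"mean disagreement in-distribution:     {in_gap:.3f}")   # small
print(f"mean disagreement out-of-distribution: {out_gap:.3f}")  # large
```

Both models fit sin(x) well on [0, 2], so their predictions nearly coincide there; on [5, 10] the polynomial extrapolates wildly while the forest flatlines at its last leaf values, so they disagree badly, even though nothing in the training data distinguishes them.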
This was a whole two weeks ago, so all I can say for sure is that I was at least unclear on your point.
But I feel like I kind of gave a reply anyway—I don’t think the parallel with subagents is very deep. But there’s a very strong parallel (or maybe not even a parallel, maybe this is just the thing I’m talking about) with generative modeling.