I’m not sure whether it implies that you should be able to make a task-based AGI.
Yeah, I don’t understand what you mean by virtues in this context, but I don’t see why consequentialism-in-service-of-virtues would create different problems from the more general consequentialism-in-service-of-anything-else. If I understood why you think it’s different, we might communicate better.
(Later you mention unboundedness too, which I think should be added to difficulty here)
By unbounded I just meant the kind of task where it’s always possible to do better with a better plan. It basically means that an agent will select the highest-difficulty version of the task that is achievable. I didn’t intend it as a different thing from difficulty; it’s basically the same.
I’m not sure about that, because the fact that the task is being completed in service of some virtue might limit the scope of actions that are considered for it. Again, I think it’s on me to paint a more detailed picture of how the agent works and how it comes about so that we can think that through.
True, but I don’t think the virtue part is relevant. This applies to all instrumental goals; see here (maybe also the John-Max discussion in the comments).