I have an interest in regular AI but as far as I can tell it’s mostly distinct from rationality. AI seems much more about task completion and decision-making in narrow regions with strongly identified problems; AGIs and humans are much more about task selection and decision-making in broad regions with weakly identified problems.
I meant the parts of AI relevant to AGI. While certain subfields of AI are specialized task-solvers (and the researchers know it), there are plenty of fields that want to solve the general problem.
Isn’t AGI also mostly distinct from rationality? I don’t get the connection.