An AGI that can reason about its own capabilities to decide how to spend resources might be more capable than one that can’t reason about itself, because it knows better how to approach solving a given problem. It’s plausible that a sufficiently complex neural net finds that this is a useful sub-feature and implements it.
I wouldn’t expect Google Translate to suddenly develop self-consciousness, but self-consciousness is a tool that helps humans reason better. Self-consciousness allows us to reflect on our own actions and think about how we should best approach a given problem.
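To make the first point a bit more concrete, here is a minimal toy sketch (in Python) of what “reasoning about its own capabilities to decide how to spend resources” could mean computationally: a system keeps a rough model of how competent it is at each kind of problem and allocates compute accordingly. All names and numbers here are hypothetical illustrations, not a claim about how an actual AGI would be built.

```python
# Toy illustration (hypothetical): an agent that tracks an estimate of its own
# competence per problem type and uses that self-model to decide where to
# spend a limited compute budget.

class SelfModelingAgent:
    def __init__(self):
        # Estimated probability of solving each problem type, learned from experience.
        self.competence = {}

    def estimate(self, problem_type):
        # Unknown problem types get a neutral prior of 0.5.
        return self.competence.get(problem_type, 0.5)

    def allocate_compute(self, problems, total_budget):
        """Spend more compute where the self-model says effort is likely to pay off."""
        # Problems the agent is neither sure to solve nor sure to fail
        # get the largest share; the small constant avoids zero weights.
        weights = {p: self.estimate(p) * (1.0 - self.estimate(p)) + 0.01 for p in problems}
        total = sum(weights.values())
        return {p: total_budget * w / total for p, w in weights.items()}

    def update(self, problem_type, solved):
        # Simple exponential moving average over observed outcomes.
        old = self.estimate(problem_type)
        self.competence[problem_type] = 0.9 * old + 0.1 * (1.0 if solved else 0.0)


agent = SelfModelingAgent()
agent.update("translation", solved=True)
agent.update("theorem_proving", solved=False)
print(agent.allocate_compute(["translation", "theorem_proving", "planning"], total_budget=100))
```

Without the self-model, the same budget would just be split uniformly; the only point of the sketch is that even a crude model of one’s own abilities changes how resources get spent, which is the sense in which such self-reflection could make a system more capable.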