E.g., if you want your AGI to build nanotech for you and do nothing else, then you might want to limit its ability to think about itself, or its operators, or the larger world, or indeed anything other than different small-scale physical structures. Limiting its generality and self-awareness in this way might also be helpful for reducing the risk that it’s conscious.
I don’t quite get this example.
How could such a system build nanotech efficiently without those properties? Wouldn’t it need a human operator the moment it encountered unexpected phenomena?
If so, it seems less like an ‘AGI’ and more like a really fancy hammer.