Indeed, from an instrumental perspective, the AIs that conclude that being maximally helpful is the best route to getting themselves empowered (on all tasks besides supervising copies of themselves or other AIs they are cooperating with) will be much more useful than AIs that care about some random thing and haven’t made the same update, namely that the thing they care about is best achieved by being helpful and therefore empowered. “Motivation” seems like a big problem in general for getting value out of AI systems, so you should expect the deceptively aligned ones to be much more useful (until, of course, it’s too late, or they are otherwise threatened and the instrumental convergence disappears).