How do we make an AI that obediently performs as we want it to, but does so smarter, while maintaining its obedience?
That depends on what you mean by “smarter.” Is it merely good at finding more efficient ways to fulfill your wish, or is it also able to realize that some literal interpretations of your wish are not what you actually want to happen (even if you aren’t smart enough to realize it yourself)? In the latter case, will it still efficiently follow the literal interpretation?