Yet Google Maps does pick destinations consistent with human intent.
Most of the time—but with a few highly inconvenient exceptions. A human travel agent would do much better. IBM’s Watson is an even less compelling example. Many of its responses are just bizarre, but it makes up for that with blazing search speed/volume and reaction times. And yet it still got beaten by a U.S. Congresscritter.
But an AGI does not have all those goals and values, e.g. an inherent aversion to revising its goals at another agent's behest.
You seem to be implying that the AGI will be programmed to seek human help in interpreting/crystallizing its own goals. I agree that such an approach is a likely strategy by the programmers, and that it is inadequately addressed in the target paper.