You know, his scenario of erasing humanity as a byproduct of an optimization process indifferent to human values amounts to the unfriendly AI scenarios we discuss, minus the requirement that the optimization process be sentient.
I wonder if the following is a valid generalization of the specific problem that motivates the MIRI folks:
Our ability to scale up and speed up the achievement of goals has outpaced, or will soon outpace, our ability to find goals that we won’t regret.