The likely outcome of a Malthusian/Darwinian upload scenario isn’t many near-subsistence human-like lives; it’s something seriously inhuman and probably valueless. The analogy is incredibly weak.
You know, his scenario of erasing humanity as a byproduct of an optimization process indifferent to human values amounts to the unfriendly AI scenarios we discuss, just relaxing the requirement that the optimization process be sentient.
I wonder if the following is a valid generalization of the specific problem that motivates the MIRI folks:
Our ability to scale up and speed up achievement of goals has outpaced or will soon outpace our ability to find goals that we won’t regret.
Or, more succinctly, if we don’t solve coherent extrapolated volition, we are screwed regardless of whether Kruel or Yudkowsky is right about the specific threat of unfriendly AI.
Thanks for the link to that Nick Bostrom paper. It’s the best writing I’ve yet seen on the posthuman prospect.