It’s good that nothing here seems new to you. But the final section implies that Friendly AI, as proposed by Eliezer, is impossible in principle if it relies on finding terminal human values, which don’t exist. I’m sure there are people here who don’t yet believe that.
FAI/CEV is supposed to find a set of values in reflective equilibrium, which is different from terminal values. But whether such a reflective equilibrium exists, whether it is unique, and whether it would encompass any values we don't share with other mammals, is unclear.