What constitutes pessimism about morality, and why do you think it fits Eliezer? He certainly appears more pessimistic across a broad range of topics, and has hinted at concrete arguments for being so.
Value fragility / value complexity. How close do you need to get to human values to capture 50% of the value of the universe, and how complicated must the representation be? Orthogonality used to be a point of disagreement as well, but that thesis is now widely believed.
I don’t think the distance from human values or the complexity of values is a crux, since the web/books corpus overdetermines them in great detail (for corrigibility purposes). The real question is alignment by default: whether human values in particular can be noticed in there, or whether correctly specifying how to find them is much harder than finding some other deceptively human-value-shaped thing. If they can be found easily once there are tools to go looking for them at all, then it doesn’t matter how complex they are or how important it is to get everything right; that happens by default.
But there is also a pervasive assumption that values can be formulated in closed form, as tractable finite data, which occasionally fuels arguments. Value is said to be complex, for instance, but of finite complexity. In an open environment this doesn’t need to be the case: a code/data distinction is only salient when we can draw important conclusions by looking at the code alone and not the data, and in an open environment the data is unbounded and can’t be exhibited all at once. So it doesn’t make much sense to talk about the complexity of values at all; without corrigibility, alignment can’t work out anyway.
See, MIRI has in the past sounded dangerously optimistic to me on that score. While I thought EY sounded more sensible than the people pushing genetic enhancement of humans, it’s only now that I find his presence reassuring, thanks in part to the ongoing story he’s been writing. Otherwise I might be yelling at MIRI to be more pessimistic about the fragility of value, especially with regard to people who might wind up in possession of a corrigible ‘Tool AI’.