Like, imo, “most programs which make a mind upload device also kill humanity” is (if true) an interesting and somewhat compelling first claim to make in a discussion of AI risk, to which the claim “but one can at least in principle have a distribution on programs such that most programs which make mind uploads do not also kill humans” alone is not a comparably interesting or compelling response.
I disagree somewhat, but—whatever the facts about programs—at least it is not appropriate to claim “not only do most programs which make a mind upload device also kill humanity, it’s an issue with the space of programs themselves, not with the way we generate distributions over those programs.” That is not true.
It is at least not true “in principle”, and perhaps it is not true for more substantial reasons (depending on the task you want and its alignment tax, psychology becomes more or less important in explaining the difficulty, as in the examples I gave). On this, we perhaps agree?
Hmm, I think that yes, us probably being killed by a program that makes a mind upload device is (if true) an issue with the way we generated a distribution over those programs. But also, it might be fine to say it’s an issue with the space of programs (with an implicit uniform prior on programs up to some length or an implicit length prior) itself.
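(To make those implicit priors concrete, here is one standard way to write them down; this is just an illustration, and nothing below hinges on these exact forms. A uniform prior over programs of length at most $L$ assigns $P(p) = 1/|\{p' : |p'| \le L\}|$ to each such program, while a length prior in the Solomonoff style assigns $P(p) \propto 2^{-|p|}$, so shorter programs dominate.)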
Like, in the example of two equal gas containers connected by a currently open sliding door, it is fair/correct to say, at least as a first explanation: “it’s an issue with the space of gas particle configurations itself that you won’t be able to close the door with >55% of the particles on the left side”. This is despite the fact that one could in principle be sliding the door in a very precise way so as to leave >55% of the particles on the left side (like, one could in principle be drawing the post-closing microstate from some much better distribution than the naive uniform prior over usual microstates). My claim is that the discussion so far leaves open whether the AI mind upload thing is analogous to this example.
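(For concreteness, the gas half of the analogy can be made quantitative. Modelling each of the $N$ particles as independently equally likely to be on either side at the moment the door closes, which is an idealization, Hoeffding's inequality gives
$$\Pr\big[\text{fraction on the left} \ge 0.55\big] \le \exp\!\big(-2N(0.05)^2\big) = \exp(-N/200),$$
which for anything like $N \sim 10^{22}$ is negligible beyond description. That is the sense in which the space of configurations itself, under the naive prior, rules out the outcome; whether the program case is analogous is exactly what I'm saying the discussion leaves open.)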
I’m open to [the claim about program-space itself being not human-friendly] not turning out to be a good/correct zeroth-order explanation for why a practical mind-upload-device-making AI would kill humanity (even if the program-space claim is true and the practical claim is true). I just don’t think the discussion above this comment so far provides good arguments on this question in either direction.