The resulting probability distribution over events will reflect the shape of the wave-function, not your prior probability distribution, so I think Thomas’ argument still doesn’t go through.
This is a good point. But I don’t think “particles being moved the minimum necessary distance to achieve the outcome” actually favors explosions. I think it more plausibly favors the sensor hardware getting corrupted, or messing with the firemen’s brains to make them decide to come earlier (or messing with your mother’s brain to make her jump out of the building), since both of these are highly sensitive systems where small changes can have large effects.
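To make the sensitivity point concrete, here’s a toy sketch (the channel names and all numbers are invented for illustration, not taken from the post): if the machine picks whichever causal channel needs the least total particle displacement, and the displacement needed scales inversely with how sensitive the channel is, then brains and sensor hardware beat explosions by orders of magnitude.

```python
# Toy model of "particles moved the minimum necessary distance":
# each causal channel has a sensitivity (outcome-effect per unit of
# particle displacement), so the displacement needed to force the
# outcome is effect_needed / sensitivity. All numbers are made up.

effect_needed = 1.0  # arbitrary units of "fire gets put out"

channel_sensitivity = {
    "heat air molecules into an explosion": 0.001,  # insensitive: huge displacement
    "corrupt the sensor hardware": 10.0,            # sensitive
    "nudge neurons in a fireman's brain": 100.0,    # extremely sensitive
}

min_displacement = {
    name: effect_needed / sensitivity
    for name, sensitivity in channel_sensitivity.items()
}

# A minimum-perturbation sampler concentrates on the most sensitive channel.
for name, d in sorted(min_displacement.items(), key=lambda kv: kv[1]):
    print(f"{d:10.4f}  {name}")
print("selected:", min(min_displacement, key=min_displacement.get))
```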
Does this undermine the parable? Kinda, I think. If you build a machine that samples from some bizarre inhuman distribution and then get bizarre outcomes, the problem isn’t really your wish anymore; the problem is that you built a weirdly-sampling machine. (And then we can debate about the extent to which NNs are weirdly-sampling machines, I guess.)
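A minimal sketch of what I mean by a weirdly-sampling machine, with entirely made-up weights: both samplers below are conditioned on the same “wish” (the fire gets put out) and differ only in the base measure over mechanisms, and that difference alone produces the bizarre outcomes.

```python
import random

# Conditional distributions over *mechanisms*, given the same wish
# ("the fire is put out"). All weights are invented for illustration.
human_prior = {
    "firemen arrive on time": 0.98,
    "sensor hardware gets corrupted": 0.019,
    "building explodes, snuffing the fire": 0.001,
}

# A "weirdly-sampling" machine: weight ~ how little the particles must
# move, which favors sensitive systems rather than typical histories.
weird_measure = {
    "firemen arrive on time": 0.01,
    "sensor hardware gets corrupted": 0.60,
    "building explodes, snuffing the fire": 0.39,
}

def sample_mechanisms(dist, n=10_000):
    """Draw n mechanisms from dist and report empirical frequencies."""
    names, weights = zip(*dist.items())
    draws = random.choices(names, weights=weights, k=n)
    return {name: round(draws.count(name) / n, 3) for name in names}

print("human prior  :", sample_mechanisms(human_prior))
print("weird machine:", sample_mechanisms(weird_measure))
```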
This is roughly how I would interpret the post. Physics itself is a bizarre inhuman distribution, and in general many of the probability distributions you might want to sample from will be bizarre and inhuman.
Agree that it’s then arguable to what degree the optimization pressure of a mature AGI arising from NNs would also be bizarre. My guess is quite bizarre, since a lot of the constraints it will face will be constraints of physics.