Unless you do conditional sampling of a learned distribution, where you constrain the samples to lie in a specific, a-priori extremely unlikely subspace, in which case sampling becomes isomorphic to optimization in theory.
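A minimal 1-D sketch of that isomorphism (toy standard Gaussian standing in for the "learned distribution"; numbers are illustrative only, and this assumes numpy and scipy are available): conditioning on a far-tail event makes rejection sampling infeasible, and the conditional samples concentrate at the constraint boundary, which is exactly what a constrained optimizer of log p(x) would return.

```python
import numpy as np
from scipy.stats import truncnorm

rng = np.random.default_rng(0)

# Toy "learned distribution": a standard 1-D Gaussian.
# Condition on an a-priori-unlikely event: x > 6 (probability ~ 1e-9).
threshold = 6.0

# Naive rejection sampling is hopeless here: you'd expect roughly one
# accepted sample per billion draws.
draws = rng.standard_normal(10_000_000)
print("accepted by rejection:", int(np.sum(draws > threshold)))  # almost surely 0

# Sampling the conditional directly (truncated Gaussian) shows the samples
# piling up just above the constraint boundary -- essentially the same point
# an optimizer maximizing log p(x) subject to x > 6 would return.
tail = truncnorm.rvs(a=threshold, b=np.inf, size=5, random_state=0)
print("conditional samples:", tail)  # all just above 6.0
```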
Right. I think the optimists would say that conditional sampling works great in practice, and that this bodes well for applying similar techniques to more ambitious domains. There’s no chance of this image being in the Stable Diffusion pretraining set:

[image]
One could reply, “Oh, sure, it’s obvious that you can conditionally sample a learned distribution to safely do all sorts of economically valuable cognitive tasks, but that’s not the danger of true AGI.” And I ultimately think you’re correct about that. But I don’t think the conditional-sampling thing was obvious in 2004.
Idk. We already knew in 2004 that you could use basic regression and singular-vector methods to do lots of economically valuable tasks, since that was being done at the time. Conditional sampling “just” adds noise around these sorts of methods, so it stands to reason that this might work too.
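To make the “regression plus noise” framing concrete, here’s a minimal sketch on toy data (the helper names are mine, not anyone’s actual method): fit ordinary least squares, then treat conditional sampling as drawing from a Gaussian centered on the regression prediction, with the noise scale learned from the residuals.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy data: y = 2x + 1 plus noise.
X = rng.uniform(-1, 1, size=200)
y = 2.0 * X + 1.0 + 0.3 * rng.standard_normal(200)

# 2004-style point prediction: ordinary least squares
# (np.linalg.lstsq solves this with an SVD under the hood).
A = np.column_stack([X, np.ones(len(X))])
coef, res, *_ = np.linalg.lstsq(A, y, rcond=None)
sigma = np.sqrt(res[0] / len(y))  # residual noise scale

def predict_point(x):
    # The deterministic regression answer.
    return coef[0] * x + coef[1]

def sample_conditional(x, n=5):
    # "Conditional sampling" in the simplest sense: the same regression
    # prediction, plus noise matching the fitted residual distribution.
    return predict_point(x) + sigma * rng.standard_normal(n)

print("point estimate:", predict_point(0.5))
print("samples       :", sample_conditional(0.5))
```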
Adding noise obviously doesn’t matter in 1 dimension, except to make the outcomes worse. The reason we use it for e.g. images is that the noise does matter in high-dimensional spaces: without it you end up with the single highest-probability outcome (the mode), which in high dimensions is itself out of distribution, because almost all of the probability mass lies in a typical set far from the mode. So in a way it seems like a relatively minor fix to generalize something we already knew was profitable in lots of cases.
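The “mode is out of distribution” point is easy to check numerically, no learned model required. A minimal sketch with a standard Gaussian: the mode is at the origin, but typical samples live on a thin shell of radius about sqrt(d).

```python
import numpy as np

rng = np.random.default_rng(2)

# For a d-dimensional standard Gaussian the density is maximized at the
# origin, yet nearly all samples land on a shell of radius ~ sqrt(d).
for d in (1, 10, 1000):
    samples = rng.standard_normal((10_000, d))
    radii = np.linalg.norm(samples, axis=1)
    print(f"d={d:5d}  mean radius={radii.mean():7.2f}  "
          f"~sqrt(d)={np.sqrt(d):7.2f}")

# So the single highest-density point (radius 0) looks nothing like a
# typical sample: in high dimensions you must add the noise back in to
# land in the region where the data actually lives.
```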
On the other hand, I didn’t learn this probability fact until playing with some neural-network ideas for outlier detection and discovering they didn’t work. So in that sense it’s literally true that it wasn’t obvious (to a lot of people) before deep learning took off.
And I can’t deny that people were surprised that neural networks could learn to do art. To me this became relatively obvious with early GANs, which came later than 2004 but earlier than the point at which most people updated.
So basically I don’t disagree, but in retrospect it doesn’t seem that shocking.