So I guess my model says that ‘merely static representations’ of semantic volumetric histories will constitute the first optimally specific board states of Nature in history. We will use them to define loss functions so that we can do supervised learning on human games (recorded volumetric episodes), learning both a transition model (a predictive model of the time evolution of recorded volumetric episodes, or ‘next-moment prediction’) and an action space (a generative model of recorded human actions). Then we will combine this with Engineered Search and some other stuff, then solve Go (kill everyone).
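For concreteness, the pipeline sketched above (fit a transition model and an action model to recorded episodes, then combine them with explicit search) can be caricatured in a few lines. This is a toy sketch under invented assumptions, not anyone's actual proposal: states are integers, "supervised learning" is empirical counting, and the search is plain breadth-first planning over the learned models.

```python
# Toy sketch: learn a transition model ("next-moment prediction") and an
# action prior from recorded episodes, then plan with explicit search.
# The domain, data, and all names here are illustrative assumptions.
from collections import Counter, defaultdict

# Recorded episodes: lists of (state, action, next_state) triples.
episodes = [
    [(0, "right", 1), (1, "right", 2), (2, "up", 5)],
    [(0, "up", 3), (3, "right", 4), (4, "right", 5)],
]

# "Supervised learning" reduced to counting: an empirical transition model
# P(next_state | state, action) and an action prior P(action | state).
transition = defaultdict(Counter)
action_prior = defaultdict(Counter)
for ep in episodes:
    for s, a, s2 in ep:
        transition[(s, a)][s2] += 1
        action_prior[s][a] += 1

def predict(s, a):
    """Most likely next state under the learned transition model."""
    nxt = transition[(s, a)]
    return nxt.most_common(1)[0][0] if nxt else None

def search(start, goal, depth=5):
    """Breadth-first search over the learned models for a plan to `goal`."""
    frontier = [(start, [])]
    for _ in range(depth):
        nxt_frontier = []
        for s, plan in frontier:
            if s == goal:
                return plan
            for a in action_prior[s]:  # only actions seen in the data
                s2 = predict(s, a)
                if s2 is not None:
                    nxt_frontier.append((s2, plan + [a]))
        frontier = nxt_frontier
    return None

print(search(0, 5))  # → ['right', 'right', 'up']
```

The point of the caricature: every component here is narrow and data-bound, which is exactly where the disagreement below about "narrow hand-engineered thing vs. the more general thing" bites.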
I think getting this to work in a way that actually kills everyone, rather than merely being AlphaFold or similar, is really really hard—in the sense that it requires more architectural insight than you’re giving credit for. (This is a contingent claim in the sense that it depends on details of the world that aren’t really about intelligence—for example, if it were pretty easy to make an engineered supervirus that kills everyone, then AlphaFold + current ambient tech could have been enough.) I think the easiest way is to invent the more general thing. The systems you adduce are characterized by being quite narrow! For a narrow task, yeah, plausibly the more hand-engineered thing will win first.
Back at the upthread point, I’m totally baffled by and increasingly skeptical of your claim to have some good reason to have a non-unimodal distribution. You brought up the 3D thing, but are you really claiming to have such a strong reason to think that exactly the combination of algorithmic ideas you sketched will work to kill everyone, and that the 3D thing is exactly most of what’s missing, that it’s “either this exact thing works in <5 years, or else >10 years” or similar?? Or what’s the claim? IDK maybe it’s not worth clarifying further, but so far I still just want to call BS on all such claims.