Nah, I don’t think so. Take the diamond maximizer problem—one problem is finding the function that physically maximizes diamond, e.g. as Julia code. The other one is getting your maximizer/neural network to reliably point at that function.
As for the “properly optimized human values”, yes. Our world looks quite DeepDream-dogs-like compared to the ancestral environment (and, now that I think of it, maybe the degrowth/retvrn/conservative people can be thought of as claiming that our world is already “human value slop” in a number of ways—if you take a look at YouTube Shorts and Times Square in New York, they’re not that different).