I think the good part of this post is its reobservation that real-world intelligence requires power-seeking (and that power-seeking involves things like building accurate world-models), while the bad part seems to be confusion about how feasible power-seeking is to implement and what methods would be used.