Charlie Steiner comments on "Can we expect more value from AI alignment than from an ASI with the goal of running alternate trajectories of our universe?"