But the way you are reading it seems to mean her “strawmann[ed]” point is irrelevant to the claim she made! That is, if we can get 50% of the way to aligned for current models, and we keep doing research and finding partial solutions that at each stage get 50% of the way to aligned for future models, and at each stage those solutions are both insufficient for full alignment and don’t solve the next set of problems, we still fail. Specifically, not only do we fail, we fail in a way that means “we shouldn’t expect the techniques that worked on a relatively tiny model from 2023 to scale to more capable, autonomous future systems.” Which is the thing she then disagrees with in the remainder of that paragraph you’re trying to defend.
I agree.