Ramana Kumar comments on "Alignment allows 'nonrobust' decision-influences and doesn't require robust grading"