One big reason I might expect an AI to do a bad job at alignment research is if it doesn’t do a good job (according to humans) of resolving cases where humans are inconsistent or disagree. How do you detect this in string theory research? Part of the reason we know so much about physics is that humans aren’t that inconsistent about it and don’t disagree that much. And if you go to the sub-topics where humans do disagree, how do you judge the AI’s performance? ‘Be very convincing to your operators’ is an objective with a different kind of danger.
Another potential red flag is if the AI gives humans what they ask for even when that’s ‘dumb’ according to some sophisticated understanding of human values. This could definitely show up in string theory research (noticing when some ideas suggest non-string-theory paradigms might be better, and pushing back if the humans try to ignore this); it’s just intellectually difficult (maybe easier in loop quantum gravity research, heyo gottem) and not as salient without the context of alignment and human values.