Another distinguishing property of (AGI) alignment work is that it is forward-looking: it tries to solve future alignment problems rather than only present ones. Given the large increase in AI safety work coming from academia, this feels like a useful property to keep in mind.
(Of course, this is not to say that we couldn't use current-day problems as proxies for those future problems.)