One (speculative) way even incomplete unlearning might be directly useful for alignment (e.g. for getting AIs to ‘care’ about human values): https://www.lesswrong.com/posts/WkJDgpaPeCJDMJkoL/quick-takes-on-ai-is-easy-to-control?commentId=eP2iCBKP7Kneo3AdF.