Regarding safely-wieldable tool-AI versus 'alignment', I recommend thinking in terms of 'intent alignment' versus 'value alignment', as Seth Herd describes here: Conflating value alignment and intent alignment is causing confusion