The main difference I see is that containment supposes you’re actively opposed to the AGI in some fashion—the AGI wants to get out, and you don’t want to let it. Many believe that winning such an adversarial contest against a superintelligence is impossible. Hence the common view that if an AGI is unaligned, containment won’t work—and if an AGI is aligned, containment is unnecessary.
By contrast, alignment means you’re not opposed to the AGI—its goals and yours coincide. That is very difficult to achieve, but it doesn’t rely on actually outwitting a superintelligence.
I agree that it’s hard to imagine what a foolproof alignment solution would even look like—that’s one of the difficulties of the problem.