I agree you could ask your AI “will you promise to be aligned?”. I think I already discuss this option in the post: ctrl+f “What promise should we request?” and read the stuff after it. I don’t use the literal wording you suggest, but imo the things I discuss there are ways of cashing it out.
also quickly copying something I wrote on this question from a chat with a friend:
Should we just ask the AI to promise to be nice to us? I agree this is an option worth considering (and I mention it in the post), but I’m not that comfortable with the prospect of living together with the AI forever. Roughly, I worry that “be nice to us” creates a situation where we are permanently living alongside the AI, and human life/valuing/whatever isn’t developing in a legitimate way. The “ban AI” wish, by contrast, tries to be a more limited thing, so we can still continue developing in our own human way. I can imagine the “be nice to us pls” wish going wrong for aliens employing me, where “pls just ban AI and otherwise stay away from us” maybe wouldn’t go wrong for them.
another meta note: imo, a solid trick for thinking more clearly about these AI topics is to (at least occasionally) taboo all words with the root “align”.