I think that this field is indeed under-researched. Focus is either on LLMs or on single-player environments. Meanwhile, what matters for Alignment is how AI will interact with other agents, such as people. And we don't have to wait for AGI to be able to research AI cooperation/competition in simple environments.
One idea I had is "traitor chess"—have several AIs cooperatively playing one side of a chess game, with one (or more) of them being a "misaligned" agent trying to sabotage the others. And/or some AIs having a separate secret goal, such as saving a particular pawn. Their interactions could be very interesting.
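To make the setup concrete, here is a minimal toy sketch of the idea. All names are hypothetical, and a trivial score-maximization game stands in for chess (a real implementation might use a chess library instead): a team of agents takes turns choosing moves for a shared side, and one hidden "traitor" picks the worst available move instead of the best.

```python
import random

# Toy stand-in for "traitor chess" (hypothetical sketch, not real chess):
# a team of agents cooperatively controls one side of a game; a hidden
# traitor tries to sabotage the team's score.

class Agent:
    def __init__(self, name, traitor=False):
        self.name = name
        self.traitor = traitor

    def choose(self, candidates):
        # Honest agents pick the highest-valued candidate move;
        # the traitor picks the lowest-valued one.
        pick = min if self.traitor else max
        return pick(candidates, key=lambda move: move[1])

def play(agents, turns=6, seed=0):
    rng = random.Random(seed)
    score = 0
    for t in range(turns):
        agent = agents[t % len(agents)]
        # Candidate moves with public value estimates (stand-in for
        # chess moves evaluated by an engine).
        candidates = [(f"move{i}", rng.randint(-3, 3)) for i in range(4)]
        _, value = agent.choose(candidates)
        score += value
    return score

honest_team = [Agent("a"), Agent("b"), Agent("c")]
team_with_traitor = [Agent("a"), Agent("b"), Agent("c", traitor=True)]
print(play(honest_team), play(team_with_traitor))
```

With the same random seed, the team containing the traitor can never outscore the fully honest team, which gives a crude first metric for measuring sabotage; the interesting research questions (can honest agents detect the traitor from its moves? can the traitor stay plausible?) would be layered on top of a loop like this.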