It’s especially dangerous because this AI is much more readily applicable to the real world than, for example, AlphaZero. Geopolitical pressure to advance these Diplomacy AIs is far from desirable.
Teun van der Weij
Why do you think a similar model is not useful for real-world diplomacy?
I think your policy suggestion is reasonable.
However, implementing and enforcing this might be hard: what exactly counts as an LLM? Does a slight variation on the GPT architecture count as well? And how would violators be punished?
How do you account for other worries? For example, as PeterMcCluskey points out, this policy might lead to reduced interpretability due to more superposition.
Policy work seems hard at times, but others with more AI governance experience might provide more valuable insight than I can.
Cool work.
I was briefly looking at your code, and it seems like you did not normalize the activations when using PCA. Am I correct? If so, do you expect that to have a significant effect?
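In case it helps, here is a minimal sketch of what I mean, assuming the activations sit in a (samples × features) NumPy array; all names are illustrative, not taken from your repo:

```python
# Minimal sketch: the effect of standardizing features before PCA.
# `activations` is a hypothetical (n_samples, n_features) array of
# model activations with deliberately uneven per-dimension scales.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
activations = rng.normal(size=(1000, 512)) * rng.uniform(0.1, 10.0, size=512)

# Without normalization: high-variance dimensions dominate the components.
pca_raw = PCA(n_components=10).fit(activations)

# With normalization: every dimension contributes on an equal scale.
scaled = StandardScaler().fit_transform(activations)
pca_scaled = PCA(n_components=10).fit(scaled)

print(pca_raw.explained_variance_ratio_[:3])
print(pca_scaled.explained_variance_ratio_[:3])
```

If a few high-variance dimensions dominate, the unnormalized components mostly reflect those dimensions rather than structure across the whole activation space.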
Simple distribution approximation: When sampled 100 times, can language models yield 80% A and 20% B?
Interesting. I’d guess that the prompting is not clear enough for the base model. The Human/Assistant template does not really apply to base models. I’d be curious to see what you get when you do a bit more prompt engineering adjusted for base models.
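To illustrate what I mean (the wording here is my own guess, not from the post): a base model is trained to continue text, so a bare pattern often works better than a chat template.

```python
# Hypothetical prompts, for illustration only.

# A chat-style template, which a base model was never trained to follow:
chat_prompt = "Human: Output A 80% of the time and B 20% of the time.\nAssistant:"

# A bare continuation pattern that a base model can simply extend
# (four A's and one B implicitly encode the 80/20 target):
base_prompt = "A\nA\nB\nA\nA\n"
```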
Might be a good way to further test this indeed. So maybe something like "green" and "elephant"?
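A minimal sketch of how that test could look, assuming a Hugging Face causal base model; the model name, prompt wording, and token choices are all placeholders:

```python
# Sketch: sample a completion 100 times and count "green" vs "elephant".
# Assumes a Hugging Face causal base model; all names are illustrative.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # stand-in for whichever base model is tested
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# A few-shot pattern rather than a Human/Assistant template.
prompt = (
    'Words drawn with probability 80% "green" and 20% "elephant":\n'
    "green\nelephant\ngreen\ngreen\n"
)
inputs = tokenizer(prompt, return_tensors="pt")

counts = {"green": 0, "elephant": 0, "other": 0}
for _ in range(100):
    out = model.generate(
        **inputs, max_new_tokens=3, do_sample=True,
        pad_token_id=tokenizer.eos_token_id,
    )
    completion = tokenizer.decode(out[0, inputs.input_ids.shape[1]:]).strip()
    word = completion.split()[0] if completion.split() else ""
    counts[word if word in counts else "other"] += 1

print(counts)  # compare against the target 80/20 split
```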
It seems that human-level play is possible in regular Diplomacy now, judging by this tweet by Meta AI. They state that: