If China believes that AI is very important and that the US is winning the AI race, it will have a strong incentive to start a war over Taiwan, which could escalate into WW3. Thus, selling chips to China lowers the chances of nuclear war.
This reduces x-risk, though one may argue that China is weaker at AI safety and that total risk therefore increases. However, I think an equilibrium strategy in which several AGIs are created simultaneously lowers the chance that any single misaligned AI takes over the world.
We have been working on sideloading, that is, on creating as good a model as possible of a currently living person. One of the approaches is to create an agent in which different parts mimic parts of the human mind, such as the unconscious and long-term memory.
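To illustrate the idea, here is a toy sketch of such a modular agent in Python. The class names (LongTermMemory, Unconscious, SideloadAgent) and the keyword-based recall are placeholder assumptions for this sketch, not a description of our actual implementation; a real sideload would use an LLM and semantic retrieval rather than string matching.

```python
from dataclasses import dataclass, field


@dataclass
class LongTermMemory:
    """Stores facts and episodes about the person being modeled."""
    facts: list[str] = field(default_factory=list)

    def recall(self, query: str) -> list[str]:
        # Naive keyword retrieval; a real system would use semantic search.
        return [f for f in self.facts if query.lower() in f.lower()]


@dataclass
class Unconscious:
    """Returns habitual associations without explicit reasoning."""
    associations: dict[str, str] = field(default_factory=dict)

    def react(self, stimulus: str) -> str | None:
        return self.associations.get(stimulus.lower())


class SideloadAgent:
    """Combines the modules into one agent that answers as the modeled person."""

    def __init__(self, memory: LongTermMemory, unconscious: Unconscious):
        self.memory = memory
        self.unconscious = unconscious

    def respond(self, prompt: str) -> str:
        recalled = self.memory.recall(prompt)
        reaction = self.unconscious.react(prompt)
        parts = []
        if reaction:
            parts.append(f"(gut reaction: {reaction})")
        if recalled:
            parts.append("relevant memories: " + "; ".join(recalled))
        return " ".join(parts) or "I don't remember anything about that."


# Usage example with made-up personal data
memory = LongTermMemory(facts=["Visited Paris in 2010", "Dislikes cold weather"])
unconscious = Unconscious(associations={"paris": "nostalgia"})
agent = SideloadAgent(memory, unconscious)
print(agent.respond("paris"))
```

The point of the modular structure is that each part of the mind can be modeled and tested separately before being combined into a single agent.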