Is there a relevant difference in how much the eventual winner will incorporate AI safety measures? Or do you think it is merely an issue of actually solving the [friendly AI] problem, and once it is solved, it will surely be used?