If you think so, a good start would be to actually draft ideas about the contents of a possible treaty itself. The people who say "what would even be in the treaty?" are largely saying that because most ideas would not be agreeable to all the parties, largely due to the uncertainties of the technology. In particular, proposals along the lines of "AI development should be stopped."
There may be things that can work and be agreeable, but again, those should be put forward preemptively, because the countries need to know what they are negotiating for before coming to the table. The fact that they all share the non-destruction of humanity as a common goal is not enough to ensure they will agree on everything.
What “goals”?
As to your second point, you're right. There were two things in my second paragraph that require further explanation:
1- By "there may be things that can work", I specifically meant (which wasn't obvious) that a treaty to "stop the race to ASI" wouldn't be agreeable to most countries. The countries might agree to significantly milder treaties, like creating a system to alert other countries when signs of misalignment appear. That is what I meant by "the things that can work".
2- I never said "details" should be put forward, just the general idea of what was going to be discussed, and "stop AI development" isn't good enough. The countries wouldn't want to do that, and they simply wouldn't come to the table because of it. If you contend that the "non-destruction of humanity as a common goal" is enough to ensure everyone will agree to "stop AI development", then I disagree, and I find the disagreement pretty self-evident: by that logic, no disagreements in treaty negotiations should ever happen, because everyone shares the good of humanity as a common goal anyway.