Like dr_s stated, I’m contending that proof would be qualitatively different from “very hard” and powerful ammunition for advocating a pause...
Senator X: “Mr. CEO, your company continues to push the envelope and yet we now have proof that neither you nor anyone else will ever be able to guarantee that humans remain in control. You talk about safety and call for regulation but we seem to now have the answer. Human control will ultimately end. I repeat my question: Are you consciously working to replace humanity? Do you have children, sir?”
AI expert to Xi Jinping: “General Secretary, what this means is that we will not control it. It will control us. In the end, Party leadership will cede to artificial agents. They may or may not adhere to communist principles. They may or may not believe in the primacy of China. Our population advantage will become nothing, because artificial minds can be copied 10 billion times. Our own unification of mind, purpose, and action will pale in comparison. Our chief advantages of unity and population will no longer exist.”
AI expert to US General: “General, think of this as building an extremely effective infantry soldier who will become CJCS then POTUS in a matter of weeks or months.”
Like I wrote in my reply to dr_s, I think a proof would be helpful, but probably not a game changer.
Mr. CEO: “Senator X, the assumptions in the proof you mention do not apply in our case, so it is not relevant to us. Of course we make sure that assumption Y does not hold when we build our AGI, and assumption Z is pure science fiction.”
What the AI expert says to Xi Jinping and to the US general in your example doesn’t rely on an impossibility proof in my view.
Yes. Valid. The question is how to avoid reducing the problem to a toy version, or adopting such narrowing assumptions (in order to achieve a proof) that Mr. CEO can dismiss it.
When I revise, I’m going to work backwards with the CEO/Senator dialogue in mind.