Feels true to me, but what’s the distinction between theoretical and non-theoretical arguments?
Having decent grounding for the theory at hand would be a start. Take the ignition-of-the-atmosphere example: the Manhattan Project physicists had a solid enough grasp of the underlying physics, with validated equations to plug numbers into. Another example is global warming, where even though nobody has great equations, the big picture is pretty clear, and the Earth went through much hotter periods in the past (which still supported rich ecosystems, which is why most people don’t take the “existential risk” part seriously).
With AI, by contrast, even the notion of “intelligence” remains very vague, straight out of philosophy’s domain, let alone concepts like “ASI”. So pretty much all the argumentation relies on analogies and intuitions, which is also prime philosophy stuff.
Then again, policy has always been guided by arguments with little maths behind them; take the MAKING FEDERAL ARCHITECTURE BEAUTIFUL AGAIN executive order, for example.
I mean, sure, all sorts of random nonsense can sway national policy from time to time, but strictly-ish enforced global bans are in an entirely different league.
Maybe the problem with AI existential risk arguments is that they’re not very convincing.
Indeed, and I’m proposing an explanation for why that is.
And by the same token, the subsequent punishment would be prosocial too. Why, then, would Alice want to disclaim it? Because, of course, in reality the facts of whether somebody deserves punishment are rarely unambiguous, so it makes sense for people to hedge. But that’s basically wanting to have one’s cake and eat it too.
The honorable thing for Alice to do would be to weigh the reliability of the evidence she possesses, and disclose it only if she thinks it’s sufficient to justify the punishment likely to follow. No amount of nuance in wording and tone can replace this essential consideration.