I agree that a write-up of SIAI’s argument for the Scary Idea, in the manner you describe, would be quite interesting to see.
However, I strongly suspect that when the argument is laid out formally, what we’ll find is that
-- given our current knowledge of the pdfs of the premises in the argument, the pdf on the conclusion is very broad, i.e. we can hardly conclude anything with much confidence …
So, I think that the formalization will lead to the conclusion that
-- “we can NOT confidently say, now, that: Building advanced AGI without a provably Friendly design will almost certainly lead to bad consequences for humanity”
-- “we can also NOT confidently say, now, that: Building advanced AGI without a provably Friendly design will almost certainly NOT lead to bad consequences for humanity”
I.e., I strongly suspect the formalization
-- will NOT support the Scary Idea
-- will also not support complacency about AGI safety and AGI existential risk
I think the conclusion of the formalization exercise, if it’s conducted, will basically be to reaffirm common sense, rather than to bolster extreme views like the Scary Idea...
-- Ben Goertzel
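The worry about broad premise pdfs can be sketched numerically. The following is a minimal Monte Carlo illustration, not anything from the quoted argument: it assumes a hypothetical conjunctive argument with five premises, and models our uncertainty about each premise's probability as a wide Beta(2, 2) distribution (both the premise count and the Beta parameters are illustrative assumptions). The resulting distribution over the conclusion's probability spans more than an order of magnitude, which is the sense in which "we can't conclude much with any confidence."

```python
import math
import random

random.seed(0)

N_PREMISES = 5        # assumption: a five-premise conjunctive argument
N_SAMPLES = 100_000   # Monte Carlo sample count

def sample_conclusion_prob():
    # Each premise's probability is itself uncertain: draw it from a
    # wide Beta(2, 2) distribution (mean 0.5, substantial spread), and
    # take the product, as for a conjunction of independent premises.
    return math.prod(random.betavariate(2, 2) for _ in range(N_PREMISES))

samples = sorted(sample_conclusion_prob() for _ in range(N_SAMPLES))
median = samples[N_SAMPLES // 2]
lo = samples[int(0.05 * N_SAMPLES)]   # 5th percentile
hi = samples[int(0.95 * N_SAMPLES)]   # 95th percentile

print(f"median P(conclusion): {median:.4f}")
print(f"90% interval: [{lo:.4f}, {hi:.4f}]")
```

Under these toy assumptions the 90% interval stretches from roughly a tenth of a percent to over ten percent, so the formalization supports neither confident alarm nor confident complacency, matching the point made above.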
I have thought a bit about these decision theory issues lately, and my ideas seem somewhat similar to yours, though not identical; see
http://goertzel.org/CounterfactualReprogrammingDecisionTheory.pdf
if you’re curious...
-- Ben Goertzel