Maybe you should edit the post to add something like this:
My proposal is not about the hardest parts of the Alignment problem. It is not trying to solve the theoretical problems of Inner Alignment or Outer Alignment (Goodhart, loopholes); I simply assume those problems won't be relevant enough, or that humanity won't create anything AGI-like in the first place (see CAIS).
Instead of engaging with the usual problems in Alignment theory, I merely argue X. X is not a universally accepted claim; here's evidence that it's not universally accepted: [write the evidence here].
...
> By focusing on the external legal system, many key problems associated with alignment (as recited in the Summary of Argument) are addressed. One worth highlighting is 4.4, which suggests AISVL can assure alignment in perpetuity despite changes in values, environmental conditions, and technologies, i.e., a practical implementation of Yudkowsky's CEV.
I think the key problems are not "addressed"; you just assume they won't exist. And laws are not a "practical implementation of CEV".