I have personally signed the FLI Statement on Superintelligence. Signing is easy to do, and it is very useful to those working on political advocacy for AI regulation. I would encourage everyone to sign, and to encourage others to do the same. I believe impactful regulation can become feasible if the extent of agreement on these issues (amongst experts, and amongst the general public) is made highly legible.
Although this open statement accepts non-expert signatures as well, I think it is particularly important for experts to take a public stance, so that the facts on the ground become legible to non-technical decision-makers. (Non-expert signatures, of course, help to demonstrate broad public support for AI regulation.) For those on the fence, Ishual has written an FAQ responding to common reasons not to sign.
In addition to signing, you can write a statement of support (under 600 characters) and email it to letters@futureoflife.org. Such a statement expands on why you agree with the FLI statement. I think this is worth doing; it gives readers much more evidence about what the signatures mean.
For examples of what other people have written in their statements of support, see https://superintelligence-statement.org/. For instance, here is Samuel Buteau’s statement:
“Barring an international agreement, humanity will quite likely not have the ability to build safe superintelligence by the time the first superintelligence is built. Therefore, pursuing superintelligence at this stage is quite likely to cause the permanent disempowerment or extinction of humanity. I support an international agreement to ensure that superintelligence is not built before it can be done safely.”
(If you’re still hungry to sign more statements after this one, or if you don’t quite like the FLI statement but might be interested in signing a different one, you can PM Ishual about their efforts.)