I operate by Crocker’s rules.
I try not to make people regret telling me things. In particular:
- I expect to be safe to ask if your post would give AI labs dangerous ideas.
- If you worry I’ll produce such posts, I’ll try to keep your worry from making them more likely, even if I disagree with it. Not thinking about it will be easier if you don’t spell it out in your initial contact.
I think using AI as an arbiter is fine if both parties agree. If one party doesn’t agree and therefore looks unreasonable, that’s a feature; recall prediction markets. Check whether this document is the office-political immune response to that prospect.
If you are resigned to, or intend, that immune response, say something; I might have an idea that makes everyone better off.