I operate by Crocker’s rules.
I try not to make people regret telling me things. So in particular:
- I expect to be safe to ask whether your post would give AI labs dangerous ideas.
- If you worry that I’ll produce such posts, I’ll try to keep your worry from making them more likely, even if I disagree with it. Not thinking about the idea will be easier if you don’t spell it out in your initial contact.
If the purpose of a utility function is to provide evidence about the behavior of the group, we can preprocess the data structure into that form: Suppose Alice may shift the distribution over group decisions by ε. Then the direction she pushes it in plays the role of her utility function, and the constraints “probabilities still add up to 100%” and “step size ε” cancel out exactly the “positive affine transformation” degrees of freedom that make raw utility functions incomparable: the sum-to-zero constraint absorbs the additive constant, and the fixed size ε absorbs the scale. Such normalized directions can then be added up across people.
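For concreteness, here is a minimal sketch of that preprocessing in Python with numpy; the function names, the choice of the Euclidean norm for “size ε”, and the crude clip-and-renormalize step back onto the simplex are my assumptions for illustration, not part of the argument above.

```python
import numpy as np

def direction_from_utility(u, eps=0.01):
    """Turn a utility vector over group decisions into an update direction.

    Projecting out the mean removes the additive constant, and rescaling to
    length eps removes the positive scale factor -- the two affine degrees of
    freedom of a utility function.
    """
    u = np.asarray(u, dtype=float)
    d = u - u.mean()               # keep the probabilities summing to 100%
    norm = np.linalg.norm(d)
    if norm == 0:                  # indifferent between all decisions
        return np.zeros_like(d)
    return eps * d / norm          # fix the step size at eps

def aggregate(p, utilities, eps=0.01):
    """Add up everyone's directions and apply them to the group distribution p."""
    p = np.asarray(p, dtype=float)
    total = sum(direction_from_utility(u, eps) for u in utilities)
    q = np.clip(p + total, 0.0, None)   # crude projection back onto the simplex
    return q / q.sum()

# Example: three possible group decisions, Alice and Bob.
p = np.array([1/3, 1/3, 1/3])
alice = [1.0, 0.0, 0.0]    # any positive affine transform gives the same direction
bob = [100.0, 100.0, 200.0]
print(aggregate(p, [alice, bob]))
```

Rescaling or shifting Alice’s utilities leaves her direction unchanged, which is the point of the normalization.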