Another model says that detailed communication with supporters is bad because (1) supporters are generally giving out of positive affect toward the organization, and (2) that positive affect can’t be increased much once they grok the mission enough to start donating, but (3) the positive affect they feel toward the charity can be overwhelmed by the absolute number of the organization’s statements with which they disagree, and (4) more detailed communication with supporters increases this absolute number more quickly than limited communication that repeats the same points again and again (e.g. in a newsletter).
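Purely to illustrate that model (a minimal sketch with made-up numbers, not anything the organisation actually measures): if each published statement has some small independent chance of being one a given supporter disagrees with, then the expected number of disagreements, and hence the affect lost, grows roughly linearly with how much the organisation says.

# Toy sketch of the model above; every parameter value here is an illustrative assumption.
def expected_affect(n_statements, p_disagree=0.05, baseline_affect=1.0, cost_per_disagreement=0.1):
    """Expected supporter affect after reading n_statements.

    p_disagree: assumed chance a supporter disagrees with any one statement.
    baseline_affect: the positive affect already saturated once they grok the mission.
    cost_per_disagreement: assumed affect lost per statement they disagree with.
    """
    expected_disagreements = n_statements * p_disagree
    return baseline_affect - cost_per_disagreement * expected_disagreements

# A repetitive newsletter adds few new statements; detailed communication adds many.
print(expected_affect(n_statements=10))   # limited communication: ~0.95
print(expected_affect(n_statements=100))  # detailed communication: ~0.50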
As an example datapoint, Eliezer’s reply to Holden caused a net decrease (not necessarily an enormous one) in both my positive affect for the organisation and my abstract evaluation of its merit, based on one particularly bad argument that shocked me. It prompted some degree (again, not necessarily a large degree) of updating towards the possibility that SingInst could suffer the same kind of mind-killed thinking and behavior I expect from other organisations in the class of pet-cause idealistic charities. (And that matters more for FAI-oriented charities than for save-the-puppies charities, given the whole think-right-or-destroy-the-world thing.)
Even allowing for the possibility that I am wrong and Eliezer is right, you have to expect most other supporters to be wrong a non-trivial proportion of the time too, so too much talking is going to have negative side effects.
Which issue are you talking about? Is there already a comments thread about it on Eliezer’s post?
Found it. It was nested too deep in a comment tree.
The particular line was:
I would ask him what he knows now, in advance, that all those sane intelligent people will miss. I don’t see how you could (well-justifiedly) access that epistemic state.
The position is something I think it is best I don’t mention again until (or unless) I get around to writing the post “Predicting Failure Without Details”, which would express the position clearly, with references and with the limits that apply to that kind of reasoning.
Isn’t it just straight-up outside view prediction?
I thought so.