As a more general point, the framework seems less helpful in the case of religion and politics because people are generally unwilling to carefully consider arguments with the goal of having accurate beliefs. By and large, when people are unwilling to do this, that is itself evidence that it is not useful to try to think carefully about the area. This follows from the idea mentioned above that people tend to try to have accurate views when it is in their present interests to have accurate views. So if this is the main way the framework breaks down, then the framework is mostly breaking down in cases where good epistemology is relatively unimportant.
It seems to me that you are mainly using this framework in cases involving charity and setting policy to affect long run outcomes, i.e. cases where the short run individual selfish (CDT) impact of good decisions is low. But by the logic above those are places where the framework would be less applicable.
The Ungar et al. forecasting article you link to merits much more examination. Some of its takeaways:
Brief training in probability methods meaningfully improves forecasting accuracy over a control group, and over ‘scenario analysis’ training
Teams outperform individuals in forecasting, and teams with good aggregation algorithms beat prediction markets (although the prediction markets didn’t have the scale or time to permit “prediction hedge funds” to emerge and hire lots of analytical talent)
Aggregated opinion has lower squared error in forecasting, and somewhat more sophisticated algorithms do even better, especially by transforming probabilities away from 0.5
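The last takeaway refers to "extremizing" aggregated forecasts. A minimal sketch of the idea (my own illustration, not code from the article, and the exponent value is an arbitrary assumption for demonstration): average the individual probabilities, then push the average away from 0.5 in odds space, and compare squared error against the outcome.

```python
def aggregate_and_extremize(probs, a=2.0):
    """Average individual probability forecasts, then push the mean
    away from 0.5 by raising the odds to the power a (a > 1).
    a=2.0 is an illustrative choice, not a value from the article."""
    p = sum(probs) / len(probs)   # simple mean of the forecasts
    odds = (p / (1 - p)) ** a     # extremize in odds space
    return odds / (1 + odds)

def squared_error(p, outcome):
    """Squared (Brier-style) error of a probability forecast
    against a 0/1 outcome."""
    return (p - outcome) ** 2

# Hypothetical example: five forecasters lean toward an event that
# then occurs. The extremized aggregate lies closer to 1 than the
# plain mean, so its squared error is lower.
forecasts = [0.6, 0.7, 0.65, 0.55, 0.7]
p_mean = sum(forecasts) / len(forecasts)
p_ext = aggregate_and_extremize(forecasts)
```

A mean of 0.5 is a fixed point of the transform, so extremizing only sharpens aggregates that already lean one way.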
[Edited to add: This may be a stronger point than my original comment recognized. One thing I’d add is that a lot of effective altruism topics are pretty apolitical. The fact that we can get people to think rationally about apolitical topics much more easily, and thereby stress-test our views about these topics much more easily, seems like a significant consideration in favor of avoiding politically-charged topics. I didn’t fully appreciate that before thinking about Carl’s comment.]
I agree that this is a consideration in favor of thinking it isn’t helpful to think carefully about how to set policy to affect long-run outcomes. One qualifier is that when I said people’s “interests,” I didn’t mean to limit my claim to their “selfish interests” or their concern about what happens right now. I meant to focus on the desires that they currently have, including their present concerns about the welfare of others and the future of humanity.
Another issue is that we have strong evidence that certain types of careful thinking about how to do good does result in conclusions that can command wide support in the form of GiveWell’s success so far. I see GiveWell’s work as in many ways continuous with trying to find out how to optimize for long-run impact.
I think there is more uncertainty about the value of trying to move into speculative considerations about very long-run impacts. This framework may ultimately suggest that you can’t arrive at conclusions that will command the support of a broad coalition of impressive people. This would be an update against the value of looking into speculative issues. I hope to find some areas where credible work can be done, and I’m optimistic that people who do care about long-run outcomes will help stress-test my conclusions. I also hope to articulate more of my thinking about why it is potentially helpful to try to think about speculative long-run considerations.
It’s nice to have this down in linkable format.