I think that I will one day lose the ability to change my mind. I will become dull, stubborn and conservative, and keep publishing rephrasings of my same old views, as do many philosophers and academics.
I’m concerned about this too.
~
Now that you have increased your valuation of GiveWell, you should do the same for the Future of Humanity Institute and the Centre for the Study of Existential Risk, right? And if you still do not value FHI and CSER, perhaps you would rather improve the future with interventions like AMF’s that have ‘proven’ near-future benefits.
I think this is less clear. I know a moderate amount about what GiveWell is doing and how they intend to achieve and demonstrate progress in their domain. I don’t think this is currently true for FHI and CSER, but I could be convinced otherwise.
~
But what about scanning for asteroids? This has a pretty straightforward case for its benefits, but is not ‘proven’, and is never going to be supported by an RCT or cohort study.
I agree with the thoughts expressed by Carl Shulman and would have said the same thing. However, I do think it’s possible there are scalable existential risk interventions that are reasonably well understood and on which progress can be confidently made. I just don’t yet know what they are.
For the record, my position is not “I need an RCT or I won’t trust it”.