Regardless, as Brian Tomasik points out, helping people be more rational contributes to improving the world, and thus the ultimate goal of the EA movement.
I agree that increasing rationality would improve the world, but would it improve the world more than other efforts? I believe you will face stiff competition from MIRI for effective altruists’ charitable donations. From the Brian Tomasik essay you referenced…
…because AI is likely to control the future of Earth’s light cone absent a catastrophe before then, ultimately all other applications matter through their influence on AI.
Separately…
Is encouraging philosophical reflection in general plausibly competitive with more direct work to explore the philosophical consequences of AI? My guess is that direct work like MIRI’s is more important per dollar.
Why should I support Intentional Insights instead of MIRI? I’m sure I won’t be the only potential donor to ask this question, so I recommend that you craft a solid response.
Excellent, thank you for the feedback on what to craft! I will think about this further, and appreciate the ideas!