I’m donating to CFAR but not SI because CFAR would help in a wider variety of scenarios.
If AGI will be developed by a single person or a very small team, it seems likely that it won’t be done by someone we recognize in advance as likely to do it (for example, think of the inventions of the airplane or the web). CFAR is oriented toward influencing a large enough number of smart people that it is more likely to reach such a developer.
Single-person AGI development seems like a low-probability scenario to me, but the more people it takes to create an AGI, the less plausible it seems that intelligence will be intelligible enough to go foom. So I imagine a relatively high fraction of the scenarios in which UFAI takes over the world as coming from very small development teams.
Plus it’s quite possible that we’re all asking the wrong questions about existential risks. CFAR seems more likely than SI to help in those scenarios.
I’ve done well at stock market speculation over the past 12 years, but it took me 20 years of practice to become good at it. If I’d had LW available when I started, I expect I could have become good in around 5 years.
One of the difficulties was that with most strategies, 1-year returns are sufficiently dominated by luck that the feedback is nearly useless.
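To make that quantitative, here’s a rough sketch (the 3% annual edge and 15% volatility are illustrative assumptions, not figures from my own record): a real edge accumulates linearly with time while luck accumulates only with the square root of time, so short track records say very little.

```python
import math

# Hypothetical numbers: a strategy with a true 3% annual excess return
# (alpha) and 15% annualized volatility of excess returns.
alpha, vol = 0.03, 0.15

# Over T years, the mean excess return grows like alpha*T while its
# standard deviation grows like vol*sqrt(T), so the signal-to-noise
# ratio is (alpha / vol) * sqrt(T).
for T in (1, 5, 20):
    snr = (alpha / vol) * math.sqrt(T)
    # Probability the observed average excess return is positive,
    # assuming normally distributed yearly excess returns.
    p_positive = 0.5 * (1 + math.erf(snr / math.sqrt(2)))
    print(f"{T:2d} years: SNR = {snr:.2f}, "
          f"P(observed edge looks positive) = {p_positive:.0%}")
```

Under these assumptions, after 1 year a genuinely skilled speculator shows a positive edge only about 58% of the time, barely better than a coin flip; even 5 years only gets that to roughly two thirds. That’s why 1-year feedback is nearly useless.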
The contest might be valuable if it lasted close to 5 years and wasn’t a winner-take-all event. Would that attract any participants?