I think crypto is an interesting example, since LWers are heavily overrepresented among the people who made money off Bitcoin. But this could easily be because tech-interested people in general invested in Bitcoin, regardless of their reasoning, and it just happened to turn out well.
Reliably predicting black swans and booms would mean that LW outperforms the market, although it might take a long time and many events to demonstrate it.
While I believe the probability of LW outperforming the stock market long term is significantly less than 50%, I think the hypothesis is worth testing, since the potential gain is significant (even if I personally might not benefit from it). Also, I find the topic interesting and will enjoy it regardless of the outcome.
I agree it is not clear whether open-sourcing the models is net positive or negative. Here are the main arguments for and against that I could think of:
Pros of open-sourcing models
- Gives AI alignment researchers access to smarter models to experiment on
- Decreases income for leading AI labs such as OpenAI and Google, since people can use open-source models instead.
Cons of open-sourcing models
- Capability researchers can run better experiments on how to improve capabilities.
- The open-source community could develop code to train and run inference on models faster, indirectly advancing capability development.
- Better open source models could lead to more AI startups succeeding, which might lead to more AI research funding. This seems like a stretch to me.
- If Meta shares any meaningful improvements on how to train models, that of course directly contributes to other labs' capabilities, but Llama doesn't seem that innovative to me. I'm happy to be corrected if I am wrong on this point.