Like, fundamentally the question is something like “how efficient and accurate is the AI research market?”
I would distinguish two factors:
1. How powerful and well-directed is the field’s optimization?
2. How much does the technology inherently lend itself to information asymmetries?
You could turn the “powerful and well-directed” dial up to the maximum allowed by physics, and still not thereby guarantee that information asymmetries are rare, because the way that a society applies maximum optimization pressure to reaching AGI ASAP might route through a lot of individuals and groups going down different rabbit holes. A researcher could be rationally optimistic about her rabbit hole based on specialized knowledge or experience that’s hard to instantly transmit to investors, the field as a whole, etc.