Is this due to it basically being a Pareto frontier of political capital needed vs probability of causing doom, or some other reason?
I think it’s just a matter of what’s more technologically achievable. Building LLMs turned out to be much easier than understanding neuroscience anywhere near the level needed to achieve 1 or 2. And both of those would also require huge political capital, since they’d need (likely dangerous) human experimentation that would currently be considered unacceptable.