I am just speaking from general models and I have no specific model for FAI, so I was/am probably wrong.
I still don’t understand the bottleneck. You say there aren’t promising projects to fund, but isn’t that just another way of saying that the problem is hard, most research attempts will be futile, and thus that to accelerate progress, unpromising projects need to be funded? I.e., what is the bottleneck if it’s not funding? “Brilliant ideas” are not under our direct control, so they cannot be part of our operating bottleneck.
The solution space is really high-dimensional, so funding random points in it has basically no chance of getting you much closer to a working solution. There aren’t even enough people who understand the AI Alignment problem to fund all of them, and funding people can frequently have downsides. Two common downsides of funding people:
They affect the social context in which work happens; if they don’t do good work, they scare away other contributors or worsen the methodology of your field.
If you give away money like candy, you attract lots of people who will pretend to do the work you want done and just take your money. There are definitely enough people who just want your money to exhaust $10B in financial resources (or really any reasonable amount). In a lemons market, you need to maintain some level of vigilance, otherwise you can easily lose all of your resources at almost any level of wealth.
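The high-dimensionality point can be made concrete with a toy Monte Carlo sketch (the dimensions, radius, and sample counts are all hypothetical illustration, not a model of alignment research): the chance that a uniformly random point in the unit hypercube lands within a fixed distance of a target region shrinks roughly exponentially as the dimension grows, so "fund random points" stops working very quickly.

```python
import math
import random

def hit_rate(dim, radius=0.25, trials=20_000, seed=0):
    """Fraction of uniformly random points in the unit hypercube that land
    within `radius` of the cube's center (a stand-in for 'a useful project')."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        # Sample a random point in [0, 1]^dim and measure distance to center.
        dist_sq = sum((rng.random() - 0.5) ** 2 for _ in range(dim))
        if math.sqrt(dist_sq) <= radius:
            hits += 1
    return hits / trials

# In 1 dimension about half of random samples are "hits"; by 10-20
# dimensions essentially none are.
for d in (1, 2, 5, 10, 20):
    print(f"dim={d:2d}  hit rate ≈ {hit_rate(d):.5f}")
```

The same budget that saturates a low-dimensional search space buys almost nothing once the space has many independent ways to be wrong, which is the sense in which random funding "has basically no chance" above.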
One good example of what funding can do is nanotech: https://www.lesswrong.com/posts/Ck5cgNS2Eozc8mBeJ/a-review-of-where-is-my-flying-car-by-j-storrs-hall describes how strong funding killed off the nanotech industry by getting people to compete for that funding.