What are good reasons not to create a for-profit AI applied deep-learning startup starting with a team who are concerned about AI risk?
Gaining expertise, reputation, and a network is valuable, especially if you're concerned about AI risk. Revenue will be higher in worlds where AI advances more quickly, which is altruistically useful. The funding climate for this kind of company is currently very favourable, both from angels/VCs and from grant funders such as FLI. It would have a chance of growing much faster than MIRI, due to the for-profit company structure, and could be aborted if it were excessively speeding up AI progress or was otherwise net harmful.
If the founders did this, they would have to be careful to retain control, so that shutting down the company remained an option and a decision they had the authority to make. It's easy for investors and VCs to influence or take over the running of a company without extraordinary pushback from founders, especially if the company is making money. "Could be aborted if it was excessively speeding-up AI progress, or was otherwise net harmful" sounds glib to me. Making a concrete plan for how to do that, and under what conditions it would be done, and putting that plan into contracts right from the start, would be important. Otherwise, it wouldn't get done.
Studying risk analysis, failure analysis, and the human factors of how people respond to emergencies would be helpful.
Well, one would decide whether it was worth doing partly on the basis that investors interested in AI risk, including Jaan Tallinn and Elon Musk, were willing to fund it in the early-to-mid stages. Of course, if you're soliciting funds from people who are already interested in AI risk, then you can't claim to be influencing AI investors to become interested in AI risk—you can't have your cake and eat it too.
Why should or shouldn’t this be done?
Do it if you can.