The OpenAI people I’ve talked to say that they’re less open than the name would suggest, and are willing to operate less openly to the extent that doing so makes sense to them. On the other hand, Gym and Universe are in fact quite open, and I think they probably made the world slightly worse by modestly accelerating AI progress. It’s possible this is offset by benefits to OpenAI’s reputation, if they become more willing to spread safety memes as they acquire more mind share.
Your story of OpenAI is incomplete in at least one important respect: Musk was actually an early investor in DeepMind before it was acquired by Google.
Finally, what do people think about the prospects of influencing OpenAI to err more on the side of safety from the inside? It’s possible people like Paul can’t do much about this yet by virtue of not having acquired sufficient influence within the company, and maybe just having more people like Paul working at OpenAI could strengthen that influence enough to matter.
I think our prospects for influence in a good direction are nonzero only if we make it common knowledge that no one credible thinks the original mandate of OpenAI promoted long-run AI safety. Beyond that I don’t know.