Nobody involved with OpenAI’s launch can reasonably have been said to have done anything else of relative importance in their lives. The net impact of their lives is their contribution to the huge negative impact of OpenAI’s launch, plus a rounding error.
Using this statement to describe Elon Musk and Sam Altman seems to imply that founding a single AI company is much more important than privatizing spaceflight, inventing practical self-driving cars and leading Y-Combinator.
From the little I know about how Elon Musk and Sam Altman see the world, both of them would name AI as the most important near-term issue for humanity, and they started OpenAI in order to do something about it. The question isn’t whether OpenAI is important. It’s whether OpenAI has had a positive or negative effect.
OpenAI is the only organization I know of which is explicitly dedicated to AI safety while also pushing the technical field of AI forward. This seems like a good thing to me. Pushing the technical field of AI forward is how you provide an empirical test of whether you know what you’re talking about. If you do AI safety work without the technical advancement, you can get lost in an ivory tower.
I’m curious about this whole “nascent spirit of cooperation” thing. We’re a species that can kinda sorta cooperate on nuclear Armageddon, carbon emissions and vaccines. Cooperation on AI seems like a much harder problem because the capital expenditures are so low, the strategic advantage is so high and the technology advances so fast.
Using this statement to describe Elon Musk and Sam Altman seems to imply that founding a single AI company is much more important than privatizing spaceflight, inventing practical self-driving cars and leading Y-Combinator.
From Eliezer’s perspective that’s the case. Privatized spaceflight or self-driving cars won’t significantly change the likelihood that humanity survives, and Eliezer sees the amount of value that could be created if humanity survives and there’s FAI as big enough that those other things are relatively unimportant.