I think it is deeply unethical that the large AI labs are not proactively decentralizing ownership while their success is still uncertain. OpenAI and Anthropic should both be public companies, so ordinary people can own a stake in the future they are building and not be dependent on charity forever if that future arrives. They choose not to do this.
Not that it would solve much. Maybe it gives a few US citizens the chance to own a tiny amount of OpenAI stock, but what chance, exactly, does anyone from a third-world country have? Generally speaking, the trajectory of "someone will rule the world as its AI master, so it might as well be us" leads to nothing but cyberpunk dystopias at best.
I think that public ownership is helpful but insufficient to make building strong AGI ethical. Still, at the margin, I expect better outcomes with more decentralized power and ownership. As you disperse power, it is more likely to be wielded in ways representative of broader human values—but I still prefer not building it at all.