I think the answer is very clearly no. Onus is a tech company with obligations to shareholders, and those shareholders were not elected to represent humanity. Regardless of what rhetoric Onus’s leaders espouse, we cannot trust this body to make decisions that could alter the long-term trajectory of humanity. We should treat Onus’s control as a transition period on the way to a regime in which a more legitimate body controls the AGI.
I think we should treat companies and states as transitional, and human control itself as transitional too. The long-term future should be a friendly superintelligence. Suppose Onus makes the AI. They keep it and keep working toward superintelligence. A foomed superintelligence utterly transforms the world; companies and money become irrelevant.
If the AI is in the hands of governments, we get awkward, worst-of-both-worlds kludges dreamed up through political reasoning by non-experts.
If the AI is in the hands of a team of smart, ethical people who pragmatically have a lot of slack, then good things can happen.
This could be a company. Any company with AGI could easily make a few billion on the side to appease shareholders; it isn’t as if the human programmers are compelled to turn the universe into banknotes. The group could be a charity. It could be government-funded. It could be all the experts burning their personal savings to get together and work on the AI. What matters is that the decisions are technical choices, not political compromises.