Couldn’t we just… set up a financial agreement where the first N employees don’t own stock and have a set salary?
My main concern is that they’ll have enough power to be functionally wealthy all the same, or be able to get it via other means (e.g. Altman with his side hardware investment / company).
> Couldn’t we just… set up a financial agreement where the first N employees don’t own stock and have a set salary?
Maybe, could be nice… But since the first N employees usually get to sign off on major decisions, why would they go along with such an agreement? Or are you suggesting governments should convene to force this sort of arrangement on them?
> My main concern is that they’ll have enough power to be functionally wealthy all the same, or be able to get it via other means (e.g. Altman with his side hardware investment / company).
I’m not sure I understand this part, actually; could you elaborate? Is this your concern with the OGI model, or with your salary-only idea for the first N employees?
> But since the first N employees usually get to sign off on major decisions, why would they go along with such an agreement?
I’m imagining a world where a group of people step forward to take a lot of responsibility for navigating humanity through this treacherous transition, and do not want themselves to be corrupted by financial incentives (and wish to accurately signal this to the external world). I’ll point out that this is not unheard of: Altman literally took no equity in OpenAI (though IMO he was eventually corrupted by the power nonetheless).
To help with the incentives and coordination, instead of having the first frontier AI megaowner step forward and unconditionally relinquish some of their power, they could sign a conditional contract to do so, one that would only activate if other megaowners did the same.
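To make that coordination mechanism concrete, here is a minimal sketch (in Python, with hypothetical owner names and a hypothetical `ConditionalPledge` class) of an assurance-contract-style conditional pledge: no individual commitment binds until every listed megaowner has signed, so nobody relinquishes power unilaterally.

```python
# A minimal sketch of the conditional contract described above, in the style
# of an assurance contract: each pledge to relinquish power activates only
# once every other listed megaowner has signed the same pledge.
# The owner names and ConditionalPledge are illustrative assumptions,
# not anyone's actual proposal.
from dataclasses import dataclass, field


@dataclass
class ConditionalPledge:
    """A pledge that binds only when all required parties have signed."""
    required_signatories: frozenset  # e.g. the frontier AI megaowners
    signed: set = field(default_factory=set)

    def sign(self, party: str) -> None:
        if party not in self.required_signatories:
            raise ValueError(f"{party} is not a required signatory")
        self.signed.add(party)

    @property
    def active(self) -> bool:
        # No one is bound until everyone is bound: the pledge activates
        # only when the set of signatures matches the required set.
        return self.signed == self.required_signatories


# Usage: the pledge stays inert until the last megaowner signs.
pledge = ConditionalPledge(frozenset({"OwnerA", "OwnerB", "OwnerC"}))
pledge.sign("OwnerA")
pledge.sign("OwnerB")
assert not pledge.active  # signing early costs nothing yet
pledge.sign("OwnerC")
assert pledge.active      # now all three are bound simultaneously
```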
> I’ll point out that this is not unheard of: Altman literally took no equity in OpenAI (though IMO he was eventually corrupted by the power nonetheless).
He may have been corrupted by power later. Alternatively, he may have been playing the long game, knowing that he would have that power eventually even if he took no equity.
> I’m not sure I understand this part, actually; could you elaborate? Is this your concern with the OGI model, or with your salary-only idea for the first N employees?
This is a concern I am raising with my own idea.
> To help with the incentives and coordination, instead of having the first frontier AI megaowner step forward and unconditionally relinquish some of their power, they could sign a conditional contract to do so, one that would only activate if other megaowners did the same.
Ok yes, that would be great.