I think the problem here is the assumption that there is only one AI company. If there are multiple AI companies and they don’t form a trust, then they need to bid against each other to acquire safety researchers, right? This is like in economics where, if you are the only person selling bread, you can sell it for ε less than its value to any given customer, but if there are multiple people selling bread you need to sell it for ε less than your competitors’ prices.
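To make the bread analogy concrete, here is a minimal Python sketch of the two pricing regimes. Everything in it (the function names, the EPS constant, the numbers) is my own illustration, not something from the thread:

```python
EPS = 0.01  # the "epsilon" from the analogy: the smallest meaningful undercut

def monopoly_price(buyer_value: float) -> float:
    # Sole seller: price just below the buyer's reservation value,
    # capturing essentially all of the surplus.
    return buyer_value - EPS

def competitive_price(buyer_value: float, best_rival_price: float) -> float:
    # With rivals, you only need to undercut the cheapest competitor;
    # the buyer's valuation merely caps what you can charge.
    return min(buyer_value, best_rival_price) - EPS

print(monopoly_price(10.0))          # 9.99: the monopolist keeps the surplus
print(competitive_price(10.0, 5.0))  # 4.99: competition hands it to the buyer
```

The same logic runs in reverse on the labour side: AI labs bidding against each other for the same researcher push the wage up toward the researcher's value to them, rather than down toward the researcher's reservation wage.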
This is not a problem; this is completely within the framework!
Even with a single AI accelerationist corporation and a single employee, you may reason about bargaining power (Wikipedia).
What is true regardless of the number of accelerationist corporations is that the influx of safety-branded researchers willing to work for them will drive down the safety premium.
For instance, 80,000 Hours tried to increase that supply, and is proud to have done so. As they say, their advisees now work at DeepMind and Anthropic!
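As a hedged toy of that erosion (the linear functional form and all the numbers below are my own assumptions, purely for illustration):

```python
def safety_premium(openings: int, candidates: int, base_wage: float) -> float:
    # Crude linear model: the premium tracks how scarce safety-branded
    # candidates are relative to openings, hitting zero at full supply.
    scarcity = max(openings - candidates, 0) / openings
    return base_wage * scarcity

for candidates in (10, 50, 100, 200):
    print(candidates, safety_premium(openings=100, candidates=candidates,
                                     base_wage=100_000))
# 10 -> 90000.0, 50 -> 50000.0, 100 -> 0.0, 200 -> 0.0
```

Under any such model, every marginal advisee funnelled into the applicant pool shaves something off what the incumbents can negotiate.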
This is a much more interesting point, IMHO. If the positive externalities attach to the labour itself, then you want to oversupply Utilitarian X-Riskers as much as possible. If the positive externalities attach mostly to the compensation, then you want a closed-shop union.
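A self-contained sketch of why the location of the externality flips the recommendation, reusing the same made-up linear premium (illustrative numbers only, not a claim about real labour markets):

```python
OPENINGS = 100   # safety roles on offer (made-up)
BASE_WAGE = 1.0  # normalised wage

def premium(candidates: int) -> float:
    # Same crude linear erosion as in the sketch above.
    return max(OPENINGS - candidates, 0) / OPENINGS

def impact_via_labour(candidates: int) -> float:
    # Benefit attaches to the work itself: every filled seat counts,
    # so more candidates is (weakly) better even as wages fall.
    return min(candidates, OPENINGS)

def impact_via_compensation(candidates: int) -> float:
    # Benefit flows through the pay premium (e.g. donated earnings),
    # so restricting supply to keep the premium high can dominate.
    return min(candidates, OPENINGS) * BASE_WAGE * premium(candidates)

for c in (25, 50, 100, 200):
    print(c, impact_via_labour(c), round(impact_via_compensation(c), 2))
# Labour-side impact rises with supply and then saturates; compensation-side
# impact peaks at restricted supply (here at candidates = OPENINGS / 2).
```

In this toy, the oversupply strategy and the closed-shop strategy come out optimal in the two regimes respectively, which is exactly the fork described above.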
Of course, this is predicated on utilitarians actually making decisions via individual-level causal attribution of utility.