Is the reason that you expect AI developer margins to be reasonable that you expect the small number of AI developers to still compete with each other on price and thereby erode each other’s margins?
Yes.
What if they were to form a cartel/monopoly? Being the only source of cheaper and/or smarter than human labor would be extremely profitable, right?
A monopoly on computers or electricity could also take big profits in this scenario. I think the big things are always that it’s illegal and that high prices drive new entrants.
But AI developers could implicitly or explicitly collude in ways besides price, such as indoctrinating their AIs with the same ideology, which governments do not forbid and may even encourage.
I think this would also be illegal if justified by the AI companies' preferences rather than customer preferences, and it would at least make them a salient political target for people who disagree. It might be OK if they were competing to attract employees/investors/institutional customers. In practice I think it would most likely happen as a move by the dominant faction in a political/cultural conflict in the broader society, and this would be a consideration raising the importance of AI researchers, and potentially capitalists, in that conflict.
I agree that if you are someone who stands to lose from that conflict, then you may be annoyed by some near-term applications of alignment. But I still think (i) alignment is distinct from those applications even if it facilitates them, and (ii) if you don't like how AI empowers your political opponents, then I strongly think you should push back on AI development itself rather than hoping that no one can control AI.