Implicit in my views is that the problem would be mostly resolved if people had aligned AI representatives which helped them wield their (current) power effectively.
Can you make the case for this a bit more? How are AI representatives going to help people prevent themselves from becoming disempowered / economically redundant? (Especially given that you explicitly state you are skeptical of proposals that “generally make humans+AI (rather than just AI) more competitive”.)
Mandatory interoperability for alignment and fine-tuning
Furthermore, I don’t really see how fine-tuning access helps create AI representatives. Models are already trained to be helpful, and most people don’t have personal data useful enough to make a fine-tuned AI work much better for them than simply putting that data into the context of any model.
The hope here would be to get the reductions in concentration of power that come from open source.
The concentration of power from closed-source AI comes from (1) the AI companies’ profits and (2) the AI companies having access to more advanced AI than the public. Open source solves (1), since a freely released model generates little profit for its developer, but fine-tuning access solves neither. (Obviously your “Deploying models more frequently” proposal does help with (2).)
Fine-tuning access could address (1) insofar as fine-tuned model operators capture profit that would otherwise go to the main AI labs, provided access remains broad enough to drive down prices.
Fine-tuning access allows the public to safely access models that would be too dangerous to release as open-source/open-weight.