I’m not sure I understand your question. By AI companies “making copying hard enough”, I assume you mean making AIs not leak secrets from their prompt/training (or other conditioning). It seems true to me that this will raise the relevance of AI in society. Whether this increase is hard-alignment-problem-complete seems to depend on other background assumptions not discussed here.
I just meant reducing, in their AIs, the property that you postulate is the primary advantage of AI over human labor.
That’s not really possible, though as a crude approximation you could keep the weights secret and refuse to run the model beyond a certain scale. Doing so would just make the AI less useful, and the companies that don’t do it would win in the marketplace.