Non-copyability as a security feature
It seems hard to imagine that there’s anything humans can do that AIs (+robots) won’t eventually also be able to do. And AIs are cheaply copyable, which lets you save on training costs and parallelize the work much more. That’s the fundamental argument for why you’d expect AI to displace a lot of human labor.
Both AIs and humans are vulnerable to being tricked into sharing secrets, but so far AIs are more vulnerable, and no algorithms on the horizon seem likely to change this. Furthermore, if one exploits the copyability of AIs to run them at bigger scale, that makes it possible for attackers to scale their exploits correspondingly.
This becomes a problem when one wants the AI to be able to learn from experience. You can’t condition an AI on experience from one customer and then use that AI on tasks from another customer, as then you have a high risk of leaking information. By contrast, humans automatically learn from experience, with acceptable security profiles.
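The leakage concern can be made concrete with a toy sketch (the class and mechanism here are hypothetical stand-ins; real models memorize statistically rather than storing verbatim strings, but the failure mode is analogous):

```python
# Toy sketch (illustrative only): a "model" that learns from experience
# by memorizing text. If one instance is conditioned on customer A's
# interactions and then serves customer B, A's secrets can leak.

class MemorizingModel:
    """Hypothetical stand-in for an AI that conditions on past sessions."""

    def __init__(self):
        self.memory = []  # everything it has "learned from experience"

    def learn(self, text):
        self.memory.append(text)

    def complete(self, prompt):
        # Return any memorized text containing the prompt -- a crude
        # analogue of a model regurgitating its conditioning data.
        for text in self.memory:
            if prompt in text:
                return text
        return "(no completion)"


model = MemorizingModel()

# Customer A's session: the model "learns from experience".
model.learn("Customer A's API key is sk-alpha-12345")

# Customer B probes the same shared instance.
print(model.complete("API key is"))  # A's secret is exposed to B
```

This is why per-customer isolation (separate conditioning per customer) is the natural mitigation, and also why it cuts against the scale economies of copying.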
Whatever happened to holding software companies to the standard of not rolling vulnerable user data into their widely distributed business logic? Say AI companies could effectively make copying hard enough to provide security benefits to scrape-ees [if I’m reading you right, that’s approximately who you’re trying to protect]. Say also that this “easy-to-copy” property of AIs is “the fundamental” thing expected to increase the demand for AI labor relative to human labor… Hard-alignment-problem-complete problem specification, no?
I’m not sure I understand your question. By AI companies “making copying hard enough”, I assume you mean making AIs not leak secrets from their prompt/training (or other conditioning). It seems true to me that this will raise the relevance of AI in society. Whether this increase is hard-alignment-problem-complete seems to depend on other background assumptions not discussed here.
I just meant reducing, in their AIs, the property which you postulate is the primary advantage of AI over human labor.
That’s not really possible, though as a superficial approximation you could keep the weights secret and refuse to run the AI beyond a certain scale. If you were to do so, though, it would just make the AI less useful, and therefore the people who don’t do that would win in the marketplace.