Whatever happened to holding software companies to the standard of not rolling vulnerable user data into their widely distributed business logic?
Say AI companies could effectively make copying hard enough to provide security benefits to scrape-ees [if I’m reading you right, that’s approximately who you’re trying to protect]. Say also that this “easy-to-copy” property of AIs is “the fundamental” thing expected to increase the demand for AI labor relative to human labor... Hard-alignment-problem-complete problem specification, no?
I’m not sure I understand your question. By AI companies “making copying hard enough”, I assume you mean making AIs not leak secrets from their prompt/training (or other conditioning). It seems true to me that this will raise the relevance of AI in society. Whether this increase is hard-alignment-problem-complete seems to depend on other background assumptions not discussed here.
I just meant reducing, in their AIs, the property which you postulate is the primary advantage of AI over human labor.
That’s not really possible, though as a superficial approximation you could keep the weights secret and refuse to run the model beyond a certain scale. Doing so, however, would just make the AI less useful, and the people who don’t do that would win in the marketplace.