I wonder to what extent poor choices like Anthropic’s are a result of the uncertain liability landscape surrounding models. With cases like the Character.AI lawsuit still in play, and the exact rules unsettled, any large corporate entity with a consumer-facing product is going to take a “better safe than sorry” attitude.

We need some sort of uniform liability code for released models.