There is a hidden legal standard that this law would like to endorse about existing laws, but I am not sure it actually sets it out. It is at least suggesting a position when it talks about things that “would be crimes requiring intent”: there is an argument that LLMs, or any AIs, do not yet have the requisite mental state, since they don’t really have mental states at all. So they can’t be liable for crimes for mens rea reasons, and since you (the human) did not know what the AI would do, you can’t have intent either. This law is trying to argue that that is basically bullshit.
(i) Acts with no meaningful human intervention; and
(ii) Would, if committed by a human, constitute a crime specified in the penal law that requires intent, recklessness, or gross negligence, or the solicitation or aiding and abetting of such a crime.
seems to imply that the lawmakers believe using an AI should not be a way to sever liability, criminal or civil, for an action, regardless of what you intended the AI to do, and that if you are in a position where it would be, the law makes it the providing company’s problem. It tells the prosecutor they should prosecute, because the provider is the one who fucked up.
Basically, it is one law away from doing the “if your dog commits a violation, X happens” move for “if an AI commits a tort, who is liable”, with an answer that is not nobody. There is an argument under current law that there is a level of independence where that answer is nobody, because the AI can’t have the relevant intent. This law tries to say “IT REALLY SHOULD BE THE AI COMPANY” if liability would otherwise sink into an independent agent with no assets of its own.
I think this might be an attempted countermeasure against prompt injection. That is, it wants to mix autoregressive and reconstructed residuals. Otherwise, it might lose its train of thought (end up continuing the article rather than following the prompt).
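A minimal sketch of what that mix could look like, assuming the simplest possible scheme, a convex blend of the two residual streams; the function name, the `alpha` weight, and the blend itself are illustrative assumptions, not anything taken from the model's actual design:

```python
import torch

def blend_residuals(ar_resid: torch.Tensor,
                    recon_resid: torch.Tensor,
                    alpha: float = 0.5) -> torch.Tensor:
    """Hypothetical mix of the two residual streams.

    ar_resid:    residuals predicted autoregressively from prior context
    recon_resid: residuals reconstructed from the current input
    alpha:       weight on the autoregressive stream (assumed, not from any spec)
    """
    return alpha * ar_resid + (1.0 - alpha) * recon_resid
```

Under this (assumed) framing, too little weight on the autoregressive stream means the model only ever reconstructs what is in front of it and happily continues an injected article, while too much means it drifts from the input entirely, which is roughly the trade-off the comment is gesturing at.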