Example in California:
I OBJECT to the use of my personal information, including my information on Facebook, to train, fine-tune, or otherwise improve AI.
I assert that my information on Facebook includes sensitive personal information as defined by the California Consumer Privacy Act: I have had discussions about my religious or philosophical beliefs on Facebook.
I therefore exercise my right to limit the disclosure of my sensitive personal information.
Despite any precautions by Meta, adversaries may later discover “jailbreaks” or other adversarial prompts that reveal my sensitive personal information. Therefore, I request that Meta not use my personal information to train, fine-tune, or otherwise improve AI.
I expect this objection request to be handled with due care, confidentiality, and information security.
It’s not unconditionally no-blame. Instead, Just culture would distinguish (my paraphrase):
Blameless behaviour:
Human error: To Err is Human. Human error is inevitable[1] and shall not be punished, not even by singling out individuals for additional training.
At-risk behaviour: People can become complacent with experience and take shortcuts, most often under time pressure or to work around dysfunctional systems. At-risk behaviour shall not be punished, but people should share their lessons and may need to receive additional training.
Blameworthy behaviour:
Reckless behaviour: Someone knows the risk to be unjustifiably high and still acts in that unsafe, norm-deviant manner. This is worthy of discipline, and possibly legal action; it is similar to recklessness in law.
Note that if the same behaviour were the norm, just culture would no longer consider that person to have acted recklessly! Instead, the norm, a cultural factor in safety, was inadequate. (The legal system may disagree and assign liability nonetheless.)
Malicious behaviour: Similar to the purposeful level of criminal intent. This is worthy of a criminal investigation.
Instead, the focus is on designing the whole system (mechanical and electronic; human and cultural):
to be robust to component failures, not just failures of mechanical or electronic components but also of human components. Usually this means redundancy and error-checking (see the sketch after this list), but robustness can also be obtained by simplifying the system and reducing dependencies;
so that human errors are less likely. For example, an exposed “in case of fire, break glass” fire alarm call point may give frequent false alarms from people accidentally bumping into it, so you add a simple hinged cover that stops these accidental alarms.
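To make “redundancy and error-checking” concrete, here is a minimal, hypothetical Python sketch (my own illustration, not taken from any of the sources below): independent double entry is redundancy across two human inputs, and a Luhn check digit is a simple error check that catches any single mistyped digit in an identifier.

```python
# Hypothetical illustration of two ways a workflow can tolerate human error.

def independent_double_entry(first: str, second: str) -> str:
    """Redundancy: accept a critical value (e.g. a dose) only if two
    independently typed entries agree."""
    if first.strip() != second.strip():
        raise ValueError("Entries disagree; please re-enter the value.")
    return first.strip()

def has_valid_check_digit(identifier: str) -> bool:
    """Error-checking: Luhn check digit, used by many ID and card schemes;
    it detects any single mistyped digit."""
    digits = [int(ch) for ch in identifier if ch.isdigit()]
    total = 0
    for i, d in enumerate(reversed(digits)):
        if i % 2 == 1:          # double every second digit from the right
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0

if __name__ == "__main__":
    dose = independent_double_entry("12.5", "12.5")    # passes: both entries agree
    print(dose, has_valid_check_digit("79927398713"))  # True: classic valid Luhn number
```

Neither mechanism blames the person who mistypes; both are properties of the system around them.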
From a healthcare perspective, ISMP has a good article, and here is a good table summary.
From an aviation perspective, Eurocontrol has a model policy, which aims to facilitate accident investigations by making evidence collected by accident investigators inadmissible in criminal courts (without preventing prosecutors from independently collecting evidence).
And LessWrong also has a closely related article by Raemon!
Human decision-making, too, has a mean time between failures.
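As a purely illustrative back-of-the-envelope calculation (the numbers are assumptions for the sake of the example, not measurements): suppose an operator errs on about one in a thousand routine decisions and makes about a hundred such decisions per shift. Then

$$ \text{MTBF} = \frac{1}{\lambda} = \frac{1}{10^{-3}\ \text{errors per decision}} = 1000\ \text{decisions} \approx 10\ \text{shifts}, $$

so even a careful, well-trained person should be expected to err every couple of working weeks; the question is whether the system catches it.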