Hey, thanks for your comment. I do think this is right. The style of writing on my blog is a lot more bullish and does not signal uncertainty. I appreciate this is not in the ethos of LW, so I am going to change how/what I crosspost here. I prefer being bullish because it seems to get a lot more feedback and good questions such as yours.
As for your questions: I intend to write a second part addressing them. It is much easier for me to say something is needed than to actually elucidate what it looks like, so I expect to do more work on this and return with a more fleshed-out theory. In the meantime you may enjoy reading Peter's paper on supervision: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=5122871
Gauraventh
My kids won’t be workers
Legal Supervision of Frontier AI Labs is the answer.
Gauraventh’s Shortform
Wow, my intuition was that it would be really hard to get mentors on board with supervising scholars, given the time constraints most senior researchers have, so seeing 87 mentors apply is wild!
The bill has passed the Appropriations Committee and will now move on to the Assembly floor. Some changes were made to the bill. From the press release:
Removing perjury – Replace criminal penalties for perjury with civil penalties. There are now no criminal penalties in the bill. Opponents had misrepresented this provision, and a civil penalty serves well as a deterrent against lying to the government.
Eliminating the FMD – Remove the proposed new state regulatory body (formerly the Frontier Model Division, or FMD). SB 1047’s enforcement was always done through the AG’s office, and this amendment streamlines the regulatory structure without significantly impacting the ability to hold bad actors accountable. Some of the FMD’s functions have been moved to the existing Government Operations Agency.
Adjusting legal standards – The legal standard under which developers must attest they have fulfilled their commitments under the bill has changed from a "reasonable assurance" standard to a "reasonable care" standard, which is defined under centuries of common law as the care a reasonable person would have taken. We lay out a few elements of reasonable care in AI development, including whether developers consulted NIST standards in establishing their safety plans, and how their safety plan compares to those of other companies in the industry.
New threshold to protect startups' ability to fine-tune open-sourced models – Established a threshold to determine which fine-tuned models are covered under SB 1047. Only models that were fine-tuned at a cost of at least $10 million are now covered. If a model is fine-tuned at a cost of less than $10 million, the model is not covered and the developer doing the fine-tuning has no obligations under the bill. The overwhelming majority of developers fine-tuning open-sourced models will not be covered and therefore will have no obligations under the bill.
Narrowing, but not eliminating, pre-harm enforcement – Cutting the AG’s ability to seek civil penalties unless a harm has occurred or there is an imminent threat to public safety.
CultFrisbee
Fixed!
From Meta: https://www.meta.com/superintelligence/
Personal Superintelligence