It’s interesting that the term ‘abused’ was used with respect to AI. It makes me wonder whether the bill has misalignment risks in mind at all, or only misuse risks.
I would be very surprised if they had anything like the Yudkowskian paradigm in mind when they were thinking of this.
Why? ~All the other gov stuff I’m aware of that talks about “GCR” or that talks about AI in the context of “high-consequence [catastrophic] events, regardless of the low probability” cites Bostrom, MIRI, Ord, or Stuart Russell.
(But I agree they’re likely to have views closer to Superintelligence, Human Compatible, or The Precipice, rather than AGI Ruin. I just think of those views as pretty close to the Yudkowskian paradigm—eg, Bostrom is big on paperclippers and foom.)
Bostrom and MIRI being cited is pretty cool. I would have thought they’d be outside the Overton window.
EDIT: Do you know when the earliest citations occurred?
E.g., the 2016 White House report Preparing for the Future of Artificial Intelligence, and Wired that same year.