I recently gave a workshop on AI control, for which I created an exercise set.
The exercise set can be found here: https://drive.google.com/file/d/1hmwnQ4qQiC5j19yYJ2wbeEjcHO2g4z-G/view?usp=sharing
The PDF is self-contained, but three additional points:
I assumed no familiarity with AI control from the audience. Accordingly, the target audience for the exercises is newcomers, and the exercises cover the basics.
If you want to get into AI control and like exercises, consider doing these.
Conversely, if you are already pretty familiar with control, I don’t expect you’ll get much out of these exercises. (A good fraction of the problems are about re-deriving things that already appear in the AI control literature, so if you already know those, it’s pretty pointless.)
I felt like some of the exercises weren’t that good, and I’m not satisfied with my answers to some of them; I spent only limited time on this. I thought it was worth sharing the set anyway.
(I compensated by highlighting problems that were relatively good, and by flagging the answers I thought were weak; the rest is on the reader.)
I largely focused on monitoring schemes, but don’t interpret this as meaning there’s nothing else to AI control.
You can send feedback by messaging me or anonymously here.
How well did this workshop/exercise set go?
I think it was pretty good at what it set out to do, namely laying out the basics of control and getting people into the AI control state of mind.
I collected feedback on which exercises attendees most liked. All six who gave feedback mentioned the last problem (“incriminating evidence”, i.e. what to do if you are an AI company that catches your AIs red-handed). I think they are right; I’d include more high-level planning (and less detail on monitoring schemes) if I were to re-run this.
Attendees wanted to have group discussions, and those took a large fraction of the time. I should have taken that into account in advance: some discussion is valuable, but I think the marginal group discussion time wasn’t, and I should have pushed for less of it when organizing.
Attendees generally found the baseline answers (solutions) helpful, I think.
A couple of people left early. I figure it was due to a combination of 1) the exercises being pretty cognitively demanding, 2) weak motivation (these people were not full-time professionals), and 3) the schedule and practicalities being a bit chaotic.
I was expecting some math. Maybe something about the expected amount of work you can get out of an AI before it coups you, if you assume the number of actions required to coup is n, the trusted monitor has false positive rate p, etc?
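For illustration, here is one minimal toy model along those lines; the setup and parameters are my own assumptions for the sketch, not something from the exercise set. Suppose that on each task the AI takes a coup step with some small probability, that the trusted monitor independently catches each coup step with some probability, and that the coup succeeds once n coup steps slip past the monitor. A Monte Carlo sketch:

```python
import random

def simulate(n_coup_actions=5, catch_prob=0.8, attack_rate=0.01,
             total_tasks=10_000, seed=0):
    # Toy model (assumed, not from the exercise set):
    # - On each task the AI takes a coup step with probability attack_rate,
    #   otherwise it does useful work.
    # - The trusted monitor catches each coup step independently with
    #   probability catch_prob; being caught ends the episode.
    # - The coup succeeds once n_coup_actions coup steps slip past the monitor.
    rng = random.Random(seed)
    useful_work = 0
    coup_progress = 0
    for _ in range(total_tasks):
        if rng.random() < attack_rate:
            if rng.random() < catch_prob:
                return useful_work, "caught"
            coup_progress += 1
            if coup_progress >= n_coup_actions:
                return useful_work, "coup"
        else:
            useful_work += 1
    return useful_work, "finished safely"

# Estimate the expected useful work and the outcome distribution.
runs = [simulate(seed=s) for s in range(2_000)]
avg_work = sum(work for work, _ in runs) / len(runs)
counts = {}
for _, outcome in runs:
    counts[outcome] = counts.get(outcome, 0) + 1
print(f"average useful work: {avg_work:.0f}")
print(f"outcomes: {counts}")
```

Note that the false positive rate p from the question would enter through the audit budget (how many flagged honest actions you can afford to have humans review); this sketch ignores that and just treats "caught" as a single probability per coup step.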