Thanks for your answer!
This is about… I wouldn’t say “beliefs” (I will add plenty of caveats like “we are not sure”, “some smart people disagree with this”, “here is an argument against this view”, etc.; mental note: do it MORE, thank you for the observation), but rather about “motivation” and “discourse”. Not about technical skills, that’s true.
I have a feeling that there is an attractor: “I am an AI researcher and ML is AWESOME, and I will try to make it even more AWESOME; yes, there are these safety folks, and I know some of their memes, and maybe they have some legitimate concerns, but we will solve that later and everything will be OK.” I think that when someone learns ML-related technical skills before basic AI safety concepts and discourse, it is very easy for them to fall into this attractor, and from that point it is pretty hard to get out. So I want to create something like a vaccine against it.
Technical skills are necessary, but for most of them there are already good courses, textbooks, and so on. The skills I have seen no textbooks for are “understanding AI-safety-speak” and “seeing why alignment-related problem X is hard and why obvious solutions may not work”. Because of the attractor mentioned above, I think it is better to teach these skills before the technical ones.
I assume that the average 15–16-year-olds in my target audience can program at least a little (in Russia, basic programming is, in theory, part of the mandatory school curriculum; I don’t know about the US), but don’t know calculus (though I think a smart school student can easily grasp the concept of a derivative without a rigorous mathematical definition).