I teach freshman Rhetoric & Writing at uni. We focus on persuasion. May I use this essay as an assigned reading? It works well because it articulates a fine-grained persuasive strategy in a context that the students are probably going to care about.
I am overhauling my whole curriculum over this summer to make it immediately relevant to students’ understanding and navigation of the world they will be graduating into in four years.
Thanks for a great article. I am so frustrated and dumbfounded by our (the U.S.A.’s) lack of federal response.
You said to another Replier that you were looking into some work with lawmakers in the U.S. The sooner the better!
If I can take issue with your approach (and I’m sure you are far more tuned into the situation than I am, so please tell me what I’m not seeing; I’ve only become aware of things in the past 5 days), I wonder why you didn’t mention China in the article. It seems to me that policymakers’ attention might be piqued by recurring to a familiar threat. And they will have to factor China into their decision-making. Do you agree with folks who see the U.S. confronted with a choice between slowing things down for AI Safety and speeding things up to outpace China? I’m assuming the U.S. will nationalize the efforts at some point to provide security and CoC.
Thank you for your kind words! Of course you can use this essay.
China does come up in our conversations. I didn’t mention it here because the aim of this post is to reflect on what we’ve learned across more than 70 meetings, rather than to present a scripted pitch—no two conversations have been the same! So it doesn’t cover every single question that may arise.
You’re right to point out that this is an important one. It’s too big to capture fully in a format like this, but here’s my view in a nutshell: Broadly speaking, I believe that racing ahead to develop a technology we fundamentally do not understand—one that poses risks not only through misuse but by its very nature—is neither a desirable nor inevitable path. There’s a lot at stake, and we’re working to find a different approach: one in which we develop the technology with safeguards, while ensuring we deepen our understanding and maintain control over it.
Thanks again, so much. Please keep going!
I suppose part of the strategy in approaching folks with this is knowing when and what to hold back, especially on an initial cold call.
Thank you again for your work. Thank you 100x.