Um … are we talking about capabilities research, or something else?
We are talking about capabilities research, in part. We are also talking about stuff like FTX and things adjacent to it (of which there has been a good amount in my retelling of this ecosystem!).
I mean, if you were to know that a great AI-safety genius was going around committing serious crimes that harm people in the community, then yes, you should be taking steps to stop it and bring them to justice, even if that would impair their AI-safety work.
I mean, sure, I am probably the last person someone could try to accuse of “not having tried to take steps to bring the relevant people to justice”. But if the “taking people to justice” step isn’t working, then you maybe want to think about quitting.
Okay, good. That’s what I thought, I just wanted to make sure I wasn’t making a not-knowing-what-the-conversation-was-really-about error. (“Never give anyone wise advice unless you know exactly what you’re both talking about. Got it.”)