Epistemic status: rant
I’m a college student at a pretty leftist institution doing work in AI alignment. My professor works in pandemics and wanted to do research with me, so the natural conclusion for both of us was to work on pandemic risk from advanced AI. A big portion of my project was presenting x-risk to an audience unfamiliar with it, so I was excited to introduce the topic to my peers!!
But at the end of the presentation, someone said that my project neglected to consider the harm AI and tech companies do to minorities and their communities, that people shouldn’t be concerned with existential risk in the future while communities today are being affected, and that I should not have done this research.
I feel pretty humiliated by this response. Being told that the work I care about doesn’t truly matter (for reasons I can’t argue against since it would make me look racist … ) feels harsh.
I’m also annoyed that people at my college don’t receive discussion of x-risk well, which ends up putting the work people do in a negative light. I want to improve the discussions here to the point of actually being able to have them in the first place, but it seems to be getting more difficult.
I’ve run the AI Alignment club here in previous semesters, but it hasn’t gone as well as I hoped. Others seem worried about AI’s water usage, which might be a fair concern, but it really isn’t the biggest problem at the moment?? I feel like the rationalist community and my college are two separate worlds at this point!
The point of this shortform was simply to rant about how hard doing outreach can be :/