[Question] What Do AI Safety Pitches Not Get About Your Field?

When I was first introduced to AI Safety, coming from a background studying psychology, I kept getting frustrated with the way people defined and used the word “intelligence”. They weren’t able to address my questions about cultural intelligence, social evolution, and general intelligence in a way I found rigorous enough to be convincing. I felt like professionals couldn’t answer what I considered basic and relevant questions about general intelligence, which meant that it took me a lot longer to take AI Safety seriously than it otherwise would have. It seems possible to me that other people have encountered AI Safety pitches and been turned off by something similar—a communication failure arising because both parties approached the conversation with very different background information. I’d love to help minimize these occurrences, so if anything similar has happened to you, could you please share:

What is something you feel AI Safety pitches usually don’t understand about your field or background? What’s a common place where you get stuck in conversations with people pitching AI Safety? What question or piece of information makes (or made) the conversation stop progressing and start circling?

(Cross-posted from the EA forum)