Sorry, you might be taking my dialog too seriously, unless you’ve made such observations yourself, which of course is quite possible since you used to work at OpenAI. I’m personally far from the places where such dialogs might be occurring, so I don’t have any observations of them myself. It was completely imagined in my head, partly as a dark comedy about how counter to human (or most humans’) nature strategic thinking/action about AI safety is, and partly as a bid for sympathy for the people caught in the whiplashes, to whom this kind of thinking or intuition doesn’t come naturally.
Edit: To clarify a bit more, B’s reactions like “WTF!” were written more for comedic effect than to be realistic or based on my best understanding/predictions of how a typical AI researcher would actually react. It might still be capturing some truth, but again I just want to make sure people aren’t taking my dialog more seriously than I intend.
I’m taking the dialogue seriously but not literally. I don’t think the actual phrases are anywhere near realistic. But the emotional tenor you capture of people doing safety-related work that they were told was very important, then feeling frustrated by arguments that it might actually be bad, seems pretty real. Mostly I think people in B’s position stop dialoguing with people in A’s position, though, because it’s hard for them to continue while B resents A (especially because A often resents B too).
Some examples that feel like B-A pairs to me include: people interested in “ML safety” vs people interested in agent foundations (especially back around 2018-2022); people who support Anthropic vs people who don’t; OpenPhil vs Habryka; and “mainstream” rationalists vs Vassar, Taylor, etc.