Topics I would be excited to have a dialogue about [will add to this list as I think of more]:
I want to talk to someone who thinks p(human extinction | superhuman AGI developed in next 50 years) < 50% and understand why they think that
I want to talk to someone who thinks the probability of existential risk from AI is much higher than the probability of human extinction due to AI (i.e., most x-risk from AI isn't scenarios where all humans end up dead soon after)
I want to talk to someone who has thoughts on university AI safety groups (are they harmful or helpful?)
I want to talk to someone who has pretty long AI timelines (median >= 50 years until AGI)
I want to have a conversation with someone who has strong intuitions about what counts as high/low integrity behaviour. Growing up I sort of got used to lying to adults and bureaucracies and then had to make a conscious effort to adopt some rules to be more honest. I think I would find it interesting to talk to someone who has relevant experiences or intuitions about how minor instances of lying can be pretty harmful.
If you have a rationality skill that you think can be taught over text, I would be excited to try learning it.
I mostly expect to ask questions and point out where and why I'm confused by or disagree with your points, rather than make novel arguments myself, though I'm open to different formats that make the dialogue easier, more convenient, or more useful for the other person.