There is also the question of what this type of research should actually look like.
I think that answers “why aren’t people supporting MIRI’s specific research agenda?”, but I read SoerenE’s question as “is there a good reason not to be worried about AI danger?”
(In the steelman universe, I think people understand that different research priorities will stem from different intuitions and skills, and think that there’s space for everyone to work in the direction that suits them best.)