Thanks for the comment. I will reply as follows:
1. Knowing how things could go wrong gives useful knowledge about which scenarios and pathways to avoid.
2. Our knowledge of how to make things go right is not zero.
3. My intention with the article is to draw attention to some broader non-technical difficulties in implementing FAI. One worrying theme in the responses I've gotten is a conflation between knowledge of AGI risk and the ability to build an FAI. I think they are separate projects, and that the success of the second relies on comprehensive prior knowledge of the first. MIRI's approach, apparently, doesn't really acknowledge the two as separate.