(In addition, MIRI claims that an FAI could be easier to implement than an AGI in general; i.e., solving the philosophical difficulties regarding FAI would also make it easier to create an AGI in general. For example, MIRI’s specific most-likely scenario for the creation of an AGI is a sub-human AI that self-modifies to become smarter very quickly; MIRI’s research on modeling self-modification, while aimed at solving one specific problem that stands in the way of Friendliness, also has potential applications toward understanding self-modification in general.)
“worked on” != “solved”