The full sentence reads: “MIRI exists to ensure that the creation of smarter-than-human intelligence has a positive impact.” (emphasis added) Clearly, if smarter-than-human intelligence ends up having a positive impact independently of (or in spite of) MIRI’s efforts, that would count as a success only in a Pickwickian sort of sense. To succeed in the sense obviously intended by the authors of the mission statement, MIRI would have to be at least partially causally implicated in the process leading to the creation of FAI.
So the question remains: on what grounds do you believe that, if smarter-than-human intelligence ends up having a positive impact, this will necessarily be at least partly due to MIRI’s efforts? I find that view implausible, and instead agree with Carl Shulman that “the impact of MIRI in particular has to be [a] far smaller subset of the expected impact of the cause as a whole,” for the reasons he mentions.
I subscribe to the view that AGI is bad by default, and don’t see anyone else working on the friendliness problem.