Surely what MIRI would ideally like to do is to find a way of making intelligence not “emergent”, so that it’s easier to make something intelligent that behaves predictably enough to be classified as Friendly.
I don’t believe that MIRI has been consciously paying attention to thwarting undesirable emergence, given that EY refuses to acknowledge it as a real phenomenon.
I fear we’re at cross purposes. I meant not “thwart emergent intelligence” but “find ways of making intelligence that don’t rely on it emerging mysteriously from incomprehensible complications”.
Sure, you cannot rely on spontaneous emergence for anything predictable, as neural network attempts at AGI demonstrate. My point was that if you ignore the chance of something emerging, that something will emerge at the most inopportune moment. I see your original point, though. Not sure if it can succeed. My guess is that the best case is some kind of “controlled emergence”, where you at least set the parameter space of what might happen.