He pooh-poohed neural networks, and in fact actively bet against them through his actions: hiring researchers trained in abstract math/philosophy, ignoring neuroscience and early DL, etc.
This seems to assume that those researchers were meant to work out how to create AI. But the goal of that research was rather to formalize and study some of the challenges in AI alignment in crisp language, to make them as clear as possible. The intent was not to study the question of “how do we build AI” but rather “what would we want from an AI, and what would prevent us from getting that, assuming that we could build one”. That approach doesn’t make any assumptions about how the AI would be built; it could be neural nets or anything else. Eliezer makes that explicit in e.g. this SSC comment, and it’s discussed at more length in “The Rocket Alignment Problem”.
MIRI doesn’t assume all AIs will be logical, and I really need to write a long, long screed about this at some point if I can stop myself from banging the keyboard so hard that the keys break. We worked on problems involving logic because, when you are confused about a *really big* thing, one of the ways to proceed is to try to list out all the really deep obstacles. And then, instead of the usual practice of trying to dodge around all the obstacles and not coming to grips with any of the truly confusing and scary things, you try to assume away *all but one* of the really big obstacles so that you can really, actually confront and get to grips with one scary, confusing thing all on its own.

We tried to confront a particular deep problem of self-modifying AI and reflectivity, the tiling agents problem, because it was *one* thing that we could clearly and crisply state that we didn’t know how to do, even though any reflective AI ought to find it easy; crisp enough that multiple people could work on it. This work initially took place in a first-order-logical setting because we were assuming away some of the other deep obstacles, the ones that had to do with logic not working in real life, explicitly and in full knowledge that logic does not represent real life well, so that we could tackle *only one deeply confusing thing at a time*.
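For concreteness, the shape of that one obstacle can be sketched in a few lines (this is a paraphrase of the standard first-order formalization, not a quote from anyone above; $T$, $G$, and $a$ are just illustrative symbols). Suppose an agent only takes an action $a$ once it has proved, in its trusted theory $T$, that the action leads to the goal: $T \vdash a \to G$. For the agent to approve building a successor that acts on the very same criterion, it apparently needs to trust the successor's proofs, i.e. something like the reflection schema

$$T \vdash \mathrm{Prov}_T(\ulcorner \phi \urcorner) \to \phi \qquad \text{for all sentences } \phi.$$

But Löb's theorem says that whenever $T \vdash \mathrm{Prov}_T(\ulcorner \phi \urcorner) \to \phi$, in fact $T \vdash \phi$; so a consistent theory cannot endorse its own soundness schema, and the naive "my successor reasons exactly the way I do, therefore I can trust it" argument collapses. That Löbian obstacle is the sort of thing the first-order setting was chosen to isolate.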
EY’s belief distribution about NNs and early DL from over a decade ago, and how that reflects on his predictive track record, have already been extensively litigated in other recent threads like here. I mostly agree that EY circa 2008 and later is somewhat cautious/circumspect about making explicitly future-disprovable predictions, but he certainly did seem to exude skepticism, which supports my interpretation of his actions.
That being said, I also largely agree that MIRI’s research path was chosen specifically to try to be more generic than any particular viable route to AGI. But one could also consider that something of a failure or missed opportunity versus investing more in studying neural networks, the neuroscience of human alignment, etc.
But I’ve always said (perhaps not in public, but nonetheless) that I thought MIRI had only a very small chance of success, but that it was still a reasonable bet for at least one team to make, just in case the connectionists were all wrong about this DL thing.