[Question] When did Eliezer Yudkowsky change his mind about neural networks?

In 2008, Eliezer Yudkowsky was strongly critical of neural networks. From his post “Logical or Connectionist AI?”:

Not to mention that neural networks have also been “failing” (i.e., not yet succeeding) to produce real AI for 30 years now. I don’t think this particular raw fact licenses any conclusions in particular. But at least don’t tell me it’s still the new revolutionary idea in AI.

This is the original example I used when I talked about the “Outside the Box” box—people think of “amazing new AI idea” and return their first cache hit, which is “neural networks” due to a successful marketing campaign thirty goddamned years ago. I mean, not every old idea is bad—but to still be marketing it as the new defiant revolution? Give me a break.

By contrast, in Yudkowsky’s 2023 TED Talk, he said:

Nobody understands how modern AI systems do what they do. They are giant, inscrutable matrices of floating point numbers that we nudge in the direction of better performance until they inexplicably start working. At some point, the companies rushing headlong to scale AI will cough out something that’s smarter than humanity. Nobody knows how to calculate when that will happen. My wild guess is that it will happen after zero to two more breakthroughs the size of transformers.

Sometime between 2014 and 2017, I remember reading a discussion in a Facebook group where Yudkowsky expressed skepticism toward neural networks. (Unfortunately, I don’t remember which group it was.)

As I recall, he said that while the deep learning revolution was a Bayesian update, he still didn’t believe neural networks were the royal road to AGI. I think he said that he leaned more towards GOFAI/symbolic AI (but I remember this less clearly).

I’ve combed through some of Yudkowsky’s published writing, but I’m having a hard time pinning down when, how, and why he changed his view on neural networks. Can anyone help me out?
