Interesting idea, but I found this harder to read than it needed to be. The fungal planet stuff is fun, but it takes a long time to get to the actual point.
Deiv
Slightly off-topic:
It’s a pleasant surprise to see Nick Bostrom posting here.
His perspective is unusually valuable. Whether or not one agrees on all points, having him in the conversation feels like a meaningful update.
Thanks for sharing this, Nick. I hope we’ll see more.
Let’s make some assumptions about Mark Zuckerberg:
Zuckerberg has above-average intelligence.
He has a deep interest in new technologies.
He is invested in a positive future for humanity.
He has some understanding of the risks associated with the development of superintelligent AI systems.
Given these assumptions, it’s reasonable to expect Zuckerberg to be concerned about AI safety and its potential impact on society.
Now, the question that has been bugging me for some weeks since reading LeCun’s arguments:
Could it be that Zuckerberg is not informed about his subordinate’s views?
If so, someone should really apply pressure to get him informed, and perhaps even to replace LeCun as Chief Scientist at Meta AI.
I have been using the same images from Tim’s post for years (literally since it first came out) to explain the basics of AI alignment to the uninitiated. It has worked wonders. On the other hand, I have shared the entire post many times and no one has ever read it.
I would imagine that a collaboration between Eliezer and Tim explaining the basics of alignment would strike a chord with many people out there. People are generally more open to discussing this kind of graphical explanation than to reading a random post for 2 hours.
For those of us who internalized these ideas years ago, there’s not much new here. You mostly find yourself nodding along. But that’s not a criticism. It’s actually refreshing to see this kind of essay on LessWrong again. This is what made the site magnetic in the first place: staring at the actual scale of what’s at stake.
@Nick Bostrom’s line about our great common endowment of negentropy being irreversibly degraded into entropy on a cosmic scale still hits like nothing else. Once you see it, you can’t unsee it. Every second of delay has a cost measured in entire galaxies of potential flourishing slipping beyond our light cone forever. @Wei Dai pushed that picture even further.
The hardest part is always explaining this to people outside this corner of the world. Not because the argument is complex (Bostrom lays it out with brutal clarity), but because the conclusion feels too large to take seriously. People pattern-match it to sci-fi and move on. But 10^58 lives is not a rhetorical flourish. It’s a conservative lower bound.
More essays like this, please. It’s easy to get lost in object-level debates and forget the sheer enormity of what we’re actually trying to protect.