John Collison: To put numbers on this, you’ve talked about the potential for 10% annual economic growth powered by AI. Doesn’t that mean that when we talk about AI risk, it’s often harms and misuses of AI, but isn’t the big AI risk that we slightly misregulated or we slowed down progress, and therefore there’s just a lot of human welfare that’s missed out on because you don’t have enough AI?
I commented on the Substack:
Dario’s former colleague at OpenAI, Paul Christiano, has a great 2014 blog post “On Progress and Prosperity” that does a good job explaining why I don’t believe this.
In short, “It seems clear that economic, technological, and social progress are limited, and that material progress on these dimensions must stop long before human society has run its course.”
“For example, if exponential growth continued at 1% of its current rate for 1% of the remaining lifetime of our sun, Robin Hanson points out each atom in our galaxy would need to be about 10^140 times as valuable as modern society.”
“So while further progress today increases our current quality of life, it will not increase the quality of life of our distant descendants—they will live in a world that is ‘saturated,’ where progress has run its course and has only very modest further effects.”
“I think this is sufficient to respond to the original argument: we have seen progress associated with good outcomes, and we have a relatively clear understanding of the mechanism by which that has occurred. We can see pretty clearly that this particular mechanism doesn’t have much effect on very long-term outcomes.”
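For intuition, here is a minimal back-of-the-envelope sketch of the saturation arithmetic behind the figure quoted above, in Python. The growth rate, time horizon, and atom count below are my own illustrative assumptions, not the inputs Hanson or Christiano actually used, and the resulting exponent swings enormously with those choices; the only point is how quickly compounding growth outruns any physical resource base.

```python
import math

# Back-of-the-envelope sketch of the "saturation" arithmetic quoted above.
# Every input here is an illustrative assumption, not a number taken from
# Hanson's or Christiano's posts.
growth_rate = 0.047         # assumed ~4.7%/yr, i.e. the economy doubles every ~15 years
horizon_years = 10_000      # assumed horizon; a blink compared to the sun's remaining life
log10_atoms_in_galaxy = 70  # rough order of magnitude for atoms in the Milky Way

# Total growth factor over the horizon, kept in log10 to avoid overflow:
# (1 + r)^T  ->  log10(factor) = T * log10(1 + r)
log10_growth_factor = horizon_years * math.log10(1 + growth_rate)

# If that much value had to be embodied in the galaxy's atoms, each atom would
# need to carry this many times the value of today's entire economy.
log10_value_per_atom = log10_growth_factor - log10_atoms_in_galaxy

print(f"total growth factor ~ 10^{log10_growth_factor:.0f}")
print(f"value per atom      ~ 10^{log10_value_per_atom:.0f} x today's world economy")
# With these inputs this prints roughly 10^199 in total and ~10^129 per atom.
```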
The key assumption is that people care about the very long future, and care about it as much as they care about the near future (the next 10-30 years or so). I don’t see much evidence that people care about outcomes thousands of years from now, let alone millions or billions of years.
More generally, one of my central disagreements with people like Habryka is that I think humans have intrinsic discount rates much higher than 0%. Long-term moral views that treat future people as just as valuable as present people are far rarer than people on here think, and we are self-selected to have low discount rates.
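To put a rough number on the discount-rate point (a sketch under my own assumptions, using standard exponential discounting; none of these figures come from the original comment): any noticeably positive annual discount rate makes welfare thousands of years out nearly worthless in present-value terms, while a 0% rate weights it fully.

```python
# Present-value weight of one unit of welfare at various horizons, for a few
# assumed annual discount rates. Illustrative only: with any nonzero rate,
# the very far future contributes almost nothing to present value.
for rate in (0.0, 0.01, 0.03):
    for years in (30, 1_000, 1_000_000):
        weight = 1.0 / (1.0 + rate) ** years  # standard exponential discounting
        print(f"rate {rate:4.0%}, {years:>9,} years out: weight {weight:.3e}")
```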