I find it very strange that Collier claims that international compute monitoring would “tank the global economy.” What is the mechanism for this, exactly?
>10% of the value of the S&P 500 is downstream of AI and the proposal is to ban further AI development. AI investment is a non-trivial fraction of current US GDP growth (20%?). I’d guess the proposal would cause a large market crash and a (small?) recession in the US; it’s unclear if this is well described as “tanking the global economy”.
>it’s unclear if this is well described as “tanking the global economy”.
I think the answer is “no”?
Like, at least in this context I would read the above as implying a major market crash, not a short-term 20% reduction in GDP growth. We pass policies all the time that cause a 20% reduction in GDP growth, so in the context of a policy discussion concerned with the downside (implying either political infeasibility or that the tradeoff is obviously not worth it), I feel like it clearly implies more.
Like, if you buy the premise of the book at all, the economic costs here are of course pretty trivial.
But the claim isn’t, or shouldn’t be, that this would be a short-term reduction; it’s that it cuts off the primary mechanism for growth that supports a large part of the economy’s valuation—leading not just to a loss in value for the things directly dependent on AI, but also to slowing growth generally. And a reduction in growth is what makes the world continue to suck, so that most of humanity can’t live first-world lives. Which means that slowing growth globally by a couple percentage points is a very high price to pay.
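To make the compounding point concrete, here is a minimal sketch; the 4% and 2% growth rates are illustrative assumptions of mine, not figures from the thread:

```python
# Rough illustration: how a couple of percentage points of annual growth,
# lost year after year, compound into a much smaller economy over decades.

def growth_multiplier(rate: float, years: int) -> float:
    """Total growth multiplier after `years` of compounding at annual `rate`."""
    return (1 + rate) ** years

baseline = growth_multiplier(0.04, 30)  # hypothetical 4% annual growth
slowed = growth_multiplier(0.02, 30)    # hypothetical 2% annual growth
shortfall = 1 - slowed / baseline

print(f"baseline: {baseline:.2f}x, slowed: {slowed:.2f}x")
print(f"after 30 years the slower economy is ~{shortfall:.0%} smaller")
```

Under these made-up numbers, the 2-point slowdown leaves the economy roughly 40–45% smaller after 30 years, which is the sense in which a "couple percentage points" of growth is a very high price.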
I think that it’s plausibly worth it—we can agree that there’s a huge amount of value enabled by autonomous but untrustworthy AI systems that are likely to exist if we let AI continue to grow, and that Sam was right originally that there would be some great [i.e. incredibly profitable] companies before we all die. And despite that, we shouldn’t build it—as the title says.
I mean, I would describe various Trump tariff plans as “tanking the global economy”, I think it was fair to describe Smoot-Hawley as that, and so on.
I think the book makes the argument that expensive things are possible—this is likely cheaper and better than fighting WWII, the comparison they use—and it does seem fair to criticize their plan as expensive. It’s just that the alternative is far more expensive.
I think that the proposal in the book would “tank the global economy”, as defined by a >10% drop in the S&P 500 and similar indices, and I think this is a kinda reasonable definition. But I also think that other proposals for us not all dying probably have similar (though probably less severe) impacts, because they also involve stopping or slowing AI progress (e.g. Redwood’s proposed “get to 30x AI R&D and then stop capabilities progress until we solve alignment” plan[1]).
I think this is an accurate short description of the plan, but it might have changed last I heard.