A paper from the Federal Reserve Bank of Dallas estimates 150%-300% returns to government nondefense R&D over the postwar period, measured by its effect on business-sector productivity growth. The authors say this implies nondefense R&D is underfunded, but that does not follow. One should assume decreasing marginal returns, so the estimate is entirely compatible with the current level of spending being too high. I also would not assume conditions are unchanged and that additional spending would remain similarly effective.
At low returns, you might question whether investing more beats the alternatives (e.g., at a 5% return, simply not incurring added deficit to be financed at 5% is arguably preferable; even at 7%, your value function might be such that avoiding that added deficit is still preferable). But at returns this high, unless you think the private sector is achieving marginal returns in the same ballpark, invest, baby, invest! The marginal returns would have to be insanely diminishing for it not to make sense to invest more, and even that would imply we’re investing at just about the optimal level (if the marginal return on the next $1 were 0%, we shouldn’t invest more, but we shouldn’t invest less either, because the return on what we’re already spending is 150%). Skepticism about the estimated return itself would be a different story.
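To make the diminishing-returns arithmetic concrete, here’s a toy sketch (the functional form and every number in it are my own illustrative assumptions, not anything from the paper): even if the marginal return started at 150% and halved with each additional unit of spending, you’d still want to keep investing for several more units before the marginal return dropped to a 5% financing cost.

```python
# Toy illustration (hypothetical numbers, not from the Dallas Fed paper):
# invest more as long as the marginal return exceeds the financing cost.

def marginal_return(extra_spending, start=1.5, half_life=1.0):
    """Assumed concave returns: the marginal return starts at 150% (1.5)
    and halves for every `half_life` units of extra spending."""
    return start * 0.5 ** (extra_spending / half_life)

financing_cost = 0.05  # e.g., added deficit financed at 5%

extra, step = 0.0, 0.1
while marginal_return(extra) > financing_cost:
    extra += step

print(f"Even with returns halving per unit of extra spending, "
      f"you keep investing for ~{extra:.1f} more units before the "
      f"marginal return falls to the {financing_cost:.0%} financing cost.")
```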
That, or explain the factors that should lead Robin to update his timeline for AI/computer automation taking “most” of the jobs.
Robin’s take here strikes me as coming both from an uncooperative thought-experiment participant and from a decently considered position. It’s like he hasn’t actually skimmed the top doom scenarios discussed in this space (and that’s coming from me... someone who has probably thought less about this space than Robin) (also see his equating corporations with superintelligence; he’s not keyed into the doomer use of the term and isn’t paying attention to the range of values it could take).
On the other hand, I find some affinity with my own skepticism of AI doom; my vibe is that it lies in the notion that authorization lines will be important.
On the other other hand, once the authorization bailey is under siege by the superhuman intelligence aspect of the scenario, Robin retreats to the motte that there will be billions of AIs and (I guess unlike humans?) they can’t coordinate. Sure, corporations haven’t taken over the government and there isn’t one world government, but in many cases, tens of millions of people coordinate to form a polity, so why would we assume all AI agents will counteract each other?
It was definitely a fun section and I appreciate Robin making these points, but I’m finding myself about as unassuaged by Robin’s thoughts here as I am by my own.
When talking about doom, I think a pretty natural comparison is nuclear weapon development, and I believe that analogy highlights how much more right Robin is here than doomers might give him credit for. Obviously a lot of abstract thinking and scenario consideration went into developing the atomic bomb, but a lot of safeguards were also developed as they built prototypes and encountered snags. If Robin is right that no prototype or abstraction will allow us to address safety concerns, and that we need to be dealing with the real thing to understand it, then I think a biosafety analogy still helps his point. If you’re dealing with GPT-10 before public release, train it, give it no authorization lines, and train the people (plural) studying it not to follow its directions. In line with Robin’s competition views, use GPT-9 agents to help out on assessments if need be. But again, Robin’s perspective here falls flat and is of little assurance if it just devolves into “let it into the wild, then deal with it.”
A great debate and post, thanks!