On Dwarkesh’s 3rd Podcast With Tyler Cowen

Link post

This post contains my extensive thoughts on Tyler Cowen’s excellent talk with Dwarkesh Patel.

It is interesting throughout. You can read this while listening, after listening, or instead of listening; it is written to be compatible with all three options. The notes are in order in terms of what they are reacting to, and were mostly written as I listened.

I see this as having been a few distinct intertwined conversations. Tyler Cowen knows more about more different things than perhaps anyone else, so that makes sense. Dwarkesh chose excellent questions throughout, displaying a keen sense of when to follow up and how, and when to pivot.

The first conversation is about Tyler’s book GOAT, about the world’s greatest economists. Fascinating stuff; this made me more likely to read and review GOAT in the future if I ever find the time. I mostly agreed with Tyler’s takes here, to the extent I am in a position to know, as I have not read much of what these men wrote, and even though I very much loved The Wealth of Nations at the time (don’t skip the digression on silver, even; I remember it being great), it is now largely a blur to me.

There were also questions about the world and philosophy in general, though not about AI, that I would mostly put in this first category. As usual, I have lots of thoughts.

The second conversation is about expectations given what I typically call mundane AI. What would the future look like, if AI progress stalls out without advancing too much? We cannot rule such worlds out and I put substantial probability on them, so it is an important and fascinating question.

If you accept the premise of AI remaining within the human capability range in some broad sense, where it brings great productivity improvements and rewards those who use it well but remains foundationally a tool and everything seems basically normal, essentially the AI-Fizzle world, then we have disagreements but Tyler is an excellent thinker about these scenarios. Broadly our expectations are not so different here.

That brings us to the third conversation, about the possibility of existential risk or the development of more intelligent and capable AI that would have greater affordances. For a while now, Tyler has asserted that such greater intelligence likely does not much matter, that not so much would change, that transformational effects are highly unlikely, whether or not they constitute existential risks. That the world will continue to seem normal, and follow the rules and heuristics of economics, essentially Scott Aaronson’s Futurama. Even when he says AIs will be decentralized and engage in their own Hayekian trading with their own currency, he does not think this has deep implications, nor does it imply much about what else is going on beyond being modestly (and only modestly) productive.

Then at other times he affirms the importance of existential risk concerns, and indeed says we will be in need of a hegemon, but the thinking here seems oddly divorced from other statements, and thus often rather confused. Mostly it seems consistent with the view that it is much easier to solve alignment quickly, build AGI and use it to generate a hegemon, than it would be to get any kind of international coordination. And also that failure to quickly build AI risks our civilization collapsing. But also I notice this implies that the resulting AIs will be powerful enough to enable hegemony and determine the future, when in other contexts he does not think they will even enable sustained 10% GDP growth.

Thus at this point, I choose to treat most of Tyler’s thoughts on AI as if they are part of the second conversation, with an implicit ‘assuming an AI at least semi-fizzle’ attached to them, at which point they become mostly excellent thoughts.

Dealing with the third conversation is harder. There are places where I feel Tyler is misinterpreting a few statements, in ways I find extremely frustrating and that I do not see him do in other contexts, and I pause to set the record straight in detail. I definitely see hope in finding common ground and perhaps working together. But so far I have been unable to find the road in.

The Notes Themselves

  1. I don’t buy the idea that investment returns have tended to be negative, or that VC investment returns have overall been worse than the market, but I do notice that this is entirely compatible with long term growth due to positive externalities not captured by investors.

  2. I agree with Tyler that the entrenched VCs are highly profitable, but that other VCs, due to lack of good deal flow, adverse selection, and lack of skill, don’t have good returns. I do think excessive optimism produces competition that drives down returns, but that returns would otherwise be insane.

  3. I also agree with Tyler that those with potential for big innovations or otherwise very large returns both do well themselves and also capture only a small fraction of the total returns they generate, and I agree that the true rate is unknown and 2% is merely a wild guess.

  4. And yes, many people foolishly (or due to highly valuing independence) start small businesses that will have lower expected returns than a job. But I think they are not foolish to value that independence highly versus taking a generic job, and I also believe that, with proper attention to what actually causes success plus hard work, a small business can have higher private returns than a job for a great many people. A bigger issue is that many small businesses are passion projects such as restaurants and bars, where the returns tend to be extremely bad. But the reason those returns are low is exactly that so many people are passionate and want to do it.

  5. I find it silly to think that literal Keynes did not at the time have the ability to beat the market by anticipating what others would do. I am on record as saying the efficient market hypothesis is false, and certainly in this historical context it should be expected to be highly false. The reason you cannot easily make money from this kind of anticipation is that the anticipation is priced in, but Keynes was clearly in a position to notice it being not priced in. I share Tyler’s disdain for where the argument was leading regarding socializing long term investment, and also think that long term fundamentals-based investing or building factories is profitable; having less insight and more risk should get priced in. That is indeed what I am doing with most of my investments.

  6. The financial system at 2% of wealth might not be growing in those terms, and maybe it’s not outrageous on its face, but it is at least suspicious; that’s a hell of a management fee, especially given many assets aren’t financialized, and 8% of GDP still seems like a huge issue (see the arithmetic note after this list). And yes, I think that if that number goes up as wealth goes up, that still constitutes a very real problem.

  7. Risk behavior where you buy insurance for big things and take risks in small things makes perfect sense, both as mood management and otherwise, considering marginal utility curves and blameworthiness. You need to take a lot of small risks at minimum. No Gamble, No Future.

  8. The idea that someone’s failures are highly illustrative seems right; also, I worry about people applying that idea too rigorously.

  9. The science of what lets people ‘get away with’ what is generally considered socially unacceptable behaviors while being prominent seems neglected.

  10. Tyler continues to bet that economic growth means things turn out well pretty much no matter what, whereas shrinking fertility risks things turning out badly. I find it so odd to model the future in ways that implicitly assume away AI.

  11. If hawks always gain long term status and pacifists always lose it, that does not seem like it can be true in equilibrium?

  12. I think that Hayek’s claim that there is a general natural human trend towards more socialism has been proven mostly right, and I’m confused why Tyler disagrees. I do think there are other issues we are facing now that are at least somewhat distinct from that question, and those issues are important, but I would also note that those other problems are mostly closely linked to larger government intervention in markets.

  13. Urbanization is indeed very underrated. Housing theory of everything.

  14. ‘People overrate the difference between government and market’ is quite an interesting claim, that the government acts more like a market than you think. I don’t think I agree with this overall, although some doubtless do overrate it?

  15. (30:00) The market as the thing that finds a solution that gets us to the next day is a great way to think about it. And the idea that doing that, rather than solving for the equilibrium, is the secret of its success, seems important. It turns out that, partly because humans anticipate the future and plan for it, this changes what they are willing to do at what price today, and that this getting to tomorrow to fight another day will also do great things in the longer term. That seems exactly right, and also helps us point to the places this system might fail, while keeping in mind that it tends to succeed more than you would expect. A key question regarding AI is whether this will continue to work.

  16. Refreshing to hear that the optimum amount of legibility and transparency is highly nonzero but also not maximally high either.

  17. (34:00) Tyler reiterates that AIs will create their own markets and use their own currencies, that property rights and perhaps Bitcoins and NFTs will be involved, and that decentralized AI systems acting in self-interested ways will be an increasing portion of our economic activity. Which I agree is a baseline scenario of sorts if we dodge some other bullets. He even says that the human and AI markets will be fully integrated. And that those who are good at AI integration, at outsourcing their activities to AI, will be vastly more productive than those who do not (and by implication, will outcompete them).

  18. What I find frustrating is Tyler failing to then solve for the equilibrium and ask what happens next. If we are increasingly handing economic activity over to self-interested competitive AI agents, who compete against each other in a market and to get humans to allocate power and resources to them, subject to the resulting capitalistic, competitive, evolutionary, and selection dynamics, where does that lead? How do we survive? I would, as Tyler often requests, Model This, except that I don’t see how not to assume the conclusion.

  19. (37:00) Tyler expresses skepticism that GPT-N can scale up its intelligence that far, that beyond 5.5 maybe integration with other systems matters more, and says ‘maybe the universe is not that legible.’ I essentially read this as Tyler engaging in superintelligence denialism, consistent with his idea that humans with very high intelligence are themselves overrated, and saying that there is no meaningful sense in which intelligence can much exceed generally smart human level other than perhaps literal clock speed.

  20. A lot of this, which I see from many economists, seems to be based on the idea that the world will still be fundamentally normal and respond to existing economic principles and dynamics, effectively working backwards from there, although of course it is not framed or presented that way. Thus intelligence and other AI capabilities will ‘face bottlenecks’ and regulations that they will struggle to overcome, which will doubtless be true, but I think these get easily overrun or gone around relatively quickly at some point.

  21. (39:00) Tyler asks: is more intelligence likely to be good or bad against existential risk? He says he thinks it is more likely to be good. There are several ways to respond with ‘it depends.’ The first is that, while I would very much be against this as a strategy of course, if we had never been as intelligent as we actually are, such that we never industrialized, then we would not face substantial existential risks except over very long time horizons. Talk of asteroid impacts is innumerate; without burning coal we wouldn’t be worried about climate; nuclear and biological threats and AI would be irrelevant; fertility would remain high.

  22. Then on the flip side of adding more intelligence, I agree that adding more actually human intelligence will tend to be good, so the question again is how to think about this new intelligence and how it will get directed and to what extent we will remain in control of it and of what happens, and so on. How exactly will this new intelligence function and to what extent will it be on our behalf? Of course I have said much of this before as has Tyler, so I will stop there.

  23. The idea that AI potentially prevents other existential risks is of course true. It also potentially causes them. We are (or should be) talking price. As I have said before, if AI posed a non-trivial but sufficiently low existential risk, its upsides including preventing other risks would outweigh that.

  24. (40:30) Tyler made an excellent point here, that market participants notice a lot more than the price level. They care about size, about reaction speed and more, and take in the whole picture. The details teach you so much more. This is also another way of illustrating that the efficient market hypothesis is false.

  25. How do some firms improve over time? It is a challenge for my model of Moral Mazes that there are large, centuries-old Japanese and Dutch companies. It means there is at least some chance to reinvigorate such companies, or methods that can establish succession and retain leadership that can contain the associated problems. I would love to see more attention paid to this. The fact that Israel and the United States have only young firms and have done very well on economic growth suggests the obvious counterargument.

  26. I love the point that a large part of the value of free trade is that it bankrupts your very worst firms. Selection is hugely important.

  27. (48:00) Tyler says we should treat children better and says we have taken quite a few steps in that direction. I would say that we are instead treating children vastly worse. Children used to have copious free time and extensive freedom of movement, and now they lack both. If they do not adhere to the programs, we increasingly put them on medication and under tremendous pressure. The impacts of smartphones and social media are also ‘our fault.’ There are other ways in which we treat them better, in particular not tolerating corporal punishment or other forms of what we now consider abuse. Child labor is a special case, where we have gone from forcing children to do productive labor in often terrible ways to instead forcing children to do unproductive labor in often terrible ways, while also banning children from doing productive labor for far too long, which is its own form of horrific. But of course most people will say that today’s abuses are fine and yesterday’s are horrific.

  28. Mill getting elected to Parliament I see as less reflecting differential past ability for a top intellectual to win an election, and more a reflection of his willingness to put himself up for the office and take one for the team. I think many of our best intellectuals could absolutely make it to Congress if they cared deeply about making it to Congress, but that they (mostly wisely) choose not to do that.

  29. (53:00) Despite persistent, millennia-long, very slow if any growth, Smith noticed that economic growth was coming, by observing a small group and seeing those dynamics as the future. The parallels to AI are obvious, and Patel asks about it. Cowen says that to Smith 10% growth would likely be inconceivable, and he wouldn’t predict it because it would just shock him. I think this is right, and also I believe a lot of current economists are making exactly that mental move today.

  30. Cowen also says he finds 10% growth for decades on end implausible. I would agree that seems unlikely, but not because the number is too high or because there would be insufficient room for continued growth: you would instead expect such growth to accelerate, if it failed to rapidly hit a hard wall or cause a catastrophe. I do think his point that GDP growth ceases to be a good measure under sufficiently large level changes is sensible.

  31. I am curious how he would think about all these questions with regard to, for example, China’s emergence in the late 20th century. China has grown at 9% a year since 1978, so it is an existence proof that this can happen for some time (again, see the arithmetic note after this list). In some sense you can think of growth under AI as potentially a form of catch-up growth as well, in the sense that AI unlocks a superior standard of technological, intellectual and physical capacity for production (assuming the world is somehow recognizable at all) and we would be adapting to it.

  32. Tyler asks: If you had the option to buy from today’s catalogue or the Sears catalogue from 1905 and had $50,000 to spend, which would you choose? He points out you have to think about it, which indeed you do if this is to be your entire consumption bundle. If you are allowed trade, of course, it is a very easy decision, you can turn that $50,000 into vastly more.

  33. (1:05:00) Dwarkesh states my exact perspective on Tyler’s thinking: that he is excellent on GPT-5 level stuff, then seems (in my words, not his) to hit a wall, and fails (in Dwarkesh’s words) to take all his wide-ranging knowledge and extrapolate. That seems exactly right to me: there is an assumption of normality of sorts, and when we get to the point where normality as a baseline stops making sense, the predictions stop making sense. Tyler responds saying he writes about AI a lot and shares ideas as he has them, and I don’t doubt those claims, but it does not address the point. I like that Dwarkesh asked the right question, and also realized that it would not be fruitful to pursue it once Tyler dodged answering. Dwarkesh has GOAT-level podcast question game.

  34. Should we subsidize savings? Tyler says he will come close to saying yes; at minimum we should stop taxing savings, which I agree with. He warns that the issue with subsidizing savings is that it is regressive and would be seen as unacceptable.
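
Since items 6, 30 and 31 above all turn on magnitudes, here is the back-of-the-envelope arithmetic. The wealth-to-GDP ratio of roughly four is my own round-number assumption for illustration, not a figure from the podcast:

$$
\underbrace{2\%}_{\text{fee on wealth}} \times \underbrace{4}_{\text{wealth}/\text{GDP}} \approx 8\% \text{ of GDP}, \qquad 1.09^{45} \approx 48, \qquad 1.10^{30} \approx 17.
$$

So the 2%-of-wealth and 8%-of-GDP figures are mutually consistent, taking the 9% figure at face value makes China’s economy roughly fifty times its 1978 size, and even three decades of 10% growth would be a seventeen-fold level change, which is exactly the regime where GDP growth ceases to be a good measure.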

The AI and Future Scenario Section Begins

  1. (1:14:00) Tyler worries about the fragile world hypothesis, not in terms of what AI could do but in terms of what could be done with… cheap energy? He asks what would happen if a nuclear bomb cost $50k. Which is a great question, but it seems rather odd to worry about it primarily in terms of the cost of energy?

  2. Tyler notes that due to intelligence we are doing better than the other great apes. I would reply that this is very true, that being the ape with the most intelligence has gone very well for us, and perhaps we should hesitate to create something that in turn has more intelligence than we do, for similar reasons?

  3. He says the existential risk people say ‘we should not risk all of this’ for AI, and that this is not how you should view history. Well, all right, then let’s talk price?

  4. Tyler thinks there is a default outcome of retreating to a kind of Medieval Balkans-style existence with a much lower population ‘with or without AI.’ The ‘with or without’ part really floors me, and makes me more confident that when he thinks about AI he simply is not pondering what I am pondering, for whatever reason, at all? But the more interesting claim is that, absent ‘going for it’ via AI, we face this kind of outcome.

  5. Tyler says things are hard to control, that we cannot turn back (and that we ‘chose a decentralized world well before humans even existed’) and such, although he does expect us to turn back via the decline scenario? He calls for some set of nations to establish dominance in AI, to at least buy us some amount of time. In some senses he has a point, but he seems to be conflating the motte and the bailey here. Clearly some forms of centralization are possible.

  6. By calling for nations such as America and the UK to establish dominance in this way, he must mean for particular agents within those nations to establish that dominance. It is not possible for every American to have root access and model weights and have that stay within America, or be functionally non-decentralized in the way he sees as necessary here. It could be the governments themselves, a handful of corporations or a combination or synthesis thereof. I would note this is, among other things, entirely incompatible with open model weights for frontier systems, and will require a compute monitoring regime.

  7. It certainly seems like Tyler is saying that we need to avoid misuse and proliferation of sufficiently capable AI systems, at the cost of establishing hegemonic control over AI, with all that implies? There is ultimately remarkable convergence of actual models of the future and of what is to be done, on many fronts, even without Tyler buying the full potential of such systems or thinking their consequences fully through. But notice the incompatibility of American dominance in AI with the idea of everyone’s AIs engaging in Hayekian commerce within a distinct ecosystem, unless you think there is some form of centralized control over those AIs and access to them. So what exactly is he actually proposing? And how does he propose that we lay the groundwork now in order to get there?

Clearing Up Two Misconceptions

  1. I get a mention and am praised as super smart, which is always great to hear, but in the form of Tyler once again harping on the fact that, when China came out saying they would require various safety checks on their AIs, I and others pointed out that China was open to potential cooperation and was willing to slow down its AI development in the name of safety even without such cooperation. He says that I and others said “see, China is not going to compete with us, we can shut AI down.”

So I want to be clear: That is simply not what I said or was attempting to convey.

I presume he is in particular referring to this:

Zvi Mowshowitz (April 19, 2023): Everyone: We can’t pause or regulate AI, or we’ll lose to China.

China: All training data must be objective, no opinions in the training data, any errors in output are the provider’s responsibility, bunch of other stuff.

I look forward to everyone’s opinions not changing.

[I quote tweeted MMitchell saying]: Just read the draft Generative AI guidelines that China dropped last week. If anything like this ends up becoming law, the US argument that we should tiptoe around regulation ’cos China will beat us will officially become hogwash. Here are some things that stood out…

So in this context, Tyler and many others were claiming that if we did any substantive regulations on AI development we risked losing to China.

I was pointing out that China was imposing substantial regulations for its own reasons. These requirements, even if ultimately watered down, would be quite severe restrictions on their ability to deploy such systems.

The intended implication was that China clearly was not going to go full speed ahead with AI; they were going to impose meaningfully restrictive regulations, and so it was silly to say that unless we imposed zero restrictions we would ‘lose to China.’ And also that perhaps China would be open to collaboration if we would pick up the phone.

And yes, that we could pause the largest AI training runs for some period of time without substantively endangering our lead, if we choose to do that. But the main point was that we could certainly do reasonable regulations.

The argument was not that we could permanently shut down all AI development forever without any form of international agreement, with China and others never moving forward and never catching up.

I believe the rest of 2023 has in fact borne out that China’s restrictions have mattered a lot in various ways: that even within AI specifically they have imposed more meaningful barriers than we have, that they remain quite behind, and that they have shown willingness to sit down to talk on several occasions, including the UK Summit, the agreement on nuclear weapons and AI, a recent explicit statement of the importance of existential risk, and more.

Tyler also says we seem to have “zero understanding of some properties of decentralized worlds.” On many such fronts I would strongly deny this. I think we have been talking extensively about these exact properties for a long time, and treating them as severe obstacles to finding any solutions. We studied game theory and decision theory extensively, we say ‘coordination is hard’ all the time, and we are not shy about the problem that places like China exist. Yes, we think that such issues could potentially be overcome, or at least that, if we see no other paths to survival or victory, we need to try, and that we should not treat ‘decentralized world’ as a reason to completely give up on any form of coordination and assume that we will always be in a fully competitive equilibrium where everyone defects.

Based on his comments in the last two minutes, perhaps instead the thing he thinks we do not understand is that the AI itself will naturally and inevitably also be decentralized, and that there will not be only one AI? But again, that seems like something we talk about a lot, something I actively try to model and think about, and something whose consequences I try to figure out how to deal with or prevent. This is not a neglected point.

There are also the cases made by Eliezer and others that, with sufficiently advanced decision theory and game theory, the ability to model others or share source code, and agents with high correlations, high overlap of interests, identification, and other such affordances, coordination between various entities becomes more practical. Thus we should indeed expect that a world with sufficiently advanced agents will act in a centralized fashion even if it started out decentralized. That is not a failure to understand the baseline outcome absent such new affordances. I think you have to put at least substantial weight on those possibilities.
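
As a toy version of the source-code-sharing point, here is a minimal sketch in the spirit of Tennenholtz’s ‘program equilibrium’ result. This is my own illustration under toy assumptions (arbitrary payoff numbers, exact source matching), not anything proposed on the podcast: agents that can read each other’s code can sustain cooperation in a one-shot prisoner’s dilemma where classical game theory predicts mutual defection.

```python
import inspect

def clique_bot(opponent_source: str) -> str:
    """Cooperate iff the opponent runs this exact program; otherwise defect."""
    my_source = inspect.getsource(clique_bot)
    return "C" if opponent_source == my_source else "D"

def defect_bot(opponent_source: str) -> str:
    """Always defect, regardless of the opponent's code."""
    return "D"

# One-shot prisoner's dilemma payoffs: (row player, column player).
PAYOFFS = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
           ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def play(bot_a, bot_b):
    # Each bot sees the other's source code before choosing a move.
    src_a, src_b = inspect.getsource(bot_a), inspect.getsource(bot_b)
    return PAYOFFS[(bot_a(src_b), bot_b(src_a))]

print(play(clique_bot, clique_bot))  # (3, 3): mutual cooperation is stable
print(play(clique_bot, defect_bot))  # (1, 1): defectors get defected against
```

The exact-match condition is deliberately brittle; the argument is that sufficiently advanced agents could pull off robust versions of this trick, which is part of why an initially decentralized population of such agents might end up acting in an effectively centralized fashion.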

Tyler once warned me – wisely and helpfully – in an email that I was falling too often into strawmanning or caricaturing opposing views, and that I needed to be careful to avoid that. I agree, and have attempted to take those words to heart; the fact that I could say many others do vastly worse on this front, both to views I hold and to many others, is irrelevant. I am of course not perfect at this, but I do what I can, and I think I do substantially less of it than I would absent his note.

Then he notes that Eliezer made a tweet that Tyler thinks probably was not a joke – one that I distinctly remember, and that was 100% very much a joke – saying that the AI could read all the legal code and threaten us with enforcement of the legal system. Tyler says that Eliezer does not seem to understand how screwed up the legal system is, talking about how this would cause very long courtroom waits, would be impractical, and so on.

That’s the joke. The whole point was that the legal system is so screwed up that it would be utterly catastrophic if we actually enforced it, and also that this is bad. Eliezer is constantly tweeting and talking, independently of AI, about how screwed up the legal system is; if you follow him it is rather impossible to miss. There are also lessons here about the potential misalignment of what is socially and verbally affirmed with what we actually want to happen, and also an illustration of the fact that a sufficiently capable AI would have lots of different forms of leverage over humans. It works on many levels. I laughed at the time, and knew it was a joke without being told. It was funny.

I would say to him, please try to give a little more benefit of the doubt, perhaps?

Final Notes Section

  1. Tyler predicts that until there is an ‘SBF-like’ headline incident, the government won’t do much of anything about AI, even though the smartest people in government and national security will think we should, and then after the incident we will overreact. If that is the baseline, it seems odd to oppose (as Tyler does) doing anything at all now, as this is how you get that overreaction.

  2. Should we honor past generations more because we want our views to be respected more in the future? Tyler says probably yes, and that there is no known philosophically consistent view on this that anyone lives by. I can’t think of one either. He points out the Burke perspective on this is time inconsistent, as you are honoring the recent dead only, which is how most of us actually behave. Perhaps one way to think about this is that we care about the wishes of the dead in the sense that people still alive care about those particular dead, and thus we should honor the dead to the extent that they have a link to those who are alive? Which can in turn pass along through the ages, as A begets B begets C on to Z, and we also care about such traditions as traditions, but ultimately this fades, faster with some than others? But if we do not care about that particular person at all anymore, then we also don’t care about their preferences, because dead is dead? And on top of that, we can say that there are certain specific things which we feel the dead are entitled to, like a property right or human right, such as their funerals and graves, and the right to a proper burial even if we did not know them at all, and we honor those things for everyone as a social compact exactly to keep that compact going. However, none of this bodes especially well for getting future generations, or especially future AIs, to much care about our quirky preferences in the long run.

  3. Why does Argentina go crazy with the printing press and have hyperinflation so often? Tyler points out this is a mystery. My presumption is that this begets itself. The markets expect it again, although not to the extent they should; I can’t believe (and didn’t at the time) that some of the bond sales over the years actually happened at the prices they got, which seems like another clear case of the EMH being false. But certainly everyone involved has ‘hyperinflation expectations’ that make it much harder to come back from the brink, and that make everyone far more tolerant of irresponsible policies that go down such roads in the future, because they look relatively responsible by comparison, and because, as Tyler asks about, various interest groups presumably are used to capturing more rents than the state can afford. Of course, this can also go the other way: at some point you get fed up with all that, and thus you get Milei.

  4. So weird to hear Tyler talk about the power of American civic virtue, but he still seems right compared to most places. We have so many clearly smart and well-meaning people in government, yet in many ways it functions so poorly, as they operate under such severe constraints.

  5. Agreement that in the past economists and other academics were inclined to ask bigger questions, and now they more often ask smaller questions and overspecialize.

  6. (1:29:00) Tyler worries about competing against AI as an academic or thinker, that people might prefer to read what the AI writes for 10-20 years. This seems to me like a clear case of ‘if this is true then we have much bigger problems.’

  7. I love Tyler’s ‘they just say that’ to the critique that you can’t have capitalism with proper moral equality. And similarly with Fukuyama. Tyler says today’s problems are more manageable than those of any previous era, although we might still all go poof. I think that if you judge relative to standards and expectations and what counts as success that is not true, but his statement that we are in the fight and have lots of resources and talent is very true. I would say, we have harder problems that we aim to solve, while also having much better tools to solve them. As he says, let’s do it, indeed. This all holds with or without AI concerns.

  8. Tyler predicts that volatility will go up a lot due to AI. I am trying out two Manifold markets to attempt to capture this.

  9. It seems like Tyler is thinking of greater intelligence in terms of ‘fitting together quantum mechanics and relativity’ and thus thinking it might cap out, rather than thinking about what that intelligence could do in various more practical areas. Strange to see a kind of implicit Straw Vulcan situation.

  10. Tyler says (responding to Dwarkesh’s suggestion) that maybe the impact of AI will be like the impact of Jews in the 20th century, in terms of innovation and productivity, where they were 2% of the population and generated 20% of the Nobel Prizes. That what matters is the smartest model, not how many copies you have (or presumably how fast it can run). So once again, the expectation that the capabilities of these AIs will cap out in intelligence, capabilities and affordances essentially within the human range, even with our access to them to help us go farther? I again don’t get why we would expect that.

  11. Tyler says existential risk is indeed one of the things we should be most thinking about. He would change his position most if he thought international cooperation were very possible or no other country could make AI progress, this would cause very different views. He notices correctly that his perspective is more pessimistic than what he would call a ‘doomer’ view. He says he thinks you cannot ‘just wake up in the morning and legislate safety.’

  12. In the weak sense, well, of course you can do that, the same way we legislate safe airplanes. In the strong sense, well, of course you cannot do that one morning, it requires careful planning, laying the groundwork, various forms of coordination including international coordination and so on. And in many ways we don’t know how to get safety at all, and we are well aware of many (although doubtless not all) of the incentive issues. This is obviously very hard. And that’s exactly why we are pushing now, to lay groundwork now. In particular that is why we want to target large training runs and concentrations of compute and high end chips, where we have more leverage. If we thought you could wake up and do it in 2027, then I would be happy to wait for it.

  13. Tyler reiterates that the only safety possible here, in his view, comes from a hegemon that stays good, which he admits is a fraught proposition on both counts.

  14. His next book is going to be The Marginal Revolution, not about the blog but about the actual revolution, only 40k words. Sounds exciting; I predict I will review it.

Concluding AI Thoughts

So in the end, if you combine his point that he would think very differently if international coordination were possible or others were rendered powerless, his need for a hegemon if we want to achieve safety, and clear preference for the United States (or one of its corporations?) to take that role if someone has to, and his statement that existential risk from AI is indeed one of the top things we should be thinking about, what do you get? What policies does this suggest? What plan? What ultimate world?

As he would say: Solve for the equilibrium.