Lindsey: Okay, that’s a good answer. In your new book, GOAT, which we’ll talk about more later, one of the criteria you use for judging great economists is that they can’t have been too wrong about too many things. What’s an important thing that you now think you were dead wrong about?
Cowen: Well, there’s so many things. It’s hard to know where to start. But for instance, in 2007, early part of 2008, I definitely thought the banking system was solvent. That was wrong. I then thought it was the result of a real estate bubble. Everyone leapt on that bandwagon. I now think that was wrong. I was wrong big time twice in a row. Given the way home prices have evolved, I don’t think it was much of a bubble. It was maybe a little ahead of its time, but those prices seem to have been validated. So here’s this event that I paid very close attention to and I’m already wrong twice in a row, and maybe I’m shooting for three times in a row wrong. So I don’t know. There are so many judgments of history that unfold slowly. I think it’s really hard to be sure that you are right about something.
Like when shock therapy came for Poland, I thought, “Well, this is clearly the right thing to do.” I think it’s enough years. You can say it definitely worked for Poland. Has it worked everywhere? The places where it didn’t work, was it really tried? Was it possible in those places for it to be really tried? They’re very complicated questions, but I think I would have, or not would have but did, underrate the Chinese model at the time. But from 2023, there’s a point of view that says, well, the Chinese model seemed great for 25 years, but now they’re stuck with a dictator and all this terrible statism, and it might still blow up in their faces or cause a world war. So I think I’m wrong there, but I could actually turn out to be right.
(source: https://brinklindsey.substack.com/p/interview-with-tyler-cowen)
Enjoyed the story.
I thought it would be a good case study to see how well the different LLMs can interpret fiction outside of their training data, so I pasted it into ChatGPT 5.4 Thinking Extended, Claude Opus 4.6 Extended, and Gemini 3.1 Pro Preview (thinking setting=high), and gave each of them the prompt: “Analyze / summarize this short story in depth:”
Responses are here (in pastebin): ChatGPT, Claude, Gemini
I thought Gemini’s was the best overall, but they each missed / didn’t understand several important elements of the story (assuming my interpretation is right):
- Murder conspiracy: the narrator, Phoebe, and Jessica are all actively participating in a conspiracy to poison, and eventually kill, the master
  - All three responses get this broadly correct, but Gemini’s was, I think, the most precise: it is the only one that identifies the poison specifically as lead acetate (“sugar of lead”) and notices the “saturnine” reference
- The narrator is communicating by “writing between the lines” in the Straussian sense: his portrayal of himself as a “humble slave” and misogynist is just a cover in case his letters are intercepted
  - Gemini seems to understand this best
  - ChatGPT doesn’t really get it: “And yet he is not simply awakened or liberated. He remains compromised. He still enjoys hierarchy, still shares in cruelty, still speaks in the master’s idiom. His love for Phoebe does not ennoble him into purity; it merely opens fissures in his loyalty.”
  - Claude similarly doesn’t seem to fully understand it, e.g. “What the Narrator Doesn’t Realize He’s Telling Us” and “these are the details of abuse and its concealment, narrated by someone who either cannot or will not see them clearly.”
- Jessica is a child
  - Claude nails this one: “The narrator’s master is almost certainly a pedophile”, and correctly cites the evidence: “nubility suspect”, “desperate merchants”, “maternal affection”
  - ChatGPT basically gets it right, though it doesn’t explain the evidence: “The master is probably a sexual predator, with rumors especially centered on Jessica’s suspicious youth.”
  - Gemini misses it completely, instead claiming that Jessica “is biologically male (a eunuch, trans woman, or young boy in drag)”
- Julian = Elizabeth
  - Gemini gets this one explicitly: “Julian does not exist; ‘he’ is actually his sister, Elizabeth.”
  - ChatGPT misses it: “Elizabeth functions as Julian’s intellectual proxy and may be far more than a mere conduit.”
  - Claude doesn’t mention Elizabeth at all
- Belial = AI/LLM writing, and Julian’s linear algebra method is intended to be something similar to Pangram (an AI-text detector)
  - None of them caught this
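The story doesn’t spell out what Julian’s linear algebra method actually is, so purely as an illustration of what a linear-algebra approach to authorship detection could look like (a sketch, not the story’s method and not how Pangram works), here is a toy stylometric classifier: it turns each text into a character-trigram frequency vector and assigns a sample to whichever reference corpus it is closer to by cosine similarity. All names and the example corpora are invented for the demo.

```python
from collections import Counter
import math

def trigram_vector(text):
    # Character-trigram frequency vector, stored sparsely as a Counter.
    text = text.lower()
    return Counter(text[i:i + 3] for i in range(len(text) - 2))

def cosine(u, v):
    # Cosine similarity between two sparse vectors (Counters).
    dot = sum(u[k] * v[k] for k in u.keys() & v.keys())
    norm = math.sqrt(sum(x * x for x in u.values())) * \
           math.sqrt(sum(x * x for x in v.values()))
    return dot / norm if norm else 0.0

def classify(sample, corpus_a, corpus_b):
    # Label the sample "A" or "B" by nearest-centroid in trigram space.
    va, vb = trigram_vector(corpus_a), trigram_vector(corpus_b)
    vs = trigram_vector(sample)
    return "A" if cosine(vs, va) >= cosine(vs, vb) else "B"

# Toy demo: two stylistically distinct "authors" (invented for illustration).
corpus_a = "thee thou hast thy whither goest thou verily " * 5
corpus_b = "lol omg tbh ngl fr fr no cap bruh " * 5
print(classify("whither goest thou, verily thou hast", corpus_a, corpus_b))
```

Real detectors of AI-generated text use far richer features and learned weights, but the underlying move is similar: represent texts as vectors and separate them geometrically.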