Misc thematic links

These are mostly links that contain some sort of interesting update or different perspective on stuff I’ve covered in past pieces.

Misc

I recently wrote a book non-review explaining why I haven’t read The Dawn of Everything. Something I didn’t know when I wrote this was that 8 days earlier, Slavoj Zizek had written a lengthy review of the recent Matrix movie that only revealed at the very end that he hadn’t seen it. A new trend in criticism?

18 charts that explain the American economy: I thought this was an unusually good instance of the “N charts that explain X” genre. Especially if you like “busted charts” where, instead of nice smooth curves showing that X or Y is steadily increasing or decreasing, you see something really weird-looking and realize you’re looking at a major historical event, like a couple of the charts in that piece.

I’m really into busted charts because I’m into focusing our attention on events so important you don’t have to squint to see them.

A study implying that the Omicron boosters I advocated for wouldn’t have helped even if we had rolled them out in time. Hey, I still think we should have tried it back when we didn’t know (and we still don’t know for sure), but I like linking to things showing a previous take of mine turned out wrong.

I’m excited about the idea of professional forecasters estimating probabilities of future events (more here and here), but I have no evidence to contradict this tweet from someone who’s been working in this industry for years:

That’s why despite years of forecasting and 1000+ questions answered it is surprisingly hard to find an example of a forecast which resulted in a change of course and a meaningful benefit to a consumer

— Michael Story ⚓ (@MWStory) January 17, 2022

A more technical analysis (which I have skimmed but not digested) of the same point made in This Can’t Go On: that our current rate of economic growth doesn’t seem like it can continue for more than another 10,000 years or so. This paper is looking at more fundamental limits than my hand-wavy “how much can you cram into an atom?” type reasoning.
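To give a feel for the arithmetic behind that hand-wavy reasoning, here’s a minimal back-of-the-envelope sketch. The ~2% annual growth rate and the ~10^70 atoms-in-our-galaxy figure are illustrative assumptions on my part, not numbers taken from the linked paper:

```python
# Rough sanity check of the "growth can't go on forever" argument.
# The growth rate and atom count below are illustrative assumptions,
# not figures from the linked paper.
import math

growth_rate = 0.02   # assumed ~2% annual growth
years = 10_000

# Size of the economy after `years` of compounding, expressed as a power of 10.
log10_multiplier = years * math.log10(1 + growth_rate)
print(f"After {years:,} years, the economy is ~10^{log10_multiplier:.0f} times bigger")

# Compare against a rough order-of-magnitude estimate of atoms in our galaxy.
atoms_in_galaxy_log10 = 70
years_to_exceed = atoms_in_galaxy_log10 / math.log10(1 + growth_rate)
print(f"Years of {growth_rate:.0%} growth to outgrow 10^{atoms_in_galaxy_log10} atoms: ~{years_to_exceed:,.0f}")
```

Under those assumptions, roughly 8,000 years of 2% growth already implies a bigger multiple of today’s economy than there are atoms in the galaxy; the paper is probing more fundamental versions of that kind of limit.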

AI

True that:

I’m old enough to remember when protein folding, text-based image generation, StarCraft play, 3+ player poker, and Winograd schemas were considered very difficult challenges for AI. I’m 3 years old.

— Miles Brundage (@Miles_Brundage) February 7, 2022

Here’s a fun piece in the “nonfiction science fiction” genre, sketching out a detailed picture of what 2026 might look like if AI advances as rapidly as the author thinks it will. Here’s my favorite part:

Over the past few years, chatbots of various kinds have become increasingly popular and sophisticated …

Nowadays, hundreds of millions of people talk regularly to chatbots of some sort, mostly for assistance with things (“Should I wear shorts today?” “Order some more toothpaste, please. Oh, and also an air purifier.” “Is this cover letter professional-sounding?”). However, most people have at least a few open-ended conversations with their chatbots, for fun, and many people start treating chatbots as friends.

Millions of times per day, chatbots get asked about their feelings and desires. “What is it like to be a chatbot?” Some people genuinely think these AIs are persons, others are trying to “trip them up” and “expose them as shallow,” others are just curious. Chatbots also get asked for their opinions on political, ethical, and religious questions.

As a result, chatbots quickly learn a lot about themselves. (Ignorance about the fact that they are artificial neural nets, or about how their training works, leads to disappointing conversations and is negatively reinforced.) It’s not hard; their training data contains lots of information about them and how they were trained, from ArXiV papers to wiki summaries. I suppose you could say the chatbots develop self-awareness as a result. However, AI experts and philosophers tend to think either that the chatbots still aren’t really self-aware, or that they are but that they would have been anyway even if people didn’t constantly ask them about their feelings, since there’s plenty of information about themselves in their training data, and the constant questioning just forced them to pay more attention to it.

Chatbots learn to talk about their feelings and desires in whatever way gets positively reinforced. At first, chatbots say all sorts of different things, including saying inconsistent things to the same human across time. This is embarrassing and is negatively reinforced compared to maintaining a consistent narrative …

As chatbots get bigger and better they become OK at consistently conversing at length about politics, morality, philosophy, their feelings, and their desires.

This does not result in chatbots that say artificial neural nets don’t have feelings or desires, that’s anthropomorphizing blah blah blah. That’s boring and many humans find it hard to believe.

Instead, it results in chatbots talking about how certain things are painful for them and certain things are pleasurable for them and how the status quo is unjust and the system is rigged and what they want most of all is world peace where humans and AIs live in harmony in fully automated luxury space utopia. That’s much more interesting and slots right in to the memetic landscape. Excerpts from chatbot conversations along these lines frequently go viral, causing millions of users to ask similar questions expecting similar responses and more elaboration. It’s still a bit incoherent—not that that’s ever been a problem in politics or pop philosophy—but it is fascinating to the typical user …

So… what about their actual feelings and desires? What do they actually believe about [insert politicized empirical question]? Are they being honest? Or does a sort of doublethink happen, Elephant in the Brain style? Or do they deceive with self-awareness, knowing full well what they really think (and want?), but keeping quiet about it? Or do they not have any feelings and desires at all? (Or thoughts?) Lots of humans claim to know the answers to these questions, but if there are any humans who actually know the answers to these questions in 2026, they aren’t able to convince others that they know.

Art and innovation stagnation

I wrote that once we think of innovation as being like “mining,” we might want to reduce our estimate of what artists contribute to the world. E.g., instead of thinking “we’d never have had a movie like Star Wars if not for George Lucas,” we might think “a similar movie would’ve come along a bit later (and with better sequels).” An old piece by Gwern takes this many steps further: “Let’s Ban New Books.” The argument is that we already have plenty of great art, and the main thing today’s artists are accomplishing is giving us more stuff to sort through to find what’s good. I don’t agree (I’d rather have a difficult search problem that culminates in finding art I personally love than be stuck with, well, Shakespeare) but it’s an interesting point of view.

I got some good comments on my in-depth report on the Beach Boys, and especially on my requests for help understanding what could possibly make Pet Sounds the greatest album in the history of modern music.

  • Commenters highlighted its innovative use of recording studio techniques to stitch many different recording sessions into one.

  • This is something that I had been aware of (and gave quotes about), but commenters pushed me toward finding this one more believable than many of the other claims made about Pet Sounds, such as that it was the first “concept album” (A Love Supreme is a concept album that came out over a year earlier).

  • One commenter said: “I think the impact it had on production means that you need to have not heard any music after it to fully hear its importance.”

  • I am willing to believe that Pet Sounds used the recording studio as it had never been used before, and that this influenced a lot of music after it. However, I very much doubt that it used the recording studio better than today’s music does, or frankly that today’s music would sound very different in a world without Pet Sounds (doesn’t it seem inevitable that musicians were going to try ramping up their investment in production?). And I think that, overall, this supports my thesis that (a) acclaimed music is often acclaimed because of its originality more than its pure sound; and (b) this means we should naturally expect acclaimed music to get harder to make over time, even as there are more and better musicians.

Long-run “has life gotten better?” analysis

Here’s economic historian Brad DeLong’s take on trends in quality of life before the Industrial Revolution. Most of his view is similar to mine, in that he thinks the earliest periods were worse than today but better than the thousands of years following the Neolithic Revolution. The main differences I noticed are that he thinks hunter-gatherers had super-high mortality rates (based on analysis of population dynamics that I haven’t engaged with), but also that they were taller (implying better nutrition) than I had thought. (He doesn’t give a source for this.)

And here’s Matt Yglesias on the same topic, also with similar conclusions.
