“Empire of AI” by Karen Hao was a nice read that I would recommend. It’s half hit piece on how OpenAI’s corporate culture has evolved (with a focus on Sam Altman and his two-faced politicking), and half illustration of how frontier AI labs are “empires” that extract resources from the Global South (such as potable water for data center cooling and cheap labor for data labeling).
Below I collect some quotes from the book that illustrate how Sam Altman is manipulative and power-seeking, and accordingly why I find it frightening that he wields so much power over OpenAI.
There is some irony in the fact that I’ve put together a quote compilation focused on Sam Altman, when one of the main themes of the book is that the AI industry ignores the voices of powerless people, such as those in the Global South. Sorry about that.
Regarding Sam Altman’s early years running Loopt (early 2010s):
In [storytelling] Altman is a natural. Even knowing as you watch him that his company would ultimately fail, you can’t help but be compelled by what he’s saying. He speaks with a casual ease about the singular positioning of his company. His startup is part of the grand, unstoppable trajectory of technology. Consumers and advertisers are clamoring for the service. Don’t bet against him—his success is inevitable. (pg. 33)
“Sam remembers all these details about you. He’s so attentive. But then part of it is he uses that to figure out how to influence you in different ways,” says one person who worked several years with him. “He’s so good at adjusting to what you say, and you really feel like you’re making progress with him. And then you realize over time that you’re actually just running in place.” (pg. 34-35)
[Altman] sometimes lied about details so insignificant that it was hard to say why the dishonesty mattered at all. But over time, those tiny “paper cuts,” as one person called them, led to an atmosphere of pervasive distrust and chaos at the company. (pg. 35)
Regarding Sam Altman’s time running YC (mid 2010s):
A few years in [to running YC], he had refined his appearance and ironed out the edges. He’d traded in T-shirts and cargo shorts for fitted Henleys and jeans. He’d built eighteen pounds of muscle in a single year to flesh out his small frame. He learned to talk less, ask more questions, and project a thoughtful modesty with a furrowed brow. In private settings and with close friends, he still showed flashes of anger and frustration. In public ones and with acquaintances, he embodied the nice guy. [...] He avoided expressing negative emotions, avoided confrontation, avoided saying no to people. (pg. 42)
Ilya Sutskever to Sam Altman (2017):
“We don’t understand why the CEO title is so important to you [...] Your stated reasons have changed, and it’s hard to really understand what’s driving it. Is AGI *truly* your primary motivation? How does it connect to your political goals? How has your thought process changed over time?” (pg. 62)
Sam Altman’s shift away from YC to OpenAI in 2019:
The media widely reported Altman’s move as a well-choreographed step in his career and his new role as YC chairman. Except that he didn’t actually hold the title. He had proposed the idea to YC’s partnership but then publicized it as if it were a foregone conclusion, without their agreement [...] (pg. 69)
Sam Altman’s early dealings with Microsoft in 2019:
[AI safety researchers at OpenAI] were stunned to discover the extent of the promises that Altman had made to Microsoft for which technologies it would get access to in return for its investment. The terms of the deal didn’t align with what they had understood from Altman. (pg. 145)
Again in 2020:
Altman had made each of OpenAI’s decisions about the Microsoft deal and GPT-3’s deployment a foregone conclusion, but he had maneuvered and manipulated dissenters into believing they had a real say until it was too late to change course. (pg. 156)
Prior to the release of DALL-E 2 in 2022:
In private conversations with Safety, Altman expressed sympathy for their perspective, agreeing that the company was not on track with its AI safety research and needed to invest more. In private conversations with Applied, he pressed them to keep going. (pg. 240)
Sam Altman in 2019 on Conversations with Tyler:
“The way the world was introduced to nuclear power is an image that no one will ever forget, of a mushroom cloud over Japan [...] I’ve thought a lot about why the world turned against science, and one answer of many that I am willing to believe is that image, and that we learned that maybe some technology is too powerful for people to have. People are more convinced by imagery than facts.” (pg. 317)
Not consistently candid part 1 (in 2022):
Altman had highlighted the strong safety and testing protocols that OpenAI had put in place with the Deployment Safety Board to evaluate GPT-4’s deployment. After the meeting, one of the independent directors was catching up with an employee when the employee noted that a breach of the DSB protocols had already happened. Microsoft had done a limited rollout of GPT-4 to users in India, without the DSB’s approval. Despite spending a full day holed up in a room with the board for the on-site, Altman had not once notified them of the violation. (pg. 323-24)
Not consistently candid part 2 (in 2023):
Recently, [Altman] had told Murati he thought that OpenAI’s legal team had cleared GPT-4 Turbo for skipping DSB review. But when Murati checked in with Jason Kwon, who oversaw the legal team, Kwon had no idea how Altman had gotten that impression. (pg. 346)
In 2023, leading up to Altman being fired as CEO from OpenAI:
Murati had attempted to give Altman detailed feedback on the accelerating issues, hoping it would prompt self-reflection and change. Instead, he had iced her out [...] She had seen him do something similar with other executives: If they disagreed with or challenged him, he could quickly cut them out of key decision-making processes or begin to undermine their credibility. (pg. 347)
Murati on Musk vs. Altman:
Musk would make a decision and be able to articulate why he’d made it. With Altman, she was often left guessing whether he was truly being transparent with her and whether the whiplash he caused was based on sound reasoning or some hidden calculus. (pg. 362)
Not consistently candid part 3 (in 2023):
On the second day of the five-day board crisis, the directors confronted him during a mediated discussion about the many instances he had lied to them, which had led to their collapse of trust. Among the examples, they raised how he had lied to Sutskever about McCauley saying Toner should step off the board.
Altman momentarily lost his composure, clearly caught red-handed. “Well, I thought you could have said that. I don’t know,” he mumbled. (pg. 364)
In 2024:
In an office hours, [safety researchers] confronted Altman [regarding his plans to create an AI chip company]. Altman was uncharacteristically dismissive. “How much would you be willing to delay a cure for cancer to avoid risks?” he asked. He then quickly walked it back, as if he’d suddenly remembered his audience. “Maybe if it’s extinction risk, it should be infinitely long,” he said. (pg. 377-78)
In 2024, regarding Jan Leike’s departure:
“Of all the things Jan was worried about, Jan had no worries about the level of compute commit or the prioritization of Superalignment work, as I understand it,” Altman said. (pg. 387)
[Meanwhile Leike, two days later:] “Over the past few months my team has been sailing against the wind. Sometimes we were struggling for compute and it was getting harder and harder to get this crucial research done.” (pg. 388)
Altman in 2024 (this one seems worse than the goalpost shifting Anthropic has been doing with their RSP, yet it has received comparatively little discussion):
“When we originally set up the Microsoft deal, we came up with this thing called the sufficient AGI clause,” a clause that determined the moment when OpenAI would stop sharing its IP with Microsoft. “We all think differently now,” he added. There would no longer be a clean cutoff point for when OpenAI reached AGI. “We think it’s going to be a continual thing.” (pg. 402)
Well, that turned me off from wanting to read it. What’s the argument there? Water is almost always a local resource, especially at scale, not like fuels and food and minerals. Unless they’re trucking water in from thousands of miles away or building data centers in those regions without the infrastructure needed to supply themselves? In which case the local opposition to data center construction on water demand grounds makes even less sense. But I’d be baffled if they were doing that.
Yes, for example, the book describes Google’s efforts to build a data center in Chile.
> [...] Google said that its data center planned to use an estimated 169 liters of fresh drinking water per second to cool its servers. In other words, the data center could use more than one thousand times the amount of water consumed by the entire population of Cerrillos, roughly eighty-eight thousand residents, over the course of a year. [...] Not only would the facility be taking that water directly from Cerrillos’s public water source, it would do so at a time when the nation’s entire drinking water supply was under threat. (pg. 288)
It then describes the efforts of an advocacy organization MOSACAT to push back against Google’s plans.
Even though the aggregate emissions / water / climate impact of genAI is not super large (see e.g. this Google report), that doesn’t mean that negative consequences can’t be concentrated on small regions or populations. To use a different example, it can both be the case that a vanishingly small fraction of data annotators encounter disturbing content in their jobs, and also the case that those that do deserve better employee protections.
Anyway, this was just a single clause summarizing half a book; there is only so much detail it can get into, especially since the purpose of this post is different.
Ok, fair example. I still maintain that “the nation’s entire drinking water supply” is not actually a coherent, relevant concept. There are good reasons to build data centers in Chile—cheap wind and solar potential, for example. Could they really not have forced Google to commit to building a desal plant and associated power generation to offset their own water demand? That seems like a pretty clear negotiation failure but not necessarily Google’s responsibility. Or if the government honestly believes the water cost is worth it, are they wrong? Or was there actual corruption involved?
Sorry, not trying to derail a post that I actually liked and think is important. It just read to me like all the other misleading claims about data center water usage.