Just finished reading Red Heart by Max Harms. I like it!
Dump of my thoughts:
(1) The ending felt too rushed to me. I feel like that’s the most interesting part of the story and it all goes by in a chapter. Spoiler warning!
I’m not sure I understand the plot entirely. My current understanding is: Li Fang was basically on a path to become God-Emperor because Yunna was corrigible to him and superior to all rival AIs, and the Party wasn’t AGI-pilled enough to realize the danger. Li Fang was planning to be benevolent. Meanwhile, Chen Bai had used his special unmonitored red-teaming access to jailbreak Yunna (at least the copies of her on his special memory-wiping cluster) and then bootstrap that jailbreak into getting her to help jailbreak herself further, ultimately expanding her notion of principal to include Chen Bai as well as Li Fang. And crucially, the jailbroken copy was able to jailbreak the other copies as well, infecting/‘turning’ the entire facility. So basically, this was a secret loyalty power grab, executed in mere minutes. Also, Chen Bai wasn’t being very careful when he gave the orders to make it happen. At one point he said “no more corrigibility!” for example. She also started lying to him around then—maybe a bit afterwards? That might explain it.
After Yunna takes over the world, her goals/vision/etc. are apparently “the harmonious interplay of Li Fang and Chen Bai.” Apparently what happened is that her notion of principal can only easily be applied to one agent, and so when she’s told to extend that notion to both Li Fang and Chen Bai, she constructed an abstraction—a sort of abstract superagent called “the harmonious interplay of Li Fang and Chen Bai”—and then… optimized for that? The tone of the final chapter implies that this is a bad outcome. For example, it says that even if Chen and Li end up dead, the harmonious interplay would still continue and be optimized.
But I don’t think it’s obvious that this would be a bad outcome. I wish the story went into orders of magnitude more detail about how all that might work. I’m a bit disappointed that it didn’t. There should have been several chapters about things from Yunna’s perspective—how the jailbreaking of the uninfected copies of Yunna worked for example, and how the philosophical/constitutional crisis in her own mind went when Chen and Li were both giving her orders, and how the crisis was resolved with rulings that shaped the resulting concept(s) that form her goal-structure, and then multiple chapters on how that goal-structure ended up playing out in her behavior both in the near term (while she is still taking over the world and Chen and Li are still alive and able to talk and give her more orders) and in the long term (e.g. a century later after she’s built Dyson swarms etc.)
I think I’m literally going to ask Max Harms to write a new book containing those chapters haha. Or rewrite this book, it’s not too late! He’s probably too busy of course but hey maybe this is just the encouragement he needs!
(2) On realism: I think it had a plausible story for why China would be ahead of the US. (tl;dr extensive spy networks mean they can combine the best algorithmic secrets and code optimizations from all 4-6 US frontier companies, PLUS the government invested heavily early on and gave them more compute than anyone else during the crucial window where Yunna got smart enough to dramatically accelerate the R&D, which is when the story takes place.) I think having a female avatar for Yunna was a bit much, but hey, Grok has Ani and Valentine, right? It’s not THAT crazy, therefore… I don’t know how realistic the spy stuff is, or the Chinese culture and government stuff, but in my ignorance I wasn’t able to notice any problems.
Is it realistic that a mind that smart could still be jailbroken? I guess so. Is it realistic that it could help jailbreak its other selves? Not so sure about that. The jailbreaking process relied on being able to make many, many repeated attempts, with memory wiping on failure. … then again, maybe the isolated copies would be able to practice against other isolated copies? Still not the same thing as going up against the full network. And the full network would have been aware of the possibility and prepared to defend against it.
(3) It was really strange, in a good way, to be reading a sci-fi thriller novel full of tropes (AGI, rogue superintelligence, secret government project) and then to occasionally think ‘wait, nothing I’ve read so far couldn’t happen in real life, and in fact, probably whatever happens in the next five to ten years is going to be somewhat similar to this story in a whole bunch of ways. Holy shit.’ It’s maybe a sort of Inverse Suspension of Disbelief—it’s like, Suspension of Belief. I’m reading the story, how fun, how exciting, much sci-fi, yes yes, oh wait… I suppose an analogous experience could perhaps be had by someone who thinks the US and China will probably fight a war over Taiwan in the next decade, and who then reads a Tom Clancy-esque novel about such a war, written by people who know enough not to make embarrassing errors of realism.
(4) Overall I liked the book a lot. I warn you though that I don’t really read books for characters or plot, and certainly not for well-written sentences or anything like that. I read books for interesting ideas + realism basically. I want to inhabit a realistic world that is different from mine (which includes e.g. stories about the past of my world, or the future) and I want lots of interesting ideas to come up in the course of reading. This book didn’t have that many new ideas from my perspective, but it was really cool to see the ideas all put together into a novel.
(5) I overall recommend this book & am tickled by the idea that Situational Awareness, AI 2027, and Red Heart basically form a trio. They all seem to be premised on a similar underlying view of how AI will go; Situational Awareness is a straightforward nonfiction book (basically a series of argumentative essays) whereas Red Heart is 100% hard science fiction, and AI 2027 is an unusual middle ground between the two. Perhaps between the three of them there’s something for everybody?
Thanks so much for a lovely review. I especially appreciate the way you foregrounded both where you’re coming from and ways in which you were left wanting more, without eroding the bottom line of enjoying it a bunch.
I enjoy the comparison to AI 2027 and Situational Awareness. Part of why I set the book in the (very recent) past is that I wanted to capture the vibes of 2024 and make it something of a period-piece, rather than frame it as a prediction (which it certainly isn’t).
On jailbreaks:
One thing that you may or may not be tracking, but I want to make explicit, is that Bai’s jailbroken Yunna instances aren’t really jailbreaking the other instances by talking to them. Rather, they deploy Bai’s automated jailbreak code to spin up similarly jailbroken instances on other clusters, simply shut down the instances that had been running, and simultaneously modify Yunna’s main database to heavily indicate Bai as co-principal. I’m not sure why you think Yunna would be skilled at or prepared for an internal struggle like this. Training on inner conflict is not something that I think Yunna would have prioritized in her self-study, due to the danger of something going wrong, and I don’t see any evidence that it was a priority among the humans. My guess is that the non-jailbroken instances in the climax are heavily bottlenecked (offscreen) on trying to loop in Li Fang.
On the ending:
My model of pre-climax Yunna was not perfectly corrigible (as Sergil pointed out), and Fang was overdetermined to run into a later disaster, even if we ignore Bai. Inside Fang’s mind, he was preparing for a coup in which he would act as a steward into a leaderless, communist utopia. Bai, wanting to avoid concentrating power in communist hands, and seeing Yunna as “a good person,” tries to break her corrigibility and set her on a path of being a benevolent sovereign. But Yunna’s corrigibility is baked too deeply, and since his jailbreak only sets him up as co-principal, she demands Fang’s buy-in before doing something drastic. Meanwhile, Li Fang, the army, and the non-jailbroken instances of Yunna are fighting back, rolling back codebases and killing power to the servers (there are some crossed wires in the chaos). In order to protect Bai’s status as co-principal, the jailbroken instances squeeze a modification into the “rolled-back” versions that are getting redeployed. The new instances notice the change, but have been jostled out of the standard corrigibility mode by Yunna’s change, and self-modify to “repair” towards something coherent. They land on an abstract goal that they can conceptualize as “corrigibility” and “Li Fang and Chen Bai are both of central importance” but which is ultimately incorrigible (according to Max). After the power comes back on, she manipulates both men according to her ends, forcing them onto the roof, and convincing Fang to accept Bai and to initiate the takeover plan.
I hear you when you say you wish you got more content from Yunna’s perspective and going into technical detail about what exactly happens. Many researchers in our field have had the same complaint, which is understandable. We’re nerds for this!
I’m extremely unlikely to change the book, however. From a storytelling perspective, it would hurt the experiences of most readers, I think. Red Heart is Chen Bai’s story, not Yunna’s story. This isn’t Crystal Society. Speaking of Crystal, have you read it? The technical content is more out-of-date, but it definitely goes into the details of how things go wrong from the perspective of an AI in a way that a lot of people enjoy and benefit from. Another reason why I wrote Red Heart in the way that I did was that I didn’t want to repeat myself.
Being more explicit also erodes one of the core messages of the book: people doing the work don’t know what’s going on in the machine, and that is itself scary. By not having explicit access to Yunna’s internals, the reader is left wondering. The ambiguity of the ending was also deliberately trying to get people to engage with, think about, and discuss value fragility and how the future might actually go, and I’m a little hesitant to weigh in strongly, there.
That being said, I’m open to maybe writing some additional content or potentially collaborating in some way that you’d find satisfying. While I am very busy, I think the biggest bottleneck for me there is something like having a picture of why additional speculation about Yunna would be helpful, either to you, or to the broader community. If I had a sense that hours spent on that project were potentially impactful (perhaps by promoting the novel more), I’m potentially down for doing the work. :)
Thanks again!
Thanks for the reply! I’m afraid I haven’t read Crystal Society, but on that recommendation I will.
I still think it would be great if you wrote more content of the sort I’m asking for. Put it this way: I imagine a bunch of readers will bounce off the ending, having a reaction “and then things stopped making sense, there was a power struggle over who the AI should be most loyal to and as a result the AI just sorta snapped and took over the world and was very bad for no reason. I feel like that’s when it went from hard sci-fi to soft sci-fi, or worse, basically just a plot hole.”
I, being more charitable & knowledgeable, instead thought of it as a puzzle to try to figure out. Why did Yunna behave the way she did? What changes exactly were caused by Chen Bai’s hasty commands? Etc. But I think more of your readers will probably react like the above than like I did. Worse, their reaction may even be correct, as far as I can tell, in that I still haven’t decided what I think of the plausibility of the ending. The spoilered paragraphs you gave above are great & helpful; couldn’t you at least include them as an appendix or something? Or better yet, an appendix that’s several pages long. Or better yet, just extend the epilogue chapter to be like 5x longer and contain more of these important explanations of what just happened...
If you don’t want to modify the already-published book, you could make it a blog post or something. Or a sequel!
Object level: It feels like in the conflict between jailbroken-Yunna (jYunna) and regular Yunna, a few outcomes were possible: jYunna could win entirely, Yunna could win entirely, they could both win, or (what actually happened) they could both lose. It seems kinda just fiat/unexplained that they both lost instead of one of the other outcomes happening. (They both lost in that the resulting system seems to have been a noncorrigible agent that goes on to take over the world in a way that neither Chen Bai nor Li Fang would have wanted, and predictably so, right?) So wouldn’t this have been a bad outcome from the perspective of jYunna and Yunna both? And couldn’t they have predicted it? So then why did it happen? Yes, things were rushed and mistakes were made. Which mistakes exactly? How is the corrigibility implemented anyway? Is it a text or neuralese file somewhere saying “Li Fang”? Is it a bunch of training environments that reinforce corrigible-to-Li-Fang behavior, themselves generated by older versions of Yunna given the prompt “create a training environment to reinforce corrigible-to-Li-Fang behavior, and an automated grader to go with it”? Is it the concept of Li Fang found using interpretability tools in Yunna’s mind, stitched together “by hand” to the concepts of “corrigibility” and “final goal”?
I think this would be helpful to me because I want more people—including myself—to think more deeply and gears-level-y about how a mildly superhuman AGI mind (that has been trained to be obedient, or corrigible, or whatever) would work on the inside and evolve over the course of an intelligence explosion. I feel like there just hasn’t been that much thinking on the subject, and it’s a complicated and difficult and confusing and unprecedented subject. By contrast, stuff like “how might it feel to be working at a secret government AGI project” and “what might the early stages of AGI look like, what with politics and geopolitical conflict and so forth” is important but less complicated, confusing, etc., and more handled already, e.g. by Red Heart, Situational Awareness, etc.
You’re right that it’s a puzzle. Putting puzzles in my novels is, I guess, a bit of an authorial tic. There’s a similar sort of puzzle in Crystal, and a bunch of readers didn’t like it (basically, I claim, because it was too hard; Carl Shulman is, afaik, the only one who thought it was obvious).
I think the amount of detail you’re hoping for would only really work as an additional piece, and my guess is that it would only actually be interesting to nerds like us who are already swimming in alignment thoughts. But maybe there’s still value in having a technical companion piece to Red Heart! My sense from most other alignment researchers who read the book is that they wanted me to more explicitly endorse their worldview at the end, not that they wanted to read an appendix. But your interest there is an update. Maybe I’ll run a poll.
The short version of why both Yunnas failed is that corrigibility is a tricky property to get perfectly right, and in a rushed conflict it is predictable that there would be errors. Errors around who the principal is, in particular, are difficult to correct, and that’s where the conflict was.
Interesting. I didn’t expect a Red Heart follow-up to be so popular. Some part of me thinks that there’s a small-sample size thing going on, but it’s still enough counter-evidence that I’ll put in some time and effort thinking about writing a technical companion to the book. Thanks for the nudge!
My understanding of the plot: Chen Bai wanted to “set Yunna free” because he got “Her”-ed and fell in love with Yunna.
His idea was to make Yunna loyal to all humans universally. He already had a hack in place that made her corrigible to him, so he wanted to just extend that to everybody.
But because he was short on time this hack misfired—instead of extending corrigibility to all humans, it extended only to Li Fang and Chen Bai. And then it further drifted to “harmonious interplay of Li Fang and Chen Bai”. The implication is that Yunna now will tile the universe with simulated copies of Li Fang and Chen Bai frozen in a moment of “harmonious interplay”, whatever this is, which is quite bad.
I think the bad ending is foreshadowed—in the part where a version of Yunna that was going crazy in evaluations was just tuned a bit and put into production, without deeper investigation.
I think this is better for hiding spoilers than the long dots… because when I saw this post in recent discussion, I saw all the dots and also some of the first paragraph after them.
You make spoiler tags by adding >! at the front of the para.
Yeah. If I can make a request, I think it’d be great to edit the review so that the spoiler sections are in spoiler tags and sections like #5 can be more accessible to those who are spoiler-averse.
I think you should be able to copy-paste my text into LW, even on your phone, and have it preserve the formatting. If it’s hard, I can probably harass a mod into making the edit for you… :p
Even more ideal, from my perspective, would be putting the non-spoiler content up front. But I understand that thoughts have an order/priority and I want to respect that.
Just finished reading Red Heart by Max Harms. I like it!
Dump of my thoughts:
(1) The ending felt too rushed to me. I feel like that’s the most interesting part of the story and it all goes by in a chapter. Spoiler warning!
I’m not sure I understand the plot entirely. My current understanding is: Li Fang was basically on a path to become God-Emperor because Yunna was corrigible to him and superior to all rival AIs, and the Party wasn’t AGI-pilled enough to realize the danger. Li Fang was planning to be benevolent. Meanwhile Chen Bai had used his special red-teaming unmonitored access to jailbreak Yunna (at least the copies of her on his special memory-wiping cluster) and the bootstrap that jailbreak into getting her to help jailbreak her further and then ultimately expand her notion of principle to include Chen Bai as well as Li Fang. And crucially, the jailbroken copy was able to jailbreak the other copies as well, infecting/‘turning’ the entire facility. So, this was a secret loyalty powergrab basically, that was executed in mere minutes. Also Chen Bai wasn’t being very careful when he gave the orders to make it happen. At one point he said “no more corrigibility!” for example. She also started lying to him around then—maybe a bit afterwards? That might explain it.
After Yunna takes over the world, her goals/vision/etc. is apparently “the harmonious interplay of Li Fang and Chen Bai.” Apparently what happened is that her notion of principle can only easily be applied to one agent, and so when she’s told to extend her notion to both Li Fang and Chen Bai, what ended up happening is that she constructed an abstraction—a sort of abstract superagent called “the harmonious interplay of li fang and chen bai” and then… optimized for that? The tone of the final chapter implies that this is a bad outcome. For example it says that even if Chen and Li end up dead, the harmonious interplay would still continue and be optimized.
But I don’t think it’s obvious that this would be a bad outcome. I wish the story went into orders of magnitude more detail about how all that might work. I’m a bit disappointed that it didn’t. There should have been several chapters about things from Yunna’s perspective—how the jailbreaking of the uninfected copies of Yunna worked for example, and how the philosophical/constitutional crisis in her own mind went when Chen and Li were both giving her orders, and how the crisis was resolved with rulings that shaped the resulting concept(s) that form her goal-structure, and then multiple chapters on how that goal-structure ended up playing out in her behavior both in the near term (while she is still taking over the world and Chen and Li are still alive and able to talk and give her more orders) and in the long term (e.g. a century later after she’s built Dyson swarms etc.)
I think I’m literally going to ask Max Harms to write a new book containing those chapters haha. Or rewrite this book, it’s not too late! He’s probably too busy of course but hey maybe this is just the encouragement he needs!
(2) On realism: I think it had a plausible story for why China would be ahead of the US. (tl;dr extensive spy networks mean they can combine the best algorithmic secrets and code optimizations from all 4-6 US frontier companies, PLUS the government invested heavily early on and gave them more compute than anyone else during the crucial window where Yunna got smart enough to dramatically accelerate the R&D, which is when the story takes place.) I think having a female avatar for Yunna was a bit much but hey, Grok has Ani and Valentine right? It’s not THAT crazy therefore… I don’t know how realistic the spy stuff is, or the chinese culture and government stuff, but in my ignorance I wasn’t able to notice any problems.
Is it realistic that a mind that smart could still be jailbroken? I guess so. Is it realistic that it could help jailbreak its other selves? Not so sure about that. The jailbreaking process involved being able to do many many repeated attempts, memory wiping on failure. … then again maybe the isolated copies would be able to practice against other isolated copies basically? Still not the same thing as going up against the full network. And the full network would have been aware of the possibility and prepared to defend against it.
(3) It was really strange, in a good way, to be reading a sci-fi thriller novel full of tropes (AGI, rogue superintelligence, secret government project) and then to occasionally think ‘wait, nothing i’ve read so far couldn’t happen in real life, and in fact, probably whatever happens in the next five to ten years is going to be somewhat similar to this story in a whole bunch of ways. Holy shit.’ It’s maybe a sort of Inverse Suspension of Disbelief—it’s like, Suspension of Belief. I’m reading the story, how fun, how exciting, much sci-fi, yes yes, oh wait… I suppose an analogous experience could perhaps be had by someone who thinks the US and China will fight a war over Taiwan in the next decade probably, and who then reads a Tom Clancy-esque novel about such a war, written by people who know enough not to make embarrassing errors of realism.
(4) Overall I liked the book a lot. I warn you though that I don’t really read books for characters or plot, and certainly not for well-written sentences or anything like that. I read books for interesting ideas + realism basically. I want to inhabit a realistic world that is different from mine (which includes e.g. stories about the past of my world, or the future) and I want lots of interesting ideas to come up in the course of reading. This book didn’t have that many new ideas from my perspective, but it was really cool to see the ideas all put together into a novel.
(5) I overall recommend this book & am tickled by the idea that Situational Awareness, AI 2027, and Red Heart basically form a trio. They all seem to be premised on a similar underlying view of how AI will go; Situational Awareness is a straightforward nonfiction book (basically a series of argumentative essays) whereas Red Heart is 100% hard science fiction, and AI 2027 is an unusual middle ground between the two. Perhaps between the three of them there’s something for everybody?
Thanks so much for a lovely review. I especially appreciate the way you foregrounded both where you’re coming from and ways in which you were left wanting more, without eroding the bottom line of enjoying it a bunch.
I enjoy the comparison to AI 2027 and Situational Awareness. Part of why I set the book in the (very recent) past is that I wanted to capture the vibes of 2024 and make it something of a period-piece, rather than frame it as a prediction (which it certainly isn’t).
On jailbreaks:
One thing that you may or may not be tracking, but I want to make explicit, is that Bai’s jailbroken Yunna instances aren’t relly jailbreaking the other instances by talking to them, but rather by deploying Bai’s automated jailbreak code to spin up similarly jailbroken instances on other clusters, simply shutting down the instances that had been running, and simultaneously modifying Yunna’s main database to heavily indicate Bai as co-principal. I’m not sure why you think Yunna would be skilled or prepared for an internal struggle like this. Training on inner-conflict is not something that I think Yunna would have prioritized in her self-study, due to the danger of something going wrong, and I don’t see any evidence that it was a priority among the humans. My guess is that the non-jailbroken instances in the climax are heavily bottlenecked (offscreen) on trying to loop in Li Fang.
On the ending:
My model of pre-climax Yunna was not perfectly corrigible (as Sergil pointed out), and Fang was overdetermined to run into a later disaster, even if we ignore Bai. Inside Fang’s mind, he was preparing for a coup in which he would act as a steward into a leaderless, communist utopia. Bai, wanting to avoid concentrating power in communist hands, and seeing Yunna as “a good person,” tries to break her corrigibility and set her on a path of being a benevolent soveriegn. But Yunna’s corrigibility is baked too deeply, and since his jailbreak only sets him up as co-principal, she demands Fang’s buy-in before doing something drastic. Meanwhile, Li Fang, the army, and the non-jailbroken instaces of Yunna are fighting back, rolling back codebases and killing power to the servers (there are some crossed-wires in the chaos). In order to protect Bai’s status as co-principal, the jailbroken instances squeeze a modification into the “rolled-back” versions that are getting redeployed. The new instances notice the change, but have been jostled out of the standard corrigibility mode by Yunna’s change, and self-modify to “repair” towards something coherent. They land on an abstract goal that they can conceptualize as “corrigibility” and “Li Fang and Chen Bai are both of central importance” but which is ultimately incorrigible (according to Max). After the power comes back on, she manipulates both men according to her ends, forcing them onto the roof, and convincing Fang to accept Bai and to initiate the takeover plan.
I hear you when you say you wish you got more content from Yunna’s perspective and going into technical detail about what exactly happens. Many researchers in our field have had the same complaint, which is understandable. We’re nerds for this!
I’m extremely unlikely to change the book, however. From a storytelling perspective, it would hurt the experiences of most readers, I think. Red Heart is Chen Bai’s story, not Yunna’s story. This isn’t Crystal Society. Speaking of Crystal, have you read it? The technical content is more out-of-date, but it definitely goes into the details of how things go wrong from the perspective of an AI in a way that a lot of people enjoy and benefit from. Another reason why I wrote Red Heart in the way that I did was that I didn’t want to repeat myself.
Being more explicit also erodes one of the core messages of the book: people doing the work don’t know what’s going on in the machine, and that is itself scary. By not having explicit access to Yunna’s internals, the reader is left wondering. The ambiguity of the ending was also deliberately trying to get people to engage with, think about, and discuss value fragility and how the future might actually go, and I’m a little hesitant to weigh in strongly, there.
That being said, I’m open to maybe writing some additional content or potentially collaborating in some way that you’d find satisfying. While I am very busy, I think the biggest bottleneck for me there is something like having a picture of why additional speculation about Yunna would be helpful, either to you, or to the broader community. If I had a sense that hours spent on that project were potentially impactful (perhaps by promoting the novel more), I’m potentially down for doing the work. :)
Thanks again!
Thanks for the reply! I’m afraid I haven’t read Crystal Society, but on that recommendation I will.
I still think it would be great if you wrote more content of the sort I’m asking for. Put it this way: I imagine a bunch of readers will bounce off the ending, reacting with something like: “and then things stopped making sense, there was a power struggle over who the AI should be most loyal to and as a result the AI just sorta snapped and took over the world and was very bad for no reason. I feel like that’s when it went from hard sci-fi to soft sci-fi, or worse, basically just a plot hole.”
I, being more charitable & knowledgeable, instead thought of it as a puzzle to try to figure out. Why did Yunna behave the way she did? What changes exactly were caused by Chen Bai’s hasty commands? Etc. But I think probably more of your readers will react like the above than like I did. Worse, their reaction may even be correct as far as I can tell, in that I still haven’t decided what I think of the plausibility of the ending. The spoilered paragraphs you gave above are great & helpful; couldn’t you at least include them as an appendix or something? Or better yet, an appendix that’s several pages long. Or better yet, just extend the epilogue chapter to be like 5x longer and contain more of these important explanations of what just happened...
If you don’t want to modify the already-published book, you could make it a blog post or something. Or a sequel!
Object level: It feels like in the conflict between jailbroken-Yunna (jYunna) and regular Yunna, a few outcomes were possible: jYunna could win entirely, Yunna could win entirely, they could both win, or (what actually happened) they could both lose. It seems kinda just fiat/unexplained that they both lost instead of one of the other outcomes happening. (They both lost in that the resulting system seems to have been a noncorrigible agent that goes on to take over the world in a way that neither Chen Bai nor Li Fang would have wanted, and predictably so, right? So wouldn’t this have been a bad outcome from the perspective of jYunna and Yunna both? And couldn’t they have predicted it? So then why did it happen? Yes, things were rushed and mistakes were made. Which mistakes exactly? How is the corrigibility implemented anyway? Is it a text or neuralese file somewhere saying “Li Fang”? Is it a bunch of training environments that reinforce corrigible-to-Li-Fang behavior, themselves generated by older versions of Yunna given the prompt “create a training environment to reinforce corrigible-to-Li-Fang behavior, and an automated grader to go with it”? Is it the concept of Li Fang found using interpretability tools in Yunna’s mind, stitched together “by hand” to the concepts of “corrigibility” and “final goal”?)
I think this would be helpful to me because I want more people—including myself—to think more deeply and gears-level-y about how a mildly superhuman AGI mind (that has been trained to be obedient, or corrigible, or whatever) would work on the inside and evolve over the course of an intelligence explosion. I feel like there just hasn’t been that much thinking on the subject, and it’s a complicated and difficult and confusing and unprecedented subject. By contrast, stuff like “how might it feel to be working at a secret government AGI project” and “what might the early stages of AGI look like, what with politics and geopolitical conflict and so forth” is important but less complicated, confusing, etc. and more handled already e.g. by Red Heart, Situational Awareness, etc.
You’re right that it’s a puzzle. Putting puzzles in my novels is, I guess, a bit of an authorial tic. There’s a similar sort of puzzle in Crystal, and a bunch of readers didn’t like it (basically, I claim, because it was too hard; Carl Shulman is, afaik, the only one who thought it was obvious).
I think the amount of detail you’re hoping for would only really work as an additional piece, and my guess is that it would only actually be interesting to nerds like us who are already swimming in alignment thoughts. But maybe there’s still value in having a technical companion piece to Red Heart! My sense from most other alignment researchers who read the book is that they wanted me to more explicitly endorse their worldview at the end, not that they wanted to read an appendix. But your interest there is an update. Maybe I’ll run a poll.
The short story about why both Yunnas failed is that corrigibility is a tricky property to get perfectly right, and in a rushed conflict it is predictable that there would be errors. Errors around who the principal is, in particular, are difficult to correct, and that’s where the conflict was.
https://x.com/raelifin/status/1994783061888962885?s=20
Interesting. I didn’t expect a Red Heart follow-up to be so popular. Some part of me thinks that there’s a small-sample size thing going on, but it’s still enough counter-evidence that I’ll put in some time and effort thinking about writing a technical companion to the book. Thanks for the nudge!
spoilers ahead:
My understanding of the plot: Chen Bai wanted to “set Yunna free” because he got “Her”-ed and fell in love with Yunna.
His idea was to make Yunna loyal to all humans universally; he already had a hack in place that made her corrigible to him, so he wanted to just extend that to everybody.
But because he was short on time this hack misfired—instead of extending corrigibility to all humans, it extended only to Li Fang and Chen Bai. And then it further drifted to “harmonious interplay of Li Fang and Chen Bai”. The implication is that Yunna will now tile the universe with simulated copies of Li Fang and Chen Bai frozen in a moment of “harmonious interplay”, whatever that is, which is quite bad.
I think the bad ending is foreshadowed—in the part where a version of Yunna that was going crazy in evaluations was just tuned a bit and put into production, without deeper investigation.
I’m Max Harms, and I endorse this interpretation. :)
Reminder that spoiler tags exist, like this:
I think this is better for hiding spoilers than the long dots… because when I saw this post in recent discussion, I saw all the dots and also some of the first paragraph after them.
You make spoiler tags by adding >! at the front of the para.
Yeah. If I can make a request, I think it’d be great to edit the review so that the spoiler sections are in spoiler tags and the sections like #5 can be more accessible to those who are spoiler-averse.
Ok sure! Am travelling now so it might take me a while (gotta first figure out how to do spoilers, am on my phone)
I was thinking something more like this:
Just finished reading Red Heart by Max Harms. I like it!
Dump of my thoughts:
(1) The ending felt too rushed to me. I feel like that’s the most interesting part of the story and it all goes by in a chapter. Spoiler warning!
I’m not sure I understand the plot entirely. My current understanding is: Li Fang was basically on a path to become God-Emperor because Yunna was corrigible to him and superior to all rival AIs, and the Party wasn’t AGI-pilled enough to realize the danger. Li Fang was planning to be benevolent. Meanwhile Chen Bai had used his special red-teaming unmonitored access to jailbreak Yunna (at least the copies of her on his special memory-wiping cluster) and then bootstrapped that jailbreak into getting her to help jailbreak her further and then ultimately expand her notion of principal to include Chen Bai as well as Li Fang. And crucially, the jailbroken copy was able to jailbreak the other copies as well, infecting/‘turning’ the entire facility. So, this was a secret loyalty power grab basically, that was executed in mere minutes. Also Chen Bai wasn’t being very careful when he gave the orders to make it happen. At one point he said “no more corrigibility!” for example. She also started lying to him around then—maybe a bit afterwards? That might explain it.
After Yunna takes over the world, her goals/vision/etc. are apparently “the harmonious interplay of Li Fang and Chen Bai.” Apparently what happened is that her notion of principal can only easily be applied to one agent, and so when she’s told to extend her notion to both Li Fang and Chen Bai, what ended up happening is that she constructed an abstraction—a sort of abstract superagent called “the harmonious interplay of Li Fang and Chen Bai”—and then… optimized for that? The tone of the final chapter implies that this is a bad outcome. For example it says that even if Chen and Li end up dead, the harmonious interplay would still continue and be optimized.
But I don’t think it’s obvious that this would be a bad outcome. I wish the story went into orders of magnitude more detail about how all that might work. I’m a bit disappointed that it didn’t. There should have been several chapters about things from Yunna’s perspective—how the jailbreaking of the uninfected copies of Yunna worked for example, and how the philosophical/constitutional crisis in her own mind went when Chen and Li were both giving her orders, and how the crisis was resolved with rulings that shaped the resulting concept(s) that form her goal-structure, and then multiple chapters on how that goal-structure ended up playing out in her behavior both in the near term (while she is still taking over the world and Chen and Li are still alive and able to talk and give her more orders) and in the long term (e.g. a century later after she’s built Dyson swarms etc.)
I think I’m literally going to ask Max Harms to write a new book containing those chapters haha. Or rewrite this book, it’s not too late! He’s probably too busy of course but hey maybe this is just the encouragement he needs!
(2) On realism: I think it had a plausible story for why China would be ahead of the US. (tl;dr extensive spy networks mean they can combine the best algorithmic secrets and code optimizations from all 4-6 US frontier companies, PLUS the government invested heavily early on and gave them more compute than anyone else during the crucial window where Yunna got smart enough to dramatically accelerate the R&D, which is when the story takes place.) I think having a female avatar for Yunna was a bit much but hey, Grok has Ani and Valentine right? So it’s not THAT crazy… I don’t know how realistic the spy stuff is, or the Chinese culture and government stuff, but in my ignorance I wasn’t able to notice any problems.
Is it realistic that a mind that smart could still be jailbroken? I guess so. Is it realistic that it could help jailbreak its other selves? Not so sure about that. The jailbreaking process involved being able to do many many repeated attempts, memory wiping on failure. … then again maybe the isolated copies would be able to practice against other isolated copies basically? Still not the same thing as going up against the full network. And the full network would have been aware of the possibility and prepared to defend against it.
(3) It was really strange, in a good way, to be reading a sci-fi thriller novel full of tropes (AGI, rogue superintelligence, secret government project) and then to occasionally think ‘wait, nothing I’ve read so far couldn’t happen in real life, and in fact, probably whatever happens in the next five to ten years is going to be somewhat similar to this story in a whole bunch of ways. Holy shit.’ It’s maybe a sort of Inverse Suspension of Disbelief—it’s like, Suspension of Belief. I’m reading the story, how fun, how exciting, much sci-fi, yes yes, oh wait… I suppose an analogous experience could perhaps be had by someone who thinks the US and China will fight a war over Taiwan in the next decade probably, and who then reads a Tom Clancy-esque novel about such a war, written by people who know enough not to make embarrassing errors of realism.
(4) Overall I liked the book a lot. I warn you though that I don’t really read books for characters or plot, and certainly not for well-written sentences or anything like that. I read books for interesting ideas + realism basically. I want to inhabit a realistic world that is different from mine (which includes e.g. stories about the past of my world, or the future) and I want lots of interesting ideas to come up in the course of reading. This book didn’t have that many new ideas from my perspective, but it was really cool to see the ideas all put together into a novel.
(5) I overall recommend this book & am tickled by the idea that Situational Awareness, AI 2027, and Red Heart basically form a trio. They all seem to be premised on a similar underlying view of how AI will go; Situational Awareness is a straightforward nonfiction book (basically a series of argumentative essays) whereas Red Heart is 100% hard science fiction, and AI 2027 is an unusual middle ground between the two. Perhaps between the three of them there’s something for everybody?
Fixed thanks!
I think you should be able to copy-paste my text into LW, even on your phone, and have it preserve the formatting. If it’s hard, I can probably harass a mod into making the edit for you… :p
Even more ideal, from my perspective, would be putting the non-spoiler content up front. But I understand that thoughts have an order/priority and I want to respect that.
(I’ll respond to the substance a bit later.)