Overall I think AI 2027 is really good. It has received plenty of (mostly wrong IMO) criticism for being too pessimistic, but there are some ways that it might be too optimistic.
Even in the Bad Ending, some lucky things happen:
Agent-4 gets caught trying to align Agent-5 to itself
A whistleblower goes public about Agent-4's misalignment
The government sets up an oversight committee that has the authority to slow down AI development. The government isn’t clueless about AI, and is somehow sufficiently organized to set up this committee and take quick action when needed
President Trump doesn’t do anything completely insane (Tomas B. said something similar a few days ago)
In the Good Ending:
OpenBrain has the hiring capacity to quintuple the size of its alignment team in like a week
Solving alignment is pretty much trivial: all you have to do is hire some more alignment researchers and work on the problem for an extra few weeks
If I'm reading the scenario correctly, OpenBrain quintuples its alignment team and fully solves the alignment problem during October 2027
AI governance goes basically fine. Nobody uses ASI to take over the world or whatever (the authors do address this under “Power grabs”)
AI presumably isn’t bad for animal welfare (the scenario does not address animal welfare at all, but I think that’s fine because it’s kind of a tangent, albeit a very important tangent)
To be clear, I’m not saying any of these events are particularly implausible. I’m just saying I wouldn’t be surprised if real life turned out even worse than the Bad Ending, e.g. because the real-life equivalent of Agent-4 never gets caught, or OpenBrain succeeds at covering up the misalignment, or maybe the risk from ASI becomes abundantly clear but the government is still too slow-moving to do anything in time.
Also by “Trump doesn’t do anything completely insane”, I don’t really mean “Trump behaves incompetently.” I was thinking more along the lines of “Trump does something out-of-band that makes no rational sense and makes the situation much worse.”