Random note: Congressman Brad Sherman just held up If Anyone Builds It, Everyone Dies in a Congressional hearing and recommended it, saying (rough transcript, might be slight paraphrase): “they’re [the AI companies] not really focused on the issue raised by this book, which I recommend, but the title tells it all, If Anyone Builds It Everyone Dies”
I think this is a clear and unambiguous example of the theory of change of the book having at least one success—being an object that can literally be held and pointed to by someone in power.
It’s important context that Sherman was concerned about superintelligence risks, broadly construed, decades ago.
In 2007 he gave a speech in which he said:
There is one issue that I think is more explosive than even the spread of nuclear weapons: engineered intelligence. By that I mean, the efforts of computer engineers and bio-engineers who may create intelligence beyond that of a human being. In testimony at the House Science Committee, the consensus of experts testifying was that in roughly 25 years we would have a computer that passed the Turing Test, and more importantly, exceeded human intelligence.
As we develop more intelligent computers, we will find them useful tools in creating ever more intelligent computers, a positive feedback loop. I don’t know whether we will create the maniacal Hal from 2001, or the earnest Data from Star Trek—or perhaps both.
There are those who say don’t worry, even if a computer is intelligent and malevolent—it is in a box and it cannot affect the world. But I believe that there are those of our species who would sell their hands to Beelzebub, in return for a good stock tip.
How the heck has this guy been in Congress the whole time and we’ve not heard about him / he’s not been in contact with the AI x-risk scene?
I think this was a major dropped ball. We had mostly ruled out political advocacy, so there was no one trying to do the “make connections with congresspeople” work that would have caused us to discover that someone had been thinking of this as an important issue for years.
That said, I know that several x-risk orgs have been in contact with his office in recent years.
It is the case that I thought there was little point in building political connections on this issue; but the earlier and larger failure was that for the last two decades there have only ever been a handful of people working seriously on this problem to begin with, which means most balls would be dropped regardless.
That’s totally right—until like 2020 or so the community was small and underresourced, such that some things were inevitably going to get dropped.
But I think we also did a somewhat bad job of effectively strategizing about how to approach the problem such that we ended up making worse allocation-of-effort choices than we could have, given the (unfair) benefit of hindsight.
I have a draft post about how I think we should have spent the period before takeoff started, in retrospect.
I will read it when you publish it!
would a co-writer help?
Yeah, I think one thing I’ve added to my (too long!) to-do list is to ask the LLMs, and then pay a researcher to find any other examples of folks like this that we missed.
This is such a funny coincidence! I just wrote a post where Claude does research on every member of congress individually.
https://www.lesswrong.com/posts/WLdcvAcoFZv9enR37/what-washington-says-about-agi
It was actually inspired by Brad Sherman holding up the book. I just saw this shortform, and it’s funny because this thread roughly corresponds to my own thought process when seeing the original image!
This is awesome! It broadly aligns with my understanding of the situation, although it does miss some folks that are known to care a bunch about this from their public statements. Downloading the JSON to take a deeper look!
Strong upvoting the underlying post for Doing The Thing.
So far the LLMs really want to procrastinate on this task in normal chat windows because it’s tooooo many queries. This is gonna have to be a Claude Code thing.
Link: https://www.youtube.com/watch?v=UECn73UVILg
Thank you! Also, Time Stamped Link: https://youtu.be/UECn73UVILg?si=UqTHRBWnrklR8bWs&t=203
See also: Sen. James Inhofe (2015).