There are three books that I massively recommend for anyone who thinks the AI industry is easy to reshape or influence in any direction. The first two are Mearsheimer’s Tragedy of Great Power Politics and Nye’s Soft Power (2004). The third is basically any book that covers the military significance of AI, in any way whatsoever, such as how AI is mounted on nuclear stealth missiles.
In addition, I highly recommend against trying to formulate (or even think about) AI policy without meeting many people with AI experience in the policy space. Trying to reinvent the wheel here is a losing strategy: it’s time-inefficient at best, and at worst it can attract unwanted attention from extremely wealthy, powerful, and vicious people. If your proposals are good, and many of them are, it’s best to have them evaluated by experienced individuals you know personally, not shoved in front of the eyes of as many strangers as possible.
This sounds important. Could you say more?
Yes, books are a big investment, so it was rude of me to fail to explain why it is worth people’s time to look into getting them.
Mearsheimer’s Tragedy of Great Power Politics (Ch. 1 and 2): Explains in detail why governments and militaries keep doing all these horrible things, like gain-of-function research, or creating offensive nuclear stealth missiles that deliberately disguise their radar signatures as computer glitches.
Nye’s Soft Power (2004, Ch. 1 and 4): Explains why governments take the media so seriously, and it gives one of the best explanations I’ve seen for why massive, competent lies are critical for national security. Chapter 4 also gives a fantastic history of propaganda, including describing the nitty-gritty of how propaganda has become prevalent in modern media.
Both of these books are absolutely critical for anyone trying to understand AI policy, and only a small fraction of each book needs to be read in order to get 95% of the necessary information.
I didn’t mean to imply any rudeness on your part. Thank you for the recommendation and summary.
Could you briefly say what reasons Mearsheimer and Nye give, and how/why you think they impact AI safety?
I think it would be good to hear some different perspectives on the issue of A(G)I policy, especially less socially desirable, more cynical ones.