Will the world’s elites navigate the creation of AI just fine?
One open question in AI risk strategy is: Can we trust the world’s elite decision-makers (hereafter “elites”) to navigate the creation of human-level AI (and beyond) just fine, without the kinds of special efforts that e.g. Bostrom and Yudkowsky think are needed?
Some reasons for concern include:
Otherwise smart people say unreasonable things about AI safety.
Many people who believed AI was around the corner didn’t take safety very seriously.
Elites have failed to navigate many important issues wisely (2008 financial crisis, climate change, Iraq War, etc.), for a variety of reasons.
AI may arrive rather suddenly, leaving little time for preparation.
But if you were trying to argue for hope, you might argue along these lines (presented for the sake of argument; I don’t actually endorse this argument):
If AI is preceded by visible signals, elites are likely to take safety measures. Effective measures were taken to address asteroid risk. Large resources are devoted to mitigating climate change risks. Personal and tribal selfishness align with AI risk-reduction in a way they may not align on climate change. Availability of information is increasing over time.
AI is likely to be preceded by visible signals. Conceptual insights often take years of incremental tweaking. In vision, speech, games, compression, robotics, and other fields, performance curves are mostly smooth. “Human-level performance at X” benchmarks influence perceptions and should be more exhaustive and come more rapidly as AI approaches. Recursive self-improvement capabilities could be charted, and are likely to be AI-complete. If AI succeeds, it will likely succeed for reasons comprehensible by the AI researchers of the time.
Therefore, safety measures will likely be taken.
If safety measures are taken, then elites will navigate the creation of AI just fine. Corporate and government leaders can use simple heuristics (e.g. Nobel prizes) to access the upper end of expert opinion. AI designs with an easily tailored tendency to act may be the easiest to build. The use of early AIs to solve AI safety problems creates an attractor for "safe, powerful AI." Arms races are not insurmountable.
The basic structure of this ‘argument for hope’ is due to Carl Shulman, though he doesn’t necessarily endorse the details. (Also, it’s just a rough argument, and as stated is not deductively valid.)
Personally, I am not very comforted by this argument because:
Elites often fail to take effective action despite plenty of warning.
I think there’s a >10% chance AI will not be preceded by visible signals.
I think the elites’ safety measures will likely be insufficient.
Obviously, there’s a lot more for me to spell out here, and some of it may be unclear. The reason I’m posting these thoughts in such a rough state is so that MIRI can get some help on our research into this question.
In particular, I’d like to know:
Which historical events are analogous to AI risk in some important ways? Possibilities include: nuclear weapons, climate change, recombinant DNA, nanotechnology, chlorofluorocarbons, asteroids, cyberterrorism, Spanish flu, the 2008 financial crisis, and large wars.
What are some good resources (e.g. books) for investigating the relevance of these analogies to AI risk (for the purposes of illuminating elites’ likely response to AI risk)?
What are some good studies on elites’ decision-making abilities in general?
Has the increasing availability of information in the past century noticeably improved elite decision-making?
What does RSI stand for?
“recursive self improvement”.
Okay, I’ve now spelled this out in the OP.
Lately I’ve been listening to audiobooks (at 2x speed) in my down time, especially ones that seem likely to have passages relevant to the question of how well policy-makers will deal with AGI, basically continuing this project but only doing the “collection” stage, not the “analysis” stage.
I’ll post quotes from the audiobooks I listen to as replies to this comment.
From Watts’ Everything is Obvious:
More (#1) from Everything is Obvious:
More (#2) from Everything is Obvious:
More (#4) from Everything is Obvious:
More (#3) from Everything is Obvious:
From Rhodes’ Arsenals of Folly:
More (#3) from Arsenals of Folly:
And:
And:
And:
And:
Amazing stuff. Was the world really as close to a nuclear war in 1983 as in 1962?
More (#2) from Arsenals of Folly:
And:
And, a blockquote from the writings of Robert Gates:
More (#1) from Arsenals of Folly:
And:
And:
And:
More (#4) from Arsenals of Folly:
From Lewis’ Flash Boys:
So Spivey began digging the line, keeping it secret for 2 years. He didn’t start trying to sell the line to banks and traders until a couple months before the line was complete. And then:
More (#1) from Flash Boys:
And:
And:
There was so much worth quoting from Better Angels of Our Nature that I couldn’t keep up. I’ll share a few quotes anyway.
More (#3) from Better Angels of Our Nature:
Further reading on integrative complexity:
Wikipedia, Psychlopedia, Google Books
Now that I've been introduced to the concept, I want to evaluate how useful it is to incorporate into my rhetorical repertoire and vocabulary, and to determine whether it can inform my beliefs about assessing the exfoliating intelligence of others (a term I'll coin to refer to the intelligence/knowledge which another person can pass on to me to aid my vocabulary and verbal abstract reasoning, the neuropsychological strengths I try to max out just like an RPG character).
At a less meta level, knowing the strengths and weaknesses of the trait will inform whether I choose to signal or dampen it from here on, and in what situations. It is important for imitators to remember that whatever IC is associated with does not necessarily carry those associations for lay observers.
strengths
conflict resolution (see Luke’s post)
As listed in Psychlopedia:
appreciation of complexity
scientific proficiency
stress accommodation
resistance to persuasion
prediction ability
social responsibility
more initiative, as rated by managers, and more motivation to seek power, as gauged by a projective test
weaknesses
based on Psychlopedia:
low scores on compliance and conscientiousness
seem antagonistic and even narcissistic
based on the wiki article:
dependence (more likely to defer to others)
rational expectations (more likely to fallaciously assume they are dealing with rational agents)
Upon reflection, here are my conclusions:
high integrative complexity dominates low integrative complexity for those who have insight into the concept, are self-aware about how it relates to themselves and to others, and have the capacity both to use the skill and to hide it.
the questions whose expert-rated answers psychometricians use to define IC are very crude, and a validated tool ought to be devised, if that is an achievable feat (estimating the cognitive complexity or time required is beyond the scope of my time/intelligence at the moment)
I have been using this trait as my primary estimate of people's intelligence, but I will now subordinate it to the ordinary psychometric status it held before I became aware of it here, and restore traditional tools of intelligence assessment to their established status
I'm interested in learning about the algorithms used to search, say, Twitter and assess IC. Anyone got any info?
I'm very interested in any research on IC's association with corporate board performance, share prices, etc. There doesn't seem to be much research, but research generally does start with defence implications before going corporate...
I'm also interested in exploring relations between the assessment of IC and the tools used in CBT, given their structural similarity... and, by extension, general relationships between IC and mental health
More (#4) from Better Angels of Our Nature:
Untrue unless you’re in a non-sequential game
True under a utilitarian framework and with a few common mind-theoretic assumptions derived from intuitions stemming from most people’s empathy
Woo
More (#2) from Better Angels of Our Nature:
More (#1) from Better Angels of Our Nature:
From Ariely’s The Honest Truth about Dishonesty:
More (#1) from Ariely’s The Honest Truth about Dishonesty:
And:
More (#2) from Ariely’s The Honest Truth about Dishonesty:
And:
From Feynman’s Surely You’re Joking, Mr. Feynman:
More (#1) from Surely You’re Joking, Mr. Feynman:
And:
And:
One quote from Taleb’s AntiFragile is here, and here’s another:
AntiFragile makes lots of interesting points, but it’s clear in some cases that Taleb is running roughshod over the truth in order to support his preferred view. I’ve italicized the particularly lame part:
From Think Like a Freak:
More (#1) from Think Like a Freak:
And:
From Rhodes’ Twilight of the Bombs:
More (#1) from Twilight of the Bombs:
And:
And:
And:
And:
From Harford’s The Undercover Economist Strikes Back:
And:
More (#2) from The Undercover Economist Strikes Back:
And:
And:
And:
More (#1) from The Undercover Economist Strikes Back:
And:
From Caplan’s The Myth of the Rational Voter:
More (#2) from The Myth of the Rational Voter:
This is an absurdly narrow definition of self-interest. Many people who are not old have parents who are senior citizens. Men have wives, sisters, and daughters whose well-being is important to them. Etc. Self-interest != solipsistic egoism.
More (#1) from The Myth of the Rational Voter:
And:
More (#3) from The Myth of the Rational Voter:
Allow me to offer an alternative explanation of this phenomenon for consideration. Typically, when polled about their trust in institutions, people tend to trust the executive branch more than the legislature or the courts, and they trust the military far more than they trust civilian government agencies. In the period before 9/11, our long national nightmare of peace and prosperity would generally have made the military less salient in people's minds, and the spectacles of impeachment and Bush v. Gore would have made the legislative and judicial branches more salient in people's minds. After 9/11, the legislative agenda quieted down and the legislature temporarily took a back seat to the executive, and the military and national security organs became very high salience. So when people were asked about the government, the most immediate associations would have been to the parts that were viewed as more trustworthy.
From Richard Rhodes’ The Making of the Atomic Bomb:
More (#2) from The Making of the Atomic Bomb:
After Alexander Sachs paraphrased the Einstein-Szilard letter to Roosevelt, Roosevelt demanded action, and Edwin Watson set up a meeting with representatives from the Bureau of Standards, the Army, and the Navy...
Upon asking for some money to conduct the relevant experiments, the Army representative launched into a tirade:
More (#3) from The Making of the Atomic Bomb:
Frisch and Peierls wrote a two-part report of their findings:
More (#1) from The Making of the Atomic Bomb:
On the origins of the Einstein–Szilárd letter:
And:
More (#5) from The Making of the Atomic Bomb:
More (#4) from The Making of the Atomic Bomb:
And:
And:
And:
And:
From Poor Economics:
From The Visioneers:
And:
And:
And:
From Priest & Arkin’s Top Secret America:
More (#2) from Top Secret America:
And, on JSOC:
And:
And:
I wonder if the security-industrial complex bureaucracy is any better in other countries.
Which sense of “better” do you have in mind? :-)
More efficient.
KGB had a certain aura, though I don’t know if its descendants have the same cachet. Israeli security is supposed to be very good.
Stay tuned; The Secret History of MI6 and Defend the Realm are in my audiobook queue. :)
More (#1) from Top Secret America:
And:
From Pentland’s Social Physics:
More (#2) from Social Physics:
And:
More (#1) from Social Physics:
And:
And:
From de Mesquita and Smith’s The Dictator’s Handbook:
More (#2) from The Dictator’s Handbook:
And:
More (#1) from The Dictator’s Handbook:
From Ferguson’s The Ascent of Money:
More (#1) from The Ascent of Money:
And:
The Medici Bank is pretty interesting. A while ago I wrote https://en.wikipedia.org/wiki/Medici_Bank on the topic; LWers might find it interesting how international finance worked back then.
From Scahill’s Dirty Wars:
More (#2) from Dirty Wars:
And:
And:
More (#1) from Dirty Wars:
And:
And:
And:
Foreign fighters show up everywhere. And now there's the whole Islamic State issue. Perhaps all the world needs is more foreign legions doing good things. The FFL is over-recruited, after all. Heck, we could even deal with the refugee crisis by offering visas to those mercenaries. It sure as hell would be more popular than selling visas and citizenship, because people always get antsy about inequality and having fewer downward social comparisons.
Passage from Patterson’s Dark Pools: The Rise of the Machine Traders and the Rigging of the U.S. Stock Market:
But it proved all too easy: The very first tape Wang played revealed two dealers fixing prices.
Some relevant quotes from Schlosser’s Command and Control: Nuclear Weapons, the Damascus Accident, and the Illusion of Safety:
And:
More from Command and Control:
And:
More (#3) from Command and Control:
And:
And:
And:
More (#2) from Command and Control:
And:
And:
More (#4) from Command and Control:
And:
Do you keep a list of the audiobooks you liked anywhere? I’d love to take a peek.
Okay. In this comment I’ll keep an updated list of audiobooks I’ve heard since Sept. 2013, for those who are interested. All audiobooks are available via iTunes/Audible unless otherwise noted.
Outstanding:
Tetlock, Expert Political Judgment
Pinker, The Better Angels of Our Nature (my clips)
Schlosser, Command and Control (my clips)
Yergin, The Quest (my clips)
Osnos, Age of Ambition (my clips)
Worthwhile if you care about the subject matter:
Singer, Wired for War (my clips)
Feinstein, The Shadow World (my clips)
Venter, Life at the Speed of Light (my clips)
Rhodes, Arsenals of Folly (my clips)
Weiner, Enemies: A History of the FBI (my clips)
Rhodes, The Making of the Atomic Bomb (available here) (my clips)
Gleick, Chaos (my clips)
Weiner, Legacy of Ashes: The History of the CIA (my clips)
Freese, Coal: A Human History (my clips)
Aid, The Secret Sentry (my clips)
Scahill, Dirty Wars (my clips)
Patterson, Dark Pools (my clips)
Lieberman, The Story of the Human Body
Pentland, Social Physics (my clips)
Okasha, Philosophy of Science: VSI
Mazzetti, The Way of the Knife (my clips)
Ferguson, The Ascent of Money (my clips)
Lewis, The Big Short (my clips)
de Mesquita & Smith, The Dictator’s Handbook (my clips)
Sunstein, Worst-Case Scenarios (available here) (my clips)
Johnson, Where Good Ideas Come From (my clips)
Harford, The Undercover Economist Strikes Back (my clips)
Caplan, The Myth of the Rational Voter (my clips)
Hawkins & Blakeslee, On Intelligence
Gleick, The Information (my clips)
Gleick, Isaac Newton
Greene, Moral Tribes
Feynman, Surely You’re Joking, Mr. Feynman! (my clips)
Sabin, The Bet (my clips)
Watts, Everything Is Obvious: Once You Know the Answer (my clips)
Greenblatt, The Swerve: How the World Became Modern (my clips)
Cain, Quiet: The Power of Introverts in a World That Can’t Stop Talking
Dennett, Freedom Evolves
Kaufman, The First 20 Hours
Gertner, The Idea Factory (my clips)
Olen, Pound Foolish
McArdle, The Up Side of Down
Rhodes, Twilight of the Bombs (my clips)
Isaacson, Steve Jobs (my clips)
Priest & Arkin, Top Secret America (my clips)
Ayres, Super Crunchers (my clips)
Lewis, Flash Boys (my clips)
Dartnell, The Knowledge (my clips)
Cowen, The Great Stagnation
Lewis, The New New Thing (my clips)
McCray, The Visioneers (my clips)
Jackall, Moral Mazes (my clips)
Langewiesche, The Atomic Bazaar
Ariely, The Honest Truth about Dishonesty (my clips)
A process for turning ebooks into audiobooks for personal use, at least on Mac:
Rip the Kindle ebook to non-DRMed .epub with Calibre and Apprentice Alf.
Open the .epub in Sigil, merge all the contained HTML files into a single HTML file (select the files, right-click, Merge). Open the Source view for the big HTML file.
Edit the source so that the ebook begins with the title and author, then jumps right into the foreword or preface or first chapter, and ends with the end of the last chapter or epilogue. (Cut out any table of contents, list of figures, list of tables, appendices, index, bibliography, and endnotes.)
Remove footnotes if easy to do so, using Sigil’s Regex find-and-replace (remember to use Minimal Match so you don’t delete too much!). Click through several instances of the Find command to make sure it’s going to properly cut out only the footnotes, before you click “Replace All.”
Use find-and-replace to add [[slnc_1000]] at the end of every paragraph; the Mac's text-to-speech engine interprets this as a slight pause, which aids comprehension when I'm listening to the audiobook. Usually this just means replacing every closing </p> tag with </p>[[slnc_1000]] (a small script sketch of this step appears after the list below).
Copy/paste that entire HTML file into a text file and save it as .html. Open this in your browser, Select All, right-click and choose Services → Add to iTunes as Spoken Track. (I think “Ava” is the best voice; you’ll have to add this voice by upgrading to Mavericks and adding Ava under System Preferences → Dictation and Speech.) This will take a while, and might even throw up an error even though the track will continue being created and will succeed.
Now, sync this text-to-speech audiobook to some audio player that can play at 2x or 3x speed, and listen away.
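For anyone who would rather script the pause-marker step than do it by hand in Sigil, here is a minimal sketch; the file names are hypothetical, and it assumes the ebook has already been merged into a single HTML file as described above:

```python
# Minimal sketch: append the [[slnc_1000]] pause marker after every closing
# paragraph tag, so macOS text-to-speech pauses briefly between paragraphs.
# "book.html" and "book_tts.html" are hypothetical file names.
import re

IN_PATH = "book.html"
OUT_PATH = "book_tts.html"

with open(IN_PATH, encoding="utf-8") as f:
    html = f.read()

# Match the literal closing tag (case-insensitively) rather than a greedy
# pattern, to avoid the over-matching pitfall noted above for footnote removal.
html = re.sub(r"</p\s*>", "</p>[[slnc_1000]]", html, flags=re.IGNORECASE)

with open(OUT_PATH, "w", encoding="utf-8") as f:
    f.write(html)

print("Wrote", OUT_PATH)
```

The output file can then be fed to the Add to iTunes as Spoken Track service (or any other text-to-speech tool) as described in the earlier steps.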
To de-DRM your Audible audiobooks, just use Tune4Mac.
VoiceDream for iPhone does a very fine job of text-to-speech; it also syncs your pocket bookmarks and can read epub files.
Other:
Roose, Young Money. Too focused on a few individuals for my taste, but still has some interesting content. (my clips)
Hofstadter & Sander, Surfaces and Essences. Probably a fine book, but I was only interested enough to read the first and last chapters.
Taleb, AntiFragile. Learned some from it, but it’s kinda wrong much of the time. (my clips)
Acemoglu & Robinson, Why Nations Fail. Lots of handy examples, but too much of “our simple theory explains everything.” (my clips)
Byrne, The Many Worlds of Hugh Everett III (available here). Gave up on it; too much theory, not enough story. (my clips)
Drexler, Radical Abundance. Gave up on it; too sanitized and basic.
Mukherjee, The Emperor of All Maladies. Gave up on it; too slow in pace and flowery in language for me.
Fukuyama, The Origins of Political Order. Gave up on it; the author is more keen on name-dropping theorists than on tracking down data.
Friedman, The Moral Consequences of Economic Growth (available here). Gave up on it. There are some actual data in chs. 5-7, but the argument is too weak and unclear for my taste.
Tuchman, The Proud Tower. Gave up on it after a couple chapters. Nothing wrong with it, it just wasn’t dense enough in the kind of learning I’m trying to do.
Foer, Eating Animals. I listened to this not to learn, but to shift my emotions. But it was too slow-moving, so I didn’t finish it.
Caro, The Power Broker. This might end up under "outstanding" if I ever finish it. For now, I've put this one on hold because it's very long and not as highly targeted at the useful learning I want to be doing right now as some other books.
Rutherfurd, Sarum. This is the furthest I’ve gotten into any fiction book for the past 5 years at least, including HPMoR. I think it’s giving my system 1 an education into what life was like in the historical eras it covers, without getting bogged down in deep characterization, complex plotting, or ornate environmental description. But I’ve put it on hold for now because it is incredibly long.
Diamond, Collapse. I listened to several chapters, but it seemed to be mostly about environmental decline, which doesn’t interest me much, so I stopped listening.
Bowler & Morus, Making Modern Science (available here) (my clips). A decent history of modern science but not focused enough on what I wanted to learn, so I gave up.
Brynjolfsson & McAfee, The Second Machine Age (my clips). Their earlier, shorter Race Against the Machine contained the core arguments; this book expands the material in order to explain things to a lay audience. As with Why Nations Fail, I have too many quibbles with this book’s argument to put this book in the ‘Liked’ category.
Clery, A Piece of the Sun. Nothing wrong with it, I just wasn’t learning the type of things I was hoping to learn, so I stopped about half way through.
Schuman, The Miracle. Fairly interesting, but not quite dense enough in the kind of stuff I’m hoping to learn these days.
Conway & Oreskes, Merchants of Doubt. Fairly interesting, but not dense enough in the kind of things I’m hoping to learn.
Horowitz, The Hard Thing About Hard Things
Wessel, Red Ink
Levitt & Dubner, Think Like a Freak (my clips)
Gladwell, David and Goliath (my clips)
Thanks! Your first 3 are not my cup of tea, but I’ll keep looking through the top 1000 list. For now, I am listening to MaddAddam, the last part of Margaret Atwood’s post-apocalyptic fantasy trilogy, which qrnyf jvgu bar zna qvfnccbvagrq jvgu uvf pbagrzcbenel fbpvrgl ervairagvat naq ercbchyngvat gur rnegu jvgu orggre crbcyr ur qrfvtarq uvzfrys. She also has some very good non-fiction, like her Massey lecture on debt, which I warmly recommend.
Could you say a bit about your audiobook selection process?
When I was just starting out in September 2013, I realized that vanishingly few of the books I wanted to read were available as audiobooks, so it didn’t make sense for me to search Audible for titles I wanted to read: the answer was basically always “no.” So instead I browsed through the top 2000 best-selling unabridged non-fiction audiobooks on Audible, added a bunch of stuff to my wishlist, and then scrolled through the wishlist later and purchased the ones I most wanted to listen to.
These days, I have a better sense of what kind of books have a good chance of being recorded as audiobooks, so I sometimes do search for specific titles on Audible.
Some books that I really wanted to listen to are available in ebook but not audiobook, so I used this process to turn them into audiobooks. That only barely works, sometimes. I have to play text-to-speech audiobooks at a lower speed to understand them, and it’s harder for my brain to stay engaged as I’m listening, especially when I’m tired. I might give up on that process, I’m not sure.
Most but not all of the books are selected because I expect them to have lots of case studies in “how the world works,” specifically with regard to policy-making, power relations, scientific research, and technological development. This is definitely true for e.g. Command and Control, The Quest, Wired for War, Life at the Speed of Light, Enemies, The Making of the Atomic Bomb, Chaos, Legacy of Ashes, Coal, The Secret Sentry, Dirty Wars, The Way of the Knife, The Big Short, Worst-Case Scenarios, The Information, and The Idea Factory.
I definitely found out something similar. I’ve come to believe that most ‘popular science’, ‘popular history’ etc books are on audible, but almost anything with equations or code is not.
The ‘great courses’ have been quite fantastic for me for learning about the social sciences. I found out about those recently.
Occasionally I try podcasts for very niche topics (recent Rails updates, for instance), but have found them to be rather uninteresting in comparison to full books and courses.
Thanks!
From Singer’s Wired for War:
More (#7) from Wired for War:
And:
The army recruiters say that soldiers on the ground still win wars. I reckon that Douhet's prediction will come true, however crudely. Drones.
More (#6) from Wired for War:
And:
Inequality doesn’t seem so bad now, huh?
More (#5) from Wired for War:
More (#4) from Wired for War:
And:
More (#3) from Wired for War:
And:
And:
More (#2) from Wired for War:
More (#1) from Wired for War:
And:
From Osnos’ Age of Ambition:
And:
And:
More (#2) from Osnos’ Age of Ambition:
And:
More (#1) from Osnos’ Age of Ambition:
And:
And:
And:
From Soldiers of Reason:
More (#2) from Soldiers of Reason:
And:
More (#1) from Soldiers of Reason:
And:
From David and Goliath:
And:
More (#2) from David and Goliath:
And:
From Wade’s A Troublesome Inheritance:
More (#2) from A Troublesome Inheritance:
More (#1) from A Troublesome Inheritance:
And:
From Moral Mazes:
And:
And:
From Lewis’ The New New Thing:
And:
From Dartnell’s The Knowledge:
And:
And:
And:
From Ayres’ Super Crunchers, speaking of Epagogix, which uses neural nets to predict a movie’s box office performance from its screenplay:
More (#1) from Super Crunchers:
And:
And:
From Isaacson’s Steve Jobs:
And:
And:
And:
More (#1) from Steve Jobs:
And:
[no more clips, because Audible somehow lost all my bookmarks for the last two parts of the audiobook!]
From Feinstein’s The Shadow World:
More (#8) from The Shadow World:
And:
And:
More (#7) from The Shadow World:
And:
And:
And:
And:
More (#6) from The Shadow World:
And:
And:
More (#5) from The Shadow World:
And:
And:
And:
More (#4) from The Shadow World:
And:
And:
More (#3) from The Shadow World:
And:
And:
More (#2) from The Shadow World:
And:
More (#1) from The Shadow World:
And:
And:
And:
From Weiner’s Enemies:
More (#5) from Enemies:
And:
More (#4) from Enemies:
And:
And:
More (#3) from Enemies:
And:
And:
More (#2) from Enemies:
And:
And:
More (#1) from Enemies:
And:
And:
From Roose’s Young Money:
From Tetlock’s Expert Political Judgment:
More (#2) from Expert Political Judgment:
More (#1) from Expert Political Judgment:
And:
And:
From Sabin’s The Bet:
And:
More (#3) from The Bet:
More (#2) from The Bet:
And:
And:
More (#1) from The Bet:
And:
And:
From Yergin’s The Quest:
More (#7) from The Quest:
More (#6) from The Quest:
And:
And:
And:
And:
More (#5) from The Quest:
And:
And:
And:
More (#4) from The Quest:
And:
More (#3) from The Quest:
And:
More (#2) from The Quest:
And:
And:
And:
More (#1) from The Quest:
And:
And:
From The Second Machine Age:
More (#1) from The Second Machine Age:
From Making Modern Science:
More (#1) from Making Modern Science:
From Johnson’s Where Good Ideas Come From:
From Gertner’s The Idea Factory:
More (#2) from The Idea Factory:
And:
And:
More (#1) from The Idea Factory:
And:
I’m sure that I’ve seen your answer to this question somewhere before, but I can’t recall where: Of the audiobooks that you’ve listened to, which have been most worthwhile?
I keep an updated list here.
I guess I might as well post quotes from (non-audio) books here as well, when I have no better place to put them.
First up is Revolution in Science.
Starting on page 45:
This amazingly high percentage of self-proclaimed revolutionary scientists (30% or more) seems like a result of selection bias, since most scientists with oversized egos are not even remembered. I wonder what fraction of actual scientists (not your garden-variety crackpots) insist on having produced a revolution in science.
From Sunstein’s Worst-Case Scenarios:
More (#2) from Worst-Case Scenarios:
More (#5) from Worst-Case Scenarios:
More (#4) from Worst-Case Scenarios:
More (#3) from Worst-Case Scenarios:
And:
Similar issues are raised by the continuing debate over whether certain antidepressants impose a (small) risk of breast cancer. A precautionary approach might seem to argue against the use of these drugs because of their carcinogenic potential. But the failure to use those antidepressants might well impose risks of its own, certainly psychological and possibly even physical (because psychological ailments are sometimes associated with physical ones as well). Or consider the decision by the Soviet Union to evacuate and relocate more than 270,000 people in response to the risk of adverse effects from the Chernobyl fallout. It is hardly clear that on balance this massive relocation project was justified on health grounds: “A comparison ought to have been made between the psychological and medical burdens of this measure (anxiety, psychosomatic diseases, depression and suicides) and the harm that may have been prevented.” More generally, a sensible government might want to ignore the small risks associated with low levels of radiation, on the ground that precautionary responses are likely to cause fear that outweighs any health benefits from those responses—and fear is not good for your health.
And:
More (#1) from Worst-Case Scenarios:
But at least so far in the book, Sunstein doesn’t mention the obvious rejoinder about investing now to prevent existential catastrophe.
Anyway, another quote:
From Gleick’s Chaos:
More (#3) from Chaos:
And:
More (#2) from Chaos:
And:
And:
More (#1) from Chaos:
From Lewis’ The Big Short:
More (#4) from The Big Short:
And:
And:
And:
More (#3) from The Big Short:
And:
And:
And:
More (#2) from The Big Short:
And:
And:
More (#1) from The Big Short:
And:
From Gleick’s The Information:
More (#1) from The Information:
And:
And:
And, an amusing quote:
From Acemoglu & Robinson’s Why Nations Fail:
More (#2) from Why Nations Fail:
And:
More (#1) from Why Nations Fail:
And:
And:
And:
From Greenblatt’s The Swerve: How the World Became Modern:
More (#1) from The Swerve:
From Aid’s The Secret Sentry:
More (#6) from The Secret Sentry:
And:
And:
And:
More (#5) from The Secret Sentry:
And:
More (#4) from The Secret Sentry:
And:
More (#3) from The Secret Sentry:
And:
And:
Even when enemy troops and tanks overran the major South Vietnamese military base at Bien Hoa, outside Saigon, on April 26, Martin still refused to accept that Saigon was doomed. On April 28, Glenn met with the ambassador carrying a message from Allen ordering Glenn to pack up his equipment and evacuate his remaining staff immediately. Martin refused to allow this. The following morning, the military airfield at Tan Son Nhut fell, cutting off the last air link to the outside.
More (#2) from The Secret Sentry:
And:
And:
More (#1) from The Secret Sentry:
From Mazzetti’s The Way of the Knife:
More (#5) from The Way of the Knife:
And:
And:
More (#4) from The Way of the Knife:
And:
And:
And:
And:
And:
And:
More (#3) from The Way of the Knife:
More (#2) from The Way of the Knife:
And:
More (#1) from The Way of the Knife:
And:
And:
From Freese’s Coal: A Human History:
More (#2) from Coal: A Human History:
More (#1) from Coal: A Human History:
Passages from The Many Worlds of Hugh Everett III:
And:
(It wasn’t until decades later that David Deutsch and others showed that Everettian quantum mechanics does make novel experimental predictions.)
A passage from Tim Weiner’s Legacy of Ashes: The History of the CIA:
More (#1) from Legacy of Ashes:
And:
And:
And:
I shared one quote here. More from Life at the Speed of Light:
Also from Life at the Speed of Light:
This seems obviously false. Local expenditures (of money, pride, the possibility of not being the first to publish, etc.) are still local; global penalties are still global. Incentives are misaligned in exactly the same way as for climate change.
This is to be taken as an arguendo, not as the author’s opinion, right? See IEM on the minimal conditions for takeoff. Albeit if “AI-complete” is taken in a sense of generality and difficulty rather than “human-equivalent” then I agree much more strongly, but this is correspondingly harder to check using some neat IQ test or other “visible” approach that will command immediate, intuitive agreement.
Most obviously molecular nanotechnology a la Drexler, the other ones seem too ‘straightforward’ by comparison. I’ve always modeled my assumed social response for AI on the case of nanotech, i.e., funding except for well-connected insiders, term being broadened to meaninglessness, lots of concerned blither by ‘ethicists’ unconnected to the practitioners, etc.
Climate change doesn’t have the aspect that “if this ends up being a problem at all, then chances are that I (or my family/...) will die of it”.
(Agree with the rest of the comment.)
Many people believe that about climate change (due to global political disruption, economic collapse, etcetera; praising the size of the disaster seems virtuous). Many others do not believe it about AI. Many put sizable climate-change disaster into the far future. Many people will go on believing this about AI independently of any evidence which accrues. Actors with something to gain by minimizing their belief in climate change so minimize. This has also been true in AI risk so far.
Hm! I cannot recall a single instance of this. (Hm, well; I can recall one instance of a TV interview with a politician from a non-first-world island nation taking projections seriously which would put his nation under water, so it would not be much of a stretch to think that he’s taking seriously the possibility that people close to him may die from this.) If you have, probably this is because I haven’t read that much about what people say about climate change. Could you give me an indication of the extent of your evidence, to help me decide how much to update?
Ok, agreed, and this still seems likely even if you imagine sensible AI risk analyses being similarly well-known as climate change analyses are today. I can see how it could lead to an outcome similar to today’s situation with climate change if that happened… Still, if the analysis says “you will die of this”, and the brain of the person considering the analysis is willing to assign it some credence, that seems to align personal selfishness with global interests more than (climate change as it has looked to me so far).
Will keep an eye out for the next citation.
This has not happened with AI risk so far among most AIfolk, or anyone the slightest bit motivated to reject the advice. We had a similar conversation at MIRI once, in which I was arguing that, no, people don’t automatically change their behavior as soon as they are told that something bad might happen to them personally; and when we were breaking it up, Anna, on her way out, asked Louie downstairs how he had reasoned about choosing to ride motorcycles.
People only avoid certain sorts of death risks under certain circumstances.
Thanks!
Point. Need to think.
Being told something is dangerous =/= believing it is =/= alieving it is.
Right. I’ll clarify in the OP.
This seems implied by X-complete. X-complete generally means "given a solution to an X-complete problem, we have a solution for every problem in X".
E.g., NP-complete: given a polynomial-time solution to any NP-complete problem, any problem in NP can be solved in polynomial time.
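To make the notion being invoked a bit more explicit, here is a generic textbook-style formalization (my gloss, not something stated in the original comments; the reduction relation depends on the class, e.g. polynomial-time many-one reduction in the NP case):

```latex
% A problem Y is C-complete when it lies in the class C and every problem
% in C reduces to it:
\[
  Y \text{ is } \mathcal{C}\text{-complete}
  \;\Longleftrightarrow\;
  Y \in \mathcal{C}
  \quad\text{and}\quad
  \forall Z \in \mathcal{C}:\ Z \le Y .
\]
% Composing a solver for Y with the reduction Z <= Y yields a solver for
% any Z in C; "AI-complete" is used analogously, so a full solution to an
% AI-complete task would carry with it a solution to the general AI problem.
```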
(Of course the technical nuance of the strength of the statement X-complete is such that I expect most people to imagine the wrong thing, like you say.)
(I don’t have answers to your specific questions, but here are some thoughts about the general problem.)
I agree with most of what you said. I also assign significant probability mass to most parts of the argument for hope (but haven't thought about this enough to put numbers on it), though I too am not comforted, because I also assign a non-small chance to them going wrong. E.g., I have hope for "if AI is visible [and, I add, AI risk is understood] then authorities/elites will be taking safety measures".
That said, there are some steps in the argument for hope that I’m really worried about:
I worry that even smart (Nobel prize-type) people may end up getting the problem completely wrong, because MIRI’s argument tends to conspicuously not be reinvented independently elsewhere (even though I find myself agreeing with all of its major steps).
I worry that even if they get it right, by the time we have visible signs of AGI we will be even closer to it than we are now, so there will be even less time to do the basic research necessary to solve the problem, making it even less likely that it can be done in time.
Although it’s also true that I assign some probability to e.g. AGI without visible signs, I think the above is currently the largest part of why I feel MIRI work is important.
I personally am optimistic about the world’s elites navigating AI risk as well as possible subject to inherent human limitations that I would expect everybody to have, and the inherent risk. Some points:
I’ve been surprised by people’s ability to avert bad outcomes. Only two nuclear weapons have been used since nuclear weapons were developed, despite the fact that there are 10,000+ nuclear weapons around the world. Political leaders are assassinated very infrequently relative to how often one might expect a priori.
AI risk is a Global Catastrophic Risk in addition to being an x-risk. Therefore, even people who don’t care about the far future will be motivated to prevent it.
The people with the most power tend to be the most rational people, and the effect size can be expected to increase over time (barring disruptive events such as economic collapses, supervolcanoes, climate change tail risk, etc.). The most rational people are the people who are most likely to be aware of and to work to avert AI risk. Here I'm blurring "near mode instrumental rationality" and "far mode instrumental rationality," but I think there's a fair amount of overlap between the two, e.g., China is pushing hard on nuclear energy and on renewable energies, even though they won't be needed for years.
Availability of information is increasing over time. At the time of the Dartmouth conference, information about the potential dangers of AI was not very salient, now it’s more salient, and in the future it will be still more salient.
In the Manhattan project, the “will bombs ignite the atmosphere?” question was analyzed and dismissed without much (to our knowledge) double-checking. The amount of risk checking per hour of human capital available can be expected to increase over time. In general, people enjoy tackling important problems, and risk checking is more important than most of the things that people would otherwise be doing.
I should clarify that with the exception of my first point, the arguments that I give are arguments that humanity will address AI risk in a near optimal way – not necessarily that AI risk is low.
For example, it could be that people correctly recognize that building an AI will result in human extinction with probability 99%, and so implement policies to prevent it, but that sometime over the next 10,000 years, these policies will fail, and AI will kill everyone.
But the actionable thing is how much we can reduce the probability of AI risk, and if by default people are going to do the best that one could hope, we can’t reduce the probability substantially.
What?
Rationality is systematized winning. Chance plays a role, but over time it’s playing less and less of a role, because of more efficient markets.
There is lots of evidence that people in power are the most rational, but there is a huger prior to overcome.
Among people for whom power has an unsatiated major instrumental or intrinsic value, the most rational tend to have more power- but I don’t think that very rational people are common and I think that they are less likely to want more power than they have.
Particularly since the previous generation of power-holders used different factors when they selected their successors.
I agree with all