I basically agree with the 1st and 2nd points, somewhat disagree with the 3rd point (I do consider it plausible that ASIs develop goals that are incompatible with human survival, but I don’t think it’s very likely), the 4th point is right but the argument is locally invalid, because processor clock speeds are not how fast AIs think, and I basically agree with the point that sufficiently aggressive policy responses can avert catastrophe, but don’t agree with the premise that wait and see is utterly unviable for AI tech, and also disagree with the premise that ASI is a global suicide bomb.
“I have read everything I could find that rationalists have written on AI safety. I came across many interesting ideas, I studied them carefully until I understood them well, and I am convinced that many are correct. Now I’m ready to see how all the pieces fit together to show that an AI moratorium is the correct course of action. To be clear, I don’t mean a document written for the layperson, or any other kind of introductory document. I’m ready for the real stuff now. Show me your actual argument in all its glory. Don’t hold back.”
After some careful consideration, you:
(a) helpfully provide a link to A List of Lethalities
(b) suggest that he read the sequences
(c) patiently explain that if he was smart enough to understand the argument then he would have already figured it out for himself
(d) leave him on read
(e) explain that the real argument was written once, but it has since been taken down, and unfortunately nobody’s gotten around to rehosting it since
(f) provide a link to a page which presents a sound argument[0] in favour of an AI moratorium
===
Hopefully, the best response here is obvious. But currently no such page exists.
It’s a stretch to expect to be taken seriously without such a page.
[0]By this I mean an argument whose premises are all correct and which collectively entail the conclusion that an AI moratorium should be implemented.
Didn’t The Problem try to do something similar by summarizing the essay in the following five bullet points:
The summary
Key points in this document:
There isn’t a ceiling at human-level capabilities.
ASI is very likely to exhibit goal-oriented behavior.
ASI is very likely to pursue the wrong goals.
It would be lethally dangerous to build ASIs that have the wrong goals.
Catastrophe can be averted via a sufficiently aggressive policy response.
Each point is a link to the corresponding section.
I basically agree with the 1st and 2nd points, somewhat disagree with the 3rd point (I do consider it plausible that ASIs develop goals that are incompatible with human survival, but I don’t think it’s very likely), the 4th point is right but the argument is locally invalid, because processor clock speeds are not how fast AIs think, and I basically agree with the point that sufficiently aggressive policy responses can avert catastrophe, but don’t agree with the premise that wait and see is utterly unviable for AI tech, and also disagree with the premise that ASI is a global suicide bomb.
Someone approaches you with a question:
“I have read everything I could find that rationalists have written on AI safety. I came across many interesting ideas, I studied them carefully until I understood them well, and I am convinced that many are correct. Now I’m ready to see how all the pieces fit together to show that an AI moratorium is the correct course of action. To be clear, I don’t mean a document written for the layperson, or any other kind of introductory document. I’m ready for the real stuff now. Show me your actual argument in all its glory. Don’t hold back.”
After some careful consideration, you:
(a) helpfully provide a link to A List of Lethalities
(b) suggest that he read the sequences
(c) patiently explain that if he was smart enough to understand the argument then he would have already figured it out for himself
(d) leave him on read
(e) explain that the real argument was written once, but it has since been taken down, and unfortunately nobody’s gotten around to rehosting it since
(f) provide a link to a page which presents a sound argument[0] in favour of an AI moratorium
===
Hopefully, the best response here is obvious. But currently no such page exists.
It’s a stretch to expect to be taken seriously without such a page.
[0]By this I mean an argument whose premises are all correct and which collectively entail the conclusion that an AI moratorium should be implemented.