Request: stop advancing AI capabilities

Note: this has been in my draft queue since well before the FLI letter and TIME article. It was sequenced behind the interpretability post and then delayed by travel, and I'm just plowing ahead and posting it without edits to acknowledge the ongoing discussion, aside from this note.

This is an occasional reminder that I think pushing the frontier of AI capabilities in the current paradigm is highly anti-social, and contributes significantly in expectation to the destruction of everything I know and love. To all doing that (directly and purposefully for its own sake, rather than as a mournful negative externality to alignment research): I request you stop.

(There are plenty of other similarly fun things you can do instead! Like trying to figure out how the heck modern AI systems work as well as they do, preferably with a cross-organization network of people who commit to not using their insights to push the capabilities frontier before they understand what the hell they’re doing!)

(I reiterate that this is not a request to stop indefinitely; I think eventually building AGI is imperative; I just think literally every human will be killed at once if we build AGI before we understand what the hell we’re doing.)