Hearing, on my way out the door, when I’m exhausted beyond all measure and feeling deeply alienated and betrayed, “man, you should really consider sticking around” is upsetting.
This is not how I read Seth Herd’s comment; I read him as saying “aw, I’ll miss you, but not enough to follow you to Substack.” This is simultaneously support for you staying on LW and for the mods to reach an accommodation with you, intended as information for you to do what you will with it.
I think the rest of this—being upset about what you take to be the frame of that comment—feels like the conflict in miniature? I’m not sure I have much helpful to say there.
A lot of my thinking over the last few months has shifted from “how do we get some sort of AI pause in place?” to “how do we win the peace?”. That is, you could have a picture of AGI as the most important problem that precedes all other problems; anti-aging research is important, but it might actually be faster to build an aligned artificial scientist who solves it for you than to solve it yourself (on this general argument, see Artificial Intelligence as a Positive and Negative Factor in Global Risk). But if alignment requires a thirty-year pause on the creation of artificial scientists to work, that belief flips—now actually it makes sense to go ahead with humans researching the biology of aging, and to do projects like Loyal.
This isn’t true of just aging; there are probably more like twelve major areas of concern. Some of them are simply predictable catastrophes we would like to avert; addressing others may be necessary to safely exit the pause at all (or to keep the pause going when exiting would be unsafe).
I think ‘solutionism’ is basically the right path, here. What I’m interested in: what’s the foundation for solutionism, or what support does it need? Why is solutionism not already the dominant view? One of the things I found most exciting about SENS was the sense that “someone had done the work”: they had actually identified the list of seven problems, and had a plan for addressing all of them. Even if those specific plans didn’t pan out, the superstructure was there and the ability to pivot was there. It looked like a serious approach by serious people. What is the superstructure for solutionism such that one can be reasonably confident that marginal efforts are actually contributing to success, instead of bailing water on the Titanic?