I agree with the Statement. As strongly as I can agree with anything. I think the hope of current humans achieving… if not immortality, then very substantially increased longevity… without AI doing the work for us, is at most a rounding error. And an ASI that was even close to aligned, that found it worth reserving even a billionth part of the value of the universe for humans, would treat this as the obvious most urgent problem, and would solve death if there's any physically possible way of doing so. And when I look inside, I find that I simply don't care about a glorious transhumanist future that doesn't include me or any of the particular other humans I care about. I do somewhat prefer being kind / helpful / beneficent to people I've never met, very slightly prefer that even for people who don't exist yet, but it's far too weak a preference to trade off against any noticeable change to the odds of me and everyone I care about dying. If that makes me a "sociopath" in the view of someone or other, oh well.
I've been a supporter of MIRI, AI alignment, etc. for a long time, not because I share that much with EA in terms of values, but because the path to the future having any value has seemed for a long time to route through our building aligned ASI, which I consider as hard as MIRI does. But when the "pivotal act" framing started being discussed, rather than actually aligning ASI, I noticed a crack developing between my values and MIRI's, and the past year with advocacy for "shut it all down" and so on has blown that crack wide open. I no longer feel like a future I value has any group trying to pursue it. Everyone outside of AI alignment is either just confused, flailing around with unpredictable effects, or badly mistaken and actively pushing towards turning us all into paperclips. Those within AI alignment are either extremely unrealistically optimistic about plans that I'm pretty sure, for reasons that MIRI has argued, won't work; or, like current MIRI, they suggest that I stake my personal presence in the glorious transhumanist future on cryonics. (And what of my friends and family members whom I could never convince to sign up? What of the fact that, IMO, current cryonics practice probably doesn't even prevent info-theoretic death, let alone give one a good shot at actually being revived at some point in the future?)
I also happen to think that most plans for preventing ASI from happening soon, other than "shut it all down" in a very indiscriminate way, just won't work—that is, I think we'll get ASI (and probably all die) pretty soon anyway. And I think "shut it all down" is very unlikely to be societally selected as our plan for how to deal with AI in the near term, let alone effectively implemented. There are forms of certain actors choosing to go slower on their paths to ASI that I would support, but only if those actors are doing so specifically to attempt to solve alignment before ASI, and only if it won't slow them down so much that someone else just makes unaligned ASI first anyway. And of course we should forcibly stop anyone who is on the path to making ASI without even trying to align it (because they're mistaken about the default result of building ASI without aligning it, or because they think humanity's extinction is good actually), although I'm not sure how capable we are of stopping them. But I want an organization that is facing up to the real, tremendous difficulty of making the first ASI aligned, and trying to do that anyway, because no other option actually has a result that they (or I) find acceptable. (By the way, MIRI is right that "do your alignment homework for you" is probably the literal worst possible task to give to one's newly developed AGI, so e.g. OpenAI's alignment plan seems deeply delusional to me, and thus OpenAI is not the org for which I'm looking.)
I’d like someone from MIRI to read this. If no one replies here, I may send them a copy, or something based on this.
I didn’t look at the tags before reading. I did notice it was fiction pretty quickly but “is this dath ilan” was still a live question for me until the reveal. (Though Eliezer might want to continue writing some non-dath ilan fiction occasionally, if he wants that to continue to be a likely thought process.)