Also known as Max Harms. (I post AI alignment content under my other account.)
Not the same person as MaxH!
Raelifin
The dynamic you’re talking about is real, but I also suspect it’s a product of the Overton window being closed. Marketing and building momentum could, I think, unlock a huge market. But absent doing better than the existing orgs on that front, I agree the customer base is likely to be tiny.
I don’t have a good sense of how good MAiD cryonics can get in terms of information preservation. On the surface it should be a huge improvement, since it avoids the ischemia issue, but three things immediately jump to mind:
* Ken Hayworth effectively condemned the highest-quality cryo-tissue in 2015, not just the average case (which he agrees is much worse).
* Cryo orgs store at −196 °C, and there is a substantial risk of shattering at that temperature.
* Being unable to survive thawing means cryo is more fragile in many ways.
Thanks for the response! I appreciate the extra context on your third-party validation. Sorry if that’s on the website somewhere and I just didn’t find it.
I actually learned about Sparks in the course of investigating Nectome, and I hope my post helps similarly direct more attention your way. I broadly think Sparks is doing good work in saving lives, and I appreciate your contribution to that. :)
If you succeed, maybe you should work for Nectome. 😅
Mostly our conversation was about MAiD, and the way that donating your body as part of that can require crossing country borders. But honestly I don’t remember a ton of specifics because I wasn’t taking notes during that exchange. Maybe @Borys Wrobel has more to say. (Tho he doesn’t use LW that much, so if you really want to know, you might want to email Nectome: Hello@nectome.com.)
Yeah, totally. I expect historical records and the memories of other people to be useful.
My point is that I don’t know an objective measure for whether the superintelligence rescued the existing person or built a new person, except via whether they match the other memories and records. If the superintelligence optimizes for the “rescued” person matching the memories of those who knew them, the result will seem like a successful revival, but might not actually be very close to the real deal.
Yes.
Also @Aurelia is on LW and might be willing to answer questions herself.
Hard to say what the future can/can’t do. I think I’m, like, 80% confident that a brain that’s simply dumped in LN2 will lose so much information that even a superintelligence couldn’t put the person back together in a way that leaves their loved ones feeling they simply came back from the dead, rather than being fundamentally changed (modulo cheating by “repairing” the cryonaut in a way deliberately designed to match the memories of their loved ones). Like, at the far end of what might be the case, the frozen brain tissue might as well have gone into the fire. The superintelligence can build a person that matches the historical record, but they won’t be the same person.
It could also be the case that the relevant information is still there, even when shredded, like papers put through a shredder, and that a sufficiently dedicated agent could figure out a model of how the ice formed, simulate an inverse process, and have things be fine. Even if I get in an accident and I’m at room temp for days, I would still like to be cryopreserved just in case this is true. But I wouldn’t bet on it.
In the common cryo case it gets even trickier, since some parts of the brain will be well-perfused and others won’t, and there’s a quantitative question of how much of the brain gets good perfusion. If I lost 10% of my cortex I would still be pretty similar, but would also be pretty different. I don’t think we have good measures here.
In short: idk, my guess is that reality is complicated and “invert the shredding” is not as simple as it sounds, even if it’s possible, in some sense.
Nectome: All That I Know
Raelifin’s Shortform
Does anyone have any questions that they’d like me to ask Nectome? I’m visiting their facilities on Wednesday and getting some VIP access. I think they’re quite happy to answer questions directly, but since I’m doing a deep-dive on them, there may be things that I, as an outsider, am more capable of answering as part of my investigation.
The only thing I can think of is Three Worlds Collide, but that’s by Eliezer, and doesn’t exactly fit your description. https://www.lesswrong.com/posts/HawFh7RvDM4RyoJ2d/three-worlds-collide-0-8
This is great. I don’t remember the last time a post made me flip so hard from “the thesis is obviously false” to “the thesis is obviously true”! And not just because I didn’t understand it, but also because I learned a thing. (Tho part of it was a pedagogically useful misunderstanding.)
Nope. If you have specific questions I’d be happy to answer them.
Becoming a Chinese Room
This market looks semi-reasonable to me: https://manifold.markets/MaxHarms/when-will-the-red-heart-audiobook-c
It’s hard for me to make concrete predictions because I have a lot of agency, and it depends on things like my standards and priorities. It turns out I’m moving across town in December, so that will delay things. If I had to guess, I would say the market is over-estimating the chance it’ll be out before January or after April, and under-estimating my chances of getting it done in February, but :shrug:.
Thanks for such a glowing review!! I’m so glad you heart the book!
> The least realistic part of Red Heart is simply that there’s a near-superhuman AI in the near future at all[2].
I’d be curious about the specific ways in which you feel that Yunna is unrealistically strong or competent for a model around the size of GPT-6.5 (which is what I was aiming for in the story). LessWrong has spoiler tags in case you want to get into the ending. (Use >! at the start of a line to black it out.)
The story actually starts in an alternate-timeline October 2023. I knew the book would be a period-piece and wanted to lampshade that it’s unrealistically early without making it distracting. Glad to hear you didn’t pick up on the exact date.
Just to defend myself about AI 2027 and timelines, I think a loss-of-control event in 2028 is very plausible, but as I explain in the piece you link in the footnote, my expectation is actually in the early 2030s, due to various bottlenecks and random slowdowns. But also, error bars are wide. I think the onus should be on people to explain why they don’t think a loss of control in 2028 is possible, given the natural uncertainty of the future and the difficulty of prediction.
Regardless, thanks again. :)
With the Crystal books, I just slammed them out there with pretty minimal effort. I gave Society away for free, and didn’t make paperback copies until just recently. For Red Heart I thought the story might have broader appeal, and I wanted to get over my allergy to marketing, so I reached out to a bunch of literary agents early this year. Very few were interested, and most gave no reason. One was kind enough to explain that, as a white guy writing a book about China, it would be an uphill battle to find a publisher, and that I’d probably need a Chinese co-author to make it work. She estimated that, optimistically, I might be able to get it into stores in 2027.

From my perspective that was way too slow, and since I already had experience self-publishing, I went down that route. Self-publishing is extremely easy these days, and can produce a product of comparable quality if you are competent and/or have a team. The main issue is marketing and building awareness; traditional publishing still acts as a gatekeeper in many ways. So I’m still extremely dependent on word-of-mouth recommendations.
Lovely to find yet another person who benefited from my stories. I hope you enjoy Red Heart! ❤️
Yes. That’s right. And I am (among other things) worried about an AI that warps my values by telling me a series of facts.
But I want to clarify that I’m talking about terminal values, not strategic sub-goals. Under the stopgap plan, the AI is 100% allowed to tell me that the store is closed, thus changing my plan to go to the store. What it shouldn’t do is tell me intense stories about the suffering of pigs and thereby change how much I care about pigs.[1]
Why do you think this stopgap is so bad? (I agree that it’s bad, but it seems like you see it as worse than I do.)
Unless this story is necessary to counteract another pressure such that I cleave closer to the null-action counterfactual.