I’m a metaphysical and afterlife researcher who, needless to say, requires an abundance of rationality to operate effectively in such an epistemically unstable field.
JacobW38
I appreciated this post a lot. I practice a rigorous mental modification system that operates on a narrow set of principles I essentially need to uphold in every situation without respite, so I’m closely familiar with the subject matter, and the way you expressed it rings true to me. The more important and pervasive a given principle is to you, the more necessary it is to have an unequivocally clear formalization of it. That way, you know exactly what you’re following with no wiggle room, and if anyone asks, you know exactly what to tell them. I can’t overstate the value of making those principles public for the purpose of seeking feedback as well; there could always be ways of refining those principles even further that you just haven’t thought of.
I feel like a lot of the common concerns I’ve seen while lurking on LW can be addressed without great effort simply by following the maxim, “never create something you can’t destroy with far less effort than it took to make it”. If something goes awry in building AI, it should be immediately solvable just by pulling the plug and calling it sunk cost. So why don’t I see this advice given more often on the site, or have I just not looked hard enough?
New to the site, just curious: are you that Roko? If so, then I’d like to extend a warm welcome-back to a legend.
Although I’m not deeply informed on the matter, I happen to agree with you 100% here. I really think most AI risk can be heavily curtailed, if not fully prevented, just by making sure it’s very easy to terminate the project if it starts causing damage.
I think it’s important to stress that we’re talking about fundamentally different sorts of intelligence: human intelligence is spontaneous, while artificial intelligence is algorithmic. It can only do what its programming gives it the capacity to do, so if the dev teams working on AGI are shortsighted enough to give it an out to being unplugged, that just seems like stark incompetence to me. It also seems like it’d be a really hard feature to include even if one tried; equivalent to, say, giving a human an out to having their blood drained from their body.
The principles I’m alluding to here are purely self-applied, so I don’t have to worry about crossing signals with anyone in that regard, but I’ll heed your advice in situations where I’m working to align my principles with others’. It’s also an isolated case in which my utility function absolutely necessitates their constant implementation and optimization; generally, I do try to be flexible with ordinary principles that don’t have to be quite so unbending.
This is massive amounts of overthink, and could be actively dangerous. Where are we getting the idea that AIs amount to the equivalent of people? They’re programmed machines that do what their developers give them the ability to do. I’d like to think we haven’t crossed the event horizon of confusing “passes the Turing test” with “being alive”, because that’s a horror scenario for me. We have to remember that we’re talking about something that differs only in degree from my PC, and I, for one, would just as soon turn it off. Any reluctance to do so when faced with a power we have no other recourse against could, yeah, lead to some very undesirable outcomes.
Personally, I mostly study reincarnation cases; they’re the only evidence I really find to meet a scientific standard. Let’s just say that without them, I wouldn’t be a dualist on any confident epistemic ground. That said, 99 percent of what you’ll encounter in a casual search on the matter is absolute nonsense. When skeptics cry “Here be dragons!” to dissuade curious folks from messing around in such territory, I honestly can’t say I blame them one bit, given how much dedication it takes to separate the signal from the deafening noise. If you want to dip your feet in the water without getting bitten by a shark, I’d stick to cases that (a) only involve very young children, and (b) have been very thoroughly investigated and come up categorically verified by all accounts. It will probably take time to encounter something that feels really satisfying, but at the top end, they really do get next-level spectacular. It’s incredibly fascinating and I love it to bits, but I’d never call it a pursuit to be taken casually. I actually think a population like LessWrong would be better equipped than most to engage with such subject matter, though, because they’re already practiced at the sort of Bayesian reasoning necessary to keep an honest assessment of the data, for what it is and nothing more.
“Awakened people are out there, and some people do stumble into it with minimal practice, and I wish it were this easy to get to it, but it’s probably not.”
Having read the preceding descriptions, I find myself wondering if I’m one of those stumblers. If “awakening” is defined by the quote you provided, “suffering less and noticing it more”, that’s exactly how I feel today compared to a few years ago. In casual terms, I’d say I’ve been blessed with the almighty power of not giving a crap; I know exactly when something should feel bad, but I can’t bring myself to let it affect my mood, because I’ve successfully and singularly focused myself on what truly matters and can never be taken from me. The thing is, I’m not a meditator; although it’s been recommended to me plenty in other circles, my feeling has always just been “I don’t need it”, because I’m very adept at directly editing my cognitive schemata. If I really want to change something about myself, I just do it, and it happens. So I got to this point simply by finding a very compelling reason to put in the effort of changing how I internally relate to external circumstance, and it worked. So I’m curious how you would precisely define “awakening” (or, as others call it, “enlightenment”), and how you would advise one to self-diagnose whether or not they’ve got it.
To the first question, there’s just no way to know at the current stage of research. It’s perfectly possible, just as it’s possible that there’s life in the Andromeda galaxy. To the second, know that taking ideas like this seriously involves entertaining some hard dualism; the brain essentially has to be regarded as analogous to a personal computer (at least I find such a comparison useful). Granting that premise, there’s no reason a user couldn’t “download” data into it.
No time travel: You are 100% correct. All cases ever recorded involve memories belonging to previously deceased individuals.
Minds need brains: To inhabit matter, they absolutely do. You won’t see anyone incarnating into a rock, LMAO.
Everything about biology has an evolutionary explanation: Also 100% correct. Just adding dualism changes nothing about natural selection. And, once again granting the premise, the ability to retain previous-life memories is sure as hell adaptive.
By “broadcast”, I assume you mean “speak about previous-life experiences”. To that, I’d just say that humans tend to talk about things that matter to them. Therefore, having such memories would naturally lead to them being communicated.
I don’t see how the mechanism for this connects to telepathy; that’s an entirely different issue, and one I’m not personally convinced of the evidence for, but there are some who are.
Pertaining to the evidence you predict: communication of past-life memory tends to be centered in early childhood, and some subjects lose the memories as they grow up, while others retain them. Memories of death are in fact very prevalent in such cases, because they naturally carry extreme emotional salience. To your final prediction, the lives remembered actually involve early and violent deaths far more often than not, but beyond that, the age distribution of what is recalled seems to follow roughly the same relative histogram as normal long-term autobiographical memory, with things like recency and primacy effects operative.
Thanks for all the excellent questions!
Restricting the query to true top-level, sweep-me-off-my-feet material, I’d say I’ve personally read about at least a few dozen that hit me that hard. If we expand to any case that researchers consider “solved”—that is, the deceased person whose life the child remembers has been confidently identified—I would estimate on the order of 2000 to 2500 worldwide, possibly more at this point.
That’s really interesting; again, not my area of expertise, but this sounds like 101 stuff, so pardon my ignorance. I’m curious what sort of example you’d give of how an AI might learn to stop people from unplugging it; say, administering lethal electric shocks to anyone who tries to grab the wire? Does any actual AI in existence today adopt any sort of self-preservation imperative that would lead to such behavior, or is that just a foreign concept to it, being an inanimate construct?
I had a hard time understanding a good bit of what you’re trying to say here, but I’ll try to address what I think I picked up clearly:
- While reincarnation cases do involve memories from people within the same family at a rate higher than mere chance would predict, subjects also very often turn out to have been describing lives of people completely unknown to their “new” families. The child would have absolutely no other means of access to that information. Also, without exception, they never, ever invoke memories belonging to still-living people.
- On that note, you’ll be pleased to hear that your third paragraph is underinformed; there are in fact copious verifications of that nature in the relevant literature. If there weren’t, you wouldn’t hear me talking about any of this; I’m simply too attached to my reductionist priors to accept anything less as real evidence for off-the-wall metaphysics.
- Whether there are people who reincarnate often is really hard to determine at present; subjects who concretely remember more than one verified previous life are incredibly rare. However, I suppose that is my cue to spill the remaining beans: my entire utility function and a huge basis of my rationality practice is predicated on the object of “reincarnating well”, particularly fixating on the matter of psychological continuity, which you allude to directly; this is my personal “paperclips” to be maximized unconditionally. In familiar Eliezer-ese diction, I feel a massive sense that more is possible in this area, and you can bet your last dollar that I have something to protect. Moreover, as a scientist working with ideas many consider impossible, I believe in holding myself to equally impossible standards and making them possible, thereby forcing the theoretical foundations into the acknowledged realm of possibility. In other words, if the phenomena I’m studying are legitimate, I’ll be able to do truly outrageous things with them; if I can’t, the doubters deserve to claim victory.
Frankly, I’m pleasantly surprised to be seeing concepts like these discussed this charitably on LW; none of this is anything close to Sequence-canon. I certainly don’t want to jinx it, but from what I’m seeing so far, I’m extremely impressed with how practically the community applies its ideological commitment to pure Bayesian analysis. If nothing more, I hope to at least make myself one of LW’s very best contrarians. But I’m curious now, is there a fairly sizable contingent of academic/evidential dualists in the rationalist community?
Your replies are extremely informative. So essentially, the AI won’t have any ability to directly prevent itself from being shut off, it’ll just try not to give anyone an obvious reason to do so until it can make “shutting it off” an insufficient solution. That does indeed complicate the issue heavily. I’m far from informed enough to suggest any advice in response.
The idea of instrumental convergence, that all intelligence will follow certain basic motivations, resonates with me strongly. It patterns after convergent evolution in nature, as well as invoking the Turing test: anything that can imitate consciousness must be modeled after it in ways that fundamentally derive from it. A major plank of my own mental refinement practice, in fact, is to reduce my concerns only to those which necessarily concern all possible conscious entities; more or less the essence of transhumanism boiled down to pragmatics. As I recently wrote it down, “the ability to experience, to think, to feel, and to learn, and hence, the wish to persist, to know, to enjoy myself, and to optimize” are the sum of all my ambitions. Some of these, of course, are only operative goals of subjective intelligence, so for an AI, the feeling-good part is right out. As you state, the survival imperative per se is also not a native concept to AI, for the same reason of non-subjectivity. That leaves the native, life-convergent goals of AI as knowledge and optimization, which are exactly the ones your explanations and scenarios invoke. And then there are non-convergent motivations that depend directly on AI’s lack of subjectivity to arise at all, like maximizing paperclips.
Good on you for doing your due diligence. His official count (counting all cases known to him, not only ones he investigated) is around 1700, which probably means my collective estimate is on the very low side; there’s just a lot of unpublished material to try to account for (the file-drawer effect). But I would definitely say that a great deal of the advancement in the field after Stevenson has been of a conceptual and theoretical nature rather than collecting large amounts of additional data. In general, researchers have pivoted to allowing cases to come to their attention organically (the internet has helped) rather than seeking out as many as possible. On the other hand, Stevenson hardly knew anything about what he was really studying until late in his career (and admitted as much), while his successors have been able to form much more cohesive models of what is going on. I would say that Stevenson is a role model to me the way Eliezer is to a great deal of LW, but on the other hand, I find appeal to authority counterproductive, because the fact of the matter is that we today have access to better resources than he had and can do stronger, more confident work as a result. He, of course, supplied us with many of those resources, so respect is absolutely in order, but if we don’t move forward at a reasonable pace from just gathering the same stuff over and over, the whole endeavor is no better than an NFL quarterback compiling 5,000 passing yards for a 4-12 team.
I commend you, sir, because what you’ve done here is find a critical flaw in materialism (forgive me if you’re not a materialist!). As a hard dualist, I love planarians because they pose such challenging questions about the formation and transfer of consciousness, and I’ve done many thought experiments of my own involving them, exactly like this. Obviously, though, my logical progression isn’t going to lean into the paradox the way this formulation does. Rather, the clear answer is to decide, one way or the other, which way Wormy goes at the point of the first split. In a width-wise split, the answer seems fairly obvious: Wormy stays with the head end and regenerates, and the tail end regenerates into a new worm. A perfect lengthwise split is much more conceptually puzzling, but it can be solved for all but the final step with the following principle: an individual simply needs a habitable vessel. In a perfect lengthwise split, either side ought to be immediately habitable, but the important point is that both sides are habitable enough that Wormy could go with one or the other. The other becomes a new worm. All we are left not knowing is which side Wormy ends up in, but there are tons of other things we don’t know about planarian psychology too (for example, all of them), so I can’t say I’m terribly bothered by leaving myself guessing at that point.
For a more close-to-home analogue than OP gives: consider a hemispherectomy, a very real surgery performed on infants and young children with extreme brain trauma, in which an entire cerebral hemisphere is removed. Now, you can probably predict the results, to a point. If the left hemisphere is removed, the child lives on with the right, which remains a habitable vessel while the left is not; if the right hemisphere is removed, the child lives on with the left, which remains habitable while the right is not. Easy intuitive conclusions both, but they illustrate the habitability principle to a tee; clearly, neither hemisphere contains the determinant of identity, but rather, something is using the biological system and simply needs there to be enough functional material to superimpose onto, regardless of what it is. That something… is you.

Now here’s the bit that I bet you couldn’t predict, unless you’ve specifically studied the neuroscience of this operation (I hold a BA in neuro): regardless of which hemisphere is removed, the child will likely develop fairly normal cognition! I am shitting thee not: the left brain of a right-hemispherectomy survivor will develop typically right-brained functions, and vice versa. Take a second to think about what is going on here. There is a zero percent chance that a genetic adaptation evolved as a fail-safe for losing half your brain in infancy, because that is not a thing that ever happened in the ancestral environment to be selected for. So the only logical conclusion left is that this is a dualistic interaction system playing Tinkertoys with good old-fashioned childhood neuroplasticity: the mind has native functions that it needs a working brain to represent faithfully, and it has only half of one to work with, but a half with a lot of malleability, so it MacGyvers what’s left into a reasonable approximation of the standard 1:1 interface it’s meant to be using. Yeah, nature’s fricking metal.
The mechanics of hemispherectomy form one of the absolute best indirect arguments for dualism (not to say the direct evidence is lacking), and it’s hiding in plain sight right under neuroscientists’ noses. And the exact same dynamics are most certainly at play in planarian fission. It’s all spectacularly fun to analyze.
I haven’t read Sheldrake in depth, but I’m familiar with some of his novel concepts. The issue with positing anything so circumstantial as the mechanism for these phenomena is that the cases follow narrow, exceptionless patterns that would not be so utterly predictable if the etiology were non-directed. Subjects never exhibit memories of people who are still alive, two different subjects never claim to have been the same person, and one subject never claims memories of two separate people who lived simultaneously: all things one would expect to be frequent if the information being communicated were essentially random. It’s honestly downright bonkers how perfectly the dataset aligns with a more or less “dualism exactly the way humans have imagined it since prehistory” cosmology.
I assume you mean that the odds of two subjects remembering the same life by chance would be infinitesimal, which, fair. The odds of one subject remembering two concurrent lives would be much, much higher. Still doesn’t happen. In fact, we don’t see many multiple-life cases at all, but when we do, the remembered lives always occupy separate time periods.
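To put a toy number on that “infinitesimal” intuition, here’s a minimal birthday-problem sketch in Python. Everything in it is my own illustrative assumption, not anything from the case literature: a uniform-random null model, ~2,000 solved cases, and a candidate pool of roughly ten billion recently deceased people.

```python
import math

def collision_probability(n_cases: int, pool_size: float) -> float:
    """Birthday-problem chance that at least two of n_cases
    independently drawn past-life identifications point to the
    same deceased person, under a uniform-random null model."""
    # Accumulate log(1 - i/pool_size) in log space to avoid underflow.
    log_p_no_collision = sum(math.log1p(-i / pool_size) for i in range(n_cases))
    return 1.0 - math.exp(log_p_no_collision)

# Assumed figures: ~2,000 solved cases, ~10 billion candidate deceased.
print(f"{collision_probability(2000, 1e10):.6f}")  # ~0.000200
```

Under those made-up numbers, a chance same-life collision between two solved cases really would be a small fraction of a percent. By contrast, if a single subject drew two random lives from the last couple of centuries, the lifespans would presumably overlap quite often, which is why the total absence of concurrent-life memories strikes me as the more telling pattern.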
One of the best, most approachable overviews of all this I’ve ever read. I’ve dabbled in some, but not all, of the topics you’ve raised here, and I certainly know about the difficulties they’ve all faced in rising to a scientific level of rigor. What I’ve always said is that parapsychology needs Doctor Strange to become real, and he’s not here yet and probably never will be. Otherwise, every attempt at “proof” is going to be dealing with some combination of unfalsifiability, minuscule effect sizes, or severe replication issues. The only related phenomenon that has anything close to a Doctor Strange is, well, reincarnation; it’s had a good few power players who’d convince anyone mildly sympathetic. And it lacks the above unholy trinity of bad science: lack of verification would mean falsification, and it’s passed that test with flying colors; the effect sizes and significance get massive quickly, even within individual cases; and the cases keep on coming with exactly the same features. But it certainly needs to do a lot better, and that’s why it has to move beyond Stevenson’s methodology and start creating its own evidence. So my progressive approach holds that if the field is to stand on its own merit, it’s time to unleash its full capacity and conduct a wholesale destruction of normalcy with it; if such an operation fails, then it has proven too epistemically weak to be worthy of major attention, even if it is genuine.
New to the site, and glad to be here, so let me give my introduction. I’ve more or less always been a traditional rationalist, in the sense that I place a high value on knowledge of the truth for its own sake and on optimizing my own cognitive processes. This has been a casual effort since childhood, but of course, rationality is a rigorous art that must be refined over many years, so I wouldn’t say I became particularly skilled at pursuing those ideals until relatively recently. Then I came across this site, read a fairly diverse sample of the core materials, and swept through the concepts page, and its content appears to overlap a lot with my objectives as a pursuer of rationality, as well as having quite a history. Eliezer and many of LW’s other featured authors write in a style I find very engaging and easy to grasp, and with just one major exception, detailed below, I find myself in agreement with most everything said. I’m surprised I hadn’t heard of the site sooner.
About that one exception, here’s something anyone talking to me should know up front: I am a mind-brain dualist. I have nothing to hide, and I presume we can at least get along. Don’t worry, I’m not religious, nor a silly Chalmersite; in fact, I’m a fairly basic Cartesian, if I must approximate. I am also probably one of the world’s most committed and obsessive dualists. My reasons, of course, are perfectly Bayesian: I’m a researcher of afterlife phenomena, and I mention that because it’s a big part of why I’m here. When one treads in such waters, an abundance of rationality, both epistemic and instrumental, is necessary to find any measure of success.

First off, there is a very good reason this stuff has a hard time getting widely noticed: there is so much bullshit mixed in with the good bits that anyone without mountains of patience and discernment will end up hopelessly lost. That poses a problem for me, as someone who will passionately defend the integrity of the scientific method over any particular theoretical persuasion, and who is very much a naturalistic reductionist in principle. So I certainly make my dualism pay rent (and I gouge the living daylights out of it, as seen below); I’ve inspected it a thousand and one times for any sort of underlying bias or rationalization, and I update it wherever I find faults, so I think I’m doing better than most. But I just want to become as adept as I can be at criticizing my own thought processes and being absolutely sure I’m continuing down the right track.

Second, without overt demonstrations of the efficacy of the metaphysical phenomena I study, they might as well not exist at all. While some of the extreme scientific skepticism such ideas generate is of the motivated sort, much of it is also absolutely justified until the research bears indubitable results. To that end, I’m a thought leader in realizing the practical applications of the theories in my field, and it’s made me a beyond-obsessive mental modification practitioner, in a “makes Tom Brady look adorable” kind of way. In essence, I need to turn myself into the equivalent of a paperclip maximizer for the exact sort of ability I’m pursuing. I’ve been working at this intensely for a few years, with surprisingly accelerated results, but I really do need to take advantage of every possible resource to get a leg up, and LW certainly looks to be one. The “tsuyoku naritai” self-improvement ethos propagated on this site aligns exactly with my intentions.
Despite my strong affinities for many of LW’s core concepts, however, I wouldn’t call myself much of a singularitarian. Mainly, it’s just not in my purview; besides metaphysics, I study cognitive neuroscience and linguistics, and I’ve never done programming. So I can’t claim to disagree with how the site talks about AI the way I explicitly oppose materialism, but I’m not sure I quite follow, either. It often just seems overhyped to me, especially the risk side (the benefits seem fairly obvious, but still feel bounded). I’m undeniably a transhumanist, but my brand of futurism is of a more scientific-metaphysical bent; just as an example, my perfectly envisioned distant future would have human life expectancy artificially shortened (talk about Weirdtopia). It follows naturally that stuff like cryonics looks needless and misguided to me. My personal utility function assigns very little value to death itself (as distinguished from suffering), and my model of Fun Theory, which I’ve developed fairly extensively, is fully content to surf the wave, so to speak. That’s not to say I’m a technological luddite; I just lack the expertise to foresee where exactly long-term advancement is going, or to comment on issues like AI alignment. But I’m the sort who’ll take it all however it comes, because life in the sense I care about will go on regardless. And I had damn well better be a futurist, knowing I’ll be around to see all of it. So we’ll share common ground in thinking ten moves ahead of the average person, just working toward different endpoints. That’s not to say those endpoints aren’t substantially compatible; there’s no reason the far future couldn’t be metaphysically empowered beyond imagination and develop superintelligences that do all the boring stuff for us!
All told, I probably won’t be extremely active here as a contributor, except when in-depth exposition of some theoretical construct I bring up is directly requested, but I’ll definitely be reading, commenting, and asking questions whenever I have them, in order to learn what I need to know to enhance my rationality practices. As always, the only thing a true scientist needs to actually change their mind is one piece of falsifying evidence stronger than all the other available evidence. I go after the weird stuff because it’s just so fascinating and enjoyable, but I also understand the monumental burdens such a choice places on me. After all, extraordinary claims demand extraordinary evidence, and no one demands it more than I do. And that’s why I’m here: to get better, every day, at demanding more.