I’m a metaphysical and afterlife researcher who, needless to say, requires an exceptional degree of rationality to perform effectively in such an epistemically unstable field.
I highly recommend following Rational Animations on YouTube for this sort of general purpose. I’d describe their format as “LW meets Kurzgesagt”, the latter of which I already found highly engaging. They don’t post new videos that often, but their stuff is excellent, even more so recently, and it triggers my dopamine circuits in a way that rationality content generally struggles to. Imo, it’s perfect introductory material for anyone new to LW to get familiar with its ideology in a way that makes learning easy and fun.
(Not affiliated with RA in any way, just a casual enjoyer of chonky shibes)
You’ve described habituation, and yes, it does cut both ways. You also speak of “pulling the unusual into ordinary experience” as though that were undesirable, but on the contrary, I find exactly that to be a central motivation for me. When I come upon things that at first blush inspire awe, my drive is to fully understand them, perhaps even to command them. I don’t think I know how to see anything as “bigger than myself” in a way that doesn’t register simply as a challenge to rise above whatever it is.
Manipulating one’s own utility function is supposed to be hard? That would be news to me. I’ve never found it problematic once I’ve either learned new information that led me to update it or become aware of a pre-existing inconsistency. For example, loss aversion is something I probably had until it was pointed out to me, but not after that. The only exception would be things one easily attaches to emotionally, such as pets, to which I’ve learned simply not to allow myself to become so attached. Otherwise, could you please explain why you claim that such traits are not readily editable more generally?
Thanks for asking. I’ll likely be publishing my first paper early next year, but the subject matter is quite advanced, definitely not entry-level stuff. It takes more of a practical orientation to the issue than merely establishing evidence (the former being my specialty as a researcher; as is probably clear from my other replies, I’m satisfied with the raw evidence).
As for best published papers for introductory purposes, here you can find one of my personal all-time favorites. https://www.semanticscholar.org/paper/Development-of-Certainty-About-the-Correct-Deceased-Haraldsson-Abu-Izzeddin/4fb93e1dfb2e353a5f6e8b030cede31064b2536e
Apologies for the absence; a combination of being busy and annoyance with the downvotes, though I could also do a better job of being clear and concise. Unfortunately, after having given it thought, I just don’t think your request is something I can do for you, nor should it be. Honestly, if you were to simply take my word for it, I’d wonder what you were thinking. But good information, including primary sources, is openly accessible, and I encourage those with the interest to take a deep dive into it, for sure. Once you go far enough in, in my experience, there’s no getting out, unless perhaps you’re far more demanding of utter perfection in scientific analysis than I am. And I’m generally seen as one of the most demanding people currently in the PL-memory field, to the point of being a bit of a curmudgeon (not to mention an open sympathizer with skeptics like CSICOP, which is also deeply unpopular). But it takes a commitment to really wanting to know one way or the other, and I can’t decide for anyone whether or not to have that.
I certainly could summarize the findings and takeaways of afterlife evidence and past-life memory investigations for a broad audience, but I haven’t found any reason to assume it wouldn’t just be downvoted. That’s not why I came here anyway; I joined to improve my own methods and practice. I feel that if I were interested in doing anything like proselytizing, I would have to have an awfully low opinion of the evidence’s ability to speak for itself, and I don’t at all. But you tell me whether I’m taking the right approach here, or whether an ELI5 on the matter would be appropriate and/or desired. I wouldn’t hesitate to provide such content if invited.
Based on the evidence I’ve been presented with to this point, I’d say high enough to confidently bet every dollar I’ll ever earn on it. Easily >99% that it’ll be put beyond reasonable doubt in the next 100-150 years, and I only specify that long because of the spectacularly lofty standards academia forces such evidence to measure up to. I’m basically alone in my field in actually being in favor of those standards, however, so I’m content to play the long game with it.
Been staying hard away from crypto all year, with the general trend of about one seismic project failure every 3 months, and this might be the true Lehman moment on top of the shitcoin sundae. I’m making no assumptions about intent or possible criminal actions until more info is revealed, but it certainly looks like SBF mismanaged a lot of other people’s money and was overconfident in his own, which was largely pegged to illiquid altcoins and FTT. The most shocking thing to me is how CZ took a look at their balance sheet for all of like 3 hours after announcing intent to acquire, and just noped right outta there. Clearly this situation is more FUBAR than we ever imagined, and it all feels like SBF had to have known, at the very least, that his castle was built on a foundation of sand. By all appearances, for him not to have seen that would require an immense amount of living in denial.
That being said, I could see how this feeling would come about if the value/importance in question is imposed on you by others rather than being value you truly assign to the project. In that case, such a burden can weigh heavily and manifest aversively. But avoiding something you actually assign that value to just seems like a basic error in utility math?
I have a taboo on the word “believe”, but I am an academic researcher of afterlife evidence. I personally specialize in verifiable instances of early-childhood past-life recall.
Honestly, even from a purely selfish standpoint, I’d be much more concerned about a plausible extinction scenario than just dying. Figuring out what to do when I’m dead is pretty much my life’s work, and if I’m being completely honest and brazenly flouting convention, the stuff I’ve learned from that research holds a genuine, not-at-all-morbid appeal to me. Like, even if death wasn’t inevitable, I’d still want to see it for myself at some point. I definitely wouldn’t choose to artificially prolong my lifespan, given the opportunity. So personally, death and I are on pretty amicable terms. On the other hand, in the case of an extinction event… I don’t even know what there would be left for me to do at that point. It’s just the kind of thing that, as I imagine it, drains all the hope and optimism I had out of me, to the point where even picking up the pieces of whatever remains feels like a monumental task. So my takeaway would be that anyone, no matter their circumstances, who really feels that AI or anything else poses such a threat should absolutely feel no inhibition toward working to prevent such an outcome. But on an individual basis, I think it would pay dividends for all of us to be generally death-positive, if perhaps not as unreservedly so as I am.
I like the thought behind this. You’ve hit on something I think is important for being productive: if thinking about the alternative makes you want to punch through a wall, that’s great, and you should try to make yourself feel that way. I do a similar thing, but more toward general goal-accomplishment; if I have an objective in sight that I’m heavily attracted to, I identify every possible obstacle to the end (essentially murphyjitsu’ing), and then I cultivate a driving, vengeful rage toward each specific obstacle, on top of what motivation I already had toward the end goal. It works reasonably well for most things, but is by far the most effective on pure internal tasks like editing out cognitive biases or undesired beliefs, because raw motivation is just a much more absolute determinant of success in that domain. Learning is a mostly mental task, so this seems like a very strong application of the general principle to me.
On your question of how to respond to pointless suffering, though, I don’t think your response would work for me at all. I’d just snap back, “well, what does it matter at that point?!”. I think I actually prefer a Buddhist-ish angle on the issue, directly calling out the pointlessness of suffering per se (I’m nonreligious and agnostic myself, for the record). To paraphrase a quote I got from a friend of mine, “one who can accept anything never suffers”. Pain is unavoidable, but perspective enables you to remain happy while in pain, by keeping whatever is not lost at the front of your mind. In your hypothetical scenario, I think I’d frame it something like, “Have your reasons for joy be ones that can never be taken from you.” Does that ring right?
It appears what you have is free won’t!
For the own-behavior predictions, could you put together a chart with calibration accuracy on the Y axis and time elapsed between the prediction and the final decision (in buckets) on the X axis? I wonder whether the predictions became less calibrated the farther into the future you tried to predict, since a broader time gap leaves more opportunity for your intentions to change.
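If it helps, here’s a minimal sketch of the kind of bucketing and chart I have in mind (in Python; the data, bucket edges, and the particular “calibration accuracy” score are all made-up placeholders, not your actual numbers or method):

```python
# Minimal sketch of the bucketed calibration chart described above.
# Placeholder data: each prediction is (stated probability, whether it came true,
# days between making the prediction and the final decision).
import matplotlib.pyplot as plt

predictions = [
    (0.9, True, 2), (0.7, False, 10), (0.8, True, 45), (0.6, False, 90),
    # ... real data would go here
]

buckets = [(0, 7), (7, 30), (30, 90), (90, 365)]  # days elapsed, bucketed
labels, scores = [], []

for low, high in buckets:
    in_bucket = [(p, hit) for p, hit, days in predictions if low <= days < high]
    if not in_bucket:
        continue
    # One simple notion of "calibration accuracy": 1 minus the gap between the
    # bucket's average stated confidence and its observed hit rate.
    mean_conf = sum(p for p, _ in in_bucket) / len(in_bucket)
    hit_rate = sum(hit for _, hit in in_bucket) / len(in_bucket)
    labels.append(f"{low}-{high}d")
    scores.append(1 - abs(hit_rate - mean_conf))

plt.bar(labels, scores)
plt.xlabel("Days between prediction and final decision")
plt.ylabel("Calibration accuracy")
plt.show()
```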
This is way too interesting not to have comments!
First, I think this bears on the makeup of one’s utility function. If your UF contains absolutes, infinite value judgments, then in my opinion, it is impossible not to be truly motivated toward them. No pushing is ever required; at least, it never feels like pushing. Obstacles just manifest to the mind in the form of fun challenges that only amplify the engagement, because you already know you have the will to win. If your UF does not include absolutes, or you step down to the levels that are finite (for the record, I see no contradiction in a UF with one infinite and arbitrarily many finites), that is where this sort of akrasia emerges, because motivation naturally flickers in and out between those various finite objects at different times.
Interestingly, this is almost the opposite of the typical form of akrasia (failing to do something despite your better judgment). As with that, though, noticing it when it happens is, in my opinion, the first step to making it less akratic. I’ve absolutely felt the difference, at various times in my life, between actually having the thing and trying to “do” it, for all of Kaj’s examples (motivation, inspiration, empathy, and so on). The best solution I’ve personally found is, when possible, to simply wait for the real quality to return, and it always does. For example, when working on private writing projects, I write when a jolt of inspiration strikes, then wait for the next brilliant idea rather than trying to force it; if I do force it, I always produce inferior writing. When waiting isn’t practical, such as on academic projects with a deadline, I don’t have such an easy path to always putting in my best-quality work. This is one major reason I think being highly gifted doesn’t necessarily translate to exceptional academic performance; the education system isn’t really adapted to how at least some great minds operate.
I suspect the dichotomy may be slightly misapportioned here, because I sometimes find that ideas which are presented on the right side end up intersecting back with the logical extremes of methods from the left side. For example, the extent to which I push my own rationality practice is effectively what has convinced me that there’s a lot of ecological validity to classical free will. The conclusion that self-directed cognitive modification has no limits, which implies conceptually unbounded internal authority, is not something that I would imagine one could come to just by feeling it out; in fact, it seems to me like most non-rationalists would find this highly unintuitive. On the other hand, most non-rationalists do assume free will for much less solid reasons. So how does your formulation account for a crossover or “full circle” effect like this?
On a related note, I’m curious whether LWers generally believe that rationality can be extended to arbitrary levels of optimization by pure intent, or whether there are cases where one cannot be perfectly rational given the available information, no matter how much effort is applied. I place myself in the former camp.
I don’t think I’ve ever experienced this. I’d actually say I could be described by the blue graph. The more I really, really care about something, the more I want to do absolutely nothing but it, especially if I care about it for bigger reasons than, say, because it’s a lot of fun at this moment. Sometimes, there comes a point where continuing to improve said objective feels like it’s bringing diminishing returns, so I call the project sufficiently complete to my liking. Other times, it never stops feeling worth the effort, or it is simply too important not to perpetually, asymptotically optimize the mission. So I keep moving forward, forever. I know for sure that the work I consider the most important thing I’ll ever do is also something I’ll never stop obsessing over for a minute. And it doesn’t become onerous; it feels awesome to have set oneself on a trajectory demanding of such fixation. So I’m actually a little puzzled what the upshot is supposed to be here.
I like this proposal. In light of the issues raised in this post, it’s important for people to get in the habit of explaining their own criteria for “truth” instead of leaving what they’re talking about ambiguous. I tend not to use the word much myself, in fact, because I find it more helpful to describe exactly what kind of reality judgments I’m interested in arriving at. Basically, we shouldn’t be talking about the world as though we have actual means of knowing things about it with probability 1.
Important post. The degree to which my search for truth is motivated, and to what ends, is something I grapple with frequently. I generally prefer the definition of truth as “that which pays the most rent in anticipated experience”; essentially a demand for observability and falsifiability, a combination of your correspondence and predictive criteria. This, of course, leaves what is true subject to updating if new ideas lead to better results, but I think it’s the best way we have of approximating truth. So I’m constantly looking really hard at the evidence I examine and asking myself: am I convinced of this for the right reasons? What would have to happen to unconvince me? How can I take a detached stance toward this belief, if there ever comes a time when I may no longer want it? So insofar as my truth-seeking could be called motivated, I aim for it to be motivated solely by adherence to the scientific method, which is something I’m unashamed to simply acknowledge.
Unfortunately, I haven’t kept a neat record of where exactly each case is published, so I asked my industry connections and was directed to the following article. Having reviewed it, it would of course be presumptuous of me to say I endorse everything stated therein, since I haven’t read the primary source for every case described. But those sources are referenced at the bottom, many with links. It should suffice as a compilation of information pertaining to your question, and you can judge what meets your standards.
https://psi-encyclopedia.spr.ac.uk/articles/reincarnation-cases-records-made-verifications
Disclaimer: I’m not someone who personally investigates cases. What you’ve raised has actually been a massive problem for researchers since the beginning, and has little to do with the internet; Stevenson himself often learned of his cases many years after they were in their strongest phase, and sometimes after connections had already been made to a possible previous identity. In general, the earlier a researcher can get on a case and in contact with the subject, the better. As a result, cases in which important statements given by the subject are documented and corroborated by a researcher before any attempt at verification has been made are considered some of the best. In that regard, the internet has actually helped researchers learn of cases earlier, when subjects are typically still giving a lot of information and no independent searches have been conducted. As for problems specifically presented by online communication, I would say that whenever a potentially important case comes to researchers’ attention, they try to take the process offline as soon as the situation allows.
I fully agree with the gist of this post. Empowerment, as you define it, is both a very important factor in my own utility function and, it seems, an integral component of any formulation of fun theory. In your words, “to transcend mortality and biology, to become a substrate independent mind, to wear new bodies like clothes” describes my terminal goals for a thousand years into the future so smack-dab perfectly that I don’t think I could possibly have put it better. Empowerment is, yes, an instrumental goal for all the options it creates, but also an end in itself, because the state of being empowered is just plain fun and relieving and great all around! Not only does this sort of empowerment provide unlimited potential to be parlayed into enjoyment of all sorts, it lifts the everyday worries of modern life off our shoulders completely, if taken as far as it can be. I could effectively sum up the main reason I’m a transhumanist as seeking empowerment, for myself and for humanity as a whole.
I would add one caveat, however, for me personally: the best kind of empowerment is self-empowerment. Power earned through conquest is infinitely sweeter than power that’s just given to you. If my ultimate goals of transcending mortality and such were just low-hanging fruit, I can’t say I’d be nearly as obsessed with them in particular as I am. To analogize this to something like a video game, it feels way better to barely scrape out a win under some insane challenge condition that wasn’t even supposed to be possible, than to rip through everything effortlessly by taking the free noob powerup that makes you invincible. I don’t know how broadly this sentiment generalizes exactly, but I certainly haven’t found it to be unpopular. None of that is to say I’m opposed to global empowerment by means of AI or whatever else, but there must always be something left for us to individually strive for. If that is lost, there isn’t much difference left between life and death.