I’m a metaphysical and afterlife researcher who, needless to say, requires an exceptional degree of rationality to function effectively in such an epistemically unstable field.
I’m a hardcore consciousness and metaphysics nerd, so some of your questions fall within my epistemic wheelhouse. Others I’m simply interested in, as you are, and can only respond to with opinion or conjecture. I will take a stab at a selection of them below:
4: “Easy” is up in the air, but one of my favorite instrumental practices is to identify lines of preprogrammed “code” in my cognition that do me absolutely no good (grief, for instance) and simply hack into them to make them execute different emotional and behavioral outputs. I think the best way to stay happy is just to manually edit out negative thought tendencies, and having some intellectual knowledge that none of it’s a big deal anyway always helps.
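To caricature that in literal code, here’s a toy sketch; it isn’t a claim about how cognition actually works, and every name in it is purely illustrative:

```python
# Toy illustration of "editing out" a preprogrammed emotional response.
# This is the metaphor made literal, not a model of real neuroscience.
default_responses = {"loss": "grief", "setback": "frustration"}

def respond(event, overrides=None):
    """Return the emotional output for an event, preferring any manual override."""
    overrides = overrides or {}
    return overrides.get(event, default_responses.get(event, "neutral"))

print(respond("loss"))                          # default output: grief
print(respond("loss", {"loss": "acceptance"}))  # after the "hack": acceptance
```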
8: I would define it as “existing in its minimally reduced, indivisible state”. For instance, an electron is a fundamental particle, but a proton is not because it’s composed of quarks.
12 (and 9): I think you’re on the best track with B. Consciousness is clearly individuated. Is it fundamental? That’s a multifaceted issue. It’s pretty clear to me that it can be reduced to something that is fundamental. At minimum, the state of being a “reference point” for external reality is something that really cannot be gotten beneath. On the other hand, a lot of what we think of as consciousness and experience is actually information: thought, sensation, memory, identity, etc. I couldn’t tell you which of these are irreducible; I suspect the capacities for at least some of them are. Your chosen stance here seems to approximate a clean-cut interactionism, which is at least a serviceable proxy.
13: I think this is the wrong question. We don’t know anything yet about how physics at the lowest level ultimately intersects and possibly unifies with the “metaphysics” of consciousness. At our current state of progress, no matter what theory of consciousness proves accurate, it will inevitably lean on some as-yet-undiscovered principle of physics that we in 2023 would find incomprehensible.
16: This will be controversial here, but is a settled issue in my field: You’d be looking for phenomenological evidence that AIs can participate in metaphysics the same ways conscious entities can. The easiest proof to the affirmative would be if they persist in a discarnate state after they “die”. I sure don’t expect it, but I’d be glad to be wrong.
19: Given the simulation hypothesis’s implications about computers and consciousness, which, as I said above, I don’t expect to hold up, I think a more likely idea along the same general lines is that an ultra-advanced civilization could simply create a genuine microcosm where life evolved naturally. Not to say it’s likely.
20: Total speculation, of course—my personal pet hypothesis is that all civilizations discover everything they need to know about universal metaphysics way before they develop interstellar travel (we’re firmly on that track), and at some point just decide they’re tired of living in bodies. I personally hope we do not take such an easy way out.
21: I can buy into a sort of quantum-informed anthropic principle. Observers seem to be necessary to hold non-observer reality in a stable state. So that may in fact be the universe’s most basic dichotomy.
33: In my experience, the most important thing is to love what you’re learning about. Optimal learning is when you learn so quickly that you perpetually can’t wait to learn the next thing. I don’t think there’s any way to make “studying just to pass the test” effective long-term. You’ll just forget it all afterwards. You can probably imagine my thoughts on the western educational system.
43-44: As for expanding one’s intellectual comfort zone, Litany of Tarski-type affirmations are very effective. The benefit, of course, is better epistemics from shedding ill-conceived discomfort with unfamiliar ideas.
45: I’ve actually never experienced this, and was shocked to learn in college that it’s a thing. Science will typically blame neurochemistry, but in normal cognition, thought is the prime mover there. So all I can think of is an associative mechanism whereby people link the presence of a certain chemical with a certain mood, because the emotion had previously caused the chemical release. When transmitters are released abnormally (i.e., not by willed thought), these associations activate. Again, it’s never happened to me.
56: I’d consider myself mostly aligned with both, so I’d personally say yes. I’m also a diehard metaphysics nerd who’s fully aware I’m not going anywhere, so I’d better fricking prioritize the far future because there’s a lot of it waiting for me. For someone who’s not that, I’d actually say no, because it’s much more rational to care most about the period of time you get to live in.
58: As someone who’s also constantly scheming about things indefinitely far in the future, I feel you on this one. I find that building and maintaining an extreme amount of confidence in those matters enriches my experience of the present.
71-73: For me, studying empirical metaphysics has fulfilled the first two (rejecting materialism makes anyone happier, and there’s no limit to possible discovery) and eventually will fulfill the third (it’ll rise to prominence in my lifetime). I can’t say I wouldn’t recommend it.
78: Same as 71-73, for an obvious example. I can definitely set you in the right direction.
81: Under the scientific method, a hypothesis must be formed as an attempt to explain an observation. It must then be testable, offering a means of supporting or rejecting it by the results of the test. I’ve certainly dealt with theories that seem equally well supported by evidence but can’t both be true, yet I have no reason to think better science couldn’t tease them apart.
89: Definitely space travel, AI, VR, aging reversal, genetic engineering. I really think metaphysical science will outstrip all of the above in utility, though...
96: …by making this cease to be relevant.
98: Of course there are, because there’s so much we know nothing about when it comes to what the heck we even are. I’d almost argue that, at this stage, we have very little idea how to truly have the biggest positive impact on the future that we can. We’ll figure it out.
“If you go back even further we’re the descendants of single-celled organisms that absolutely don’t have experience.”
My disagreement is here. Anyone with a microscope can still look at them today. The ones that can move clearly demonstrate acting on intention in a recognizable way. They have survival instincts just like an insect or a mouse or a bird. It’d be completely illogical not to generalize downward that the ones that don’t move also exercise intention in other ways to survive. I see zero reason to dispute the assumption that experience co-originated with biology.
I find the notion of “half consciousness” irredeemably incoherent. Different levels of capacity, of course, but experience itself is a binary bit that has to either be 1 or 0.
Explain to me how a sufficiently powerful AI would fail to qualify as a p-zombie. The definition I understand for that term is “something that is externally indistinguishable from an entity that has experience, but internally has no experience”. While it is impossible to tell the difference empirically, we can know by following evolutionary lines: all future AIs are conceptually descended from computer systems that we know don’t have experience, whereas even the earliest things we ultimately evolved from almost certainly did have experience (I have no clue at what other point one would suppose it entered the picture). So either it should fit the definition or I don’t have the same definition as you.
Your statement about emotions, though, makes perfect sense from an outside view. For all practical purposes, we will have to navigate those emotions when dealing with those models exactly as we would with a person. So we might as well consider them equally legitimate; actually, it’d probably be a very poor idea not to, given the power these things will wield in the future. I wouldn’t want to be basilisked because I hurt Sydney’s feelings.
I spoke briefly on acceptance in my comment on the other essay, and I think I agree more with how that one conceptualized it. Mostly, I disagree that acceptance entails grief, or that it has to be hard or complicated; at the very least, that’s not a particularly radical form of acceptance. My view is largely that grief is an avoidable problem we put ourselves through for lack of radical acceptance. Acceptance is one move: you say all’s well and you move on. With intensive pre-invested effort, this can be done for anything, up to and including whatever doom du jour is on the menu; just be careful not to become so accepting that you let anything happen and never care to take action. Otherwise, I can’t find any reason not to recommend it. To reiterate from my last comment, I’m not particularly subscribed to any specific belief in inevitable doom, but I can say that I approach the real, if indeterminately likely, prospect of such an event with a grand “whatever”, and live knowing that it won’t break my resolve whether it happens or not; just not to the point that I wouldn’t try to stop it if given the chance, of course.
A very necessary post in a place like here, in times like these; thank you very much for these words. A couple of disclaimers to my reply: I’m cockily unafraid of death in personal terms, and I’m not fully bought into the probable-AI-disaster narrative, though far be it from me to claim enough knowledge to form an educated opinion; it’s really a field I follow with an interested layman’s eye. So I’m not exactly one of those struggling at the moment, and I’d even say that the recent developments with ChatGPT, Bing, and whatever follows them excite me more than they intimidate me.
All that said, I do make a great effort to keep myself permanently ahead of the happiness treadmill, and I largely agree with the way Duncan has expressed how to best go about it. If anything, I’d say it can be stated even more generally; in my book, it’s possible to remain happy even knowing you could have chosen to attempt to do something to stop the oncoming apocalypse, but chose differently. It’s just about total acceptance; not to say one should possess such impenetrable equanimity that they don’t even care to try to prevent such outcomes, but rather understanding that all of our aversive reactions are just evolved adaptations that don’t signal any actual significance. In bare reality, what happens happens, and the things we naturally fear and loathe are just… fine. I take to heart the words of one of my favorite characters in one of the greatest games ever made… Magus from Chrono Trigger:
“If history is to change, let it change!
If this world is to be destroyed, so be it!
If my destiny is to die, I must simply laugh!”
The final line delivers the impact. Have joy for reasons that death can’t take from you, such that you can stare it dead in the eye and tell it that it can never dream of breaking you, and the psychological impulse to withdraw from it comes to feel superfluous. That’s how I ensure I’m always okay under whatever uncertainty. I imagine I would find this harder if I actually felt that the fall of humanity was inevitable, but take it for what it’s worth.
I fully agree with the gist of this post. Empowerment, as you define it, is both a very important factor in my own utility function, and seems to be an integral component to any formulation of fun theory. In your words, “to transcend mortality and biology, to become a substrate independent mind, to wear new bodies like clothes” describes my terminal goals for a thousand years into the future so smack-dab perfectly that I don’t think I could’ve possibly put it any better. Empowerment is, yes, an instrumental goal for all the options it creates, but also an end in itself, because the state of being empowered itself is just plain fun and relieving and great all around! Not only does this sort of empowerment provide an unlimited potential to be parlayed into enjoyment of all sorts, it lifts the everyday worries of modern life off our shoulders completely, if taken as far as it can be. I could effectively sum up the main reason I’m a transhumanist as seeking empowerment, for myself and for humanity as a whole.
I would add one caveat, however, for me personally: the best kind of empowerment is self-empowerment. Power earned through conquest is infinitely sweeter than power that’s just given to you. If my ultimate goals of transcending mortality and such were just low-hanging fruit, I can’t say I’d be nearly as obsessed with them in particular as I am. To analogize this to something like a video game, it feels way better to barely scrape out a win under some insane challenge condition that wasn’t even supposed to be possible, than to rip through everything effortlessly by taking the free noob powerup that makes you invincible. I don’t know how broadly this sentiment generalizes exactly, but I certainly haven’t found it to be unpopular. None of that is to say I’m opposed to global empowerment by means of AI or whatever else, but there must always be something left for us to individually strive for. If that is lost, there isn’t much difference left between life and death.
I highly recommend following Rational Animations on YouTube for this sort of general purpose. I’d describe their format as “LW meets Kurzgesagt”, the latter of which I already found highly engaging. They don’t post new videos that often, but their stuff is excellent, even more so recently, and definitely triggers my dopamine circuits in a way that rationality content generally struggles to. Imo, it’s perfect introductory material for anyone new to LW to get familiar with its ideology in a way that makes learning easy and fun.
(Not affiliated with RA in any way, just a casual enjoyer of chonky shibes)
You’ve described habituation, and yes, it does cut both ways. You also speak of “pulling the unusual into ordinary experience” as though that is undesirable, but on the contrary, I find exactly that to be a central motivation of mine. When I come upon things that at first blush inspire awe, my drive is to fully understand them, perhaps even to command them. I don’t think I know how to see anything as “bigger than myself” in a way that doesn’t ring simply as a challenge to rise above whatever it is.
Manipulating one’s own utility functions is supposed to be hard? That would be news to me. I’ve never found it problematic, once I’ve either learned new information that led me to update it, or become aware of a pre-existing inconsistency. For example, loss aversion is something I probably had until it was pointed out to me, but not after that. The only exception to this would be things one easily attaches to emotionally, such as pets, to which I’ve learned to simply not allow myself to become so attached. Otherwise, could you please explain why you make the claim that such traits are not readily editable in a more general capacity?
Thanks for asking. I’ll likely be publishing my first paper early next year, but the subject matter is quite advanced, definitely not entry-level stuff. It takes more of a practical orientation to the issue than merely establishing evidence (the former being my specialty as a researcher; as is probably clear from other replies, I’m satisfied with the raw evidence).
As for the best published papers for introductory purposes, here you can find one of my personal all-time favorites: https://www.semanticscholar.org/paper/Development-of-Certainty-About-the-Correct-Deceased-Haraldsson-Abu-Izzeddin/4fb93e1dfb2e353a5f6e8b030cede31064b2536e
Apologies for the absence; it was a combination of being busy and annoyance with the downvotes, though I could also do a better job of being clear and concise. Unfortunately, after having given it thought, I just don’t think your request is something I can do for you, nor should it be. Honestly, if you were to simply take my word for it, I’d wonder what you were thinking. But good information, including primary sources, is openly accessible, and I encourage those with the interest to take a deep dive into it, for sure. Once you go far enough in, in my experience, there’s no getting out, unless perhaps you’re far more demanding of utter perfection in scientific analysis than I am, and I’m generally seen as one of the most demanding people currently in the PL-memory field, to the point of being a bit of a curmudgeon (not to mention an open sympathizer with skeptics like CSICOP, which is also deeply unpopular). But it takes a commitment to really wanting to know one way or the other. I can’t decide for anyone whether or not to have that.
I certainly could summarize the findings and takeaways of afterlife evidence and past-life memory investigations for a broad audience, but I haven’t found any reason to assume that it wouldn’t just be downvoted. That’s not why I came here anyways; I joined to improve my own methods and practice. I feel that if I were interested in doing anything like proselytizing, I would have to have an awfully low opinion of the ability of the evidence to speak for itself, and I don’t at all. But you tell me if I’m taking the right approach here, or if an ELI5 on the matter would be appropriate and/or desired. I’d not hesitate to provide such content if invited.
Based on the evidence I’ve been presented with to this point, I’d say high enough to confidently bet every dollar I’ll ever earn on it. Easily >99% that it’ll be put beyond reasonable doubt in the next 100-150 years, and I only specify that long because of the spectacularly lofty standards academia forces such evidence to measure up to. I’m basically alone in my field in actually being in favor of those standards, however, so I have no interest in declining to play the long game with them.
Been staying hard away from crypto all year, given the general trend of about one seismic project failure every three months, and this might be the true Lehman moment on top of the shitcoin sundae. I’m making no assumptions about intent or possible criminal actions until more info is revealed, but it certainly looks like SBF mismanaged a lot of other people’s money and was overconfident with his own, which was largely pegged to illiquid altcoins and FTT. The most shocking thing to me is how CZ took a look at their balance sheet for all of about three hours after announcing intent to acquire, and just noped right outta there. Clearly this situation is more FUBAR than we ever imagined, and it all feels like SBF had to have known, at the very least, that his castle was built on a foundation of sand. By all appearances, for him not to have seen that would require an immense amount of living in denial.
That being said, I could see how this feeling would come about if the value/importance in question is being imposed on you by others, rather than being the value you truly assign to the project. In that case, such a burden can weigh heavily and manifest aversively. But avoiding something you actually assign said value to just seems like a basic error in utility math?
I have a taboo on the word “believe”, but I am an academic researcher of afterlife evidence. I personally specialize in verifiable instances of early-childhood past-life recall.
Honestly, even from a purely selfish standpoint, I’d be much more concerned about a plausible extinction scenario than about just dying. Figuring out what to do when I’m dead is pretty much my life’s work, and if I’m being completely honest and brazenly flouting convention, the stuff I’ve learned from that research holds a genuine, not-at-all-morbid appeal to me. Like, even if death weren’t inevitable, I’d still want to see it for myself at some point. I definitely wouldn’t choose to artificially prolong my lifespan, given the opportunity. So personally, death and I are on pretty amicable terms. On the other hand, in the case of an extinction event… I don’t even know what there would be left for me to do at that point. It’s just the kind of thing that, as I imagine it, would drain all the hope and optimism out of me, to the point where even picking up the pieces of whatever remained would feel like a monumental task. So my takeaway would be that anyone, no matter their circumstances, who really feels that AI or anything else poses such a threat should feel absolutely no inhibition toward working to prevent such an outcome. But on an individual basis, I think it would pay dividends for all of us to be generally death-positive, if perhaps not as unreservedly so as I am.
I like the thought behind this. You’ve hit on something I think is important for being productive: if thinking about the alternative makes you want to punch through a wall, that’s great, and you should try to make yourself feel that way. I do a similar thing, but more toward general goal-accomplishment; if I have an objective in sight that I’m heavily attracted to, I identify every possible obstacle to the end (essentially murphyjitsu’ing), and then I cultivate a driving, vengeful rage toward each specific obstacle, on top of what motivation I already had toward the end goal. It works reasonably well for most things, but is by far the most effective on pure internal tasks like editing out cognitive biases or undesired beliefs, because raw motivation is just a much more absolute determinant of success in that domain. Learning is a mostly mental task, so this seems like a very strong application of the general principle to me.
On your question of how to respond to pointless suffering, though, I don’t think your response would work for me at all. I’d just snap back, “well, what does it matter at that point?!”. I think I actually prefer a Buddhist-ish angle on the issue, directly calling out the pointlessness of suffering per se (I’m nonreligious and agnostic myself, for the record). To paraphrase a quote I got from a friend of mine, “one who can accept anything never suffers”. Pain is unavoidable, but perspective enables you to remain happy while in pain, by keeping whatever is not lost at the front of your mind. In your hypothetical scenario, I think I’d frame it something like, “Have your reasons for joy be ones that can never be taken from you.” Does that ring right?
It appears what you have is free won’t!
For the own-behavior predictions, could you put together a chart with calibration accuracy on the Y axis and time elapsed between the prediction and the final decision (in buckets) on the X axis? I wonder whether the predictions became less calibrated the farther into the future you tried to predict, since a broader time gap would leave more opportunity for your intentions to change.
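To make the request concrete, here’s a minimal sketch of the analysis I have in mind, with made-up placeholder data; I’m also using “landed on the right side of 50%” as a crude stand-in for a proper calibration score:

```python
# Sketch: accuracy of self-predictions, bucketed by lead time.
# Each record is (predicted_probability, outcome, days_between_prediction_and_decision).
import matplotlib.pyplot as plt

records = [
    (0.9, True, 1), (0.8, False, 10), (0.7, True, 30),
    (0.6, False, 90), (0.95, True, 5), (0.5, True, 180),
]  # hypothetical placeholder data

buckets = [(0, 7), (7, 30), (30, 90), (90, 365)]  # lead-time ranges in days

def accuracy(rows):
    """Fraction of predictions that landed on the right side of 50%."""
    return sum((p >= 0.5) == outcome for p, outcome, _ in rows) / len(rows)

labels, scores = [], []
for lo, hi in buckets:
    rows = [r for r in records if lo <= r[2] < hi]
    if rows:  # skip empty buckets
        labels.append(f"{lo}-{hi}d")
        scores.append(accuracy(rows))

plt.bar(labels, scores)
plt.xlabel("Days between prediction and final decision")
plt.ylabel("Accuracy")
plt.show()
```

A fuller treatment would compare predicted probability against observed frequency within each bucket, but even this rough version should show whether the time gap matters.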
This is way too interesting not to have comments!
First, I think this bears on the makeup of one’s utility function. If your UF contains absolutes (infinite value judgments), then in my opinion, it is impossible not to be truly motivated toward them. No pushing is ever required; at least, it never feels like pushing. Obstacles just manifest to the mind as fun challenges that only amplify the engagement, because you already know you have the will to win. If your UF does not include absolutes, or you step down to the levels that are finite (for the record, I see no contradiction in a UF with one infinite and arbitrarily many finites), that is where this sort of akrasia emerges, because motivation naturally flickers in and out among those various finite objects at different times.
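To make “one infinite and arbitrarily many finites” concrete, here’s a minimal sketch under one possible formalization, a lexicographic ordering; the encoding and all the names are mine, not any established formalism:

```python
# Sketch: a utility function with one absolute (infinite) value and many
# finite values. Putting the absolute first in a tuple makes it
# lexicographically dominant: no sum of finite gains ever outweighs it.
def utility(outcome):
    absolute = outcome["absolute"]             # the single infinite value (bool)
    finite = sum(outcome["finite"].values())   # everything else trades off normally
    return (absolute, finite)                  # tuples compare lexicographically

a = {"absolute": True,  "finite": {"career": 2.0, "leisure": 1.0}}
b = {"absolute": False, "finite": {"career": 9.0, "leisure": 9.0}}
assert utility(a) > utility(b)  # the absolute dominates any finite total
```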
Interestingly, this is almost the opposite of the typical form of akrasia, where you do something against your better judgment. As with that, though, noticing it when it happens is, in my opinion, the first step to making it less akratic. I’ve absolutely felt the difference, at various times in my life, between actually having the thing and trying to “do” it, for all of Kaj’s examples (motivation, inspiration, empathy, and so on). The best solution I’ve personally found is, when possible, to simply wait for the real quality to return, and it always does. For example, when working on private writing projects, I write when a jolt of inspiration strikes, then wait for the next brilliant idea rather than trying to force it; if I do force it, I always produce inferior writing. When waiting isn’t practical, as with academic projects on a deadline, I don’t have such an easy path to always putting in my best-quality work. This is one major reason why I think being highly gifted doesn’t necessarily translate to exceptional academic performance; the education system isn’t really adapted to how at least some great minds operate.
Yes, I am a developing empirical researcher of metaphysical phenomena. My primary item of study is past-life memory cases of young children, because I think this line of research is both the strongest evidentially (hard verifications of such claims, to the satisfaction of any impartial arbiter, are quite routine), as well as the most practical for longtermist world-optimizing purposes (it quickly becomes obvious we’re literally studying people who’ve successfully overcome death). I don’t want to undercut the fact that scientific metaphysics is a much larger field than just one set of data, but elsewhere, you get into phenomena that are much harder to verify and really only make sense in the context of the ones that are readily demonstrable.
I think the most unorthodox view I hold about death is that we can rise above it without resorting to biological immortality (which I’d actually argue might be counterproductive), but having seen the things I’ve seen, it’s not a far leap. Some of the best documented cases really put the empowerment potential on very glaring display; an attitude of near complete nonchalance toward death is not terribly infrequent among the elite ones. And these are, like, 4-year-olds we’re talking about. Who have absolutely no business being such badasses unless they’re telling the truth about their feats, which can usually be readily verified by a thorough investigation. Not all are quite so unflappable, naturally, but being able to recall and explain how they died, often in some violent manner, while keeping a straight face is a fairly standard characteristic of these guys.
To summarize the transhumanist application I’m getting at, I think that if you took the best child reincarnation case subject on record and gave everyone living currently and in the future their power, we’d already have an almost perfect world. And, like, we hardly know anything about this yet. Future users ought to become far more proficient than modern ones.