I figured Harry himself was just aro/ace and it was showing through even at a young age. I admit this was a bit of typical-mind-fallacious reasoning; people could tell I was unusual like that when I was 10.
Yeah, it feels like it’s at a similar difficulty level to what I’ve been experiencing trying to transcribe my own thought process as pseudocode. And I get the impression that few insights would be readily transferable across different egregores, in which case each and every one might need its own individual effort.
Reminds me of the work that precipitated the decline of the Ku Klux Klan: someone infiltrated, learned all the rituals and coded language used, then published that information—and that was all it took to cripple their power.
I wish you all the luck re: human enhancement.
Interesting, and thanks for taking the time!
That’s a very new-to-me take on getting AGI efforts to stop: understand and intervene directly on the egregore, rather than, say, trying to influence individuals.
I’ll have to think about this.
It’s a pet cause of mine, to get as many people as I can off of the harmful social media platforms (which in my view is nearly all of them, weighted by readership). Possibly [considering “social media use” as an egregore, and considering how to interact with the egregore] might be more effective than my past efforts.
Your list of coded movements really rings relevant to me—“second-order norm enforcement” made me immediately think of how people will vocally remark that you’re strange, or ask why, if they learn you’re not on any social media they’ve heard of. I suspect this mostly does not influence social-media nonusers, but rather affects bystanders, erecting an additional barrier to exiting the egregore.
Thanks for the new mental model. Even if I end up not adopting it wholesale, it seems obviously full of useful parts!
Would you be willing to elaborate on what you meant by “decoding” egregores? I’m semi-familliar with the term (checking my impression of understanding: egregore = self-sustaining semi-agentic meme running on the computational substrate of more than one human brain, for example a corporation) but I’m not clear on what decoding means here. Like trying to transcribe the egregore’s algorithm into something easily human-readable?
Do you have a reference for information on the compute graph of the brain? I’d love to read about that.
I’ve been trying to reverse-engineer my own brain’s high-level algorithm into code, via introspection, and had non-zero success. Knowing more about brain anatomy in general sounds like the kind of thing likely to bump my guesses in useful directions.
I started from the same place as:
“I spent years trying to come up with a mental strategy that would reliably generate willpower, all to no avail.”
I have had some limited success with the Decision Theorist method. I notice that my decision process (in this example, regarding whether to go to bed yet) is basically the same computation at 10pm, 1am, 3am, etc. I calculate the limit of this behavior (e.g. the computation isn’t meaningfully altered until the birds outside start chirping, or the sun comes up, or I get so tired it makes me nauseous) and notice that my local choice is using a false option: not [stay up 5 more minutes] versus [sleep now], but rather [stay up until the sun comes up] versus [sleep now]. Having noticed the true form of my options, the correct choice becomes easy to act upon.
Of course, I’m posting this at 6am local time. My results have been inconsistent. This plan typically fails when I don’t notice myself deciding, or don’t remember to do that “what’s the limit” calculation.
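If I transcribe that move as code, it looks something like the toy sketch below (the function names and numbers are stand-ins I made up, not a real model of my psychology):

```python
# Toy model of the "calculate the limit" move: if the same decision
# procedure runs at every check-in and keeps answering "5 more minutes",
# then the 10pm choice is really [sleep now] vs [stay up until sunrise].

def local_choice(hour: float) -> str:
    """Stand-in for my bedtime decision process; it's the same
    computation at 10pm, 1am, and 3am, so it returns the same answer."""
    return "five more minutes"

def limit_of_behavior(hour: float, sunrise: float = 6.0) -> str:
    """Unroll the local choice until something external breaks the loop."""
    while hour < sunrise:
        if local_choice(hour) == "sleep now":
            return f"asleep by {hour % 24:.1f}"
        hour += 5 / 60  # "just five more minutes"
    return "stay up until the sun comes up"

# Starting at 10pm (hour -2 relative to midnight), the true menu is:
print(limit_of_behavior(hour=-2.0))  # -> stay up until the sun comes up
```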
Thanks for the tip on Dextroamphetamine. Hopefully I’ll act on it.
I also have reason to believe my color perception is pretty abnormal, both before and after estradiol supplementation. Thus, not having heard of [anything relating to my vision] in relation to estradiol should perhaps not be considered surprising. My eye doctors have labelled it “mild protanomaly”, though both offices remarked that the label doesn’t quite fit.
If still interested in more detail, see: https://www.lesswrong.com/posts/NyiFLzSrkfkDW4S7o/?commentId=tQaoxSgMteZnaWWee
I’m pretty certain that even if we figure out how to halt or reverse “aging” as in “the (set of) root cause(s) of the progressive syndrome of morbidities and heightened disease risk universal among 80+ year old people today”, there would still be other forms of long-term accumulating damage to understand and reverse on longer time scales than a single century.
Some examples of long-term accumulating damage that we’ll eventually need to address, but not before people are reliably living to 100+: lead ion accumulation over a lifetime, ditto a bunch of other long-lived poisons, scar tissue accumulation, accumulation of various “dusts” (ex: asbestos) in the lungs, etc.
I think we’re currently, as a civilization, hunting for whatever lifespan/healthspan low-hanging fruit might exist, and will eventually shift to more systematic approaches as our civilization further understands aging and learns to do nanotech.
In the meantime, I’m basically keeping my eyes open for preventable sources of permanent damage to avoid, to make my personal time limit less urgent. I take dust inhalation very seriously, ditto trace heavy metal exposure, ditto sunburn.
I have a pet theory that I don’t see a way to implement *in vivo*, but which I’d like to float in case it has any value.
The gist is the notion of self-copying genes that progressively pollute each cell lineage’s genome over decades. (And the explanation for why this doesn’t just accumulate across generations and end the species would be some combination of polluted sperm failing to outcompete surrounding healthier sperm on average, and polluted embryos being sufficiently more likely to miscarry.)
And the related solution (and experiment) would be editing gametes or zygotes to lack all copies of the self-copying gene, so that there’d be none of them to initiate the process of runaway self-copying gene accumulation. In the worlds where this idea leads to longevity escape velocity, it’d need to be upstream of the other pieces of aging, which… I’m guessing not all of them, but maybe enough? It’s based on the “retrotransposons” idea, from the literature.
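To make the moving parts of that story explicit, here is a toy simulation; every parameter and function name is invented for illustration, not taken from the retrotransposon literature:

```python
import random

# Invented parameters: each copy of a self-copying element has some
# small chance of duplicating at every cell division.
COPY_PROB = 0.01   # per-copy duplication chance per division (made up)
DIVISIONS = 200    # divisions in one somatic lineage over decades (made up)

def somatic_load(initial_copies: int) -> int:
    """Copies polluting one cell lineage's genome after a lifetime."""
    copies = initial_copies
    for _ in range(DIVISIONS):
        copies += sum(random.random() < COPY_PROB for _ in range(copies))
    return copies

def germline_next_generation(parent_copies: int, n_gametes: int = 100) -> int:
    """Why the species wouldn't ratchet: less-polluted gametes, on average,
    outcompete their polluted neighbors (miscarriage would prune further)."""
    gametes = [max(0, parent_copies + random.randint(-1, 1))
               for _ in range(n_gametes)]
    return min(gametes)

print("lifetime somatic load from 10 copies:", somatic_load(10))
print("lifetime somatic load from 0 copies: ", somatic_load(0))  # stays 0
print("germline copies passed on from 10:   ", germline_next_generation(10))
```

The zero-copy case is the proposed experiment: if gametes or zygotes are edited to carry no copies at all, there is nothing to seed the runaway accumulation.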
I… have neither the knowledge nor the resources to go looking myself, yet, but it felt worth pointing out that hypothetically there might be avenues left to explore, if cellular reprogramming isn’t enough.
Excellent article! Thank you for teaching me the memorable overview of the topic, and attempting to spread curiosity!
One point of feedback for readability, and please don’t mistake the verbosity of the feedback for strength of evaluation relative to what I’ve already said; it really is an excellent article:
“In the absolute worst-case situation, this should lead us to bump our DALYs for endometriosis up by 60%. Starting with a base DALY of 56.61 per 100k people, this leads us to 141.52.”
This paragraph (and perhaps the surrounding couple paragraphs in either direction) was hard for me to follow, and this might indicate an outright error, though I’m not at all confident that is the case.
My interpretation of the preceding paragraphs was something like “in the study, they found 27 cases already diagnosed, and then 37 more for a total of 64 when they made a more thorough search”, which would be a ~2.4× (64/27) multiplicative increase, i.e. a ~140% increase, but not a 60% increase? I’m guessing the 60% came from 37/64. “60% of extant cases were overlooked” seems leading to me, but “bump up by 60%” seems misleading to me.
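To spell out the arithmetic as I ran it (these are my numbers from my reading above, not the article’s own formula):

```python
already_diagnosed = 27   # cases known before the thorough search
newly_found = 37         # additional cases the thorough search turned up
total = already_diagnosed + newly_found   # 64

print(total / already_diagnosed)   # ~2.37: a ~140% increase over the 27
print(newly_found / total)         # ~0.58: "~60% of cases were overlooked"
print(already_diagnosed * 1.6)     # 43.2: what "bump up by 60%" would yield
```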
I stopped evaluating the math past that point, and I don’t feel that this harmed my overall understanding of the article, nor the argument in the call to action; I still understood the point that “endometriosis is probably even more underfunded than it looks”, and took it on faith that the true numbers minus my confusion would still bear that out.
I’m not sure exactly how I’d rewrite the text for clarity, nor whether it would even be correct to do so; maybe this is a good version for maximizing reader comprehension and I’m simply one of the rare failures there. But, well, here’s one data point for you that it might be worth workshopping.
That, to me, seems to assume that those areas end up with a locally greater estrogen concentration (for all that a whole family of molecules, related primarily by their signalling effects, can even sensibly be summarized by a single “concentration”), which I’m not actually sure is the case.
It seems to make intuitive sense that a chemical would be most concentrated near its site of introduction into the body, but then I can also think of specific counterexamples. Lead, for example, pools in the skeleton, and almost certainly isn’t being (originally) introduced into the body via the skeleton.
Additional data:
I know no one who has experienced psychosis, and have never before mentioned this fact in any context.
I noticed a few psychological changes myself, but nothing major.
I’m more curious, I think, though perhaps my environment simply, coincidentally, became more interesting as judged by my existing interest function? Hard to say. I’m on average happier, though not by much. I experience and express emotions more easily, from less intense triggering experiences than I previously required.
Some of my lower-level sensory perceptions shifted, too. I was almost red-green colorblind before, and I now distinguish red and green slightly better. I can see the red shining through most browns now. Indeed, browns and pure reds are much more vibrant, in a way entirely non-overlapping with my previous experience. Also, chocolate used to have a hint of an “earthy” taste, and now instead has a hint of “fruity”.
An anecdata point:
I couldn’t visualize the mountain at all. …but I feel like I was able to visualize an orange’s innards in high fidelity—which surprised me, because I often fail at 2d visualizations which seem to be easy for the majority of the population. I attribute the difference between the orange and the mountain to simple subject familiarity; I actually do know what’s in an orange.
I also had the experience of feeling like I was able to visualize 4d spaces in some non-abstract way when I studied non-Euclidean geometry in my early teens. I used visualizations in a 13-dimensional space in designing some software about 7 years ago, and am currently using a visual argument in a 5+ (variable) dimensional vector space to “prove” that a subsystem for my video game will achieve its purpose. I sometimes make 3d model assets by visualizing the 3d shape and then manually typing in coordinates for each vertex.
My case seems to me to suggest that 3+ dimensional visualization is a distinct skill/ability from 2d visualization—and that high competence in 2d visualization is not a prerequisite for higher-dimensional visualization. It also “feels” introspectively like a single skill for 3+ dimensional visualization, NOT a separate skill for each dimensionality as might be assumed due to 2d seeming to be a special case.
The existence of the 2d special case in my brain seems curious; naively, if I can handle any dimensionality 3+, I ought to be able to simply use that skill if it’s more competent than my 2d visualization. There having ever been a 2d special case makes some sense; I can imagine some instinctual ability there, or perhaps it being inductively simple to create given the 2d input data. But why did the 2d special case persist after it became outclassed by an emerging 3+d ability?
I’m now curious about what may happen if I attempt to explicitly involve my 3+d ability to take over for 2d visualization tasks. Can I gain 2d visualization capacity this way? Why is suppressing the “native” 2d mode so difficult for 2d tasks? If I do, will it break anything? I’d be worried about e.g. loss of other plausibly-instinctual visual abilities like facial recognition, emotion recognition, etc, but I already seem to be inept at those skills; I don’t have much to lose.
Yes, red and green seem subjectively very different—but only to conscious attention. A green object amid many red objects (or vice versa) does not grab my attention in the way that, e.g. a yellow object might.
When shown a patch of red-or-green in a lab setting, I see “Red” or “Green” seemingly at random.
If shown a red patch next to a green patch in a lab, I’ll see one “Red” and one “Green”, but it’s about 50:50 as to whether they’ll be switched or not. How does that work? I have no hypotheses that aren’t very low confidence. It seems as much a mystery to me as I infer it seems mysterious to you.
I’ve read that imagination (in the sense of conjuring mental imagery) is a spectrum, and I’ve encountered a test which some but not all phantasic people fail.
I don’t recall the details enough to pose it directly, but I think I do recall enough to reinvent the test:
Ask the subject to visualize a 3x3 grid of letters.
Provide the information required to construct the visualization in an unusual order, for example top-to-bottom right-to-left for people not accustomed to that layout.
Ask them to read the 3-letter word in each row.
The test details guessed above may not properly recreate the test’s ability to distinguish levels of imagery. My hazy memory says the words might run top-to-bottom? Or the order of providing the letters might matter?
Someone actually seeing the image you’ve requested they construct would be able to trivially read off three words. …but someone without mental imagery or with insufficient mental imagery may fail.
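A rough reconstruction in code, to make the procedure concrete (the words and the exact presentation order are my guesses, not the original protocol):

```python
WORDS = ["CAT", "DOG", "SUN"]  # hypothetical rows of the 3x3 grid

def dictate_grid(words):
    """Present the letters top-to-bottom, right-to-left: an order unusual
    enough that the subject must hold an image rather than chunk words."""
    for col in (2, 1, 0):              # right-to-left across columns
        for row in range(len(words)):  # top-to-bottom within each column
            yield f"Row {row + 1}, column {col + 1}: {words[row][col]}"

for instruction in dictate_grid(WORDS):
    print(instruction)
print("Now read me the three-letter word in each row.")
# Vivid imagery: reading off CAT, DOG, SUN is trivial.
# Absent or weak imagery: reconstruction from memory is slow, or fails.
```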
I recall discovering that I really can’t imagine more than about 2 letters at a time before adding additional detail to my mental visual workspace forces the loss of something else. That seems pretty poor, and tracks with my inability to imagine human faces—my theory is that a specific face requires more details to distinguish it from other faces than the maximum amount of detail I can visualize.
Actually, having written this, it just now occurs to me that my cached thought that all my other qualia processing is “normal” may be incorrect.
...I routinely (but not always) fail to perceive any qualia for hunger or smells (this predates COVID) -- yet, curiously, in the case of smells I somehow know (without any experience of perception) that there is a smell that I ought to be experiencing, and its rough intensity.
In the case of hunger, I’ll literally fail to know I need to eat. I’ll get the shakes and collapse and wonder why. I’ve needed to establish a habit of scheduled eating, to avoid this occurrence.
Previously, I had grouped these defects in with my inability to know my own wants—in my theory: trauma damage that severed the connection to certain mental modules—but it now occurs to me that an alternative hypothesis exists: that there’s a possible connection to my unusual visual qualia processing.
I see a couple of leads to investigate, which could help shed additional light on the topic. One is common enough to have a name: synesthesia. The other, I think, may be unique to me, or at least some combination of rare enough and never-discussed enough that I’ve never heard of it.
Synesthesia, to my understanding, involves multiple qualia accompanying various experiences, notably including qualia native to a different sensory modality, e.g. “That sound was green.” Exploring the causal chain resulting in such utterances seems likely to turn up insights into qualia which will be more broadly applicable.
As for my unusual qualia processing: I am measurably red-green colorblind; in a laboratory setting, clean of context clues, I guess no better than chance whether a color is red or green, although, given two red-or-green patches X and Y, I can reliably tell whether they’re the same color or opposites. Yet in everyday life, I experience qualia for red and green, almost always “correctly” (in that I experience the qualium I’ll call “Green” when seeing actual green objects well in excess of 99% of the time, and vice versa for “Red” and red objects).
My current theory as to how this works:
Whichever module assigns qualia information to my experiences has some memory and some world knowledge, which it uses to make educated guesses.
When I see red or green, I think my qualia-assignment process searches my visual field for anything known-green, for example plant life. (Or known-red, though this case is rarer.) Since I apparently can tell red apart from green (just not which is which), having a known example allows the process to chain the “this is definitely green” belief to everything else I’m seeing that’s red-or-green and not opposite in color to the known-green object.
By elimination, the remainder of objects are red.
The assigned qualia are stable, that is, they never change while I’m experiencing them, even when their incorrectness becomes evident. “That LED is green, not red.” --> No shift in perception.
...but repeated experiences with the same object, separated by enough time, will result in me experiencing different qualia for the same object on different encounters, and feedback like the above “that LED is green, not red” will eventually be learned: I’ll see “Green” reliably after enough corrections.
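Transcribed as code (every name here is hypothetical; this is introspective guesswork, not a claim about actual neural wiring), the guess looks something like:

```python
import random

FLIP = {"Red": "Green", "Green": "Red"}

def assign_red_green_qualia(patches, known_colors, same_color):
    """Anchor on any patch whose color is known from world knowledge
    (e.g. foliage is green), then propagate labels using the pairwise
    same/different judgment, which works even though absolute hue doesn't."""
    labels = dict(known_colors)  # e.g. {"tree": "Green"}
    if not labels:               # lab setting: no context clues,
        labels[patches[0]] = random.choice(["Red", "Green"])  # so guess 50:50
    anchor = next(iter(labels))
    for patch in patches:
        if patch not in labels:
            labels[patch] = (labels[anchor] if same_color(patch, anchor)
                             else FLIP[labels[anchor]])
    return labels  # stable for the rest of the episode, even if contradicted

# Everyday scene: the tree anchors "Green", so the LED comes out "Red".
same = lambda a, b: (a == "tree") == (b == "tree")  # toy stand-in judgment
print(assign_red_green_qualia(["tree", "LED"], {"tree": "Green"}, same))
# Lab patches with no anchor: the first label is a coin flip, and the
# second is forced opposite, matching my 50:50 switched-or-not experience.
print(assign_red_green_qualia(["patch A", "patch B"], {}, lambda a, b: a == b))
```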
I think the existence of my defect may shed some light on the working of normal qualia.
That is, I think there’s a module which makes educated guesses about certain true properties of the world, based on the sensory stream, and annotates the sense information with its guesses, before that sense information reaches awareness. These annotations either become or select “qualia”, the inexplicable ineffable differences in experience correlating to (or encoding) actual sense data.
Further, I think that investigating the causal chain resulting in my unusual experience might allow us to localize the qualia-annotation process in my brain, and perhaps find a standard location in many brains.
“I suggest a simple explanation: some of us have qualia and some of us don’t.”
Well that’s an alarming hypothesis.
I’ve seen it expressed (and held the view personally) that a world devoid of qualia is an example of a world devoid of value, in the consequentialist sense. …but at least in my case, the view was somewhat grounded in the idea that all people are morally significant, combined with the implicit assumption that the overwhelming majority of adults experience qualia. So updating to a higher probability of “a significant fraction of people alive today do not experience anything like qualia” ought to also come with an update away from “non-qualia-experiencing agents lack moral value”.
I worry that some people may hold my prior view uncritically, and see an admission of not experiencing qualia as a moral license to disregard the person’s well-being. See various historical takes about “X minority doesn’t have souls” and the resultant treatment.
Does anyone have any other recommended easy practice questions? I feel like dissolving free will produced useful insights (notably, it seems I found a small portion of ideas not already present in posted solutions to free will), and I’d like to attempt more such problems at which I’m suspected to be likely to succeed.