Software engineering, parenting, cognition, meditation, other
Linkedin, Facebook, Admonymous (anonymous feedback)
Gunnar_Zarncke
I’m not arguing either way. I just note this specific aspect that seems relevant. The question is: Is a baby’s body more susceptible to alcohol than an adult’s body? For example, does a baby’s liver work better or worse than an adult’s? Are there developmental processes that can be disturbed by the presence of alcohol? By default I’d assume that the effect is proportional (except maybe the baby “lives faster” in some sense, so the effect may be proportional to metabolism or growth speed or something). But all of that is speculation.
From DeJong et al. (2019):
Alcohol readily crosses the placenta with fetal blood alcohol levels approaching maternal levels within 2 hours of maternal consumption.
https://scispace.com/papers/alcohol-use-in-pregnancy-1tikfl3l2g (page 3)
I have pointed at least half a dozen people (all of them outside LW) to this post in an effort to help them “understand” LLMs in practical terms. More so than to any other LW post in the same time frame.
Related: Unexpected Conscious Entities
Both posts approach personhood from orthogonal angles:
Unexpected Things that are People looks at outer or assigned personhood or selfhood recognized by the social environment.
Unexpected Conscious Entities looks at inner attributes that may indicate agency or personhood.
This suggests a matrix:
| | High legal / social personhood | Low / no legal personhood |
|---|---|---|
| High consciousness-ish attributes | Individual humans | Countries |
| Low / unclear consciousness-ish attributes | Corporations, Ships, Whanganui River | LLMs (?) |
Between Entries
[To increase immersion, before reading the story below, write one line summing up your day so far.]
From outside, it is only sun through drifting rain over a patch of land, light scattering in all directions. From where one person stops on the path and turns, those same drops and rays fold into a curved band of color “there” for them; later, on their phone, the rainbow shot sits as a small rectangle in a gallery, one bright strip among dozens of other days.
From outside, a street is a tangle of façades, windows, people, and signs. From where a person aims a camera, all of that collapses into one frame—a roadside, two passersby, a patch of sky—and with a click, that moment becomes a thumbnail in a grid, marked only with a time beneath it.
From outside, a city map is a flat maze of lines and names on the navi. From where a small arrow marked as the traveler moves, those lines turn into “the way home,” “busy road,” a star marking “favorite place”; afterwards, the day’s travel is saved as one thin trace drawn over the streets, showing where they went without saying what it was like to walk there.
From outside, a robot’s shift is paths and sensor readings scrolling past on a monitor, then cooling into a long file on a disk. From where its maintenance program runs at night, that whole file is scanned once, checked for errors, and reduced to a short tag: “OK, job completed 21:32.” In the morning, a person wonders about the robot, presses a key, and sees that line.
From outside, one of the person’s days is a neat stack: a calendar block from nine to five, a few notifications, the number of steps and minutes of movement in a health app. From where they sit on the edge of the bed that night, phone in hand, what actually comes back is a pause under a tree, a sentence in one of those messages, the feeling in their stomach just before one of those calls; a sense of what they will write about the day later.
From outside, the question is a short sound in the room: “How was your day?” From where the person’s attention tilts toward it, the whole day leans on the edge of the answer: the pause under the tree, the urgent message, the glare off a shop window, the walk home with tired feet. After a moment, they say, “pretty good.”
From outside, the diary holds that same day as four short lines under a date, ink between two margins. From where the person leans over the page to write them, the whole evening presses in at once with the smell of the room, the weight in their shoulders, a tune stuck in their head. And only a few parts make it into words before the pen lifts and the lamp goes out.
From outside, years later, the diary is a closed block on a shelf among others. From where the same person sits with it open on their knees, that day comes back first as slanting lines under the date, a word scratched out and rewritten. The scenes seem to grow straight out of the words: sun between showers, a laugh on a staircase, the walk home in fading light. They wait for something else to come up, but their mind keeps going back to the page.
From outside, a later page holds only a line near the bottom: “Spent the evening reading old diaries.” From where they wrote it, what filled that night was less the days themselves than the pages: the weight of the stacked volumes on their lap, the slants of their younger handwriting, and the more confident tone.
From outside, the name on the inside cover is only a few letters on each booklet. From where the person sees that name above all the pages, it runs like a thin thread through the pauses under trees, the calls they dreaded, the walks home in the rain.
From outside, this evening is a room with a chair, a bedside table, a closed notebook on top. From where the person sits, there is the cover under one hand, the fabric of the chair under the other, breath moving, the particular tiredness of this day in their limbs; after a while they open the diary to today’s date, stare at the empty space, write two quick words, close the book again, and sit there a moment longer noticing that they are staring at the words on the paper while the room carries on around them.
p/m=
there is a typo here.
In Lois McMaster Bujold’s Vorkosigan Saga, Cordelia is pregnant and deals with coups, war, and difficult decisions more than once.
Thank you for your detailed analysis of outer and inner alignment. Your judgment of the difficulty of the alignment problem makes sense to me. I wish you had made the scope clearer and stated that you do not investigate other classes of alignment failure, such as those resulting from multi-agent setups (an organizational structure of agents may still be misaligned even if all agents in it are inner and outer aligned) as well as failures of governance. That is not a critique of the subject but just of a failure of Ruling Out Everything Else.
Yeah. Holonomy is applicable: a stable recursive loop that keeps reshaping the data passing through it. But what we need on top of holonomy is that self-reference in a physical system always hits an opacity limit. And I think this is what you mean by your reference to Russell’s vicious circle: it leads to contradictions because of incremental loss.
Though I wonder if maybe we can Escape the Löbian Obstacle.
You ask: Even if the brain models itself, why should that feel like anything? Why isn’t it just plain old computation?
A system that looks at itself creates a “point of view.” When the brain models the world and also itself inside that world, it automatically creates a kind of “center of perspective.” That center is what we call a subject. That’s what happens when a system treats some information as belonging to the system. How the border of the system is drawn differs (body, brain, mind), but the reference will always be a form of “mine.”
The brain can’t see how its own processes work (unless you are an advanced meditator maybe).
So when a signal passes through that self-model, the system can’t break it down; it just receives a simplified or compressed state. That opaque state is what the system calls “what it feels like.”

Why isn’t this just a zombie misrepresenting itself? The distinction between “representation of feeling” and “actual feeling” is a dualist mistake. The rainbow is there even if it is not a material arc. To represent something as a felt, intrinsic state just is to have the feeling.
I argue that the inference bottleneck of the brain leads to two separate effects:
subjectivity—the feeling of being me (e.g. “I do it”)
phenomenal appearance—the feeling that there is something (qualia, e.g. “there is red”)
While both effects result from the bottleneck, they result from compression of different data streams and should therefore respond with different strength to different interventions. And indeed that is what we observe:
Trip reports on some psychedelics show self-dissolution without transparency: “no self,” but strong intrinsic givenness (“I don’t know how this is happening, but no one is experiencing it”).
On the other hand, advanced meditators often report dereification (everything is transparent, perceptions forming can be observed) with intact subjecthood (“I am clearly the one experiencing it”).
I have difficulty finding the function that shows which posts I have strongly upvoted. It might be useful to list the direct URLs that provide these functions in this FAQ (not only for top-voted posts, but all of them, such as /allposts).
Sure. I take it you have meditation experience. What is your take on subjectivity and phenomenal appearance coming apart?
It means reductionism isn’t strictly true as ontology.
I think you are working from an intuition that reductionism is wrong, but I’m still not clear about the details of your intuition. A defensible position could be that physics does not contain all the explanatorily relevant information, or that reality has irreducible multi-level structure. But you seem to be saying that reductionism is false because subjective perspective is a fundamental ingredient, and you want to prove that via the efficient-computability argument. But I still think it doesn’t work. First, it proves too much.
It isn’t obvious that biological structure isn’t efficiently readable from microstate.
Agree that it is not obvious.
Other macro facts might be but it’s of course less clear.
But it seems pretty clear to me that most biological systems actually do involve dynamics that make it computationally infeasible for an external observer to reconstruct the macrostructure from microstructure observations at a given point. And we can’t appeal to ‘complete history’ to avoid the complexity, because with full history you could also recover the key in the HE case; the only difference is that HE compresses its relevant history into a small, opaque region.
Where I do agree with you: physics only tracks microstructure. But phenomenal awareness, meaning, macro-patterns, and information structure are not obviously reducible as descriptions to microstructure. The homomorphic case is an irrefutable illustration of this non-transparency.
But I disagree that this is caused by a failure of efficient computability; instead, we can see it as a failure of microphysical description to exhaust ontology. This matters because inefficiency is an epistemic constraint on observers, while ontology is about what needs to be included in the description of the world.
If you generalize to optics, then it seems your condition for “exceeding physics” is “not efficiently readable from the microstate,” i.e. “X is not a P-efficient function of the physical state.” But then it seems everything interesting exceeds physics: biological structure, weather, economic patterns, chemical reactions, turbulence, evolutionary dynamics, and all nontrivial macrostructure. I’m sort of fine with calling this “beyond” physics in some intuitive sense, but I don’t think that’s what you mean. What work does this non-efficiency do?
I’m worried we talk past each other.
You’re saying:
efficient reconstructibility is unclear in the rainbow case,
but whatever the right story is, it must handle both rainbow-like cases and the engineered homomorphic encryption case,
and if some of those cases force non-efficient supervenience, then we face your trilemma.
That part I agree with.
The point I’ve been trying to get at is: Once the same issue arises for ordinary optical appearances, we’ve left behind the special stakes of step 10! Because in the rainbow case, we all seem to accept (but maybe you disagree):
appearance is fully determined by physics,
but the mapping from microphysics to appearance may be extremely messy or intractable for an external observer,
and we don’t treat that as evidence that the visual appearance “exceeds physics.”
Or, if rainbow-style cases also fall under the trilemma, then the conclusion can’t be “mind exceeds physics.” It would have to be the stronger and more surprising “appearances as such exceed physics” or “macrostructure in general exceeds physics.” That’s quite different from your original framing, which presents the homomorphic encryption case as demonstrating a distinctive epistemic excess of mind relative to physics.
It seems you are biting the bullet and agreeing that the rainbow also has the problem of how a mind can be aware of it when it isn’t (efficiently) reconstructable. But then this seems to generalize to a lot, if not all, phenomena a mind can perceive. Doesn’t this reduce that conception of a mind ad absurdum?
I think step 10 overstates what is shown. You write:
“If a homomorphically encrypted mind (with no decryption key) is conscious … it seems it knows things … that cannot be efficiently determined from physics.”
The move from “not P-efficiently determined from physics” to “mind exceeds physics (epistemically)” looks too strong. The same inferential template would force us into contradictions in ordinary physical cases where appearances are available to an observer but not efficiently reconstructible from the microphysical state.
Take a rainbow. Let p be the full microphysical state of the atmosphere and EM field, and let a be the appearance of the rainbow to an observer. The observer trivially “knows” a. Yet from p, even a quantum-bounded “Laplace’s demon” cannot, in general, P-efficiently compute the precise phenomenal structure of that appearance. The appearance does not therefore “exceed physics.”
If we accepted your step 10’s principle that “facts accessible to a system but P-intractable to compute from p outrun physics,” we would have to say the same about rainbows:
the rainbow’s appearance to an observer “knows something” physics can’t efficiently determine.
That is an implausible conclusion. The physical state fully fixes the appearance; what fails is only efficient external reconstruction, not physical determination.
Homomorphic encryption sharpens the asymmetry between internal access and external decipherability, but it does not introduce a new ontological gap.
So I agree with the earlier steps (digital consciousness, key distance irrelevance) but think the “mind exceeds physics (epistemically)” inference is a category error: it treats P-efficient reconstructability as a criterion for physical determination. If we reject that criterion in the rainbow case, we should reject it in the homomorphic case too.
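To make the asymmetry concrete, here is a minimal toy sketch (my addition, not part of the original argument) of additively homomorphic encryption in Python, using insecure illustration-only parameters: an outside observer sees only ciphertexts and the operation on them, the result is fully physically determined, but reading it efficiently requires the private key.

```python
# Toy Paillier-style additively homomorphic encryption (insecure, tiny
# illustration-only parameters). An outside observer sees only the
# ciphertexts and the operation on them; the plaintexts and the result
# are fully determined by that state, but reading them efficiently
# requires the private key.
import random
from math import gcd

p, q = 293, 433                       # toy primes; real schemes use ~1024-bit primes
n, n2 = p * q, (p * q) ** 2
g = n + 1                             # standard simple generator choice
lam = (p - 1) * (q - 1) // gcd(p - 1, q - 1)   # lcm(p-1, q-1), private

def L(u):
    return (u - 1) // n

mu = pow(L(pow(g, lam, n2)), -1, n)   # private decryption constant (Python 3.8+)

def encrypt(m):
    r = random.randrange(2, n)
    while gcd(r, n) != 1:
        r = random.randrange(2, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c):
    return (L(pow(c, lam, n2)) * mu) % n

def add_encrypted(c1, c2):
    # multiplying ciphertexts adds the hidden plaintexts
    return (c1 * c2) % n2

c_sum = add_encrypted(encrypt(17), encrypt(25))
print(decrypt(c_sum))                 # 42, readable only with lam and mu
```

The sketch only illustrates the epistemic asymmetry: add_encrypted runs without the key while decrypt does not; nothing about what is physically determined changes, only who can efficiently read it.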
I like the sharp distinction you draw between
“Our Values are (roughly) the yumminess or yearning…”
and
“Goodness is (roughly) whatever stuff the memes say one should value.”
but the post treats these as more separable than they actually are from the standpoint of how the brain acquires preferences.
You emphasize that
“we mostly don’t get to choose what triggers yumminess/yearning”
and that Goodness trying to overwrite that is “silly.” Yet a few paragraphs later you note that
“a nontrivial chunk of the memetic egregore Goodness needs to be complied with…”
before recommending to “jettison the memetic egregore” once the safety-function parts are removed.
But the brain’s value-learning machinery doesn’t respect this separation. “Yumminess/yearning” is not fixed hardware; it’s a constantly updated reward model trained by social feedback, imitation, and narrative framing. The very things you group under “Goodness” supply the majority of training data for what later becomes “actual Values.” The egregore is not only a coordination layer or a memetically selected structure on top; it is also the training signal.
Your own example shows this coupling. You say that
“Loving Connection… is a REALLY big chunk of their Values”
while also being a core part of Goodness. This dual function, as a learned reward target and as the memetic structure that teaches people to want it, is typical rather than exceptional.
So the key point isn’t “should you follow Goodness or your Values?” but “which training signals should you expose your value-learning architecture to?” Then the Albert failure mode looks less like “he ignored Goodness” and more like “he removed a large portion of what shapes his future reward landscape.”
And for societies, given that values are learned, the question becomes: which parts of Goodness should we deliberately keep because they stabilize or improve the learning process, not merely because they protect cooperation equilibria?
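As a toy illustration of the framing above (my own sketch; the linear model, numbers, and feature labels are all hypothetical), a learned “yumminess” predictor ends up shaped by whatever mix of sources supplies its training labels, so a mostly social label source dominates the resulting “Values”:

```python
# Toy sketch: a "yumminess" predictor trained partly on direct experience
# and partly on socially supplied labels ("Goodness"). The mix of training
# signals, not fixed hardware, determines what later feels rewarding.
# All numbers and the linear model are hypothetical illustration choices.
import numpy as np

rng = np.random.default_rng(0)
n_features = 5
w = np.zeros(n_features)              # learned reward model ("yumminess")

def update(w, x, label, lr=0.1):
    # simple online regression step toward whatever label was supplied
    return w + lr * (label - w @ x) * x

for _ in range(2000):
    x = rng.normal(size=n_features)   # features of a situation
    if rng.random() < 0.7:            # 70% of labels come from the social environment
        label = x[0]                  # "Goodness": values feature 0
    else:
        label = x[1]                  # direct experiential signal: values feature 1
    w = update(w, x, label)

print(np.round(w, 2))                 # feature 0 dominates the learned "Values"
```

The point is only that the learned weights track the label mix rather than any fixed preference, which is the sense in which the egregore is part of the training signal.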
What is the relative cost between Aerolamp and regular air purifiers?
For regular air purifiers, ChatGPT 5.2 estimates 0.2 €/1000 m³ of filtered air.
From the Aerolamp website:
And ChatGPT estimates 0.02 to 0.3 €/1000 m³ for the Aerolamp, which is quite competitive, especially given that it is quieter.
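For reference, a €/1000 m³ figure can be estimated from power draw, airflow, electricity price, and consumable costs; all input numbers in this sketch are hypothetical placeholders, not values from the Aerolamp site or from ChatGPT.

```python
# Rough cost-per-1000-m3 estimate for an air cleaner. All inputs are
# hypothetical placeholders; plug in real device numbers to compare.
def cost_per_1000_m3(power_w, airflow_m3_per_h, electricity_eur_per_kwh,
                     consumable_eur=0.0, consumable_life_h=1.0):
    energy_cost_per_h = (power_w / 1000.0) * electricity_eur_per_kwh
    consumable_cost_per_h = consumable_eur / consumable_life_h
    cost_per_m3 = (energy_cost_per_h + consumable_cost_per_h) / airflow_m3_per_h
    return 1000.0 * cost_per_m3

# Example: a purifier drawing 50 W at 200 m3/h, 0.35 EUR/kWh electricity,
# with a 50 EUR filter lasting ~3000 hours -> roughly 0.17 EUR per 1000 m3.
print(round(cost_per_1000_m3(50, 200, 0.35, consumable_eur=50, consumable_life_h=3000), 3))
```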