But when the argument for the alternative boils down to “eat shit, not chemicals”...
I’m kidding, but only slightly. Organic fertilization is a bit gross, and I think most food companies would prefer not to be associated with dead leaf matter, rotting leftovers, or manure.
Counterargument: Composting totally became a thing, and that potentially puts the grossness right in your backyard.
(Huh. Composting is certainly something people do on an individual level to micro-combat the usage of nitrogen fertilizers, with probably a very negligible effect. And some people do seem dedicated to it. But I suspect that if you asked most people who do it, they would claim it’s about landfills or something, not soil nitrogen content.)
Even if it had evolved, any detailed form of communication that had the potential to transmit hard-to-break imperatives is something you want to be very, very careful with.
Defection, manipulation, and novel avenues for disease-transmission or parasitism heavily disincentivize this. It’s intuitively “gross” for a reason.
TL;DR: Infections and defections would probably utterly wreck this. The blood-brain barrier exists for a reason. While we did get language, in other ways we’ve evolved specifically to hide information from each other; it’s not straightforwardly evolutionarily favored. Large clusters of highly-related organisms have more incentive to do this (bacterial mats, ants, our own cells, etc.), and the information-bandwidth they share with each other through pheromones and chemical signals is actually pretty staggering. But at a glance, I do think they pay for this privilege with increased (and more elaborate) avenues for manipulation and infection.
Edit to add: Linking some additional strongly-related articles! SSC’s Maybe Your Zoloft Stopped Working Because a Liver Fluke Tried To Turn Your Nth-Great-Grandmother into a Zombie and the paper it centers on, Invisible Designers: Brain Evolution Through the Lens of Parasite Manipulation
It bears pointing out that evolution usually seems to want the opposite of this for our centralized-decision-maker organs. Notice that instead of making the brain more open-access over time, most species went the direction of making it isolated from even our own bodies, using things like the heavy-filtering blood-brain barrier. The risk of that sensitive organ getting poisoned, diseased, or biologically manipulated was just too high.
(Nematodes make what humans call “embodied thinking” look like a joke; the serotonin from their digestive tract is felt by their brains directly.)
People used to die in droves (and at young ages!) of measles, tuberculosis, and a million other things. Even before cities, herd-living put us under immense pressure to develop a pretty intensely specialized immune system. If we had to worry about giving a disease a highway to our central nervous system, that would be… very, very bad. It might even guarantee that such a species would never get to centralize such things at all; the risk of infection is that strong a force.
Not to even get into the possibility of physical brain-hijackings by concepts (like memes, but oh-so-much-worse!), or even just catching communication-transmitted Kuru… but here’s a pretty vivid speculative description of just how bad being an evolved “open book” that granted others write-access would probably get.
And with regards to the “benefits” of open communication (of information conveyed in a language that’s very hard to fake), we do still have some information transmitted in body language and words. That certainly captures some of the benefit. But it bears pointing out that we’re a species that un-evolved any obvious presentation of whether a female is in estrus, and has very strong inhibitions around trying not to gain information from each other’s body odor. “Complete, total honesty” is not something evolution typically selects for, and it didn’t veer entirely that way for us. Even in the less-cutthroat modern era (at least, compared to our distant savannah past), Greg Egan’s Closer feels like a pretty realistic depiction of how we might feel about it if we ever did fully share our mental experience with even one other person. I’ll avoid spoiling it too badly, but we’d probably quickly uncover a lot of things about one another that we wished we didn’t know.
Bacterial mats, giant networks of fungi, and eusocial insects with strong genetic kinship might have strong enough evolutionary incentives for this to line up, although higher relatedness actually exacerbates the infection concern. And between the cells of multicellular creatures, certainly quite a lot does get communicated. Many of these examples do seem to “share their mind” in at least some meaningful sense. They transmit a lot of information and orders to one another, and have a communal decision-making process at varying degrees of centralization/decentralization. Pheromones for insects, various signalling secretions from bacteria, the oodles of transactions and deliveries between our cells at every moment… combined, these can be very high-bandwidth. Almost incomprehensibly so, if you’ve ever seen attempts to measure and chart such things.
Ants could practically be said to have a pheromone “language,” complete with clan-identification tags. And as a way to selectively trigger a highly-specific neural pathway, or activate a known set of behaviors in a conspecific with the same brain-configuration, pheromones are not a bad way to go? The behavior patterns pheromones set off can get oddly specific at times.
And… ants also get tricked by pheromones into feeding the brood parasites that eat their own young. And what we call “bacterial sex” (high-bandwidth communication of DNA?) is actually virus-esque plasmids trying to transmit themselves to new bacteria, like an infection. Some plasmids might even come with addiction molecules, which is an extra-douchey way for a plasmid to convey “replicate me, or die.” And in coordinated bacteria, you do sometimes see defectors. So… it’s still pretty manipulable, and it sure gets manipulated.
The more stereotyped behaviors you can set off through external signals, the broader your “attack surface,” in the cybersecurity lingo. And biological parasitism is ubiquitous, and fractal, and adaptive, and uses any damn attack surface it can get.
Humans? A fluke. Parasites are evolution’s true darlings.
I thought this was an interesting question… although I definitely get the feeling like I’m missing some of the context behind this “planetary boundaries” write-up.
(What is the Stockholm Resilience Center? What are its motives and methods? Why was it doing this analysis, and how did you end up running into it?)
I agree that fertilizer runoff gets talked about a lot less than climate change, and I’m not entirely sure why that is. I just looked it up, and “Organic”-labeled things apparently do already mandate organic fertilizers (which should be N-neutral on net?). So there’s at least that.
Regarding their assessment… one of the factors that seemed to be weighted heavily was “level of anthropogenic change vs. natural variation in Process X.” I’d expect that to have heavily weighted the Nitrogen Cycle, since our interference in it dwarfs the variability due to natural processes (something they themselves spell out; the Haber process is a strange, powerful, magical, inorganic thing the humans cooked up).
This metric is… not precisely the same as “level of damage this change can cause.” They seem to have set up some sort of threshold for what they consider “dangerously high”, and I don’t really understand the thought-process or reasoning they used in picking those thresholds. But I think they factored “change vs natural variation” into their thinking a good deal.
My understanding from my own Ag background is that fertilizer runoff really can be a big problem; nitrogen and phosphorus are major limiting nutrients for plant life, both on land and in freshwater ecosystems (the ocean surface is iron-limited, interestingly). In a sense, this is exactly why we use it; we want our food crops to grow at a wild rate, one rarely seen in nature.
When there’s a sudden influx of nitrogen into a freshwater system, one of the possible consequences is an algal bloom. These go through a boom-and-bust cycle (with the seasons, or resource-availability patterns), leading to a massive algal die-off. During this die-off, decay bacteria wipe out the underwater oxygen-supply, leading to knock-on effects on freshwater ecosystems like massive fish die-offs. Enclosed spaces like lakes are perhaps especially vulnerable, since there is no way for the N and P to ever exit the system, potentially perpetuating the cycle indefinitely.
That said, suddenly cutting fertilizer usage would practically ensure that food production drops way down, and that would ill-serve a lot of other human values. Reducing runoff looks like a very hard problem, to me. Finding ways to remove excess P and N from the environment seemed to be an area with at least a little bit of interest, but it doesn’t seem very actionable on an individual rather than city or state level, which might explain the low publicity? Unsure.
Some rules seem to have an element of “cost of compliance” that relies on being able to predict the future, in a way that even specialists have little hope of doing. Sometimes, this leads to a risk-taker-enriched (aka asshole-filtered) gray zone surrounding a well of value which may-or-may-not have been poisoned (something of a Schrödinger’s Well?).
If the gray zone is valuable, then for a while, the market might heavily favor gray-area violators of this type. But occasionally, one of these zones suddenly transforms into a trap for gray-area violators at all levels of competence. At least in theory, I could see some law-makers deploying this variety of illegible rule on purpose, as a wasp-trap for sneaky, competent boundary-pushers. I have very little idea of whether this actually happens on purpose much, and a lot of things that might initially look like this probably turn out to be “you gotta know a guy” in retrospect.
Some examples I can think of that might fit this pattern...
The question of exactly which financial regulations would be deemed “applicable to Bitcoin” was always going to be settled largely in retrospect, and I think even specialists largely couldn’t make reliable predictions about this. On a related note, I know of a story where a pre-blockchain gold-backed online currency tried to get a permit and was prohibited, due to regulators deciding that it didn’t fall into the relevant reference class. This company was later penalized into bankruptcy for operating without that permit, when a later court ruling decided that it did fall into that reference class.
More broadly, the question of whether rules around patents will be leveraged against you seems to sometimes fall near this gray zone. That one’s dampening profile is a bit weirder and more complicated, though. The dampening effect there seems to be disproportionately borne by the medium- to well-funded. It seems to reward obscurity, because small corporations are usually not worth going after about patent violations. Medium-sized ones might be, and often settle out of court to avoid a lawsuit, making it profitable to go after them; I would guess that they’re the ones penalized the worst by this, but I’m not certain. Top corporations probably fall somewhere in between; on the one hand, they tend to have good lawyers (repelling frivolous lawsuits), but on the other hand, they might stand to lose a very large sum in court.
Possibly anything where court rulings on the same case see-saw back and forth as it moves up through the levels of appeal.
I stumbled into a satisfying visual description:
This came out of thinking about explaining sparseness in higher dimensions*, and I started getting into visuals (as I am known to do). I’ve seen most of the things below described in 2D (sometimes incredibly well!), but 3D is a tad trickier, while not being nearly as painfully-unintuitive as 4D. Here’s what I came up with!
Also, someone’s probably already done this, but… shrug?
In a simple case of Monte Carlo/random-sampling from a three-dimensional space with a one-dimensional binary or float output...
You’re trying to work out what is in a sealed box using only a thin needle (an analogy for the function that produces your results). The needle has a limited number of draws, but you can aim the needle really well. You can use it to find out what color the object in the box is at each of the exact points you decided to measure. You also have an open box in which to mark off the exact spot in space at which you did the draw, using Unobtainium to keep it in place.
Each of your samples is a fixed point in space, with a color set by the needle’s (the function/sampler’s) output. These are going to be the cores that you base your render around. From there, you can...
For Nearest Neighbor (NN), you can see what happens when you situate colored-or-clear balloons at each of the sample points, and start simultaneously blowing all of them up with their sample point as a loose center. Gradually, this will give you a bad “rendering” of whatever the color-results were describing.
(Technically it’s Voronoi regions; balloons are more evocative)
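A minimal code sketch of the balloon picture, if that helps; everything here (the `needle` function, the hidden sphere, the sample and grid sizes) is a made-up stand-in rather than anything canonical:

```python
import numpy as np
from scipy.spatial import cKDTree

# Hypothetical "needle": the hidden function we can only probe pointwise.
# Here it reports whether a point falls inside a secret sphere.
def needle(points):
    return (np.linalg.norm(points - 0.5, axis=1) < 0.35).astype(int)

rng = np.random.default_rng(0)
samples = rng.random((200, 3))   # 200 random draws inside the unit box
colors = needle(samples)         # the "color" the needle reports per draw

# "Inflate all the balloons at once": every query point takes the color of
# its nearest sample; the resulting regions are exactly the Voronoi cells.
tree = cKDTree(samples)
grid = np.stack(np.meshgrid(*[np.linspace(0, 1, 30)] * 3, indexing="ij"),
                axis=-1).reshape(-1, 3)
_, nearest = tree.query(grid)
render = colors[nearest].reshape(30, 30, 30)   # a crude 3D "rendering"
```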
Decision trees are when you fill in the space using colored rectangular blocks of varying sizes, based on the sample points.
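Continuing the same made-up sketch (reusing `samples`, `colors`, and `grid` from above), the blocks picture is basically what fitting an off-the-shelf axis-aligned decision tree gives you:

```python
from sklearn.tree import DecisionTreeClassifier

# Each leaf of an axis-aligned decision tree is a rectangular block,
# colored by the sample points that landed inside it.
blocks = DecisionTreeClassifier(max_depth=6).fit(samples, colors)
render_blocks = blocks.predict(grid).reshape(30, 30, 30)
```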
Most other methods involve creating a gradient between 2 different points that yielded different colors. It could be a linear gradient, an exponential gradient, a step function, a sine wave, some other complicated gradient… as you get more complicated, you better hope that your priors were really good!
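For the simplest of these, a linear gradient, a sketch on the same made-up setup might look like this (scipy’s griddata does the triangulate-and-blend for you):

```python
from scipy.interpolate import griddata

# Linear gradients between neighboring samples: griddata Delaunay-
# triangulates the sample points and blends linearly inside each
# tetrahedron; points outside the hull get a default fill color.
render_linear = griddata(samples, colors.astype(float), grid,
                         method="linear", fill_value=0.0).reshape(30, 30, 30)
```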
A human with lots of prior experience looking at similar objects tries to infer what the rest looks like. (This is kinda a troll answer, I know. Thinking about it.)
3D Pointillism even just sounds like a really bad idea. Pointillism on a 2D surface in a 3D space, not as bad of an idea! Sparsity at work?
The Blind Men and an Elephant fable is an example of why you don’t want to do a render from a single datapoint, or else you might decide that the entire world is “pure rope.” If they’d known from sound where each of the others was located when they touched the elephant, and had listened to what the others were observing, they might have actually Nearest Neighbor’d themselves a pretty decent approximate elephant.
*Specifically, how it’s dramatically more effort to get a high-resolution pixel-by-pixel picture of a high-dimensional space. But if you’re willing to sacrifice accuracy for faster coverage, in high dimensions a single point’s Nearest Neighbor zone can sloppily cover a whole lot of ground (or multidimensional “volume”).
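A toy numeric illustration of that footnote (the bin count and the dimensions are arbitrary):

```python
# A "pixel-by-pixel" picture at 10 bins per axis needs 10**d samples,
# which explodes with dimension d. Meanwhile, n random samples always
# carve the space into n nearest-neighbor cells, each sloppily covering
# ~1/n of the total volume no matter how large d gets.
for d in (2, 3, 10, 20):
    print(f"d={d:>2}: full grid needs 10**{d} = {10**d:,} samples")
```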
I have this vague intuition that some water-living organisms have fuzzier self-environment boundaries than most land organisms. It sorta makes sense. Living in water, you might at least get close to matching your nutritional, hydrational, thermal, and dispersion needs via low-effort diffusion. In contrast, on land you almost certainly need to create strongly-defined boundaries between yourself and “air” or “rock”, since environments can reach dangerous extremes.
Fungi feel like a bit of an exception, though, with their extracellular digestion habits.
If The Ocean was a DM:
Player: I want to play a brainless incompletely-multicellular glob with no separation between its anus and its mouth, and who might have an ability like “glow in the dark” or “eject organs and then grow them back.” It eats tiny rocks in the water column.
The Land DM: No! At least have some sensible distribution-of-nutrients gimmick! Like, you’re not even trying! Like, I’m not anti-slimes, I’ll allow some of this for the Slime Mold, but they had a clever mechanic for succinctly generating wider pathways for nutrient transfer between high-nutrition areas that I’d be happy to...
The Ocean DM: OMG THAT’S MY FAVORITE THING. Do you have any other character concepts? Like, maybe it can also eject its brain and grow it back? Or maybe its gonads are also in the single hole where the mouth and anus are?
The Land DM: …
...huh. I guess I know of one particular variety, and that variety is very self-contained and circling-adjacent (I almost could have called it “Narrative Circling”, if that didn’t seem like such a contradiction-in-terms). But from the wiki article, T-Group appears to refer to a more nebulous and broad category of things, some of which seem not nearly so self-contained.
The thing I had run into functioned basically as described here (scroll down for the written description). This read to me as clearly circling-adjacent, and I didn’t think all that hard about where it had come from.
The Wikipedia description struck me as surprisingly uninformative about the details of the practice itself. But from poking around a bit on the internet just now… I get the impression that T-Group can refer to something similar to what I described, but can also be used to refer to something close to an experimental leadership/decision-making structure that uses the “T-Group” as part of its intragroup conflict-resolution method?
I knew that the variety I had run into was a bit homebrew, and probably had aspects of circling bred into it. I don’t think I appreciated just how different it could be from other people’s usage of/context for the term. That said, I do see some signs of shared lineage.
The techniques feel related, and the facilitating ethos of awareness, learning, honesty, and goallessness feels similar. But the variety I ran into felt more tightly-defined and compartmentalized, and I was mostly doing it with strangers.
I admit that with a high bar of trust and decently committed participants, I could actually see it working well as a social-information-gathering method? But the idea of being dragged into doing it with coworkers, or of treating it like a primary conflict-resolution technique, seems quite troubling to me.
Circling is working on a similar problem, and training capacities that are used as workarounds: This feels true to me.
I think this is even more visible in the circling-variant called “T-Group,” where people tell a short narrative on why they think they’re having a described emotional reaction. Very frequently, the explanations encapsulate 1-2 layers of meta, or explicitly gesture at certain/uncertain pieces of common knowledge.
(ex: Joe is responding to Jane’s response and feels U, Jane is responding to Jane’s interpretation of Joe’s response to Jane and feels V. Albert notes X, models that Betty would be troubled by X, and feels concern that there might not be common knowledge of this. Betty believes that there is common knowledge of X, and that everyone feels wary about it. Betty queries if anyone disagrees with that interpretation. Carrie notes that she hadn’t initially been aware of X, but is now aware of X, and feels sad and a little scared.)
When I look at it this way, it becomes even clearer why T-Group comes with an exhortation to always include the experiencer as an object in your sentence (“I feel”, “I make it mean”, “I infer”). If the next person is going to do meta on your meta, it helps if they don’t need to recalculate out the layer representing “you,” and it’s useful to explicitly differentiate between yourself and common knowledge.
(Actually, the more I think about it, the more T-Group looks like a hybrid between circling and explicit modeling. And the fact that they work well together suggests to me that not everything in the circling skillset gets eclipsed when you switch to explicit-modeling.)
Ticks just repeatedly break my intuitions. How does something that small and r-selected have a 2-year-long lifecycle?
Oh goodness yes, fungi. Two-nuclei sexual stages startled me when I first learned of them. It’s a highly-diverse and successful clade, filling in an array of niches all across the specialist/generalist spectrum, and ranging anywhere from unicellular to syncytial to organisms the size of a city. Plus, many of them seem to manifest that evolutionary pseudo-”inventiveness” that I usually associate with bacteria.
Yeah… this is reading as more “moralizing” and “combative” than as “trying to understand and model my view,” to me. I do not feel like putting more time into hashing this out with you, so I most likely won’t reply.
It has a very… “gotcha” feel to it. Even the curiosity seems to be phrased to be slightly accusatory, which really doesn’t help matters. Maybe we have incompatible conversation styles.
(Meta-note: I spent more time on this than I wanted to. I think if I run into this sort of question again, I’m going to ask clarifying questions about what it was that you in particular wanted, because answering broad questions without having a clear internal idea of a target audience contributed to both spending too much time on this, and contributed to feeling bad about it. Oops.)
Personally, one of the most damning marks against the physical-distance intuition in particular is its rampant exploitability in the modern world, where distance is so incredibly easily abridged. If someone set up a deal where they extract some money and kidnap 2 far-away people in exchange for letting 1 nearby person go, someone with physical-distance discounting might keep making this deal, and the only thing the kidnappers would need to exploit it is a truck. If a view through a camera is enough to abridge the physical distance, it’s even easier to exploit. I think this premise is played around with in media like The Box, but I’ve also heard of some really awful real-world cases, especially if phone calls or video count as abridging distance (for a lot of people, they seem to). The ease and severity of its exploitation definitely contributes to why, in the modern world, I don’t just call this intuition unintuitive, I call it straight-up broken.
When the going exchange rate between time and physical distance was higher, this intuition might not have been so broken. With the speed of transport where it is now...
Maybe at bottom, it’s also just not very intuitive to me. I find a certain humor in parodies of it, which I’m going to badly approximate here.
“As you move away from me at a constant speed, how rapidly should I morally discount you? Should the discounting be exponential, or logarithmic?”
“A runaway trolley will kill 5 people tied to the train track, unless you pull a switch and redirect it to a nearby track with only 1 person on it. Do you pull the switch?” “Well, that depends. Are the 5 people further away from me?”
I do wonder where that lack-of-intuition comes from… Maybe my lack of this intuition was originally because, when I imagine things happening to someone nearby and someone far away, the real object I’m interacting with in judging/comparing is the imagination-construct in my head, and if they’re both equally vivid, that collapses all felt physical distance? Who can say.
In my heart of hearts, though… if all else is truly equal, it also does just feel obvious that a person’s physical distance from you really should not affect your sense of their moral worth.
I think you’re inferring some things that aren’t there. I’m not claiming an agent-neutral morality. I’m claiming that “physical proximity,” in particular, being a major factor of moral worth in-and-of-itself never really made sense to me, and always seemed a bit cringey.
Using physical proximity as a relevant metric in judging the value of alliances? Factoring other metrics of proximity into my personal assessments of moral worth? I do both.
(Although I think using agent-neutral methods to generate Schelling Points for coordination reasons is quite valuable, and at times where that coordination is really important, I tend to weight it extremely heavily.)
When I limit myself to looking at charity and not alliance-formation, all types of proximity-encouraging motives get drowned out by the sheer size of the difference in magnitude-of-need and the drastically-increased buying power of first-world money in parts of the third world. I think that’s a pretty common feeling among EAs. That said, I do conduct a stronger level of time- and uncertainty-discounting, but I still ended up being pretty concerned about existential risk.
The strange pressures of their subterranean lifestyle, which eukaryote described somewhat, probably covers most of it. Inbreeding/isolation is probably the other half of the puzzle.
I’ll try to show how some of these traits tie in with low-oxygen and subterranean living, in those places where it wasn’t already covered.
A lot of these do bottom out to the pressures of creating a large nest, and dealing with an underground low-oxygen environment.
Eusociality and large protective communal nest-building mesh together really well, which I think fed into a lot of the items in the Richard Alexander list of predictions mentioned above (the accuracy is really impressive!).
Lack-of-hair and weird teeth seem pretty obviously developed for a digging/crawling lifestyle. Acid-pain immunity has been proposed to be a consequence of having to handle an otherwise-intolerable level of lactic acid buildup (‘sore muscles’) while digging in low-oxygen zones. The strange metabolic properties (which probably feed into cancer resistance considerably) also seem to be a way to handle their lower-oxygen-availability lifestyle; endothermy can be surprisingly energy-intensive.
Really, living a low-temperature low-sugar careful-energy-usage low-ambient-cell-replication lifestyle tends to defend against cancer and improve lifespan quite a bit in general (ex: calorie restriction for mammals often extends lifespan, raising fruit flies at low temperatures can full-on double it). Molerats seem to have been under heavy pressure to biologically enforce a strict energy-usage regimen, and they take this to an incredible extreme. So you’d expect to see some cancer resistance, although it’s still crazy in terms of degree.
(I’d totally buy that they probably have some additional nutty things going for them. I think I’ve heard a theory that they have unusually-stringent cell checkpoints pre-division?)
Inbreeding and genetic isolation
No, really. It’s both a strong push towards kin-selection (a basis of eusociality if there ever was one), and an exacerbator of genetic drift. The changes might not always be anywhere near this favorable, but isolation still tilts things towards the weird, and the faster rate to saturation increases your ability to build adaptations on top of adaptations. (See also: the high weirdness of species living on islands or in caves.)
Adding a couple of mine to this list, will probably add more as I think of them...
Invertebrates: Sacculina (parasitic barnacle that hormonally manipulates crabs), Sawflies (primitive hymenopterans with caterpillar-like larvae, some love forest fires), Ticks, Phengaris arion
Vertebrates: Chameleon, Hoatzin, Bats (just… bats)
Plants: Amorphophallus genus (The entire stalk is a single giant leaf. It sheds that leaf every few years, puts up a giant flower, then sheds that and goes back to growing another single giant leaf?), Orchids, Magnolia (an ancient lineage outside the true dicots that convergently evolved to look like dicots, with flowers that predate bees)
I’m giving partial-points to: scale insects, fig wasps, termites, Neotrogla curvata, Pandoravirus, Neuroptera
Out-of-context quote: I’d like for things I do to be either ignored or deeply-understood and I don’t know what to do with the in-betweens.
After seeing a particularly overt form of morality-of-proximity argued (poorly) in a book...
Something interesting is going on in my head in forming a better understanding of what the best steelman of morality-of-proximity looks like? The standard forms are obviously super-broken (there are a lot of good reasons why EA partially builds itself as a strong reaction against that; a lot of us cringe at “local is better” charity speak unless it gets tied into “capacity building”). But I’m noticing that if I squint, the intuition bears a trace of similarity to my attitude on “What if we’re part of an ‘all possible worlds’ multiverse, in which the best and worst and every in-between happens, and where on a multiversal scale… anything you do doesn’t matter, because deterministically all spaces of possibility will be inevitably filled.”
Obviously, the response is: “Then try to have a positive influence on the proximal. Try to have a positive influence on the things you can control.” And sure, part of that comes from giving some weight to the possibility that the universe is not wired that way, but it’s also just a very broadly-useful heuristic to adopt if you want to do useful things in the world. “Focus on acting on the things you can control” is generally pretty sound advice, and I think it forms a core part of the (Stoic?) concept of rating your morals on the basis of your personal actions and not the noisy environmental feedback you’re getting in reply. Which is… at least half-right, but runs the risk of disconnecting you from the rest of the world too much.
If we actually can have the largest positive impact on the proximal, then absolutely help the proximal. This intuition just breaks down in a world where money is incredibly liquid and transportable and is worth dramatically more in value/utility in different places.
(Of course, another element of that POV might center around a stronger sense that sys1 empathetic feelings are a terminal good in-and-of-itself, which is something I can’t possibly agree with. I’ve seen more than enough of emotional contagion and epidemics-of-stress for a lifetime. Some empathy is useful and good, but too much sounds awful. Moving on.)
I guess there needs to be a distinction made between “quantity of power” and “finesse of power,” and while quantity clearly can be put to a lot more use if you’re willing to cross state (country, continent...) lines, finesse at a distance is still hard. And you need at least some of both (at this point) to make a lasting positive impact (convo for later: what those important components of finesse are, which I think might trigger some interesting disagreements). Finding specialists you trust to handle the finesse (read: steering) of large amounts of power is often hard, and it’s even harder to judge for people who don’t have that finesse themselves.
And “Finesse of power” re: technical impact, re: social impact, re: group empowerment/disempowerment (whether by choice or de-facto), re: PR or symbology… they can be really different skills.
Good to know! I’m assuming that’s mostly an automatic consequence of everything being a comment, but that also fits with what I’d want from this sort of thing.
Damn it, Oli. Keep your popularity away from my very-fake pseudoprivacy :P
(Tone almost exactly half-serious half-jokingly-sarcastic?)
((It’s fine, but I’m a little stressed by it.))