In one of the sub-comments, the tests that identified differences in mental imagery came up, and I started thinking about how you might test for several variants of “lack of sense of self” or some related attributes.
Related tests for inner-sense or model-of-self
No-qualia seems challenging to test. But “no model of self” (one form of “lack of self-awareness”) seems halfway there, or at least in the correct spirit of the question? And I think that could be tested reliably: just get a group of people to predict their own behavior, and watch for the subset of the group who reliably fail catastrophically at this.
For lack of consistency and other-awareness… There’s a Nazi (ETA: Eichmann) who seemed likely to be a troublingly-vivid example of “no consistent worldview or other-awareness”; all his words and beliefs were inconsistent platitudes, and he seemed genuinely surprised when Jewish judges didn’t feel sympathy for the difficulties he faced in his attempts to get promoted by doing a “good job” optimizing trains for death. Unfortunately, I couldn’t track down his name at the time of writing. If someone knows of a good article about his strange psychology, I’d love to be pointed at it again.
Lack-of-attachment-to-internal-identity seems to be another semi-related thing. I feel like there are some things where I care about “identity-alignment” a great deal, and other matters (ones that others clearly care about) where I just lack any feeling of identity euphoria/dysphoria, regardless of what I do. I suspect there are some people who lack either sensation altogether. Probably some fraction of those people come across as identity chameleons: people who switch out identities according to external incentives, because they have no internal reason not to.
(Personally, meditation updated me considerably towards a reduced attachment to internal-identities, but there are still some I’m attached to and care about maintaining.)
Alexithymia is a phenomenon where you lack awareness of your own emotions, sometimes even as you are acting them out. This seems easy to test in a manner similar to red/green color-blindness: have the person try to appraise what sort of emotion they’re feeling, then read their circumstances or watch their behavior for a read of which emotion it actually is, and see who usually seems to misjudge it (or believes they’re not feeling any emotion at all).
Another p-zombie variant
There’s a different p-zombie subtype I’ve been thinking about a great deal myself.
If you set up a system where there’s an observer, an actor, and an environment, then there are 2 kinds of consciousness:
The observer influences, controls, or is the actor who does things in the environment
A policy feedback loop
The observer is just modeling what the actor will do in the environment
A one-way model of the actor
I suspect the latter can feel “conscious” even if the observer never influences the actor in any way.
Humans are usually a bit of both, but someone who only has “consciousness” in the latter capacity feels a bit like a… “consciousness hitchhiking on a q-zombie” to me.
(Related: The Elephant and The Rider)
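To make the two setups concrete, here’s a minimal sketch in Python (the actor, observer, and “left/right” world are all invented for illustration; the only point is whether the observer’s output gets wired back into the actor):

```python
import random

# Toy setup: an actor, and an observer that models the actor.
# The only interesting question is where the observer's output goes.

def actor(advice=None):
    # The actor follows advice if it gets any, else acts on habit.
    return advice if advice is not None else random.choice(["left", "right"])

def observer(history):
    # The observer's model of what the actor will (or should) do next:
    # just "predict the most frequent past action."
    if not history:
        return random.choice(["left", "right"])
    return max(set(history), key=history.count)

# Kind 1: policy feedback loop. The observer's model steers the actor.
history = []
for _ in range(5):
    action = actor(advice=observer(history))
    history.append(action)

# Kind 2: one-way modeling. The observer predicts, but the prediction
# is never wired back into the actor: a hitchhiking commentator.
history, predictions = [], []
for _ in range(5):
    predictions.append(observer(history))  # goes nowhere
    action = actor()                       # the actor acts on its own
    history.append(action)
```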
Mental imagery: Drawing seems to reliably distinguish between coherent-mental-visualizers and those who aren’t. Tasks like “count the stripes on the tiger” make sense to a vivid/detailed visualizer, but not to someone who is just holding on to the concept “striped big cat.”
I suspect drawing-attempts would also reliably identify people like “The Man who Mistook His Wife for a Hat”, who viewed people as a “disorganized bag of facial-features” and had to rely on a single distinctive trait to identify even people he knew well (ex: Albert Einstein & his eccentric hairstyle), and who described things that weren’t there when trying to interpret a low-feature image like a picture of the dunes of the Sahara.
Biology-nerd LWer here (or ex-biology-nerd? I do programming as a job now, but still talk and think about bio as a fairly-high-investment hobby). BS in entomology. Disclaimer that I haven’t done grad school or much research; I have just thought about doing it and talked with people who have.
I suspect one thing that might appeal to these sorts of people, which we have a chance of being able to provide, is an interesting applied-researcher-targeted semi-plain-language (or highly-visual, or flow-chart/checklist, or otherwise accessibly presented) explanation of certain aspects of statistics that are particularly likely to be relevant to these fields.
ETA: A few things I can think of as places to find these people are “research” and “conferences.” There are google terms they’re going to use a lot (due to research), and also a lot of them are going to be interested in publishing and conferences as a way to familiarize themselves with new research in their fields and further their careers.
Leaning towards the research funnel… here are some things I understand now that I did not understand when I graduated, many of which I got from talking/reading in this community, and which I think a “counterfactual researcher me” would have benefited from a lucid explanation of:
Transferring intuitions around normalization
how to do it, why to do it
see how it eliminates spurious leads in data like goddamned magic
How to Handle Multiple-Hypothesis-Testing (see the sketch just after this list)
A really good explanation of MCMC
Implied “priors,” model assumptions, and how to guess which ones to reach for and screen out the ones that are wrong
When is trying to add ML to your research a good or bad idea
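As a taste of the kind of explanation I mean, here’s a minimal sketch of one standard answer to the multiple-hypothesis-testing item: the Benjamini-Hochberg step-up procedure. The “screening leads” framing and all numbers are invented for illustration:

```python
import numpy as np

def benjamini_hochberg(p_values, alpha=0.05):
    """Boolean mask of which hypotheses survive FDR control.

    Classic Benjamini-Hochberg step-up: sort the p-values, find the
    largest rank k with p_(k) <= (k / m) * alpha, and reject every
    hypothesis at or below that rank.
    """
    p = np.asarray(p_values)
    m = len(p)
    order = np.argsort(p)
    thresholds = alpha * np.arange(1, m + 1) / m
    passed = p[order] <= thresholds
    k = int(np.max(np.nonzero(passed)[0])) + 1 if passed.any() else 0
    reject = np.zeros(m, dtype=bool)
    reject[order[:k]] = True
    return reject

# Screening 20 noisy "leads": naive p < 0.05 is expected to flag ~1
# false positive from pure noise; BH instead bounds the expected
# *fraction* of false discoveries among everything you flag.
rng = np.random.default_rng(0)
p_values = np.concatenate([rng.uniform(size=18), [0.001, 0.002]])
print(benjamini_hochberg(p_values))
```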
Things I think we’ve done that seem appealing from a researcher perspective include...
Some of the stats stuff (I may not remember the precise sources, but this is where I picked up an understanding of most of what I listed above)
Thoughtful summaries/critiques of certain papers
Scott Alexander’s stuff on how to thoughtfully combine multiple semi-similar studies (so very much of it, but maybe one in particular that stood out for me was his bit on the funnel plot)
I vaguely remember seeing a good explanation of “how and why to use effect sizes” somewhere
(...damn, is Scott really carrying the team here, or is this a perception filter and I just really like his blog?)
Small sample sizes, but I think I’ve seen more people in the biology reference class bounce off of Eliezer’s writing style than in the programming reference class (fairly typical “reads-as-arrogant” stuff; I didn’t personally bounce off it, so I’m transmitting this secondhand). I don’t think there’s anything to be done about this; just sharing the impression. Personally, I’ve felt moments of annoyance with random LWers who really don’t have an intuitive feel for the nuances of evolution, but Eliezer is actually one of the people who seems to have a really solid grasp on this particular topic.
(I’ve tended to like Eliezer’s stuff on statistics, and I respected him pretty early on because he’s one of the (minority of) people on here who have a really solid grasp of what evolution is/isn’t, and what it does/doesn’t do. Respect for his understanding of a field of study I did understand rubbed off as respecting him in fields of study he understood better than I did (ex: ML) by default, at least until my knowledge caught up enough that I could reason about it on my own.)
((FWIW; I suspect people in finance might feel similarly about “Inadequate Equilibria,” and I suspect they wouldn’t be as turned off by the writing style. They are likely to be desirable recruits for other reasons: finance at its best is fast-turnaround and ruthlessly empirical, it’s often programming or programming-adjacent, EA is essentially “charity for quantitatively-minded people who think about black swans,” plus there’s something of a cultural fit there.))
Networking and career-development-wise… quite frankly, I think we have some, but not a ton, to offer biologists directly. Maybe some EA grants for academics and future academics who are good at self-advocacy and open to moving. I’ve met maybe a dozen rationalists I could talk heavy bio with, over half of whom are primarily in some other field at this point. Whereas we have a ton to offer programmers, and at earlier stages of their careers.
(I say this partially from personal experience, although it’s slightly out-of-date: I started my stay in the Berkeley rationalist community ~4 years ago with a biology-type degree. I had a strong interest in biorisk, and virology in particular. I still switched into programming. There weren’t many resources pointed towards early-career people in bio at the time (this may have changed; a group of bio-minded people including myself got a grant to host a group giving presentations on this topic, and were recently able to get a grant to host a conference), and any that existed were pointed at getting people to go to grad school. Given that I had a distaste for academia and no intention of going to grad school, I eventually realized the level of resources or support that I could access around this at the time was effectively zero, so I did the rational thing: I switched to something that pays well and plugged into a massive network of community support. And yes, I’m a tad bitter about this. But that’s partially because I just had miscalibrated expectations, which I’m trying to help someone else avoid.)
Many of the ideas that most alienated me from normal people are pretty mundane here.
(Years ago, some normal person asked me what I thought about what would happen in the future, in light of overpopulation and the climate crisis. When my response involved “AI-based catastrophe,” “problems capitalism is or isn’t adequate to solving,” and geoengineering, they straight up checked out of that conversation and asked somebody else.)
So what am I thinking about that might seem a little strange even here...
I’ve apparently been putting a whole lot of thought in the last couple of months into the extent to which idealization (or the pairing of idealization/demonization, which are probably different sides of the same coin given how they turn on a dime) is utterly ubiquitous and seems to be extremely bad for good governance. Indirectly, it strongly incentivizes those in power to develop worse epistemics (cover things up, don’t ask questions, be easy for others to model) no matter how good they originally were. Now that I’ve started looking for it, I keep seeing evidence everywhere.
I’ve gently-but-seriously considered trying out a process loosely based on the one described in this crazy notebooking write-up.
One belief I’ve had for a while, which might be slightly strange in this group, is that I’m not bothering with cryonics. It seems to break down into 2 factors… 1) I believe almost any post-Singularity “humans” will be radically different, to the point of not identifying or being identifiable as the same thing, and even if “I” do make it to the end, I’ll quickly modify myself into something neuroticism-free that I can’t internally identify with (whether by my own means or by AI-imposed values, the outcome is the same). Therefore, preserving my exact mental configuration probably doesn’t matter too much. 2) A weird “bug” of prioritizing continuity only of “predictable external identity,” while largely deprioritizing, or treating as free variables, most of my internal continuity that’s independent from that (in other words… I allow myself to make radical mental shifts, so long as I expect to be able to behave similarly, continue to serve my future self, keep my allies, and keep my word).
Probably the single highest “craziness index” idea I’ve mulled on in the past year is whether I wanted to track and see if there’s correspondence between meditational vibrations (and where I “feel” them to be; I have a sense of “location” with them) and Brodmann Areas (or a similar location-numbering system, so that I can leave myself partially-blind on initially assigning them). It’s the kind of thing where I expect the original framing to fail, but I also expect to learn something interesting in the process. Settled on “not worth the effort,” though.
Most of what I’m thinking about is probably merely eccentric/special-interest… biology stuff, metaphorical correspondences between financial data and ideas from evolution or entropy, how I’m using intuitions/perceptions and getting better at communicating them clearly...
(Plus a fairly typical human baseline: social, emotional, productivity, self-improvement, identity, future planning)
One of my favorite little tidbits from working on this post: realizing that idea inoculation and the Streisand effect are opposite sides of the same heuristic.
Bubbles in Thingspace
It occurred to me recently that, by analogy with ML, definitions might occasionally be more like “boundaries and scoring-algorithms in thingspace” than clusters per se (messier! no central example! no guaranteed contiguity!). Given the need to coordinate around definitions, most of them are going to have a simple and somewhat-meaningful center… but for some words, I suspect there are dislocated “bubbles” and oddly-shaped “smears” that use the same word for a completely different concept.
Homophones are one of the clearest examples: totally disconnected bubbles of meaning sharing one word.
Another example is when a word covers all cases except those where a different word applies better; in that case, you can expect to see a “bite” taken out of its space, or even a multidimensional empty bubble, or a doughnut-like gap in the definition. If the hole is centered (“the strongest cases go by a different term” actually seems like a very common phenomenon), it even makes the idea of a “central” definition rather meaningless, unless you’re willing to fuse or switch terms.
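Here’s a toy sketch of that “bite taken out” picture (both scoring functions are made up; the point is just the doughnut-shaped extension that falls out):

```python
import numpy as np

# Toy picture of a definition as a boundary-plus-exclusion rather than
# a cluster: "x counts as word A unless word B scores higher on it."

CENTER = np.array([0.0, 0.0])

def score_a(x):
    # Broad, simple membership score for word A.
    return 1.0 - np.linalg.norm(x - CENTER)

def score_b(x):
    # A sharper competing word B sitting right on top of A's center.
    return 2.0 * (1.0 - 4.0 * np.linalg.norm(x - CENTER))

def label(x):
    a, b = score_a(x), score_b(x)
    if max(a, b) < 0:
        return None        # outside both extensions
    return "B" if b > a else "A"

# The strongest, most central cases go by the other term, leaving a
# hole in A's extension: a doughnut, not a cluster with a prototype.
print(label(np.array([0.0, 0.0])))   # -> B (the hole)
print(label(np.array([0.6, 0.0])))   # -> A (the doughnut body)
print(label(np.array([2.0, 0.0])))   # -> None (outside everything)
```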
Relatedly: I would bet someone money that Greg Egan does something insight-meditation-adjacent.
I started reading his work after someone noted my commentary on “the unsharableness of personal qualia” bore a considerable resemblance to Closer. And since then, whenever I read his stuff, I keep seeing him giving intelligent commentary and elaboration on things I had perceived and associated with deep meditation or LSD (the effects are sometimes similar for me). He’s obviously a big physics fan, but I suspect insight meditation is another one of his big “creativity” generators. (Before someone inevitably asks: No, I don’t say that about everything.)
To me, Egan’s viewpoint reads as very atheist, but also very Buddhist. If you shear off all the woo and distill the remainder, Buddhism is very into seeing through “illusions” (even reassuring ones), and he seems to have a particular interest in this.
I can make up a plausible story that developing an obsession with how we coordinate-and-manifest the illusion of continuity from disparate brain-parts… could be a pretty natural side-effect of sometimes watching the mental sub-processes that generate the illusion of “a single, conscious, continuous self” fall apart from one another? (Meditation can do that, and it’s very unsettling the first time you see it.)
So, here’s the specific thing I can think of that seems like it might be helpful...
I try to be cautious about using meditation-based wire-heading or emotional-dulling, but at minimum, there’s a state one step down from enlightenment (equanimity) that perceives suffering as merely “dissonance” in vibrations. The judging/negative-connotation gets dropped, and internal perception of emotional affect is pretty flat. (Note of caution: the emotions probably aren’t gone; it’s more like you perceive them differently. I’m not 100% sure how it works, myself. While it might sound similar, it’s not quite the same as dissociation; the movement is more like you lean into your experience rather than out of it. Also, I read in a paper that its painkiller properties are apparently not based on opioids? Weird, right? So neurologically, I don’t really know how it works, although I might develop theories if I researched it a bit harder.)
Enlightenment/fruition proper doesn’t even form memories, although I’ve never been able to sustain that state for longer than a few seconds. But when it drops, it usually drops back into equanimity… so I guess between the two, it’d be a serious improvement on “eternal conscious suffering”?
Unfortunately, to get into Enlightenment territory, there’s a series of intermediate steps that tend to set off existential crises of widely-varying severity. Any book or teacher that doesn’t take this and the wireheading potential seriously is probably less good than one who does. That said, I still recommend it, especially for people who seem to keep having existential crises anyway. But it’s a perception-alteration workbench; its sub-skills can sometimes be used to detrimental ends, if people aren’t careful about what they install.
Here’s one plus-side that you don’t need the additional context to understand: I kinda suspect that at least most people would eventually find the right combination of insights and existential-crises to bumble into enlightenment by themselves, if they had an eternity of consecutive experiences to work with. Especially given that there seem to be multiple simple practices that get around to it eventually (although it might take a couple of lifetimes for some people).
As someone who has had stream-entry, and the change-in-perception called Enlightenment… I endorse your read of it as being potentially useful in this case?
I’m going to give more details in a sub-comment, to give people who are already rolling their eyes a chance to skip over this.
While I could rattle off the benefits of “delegating” or “ops people”, I don’t think I’ve seen a highly-discrete TAP + flowchart for realizing when you’re at the point where you should ask yourself “Have my easy-to-delegate annoyances added up to enough that I should hire a full-time ops person (or more)?”
Many people whose time is valuable seem likely to put off making this call until they reach the point where it’s glaringly obvious. Proposing an easy TAP-like decision-boundary seems like a potentially high-value post? Not my area of specialty, though.
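To gesture at the shape of the decision boundary I have in mind, here’s a crude sketch (every number and the overhead factor are placeholders I invented, not a researched rule):

```python
# A crude, made-up decision boundary for "should I hire an ops person?"

def should_hire_ops(delegable_hours_per_week: float,
                    my_hourly_value: float,
                    ops_weekly_cost: float,
                    handoff_overhead: float = 0.25) -> bool:
    """Hire when the value of reclaimed time beats the cost of the hire.

    handoff_overhead: fraction of delegated time lost to explaining,
    reviewing, and managing the delegated work.
    """
    reclaimed_value = (delegable_hours_per_week
                       * my_hourly_value
                       * (1 - handoff_overhead))
    return reclaimed_value > ops_weekly_cost

# TAP-style trigger: each time you notice a delegable annoyance, add its
# weekly hours to a running tally; when the check flips to True, hire.
print(should_hire_ops(delegable_hours_per_week=15,
                      my_hourly_value=150,
                      ops_weekly_cost=1500))  # -> True
```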
Now that we’ve gone over some of the considerations, here are some of the concrete topics I see as generally low or high hazard for open discussion. First, the ones that seem relatively safe to discuss openly:
Broad-application antiviral developments and methods
Virus detection and monitoring
How to report lab hazards
...and how to normalize and encourage this
Broadly-applicable protective measures
The state of funding
The state of talent
What broad skills to develop
How to appeal to talent
Who talent should talk to
These things may be worth specialists discussing among themselves, but are likely to do more harm than good in an open thread.
Disease delivery methods
Specific Exploitable Flaws in Defense Systems
Ex: immune systems, hospital monitoring systems
It is especially bad to mention them if they are reliably exploitable
If you are simultaneously providing a comprehensive solution to the problem, this can become more of a gray-area. Partial-solutions, or challenging-to-implement solutions, are likely to fall on the bad side of this equation.
Much of the synthetic biology surrounding this topic
Arguments for and against various agents using disease as an M.O.
Warning: Big pile of text, Decomposing definitions
I notice that there are a lot of disputably-relevant axes for assessing whether something is a moonshot, and that was making me hesitant to answer. So… I’m going to be “that guy” who deep-dives into defining terms. Hopefully this will be constructive?
Here are some disputable axes for assessing “moonshots” that popped to mind:
How possible is it to profit off of incremental progress on this project’s sub-goals?
At the limit: Does it have to be completely unprofitable until you win, after which there is a steep, step-like function after which there are massive returns?
But even space wasn’t this extreme! There were rockets and missiles before there were rocket ships. And my loose understanding of the way Elon Musk is doing things is about as incremental as you can make space rockets and still have it be rocket science (admittedly: not very). Those many fancy landing-control tests don’t make the final launch not a moonshot, though.
At lower levels: At what point is this just the kind of “profitable-incremental-progress via selection” algorithm that capitalism encourages and supports just fine?
How severe was the project’s “start-up cost”? How much hard-to-consolidate infrastructure, intelligence, data, and resources were required before this project was even conceivable?
For one reason or another, is the most plausible counterfactual that if this one group wasn’t doing this, no group would be doing this? Or would be doing this much more poorly?
Classically, megaprojects refer to large-scale works of architectural infrastructure, but we live in an information age. Do massive broadly-beneficial projects of informational infrastructure count, or not?
How much of humanity needs to be impacted by the outcome of this project? Does the outcome need to be beneficial? Or is this really about affecting the narrative humanity has about itself, and not the effect it has on the day-to-day way that people experience their lives? (ex: space projects)
(And… interesting! ryan_b selected an almost completely different set of axes. Maybe it’s not a particularly consistently-defined concept?)
I feel conflicted about including any of these as requirements (several don’t actually seem that desirable), but I think the normal way moonshots are thought of and defined tends to center around analogy to the Space Race. And therefore, tends to involve almost all of these features being present.
“No incremental progress measures” seems like a particularly key part of the definition, and yet a potentially negative thing to filter by. Whenever you can, good incremental progress assessment is usually a positive thing to add to a project.
Thought experiment: If you had to compare 2 identical cold-fusion projects, one of which came up with a bunch of intermediate steps and tested them, and one which didn’t and just had one big “did you get everything right?” assessment right at the end… which one is the moonshot? But which one is probably the better project?
Under this lens, there’s a pretty important distinction to be made between things that are being treated as moonshots, and problems that have to be approached as moonshots.
Maybe the right question is… “What are moonshot problems that someone is seriously tackling?” Or just dropping the moonshots framing, and getting a list of interesting megaprojects. Or just picking the 1-2 axes you most care about, and sorting on them explicitly.
(FWIW; any one of those is a valid thing to want, and it’s a good question! That is part of why I put in the effort to try to break it down.)
P.S. Is there some way I should have used the Question-on-a-Question / Related Question function to do this? If so, could someone walk me through how that’s supposed to work?
It’s not quite moonshots, but here’s Wikipedia’s list of Megaprojects.
When it comes to more classic architectural infrastructure projects, I’d be shocked if China (and maybe Singapore) don’t have several. But I also expect them to lean towards the incremental-progress-is-visible model where they can. Because when you can, having assessable incremental progress really is almost always the better choice?
Serious New Physics seems to be full of nightmare-to-fund megaprojects. What’s that upcoming telescope that’s going to be able to test if weird extrasolar asteroids like ’Oumuamua passing by Earth are a common or rare event? Somebody’s got to be building a new and more powerful collider, somewhere in the world. Who’s working on fusion reactors? Who’s working on crafting quantum computers?
Thinking of megaprojects & moonshots got me thinking of some information-infrastructure stuff that might or might not count, depending on definition. So I’m going to throw on a list of some tentative and disputable “Information Age Megaprojects” too...
Someone back-of-envelope estimated that Wikipedia was the culmination of 100 million hours of work in 2008, and assuredly more by today.
While its success feels predestined nowadays, when it first outdid formal and privately-funded attempts at online encyclopedias, that was a very surprising outcome to many.
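(For scale, assuming a standard ~2,000-hour working year: 100,000,000 hours ÷ 2,000 hours/person-year = 50,000 person-years of effort.)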
If you loosen the restrictions, biology probably has tons of megaprojects. (BLAST, GenBank, PubChem… NIH is funding and maintaining vast databases, and all the infrastructure that makes them navigable. How much work that is probably depends on whether you include or exclude the work it took to extract the data they’re peddling.)
It feels almost-incoherent for me to think of biological projects as “moonshots”? There are interesting and novel things being done constantly (many of them clever, irreplaceable things. Some of them fundamental things! Some of them potentially high-impact!), and yet the term just does not seem to fit. Just about every interesting synthetic biology project nowadays has a “merely” moderately-large to large start-up cost, a measure of incremental progress, and a huge element of chance. Some of those could have huge impacts if they work. Should I call those moonshots? They’re not really big enough for “megaprojects” to fit, cost-wise.
Google (Alphabet? GoogleX?) seems to have a taste for this
Google’s book-scanning project (before it was shut down)
Loon is their internet-access balloon project
The Android Operating System?
The Google search-engine itself?
The Internet Archive?
People building 3D models of entire cities, in games such as Second Life?
Does Bitcoin count?
I suspect not, but I couldn’t really articulate why. If I wanted to argue one way or another with this one, I actually wouldn’t know where to start.
I’m used to thinking of those more as “Megaprojects”. “A List of Megaprojects” is a great goal, but I feel like it might be worth clarifying that in the question up-top a little bit.
Single datapoint, but… I love markdown for notes. The formatting shorthands are great, and an automatically-generated Table of Contents is a nice plus.
It hasn’t been easy for me to find anything that offers both flow-chart diagramming and cloud-sync, though. And both markdown and HTML are bafflingly awkward to make tables with (why can’t somebody just set up a painless CSV wrapper? Or if that exists, please tell me?).
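(For what it’s worth, the “painless CSV wrapper” I’m wishing for is only about ten lines of Python; the table contents below are made up:)

```python
import csv
import io

def csv_to_markdown(csv_text: str) -> str:
    """Render CSV text as a GitHub-flavored markdown table."""
    rows = list(csv.reader(io.StringIO(csv_text)))
    header, body = rows[0], rows[1:]
    lines = ["| " + " | ".join(header) + " |",
             "| " + " | ".join("---" for _ in header) + " |"]
    lines += ["| " + " | ".join(row) + " |" for row in body]
    return "\n".join(lines)

print(csv_to_markdown("tool,cloud-sync,diagrams\nA,yes,no\nB,no,yes"))
```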
Most decent markdown editors are also LaTeX capable, which I probably use even more than diagrams. There’s a bit of a learning-curve for that, but nowadays I can type out most equations without so much as looking at the keyboard (let alone looking up symbols).
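For anyone who hasn’t tried the LaTeX side, a typical note snippet looks something like this (using Bayes’ rule as a stand-in example):

```latex
The posterior update I keep re-deriving:
$$ P(H \mid D) = \frac{P(D \mid H)\, P(H)}{P(D)} $$
```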
But when the argument for the alternative boils down to “eat shit, not chemicals”...
I’m kidding, but only slightly. Organic fertilization is a bit gross, and I think most food companies would prefer not to be associated with dead leaf matter, rotting leftovers, or manure.
Counterargument: Composting totally became a thing, and that potentially puts the grossness right in your backyard.
(Huh. Composting is certainly something people do on an individual level to micro-combat the usage of nitrogen fertilizers, with probably a very negligible effect. And some people do seem dedicated to it. But I suspect that if you asked most people who do it, they would claim it’s about landfills or something, not soil nitrogen content.)
Even if it had evolved, any detailed form of communication that had the potential to transmit hard-to-break imperatives is something you want to be very, very careful with.
Defection, manipulation, and novel avenues for disease-transmission or parasitism heavily disincentivize this. It’s intuitively “gross” for a reason.
TL;DR: Infections and defections would probably utterly wreck this. The blood-brain barrier exists for a reason. While we did get language, in other ways we’ve evolved specifically to hide information from each other; it’s not straightforwardly evolutionarily favored. Large clusters of highly-related organisms have more incentive to do this (bacterial mats, ants, our own cells, etc.), and the information-bandwidth they share with each other through pheromones and chemical signals is actually pretty staggering. But at a glance, I do think they pay a cost: increased (and more elaborate) avenues for manipulators and infections to exploit this privilege.
Edit to add: Linking some additional strongly-related articles! SSC’s Maybe Your Zoloft Stopped Working Because a Liver Fluke Tried To Turn Your Nth-Great-Grandmother into a Zombie and the paper it centers on, Invisible Designers: Brain Evolution Through the Lens of Parasite Manipulation
It bears pointing out that evolution usually seems to want the opposite of this for our centralized decision-maker organs. Notice that instead of making the brain more open-access over time, most species went the direction of isolating it from even our own bodies, using things like the heavily-filtering blood-brain barrier. The risk of that sensitive organ getting poisoned, diseased, or biologically manipulated was just too high to risk it.
(Nematodes make what humans call “embodied thinking” look like a joke; the serotonin from their digestive tract is felt by their brains directly.)
People used to die in droves (and at young ages!) of measles, tuberculosis, and a million other things. Even before cities, herd-living put us under immense pressure to develop a pretty intensely specialized immune system. If we had to worry about giving a disease a highway to our central nervous system, that would be… very, very bad. The risk of infection is such a strong selective force that it might even guarantee such a species would never get to centralize these things at all.
Not to even get into the possibility of physical brain-hijackings by concepts (like memes, but oh-so-much-worse!), or even just catching communication-transmitted Kuru… but here’s a pretty vivid speculative description of just how bad being an evolved “open book” that granted others write-access would probably get.
And with regards to the “benefits” of open communication (of information conveyed in a language that’s very hard to fake): we do still have some information transmitted in body language and words. That certainly captures some of the benefit. But it bears pointing out that we’re a species that un-evolved any obvious presentation of whether a female is in estrus, and has very strong inhibitions around trying not to gain information from each other’s body odor. “Complete, total honesty” is not something evolution typically selects for, and it didn’t veer entirely that way for us. Even in the less-cutthroat modern era (at least, compared to our distant savannah past), Greg Egan’s Closer feels like a pretty realistic depiction of how we might feel about it if we ever did fully share our mental experience with even one other person. I’ll avoid spoiling it too badly, but we’d probably quickly uncover a lot of things about one another that we wished we didn’t know.
Bacterial mats, giant networks of fungi, and eusocial insects with strong genetic kinship might have strong enough evolutionary incentives for this to line up, although higher relatedness actually exacerbates the infection concern. And between the cells of multicellular creatures, certainly quite a lot does get communicated. Many of these examples do seem to “share their mind” in at least some meaningful sense. They transmit a lot of information and orders to one another, and have a communal decision-making process at varying degrees of centralization/decentralization. Pheromones for insects, various signalling secretions from bacteria, the oodles of transactions and deliveries between our cells at every moment… combined, these can be very high-bandwidth. Almost incomprehensibly so, if you’ve ever seen attempts to measure and chart such things.
Ants could practically be said to have a pheromone “language,” complete with clan-identification tags. And as a way to selectively trigger a highly-specific neural pathway, or activate a known set of behaviors in a conspecific with the same brain-configuration, pheromones are not a bad way to go? The behavior patterns pheromones set off can get oddly specific at times.
And… ants also get tricked by pheromones into feeding the brood parasites that eat their own young. And what we call “bacterial sex” (high-bandwidth communication of DNA?) is actually virus-esque plasmids trying to transmit themselves to new bacteria, like an infection. Some plasmids even come with addiction molecules, which is an extra-douchey way for a plasmid to convey “replicate me, or die.” And in coordinated bacteria, you do sometimes see defectors. So… it’s still pretty manipulable, and it sure gets manipulated.
The more stereotyped behaviors you can set off through external signals, the broader your “attack surface,” in the cybersecurity lingo. And biological parasitism is ubiquitous, and fractal, and adaptive, and uses any damn attack surface it can get.
Humans? A fluke. Parasites are evolution’s true darlings.
I thought this was an interesting question… although I definitely get the feeling like I’m missing some of the context behind this “planetary boundaries” write-up.
(What is the Stockholm Resilience Center? What are its motives and methods? Why was it doing this analysis, and how did you end up running into it?)
I agree that fertilizer runoff gets talked about a lot less than climate change, and I’m not entirely sure why that is. I just looked it up, and “Organic”-labeled things apparently do already mandate organic fertilizers (which should be N-neutral on net?). So there’s at least that.
Regarding their assessment… one of the factors that seemed to be weighted a lot in it was “level of anthropogenic change vs. natural variation in Process X.” I’d expect that to have heavily weighted the Nitrogen Cycle, since our interference in it dwarfs the variability due to natural processes (something they themselves spell out. The Haber process is a strange, powerful, magical, inorganic thing the humans cooked up).
This metric is… not precisely the same as “level of damage this change can cause.” They seem to have set up some sort of threshold for what they consider “dangerously high”, and I don’t really understand the thought-process or reasoning they used in picking those thresholds. But I think they factored “change vs natural variation” into their thinking a good deal.
My understanding from my own Ag background is that fertilizer runoff really can be a big problem; nitrogen and phosphorus are major limiting nutrients for plant-life, both on land and in freshwater ecosystems (the ocean surface is iron-limited, interestingly). In a sense, this is exactly why we use them; we want our food crops to grow at a wild rate, one rarely seen in nature.
When there’s a sudden influx of nitrogen into a freshwater system, one of the possible consequences is an algal bloom. These go through a boom-and-bust cycle (with the seasons, or resource-availability patterns), leading to a massive algal die-off. During this die-off, decay bacteria wipe out the underwater oxygen-supply, leading to knock-on effects on freshwater ecosystems like massive fish die-offs. Enclosed spaces like lakes are perhaps especially vulnerable, since there is no way for the N and P to ever exit the system, potentially perpetuating the cycle indefinitely.
That said, suddenly cutting fertilizer usage would practically ensure that food production drops way down, and that would ill-serve a lot of other human values. Reducing runoff looks like a very hard problem, to me. Finding ways to remove excess P and N from the environment seemed to be an area with at least a little bit of interest, but it doesn’t seem very actionable on an individual rather than city or state level, which might explain the low publicity? Unsure.