David Friedman is awesome. I came to the comments to give a different Friedman explanation for one generator of economic rationality from a different Friedman book than “strangepoop” did :-)
In “Law’s Order” (which explores how laws that ignore incentives, or produce bad ones, tend to be predictably suboptimal) Friedman points out that much of how people decide what to do comes down to finding someone who seems to be “winning” at something and copying them.
(This take is sort of friendly to your “selectionist #3” option but explored in more detail, and applied in more contexts than to simply explain “bad things”.)
Friedman doesn’t use the term “mimesis”, but this is an extremely long-lived academic keyword with many people who have embellished and refined related theories. For example, Peter Thiel has a mild obsession with Rene Girard who was obsessed with a specific theory of mimesis and how it causes human communities to work in predictable ways. If you want the extremely pragmatic layman’s version of the basic mimetic theory, it is simply “monkey see, monkey do” :-P
If you adopt mimesis as THE core process which causes human rationality (which it might well not be, but it is interesting to think of a generator of pragmatically correct beliefs in isolation, to see what its weaknesses are and then look for those weaknesses as signatures of the generator in action), it predicts that no new things in the human behavioral range become seriously optimized in a widespread way until AFTER at least one (maybe many) rounds of behavioral mimetic selection on less optimized random human behavioral exploration, where an audience can watch who succeeds and who fails and copy the winners over and over.
The very strong form of this theory (that it is the ONLY thing) is quite bleak and probably false in general, however some locally applied “strong mimesis” theories might be accurate descriptions of how SOME humans select from among various options in SOME parts of real life where optimized behavior is seen but hard to mechanistically explain in other ways.
Friedman pretty much needed to bring up a form of “economic rationality” in his book because a common debating point regarding criminal law in modern times is that incentives supposedly have nothing to do with, for example, criminal behavior, because criminals are mostly not very book smart, and often haven’t even looked up (much less remembered) the number of years of punishment that any given crime might carry, and so “can’t be affected by such numbers”.
(Note the contrast to LW’s standard inspirational theorizing about a theoretically derived life plan… around here actively encouraging people to look up numbers before making major life decisions is common.)
Friedman’s larger point is that, for example, if burglary is profitable (perhaps punished by a $50 fine, even when the burglar has already sold their loot for $1500), then a child who has an uncle who has figured out this weird/rare trick and makes a living burgling homes will see an uncle who is rich and has a nice life and gives lavish presents at Christmas and donates a lot to the church and is friends with the pastor… That kid will be likely to mimic that uncle without looking up any laws or anything.
Over a long period of time (assuming no change to the laws) the same dynamic in the minds of many children could lead to perhaps 5% of the economy becoming semi-respected burglars, though it would be easy to imagine that another 30% of the private economy would end up focused on mitigating the harms caused by burglary to burglary victims?
(Friedman does not apply the mimesis model to financial crimes, or risky banking practices. However that’s definitely something this theory of behavioral causation leads me to think about. Also, advertising seems to me like it might be a situation where harming random strangers in a specific way counts as technically legal, where the perpetration and harm mitigation of the act have both become huge parts of our economy.)
This theory probably under-determines the precise punishments that should be applied for a given crime, but as a heuristic it probably helps constrain punishment sizes to avoid punishments that are hilariously too small. It suggests that any punishment is too small if it allows a “viable life strategy” that includes committing a crime over and over and treating the punishment as a mere cost of business.
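The “mere cost of business” test can be put in expected-value terms. A minimal sketch using the hypothetical numbers from the burglar-uncle example ($1500 of loot, a $50 fine); the catch probability and the dollar cost of prison time are made-up assumptions purely for illustration:

```python
# Toy expected-value check for whether "burglary as a career" is viable.
# Loot and fine come from the example above; the catch probability and
# the cost of prison are illustrative assumptions, not real data.

def expected_payoff_per_burglary(loot, p_caught, punishment_cost):
    """Average gain per attempt: keep the loot, sometimes pay the punishment."""
    return loot - p_caught * punishment_cost

# Punished by a $50 fine, caught half the time: still hugely profitable.
fine_world = expected_payoff_per_burglary(1500, 0.5, 50)        # 1475.0

# Punished by something costing the burglar ~$50,000 (years of prison):
prison_world = expected_payoff_per_burglary(1500, 0.5, 50_000)  # -23500.0

print(fine_world, prison_world)
```

The mimesis-relevant quantity is just the sign: a positive number means the uncle visibly prospers and gets copied; a negative number means there is nothing enviable for children to imitate.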
If you sent burglars to prison for “life without parole” on first offenses, mimesis theory predicts that it would put an end to burglary within a generation or four, but the costs of such a policy might well be higher than the benefits.
(Also, as Friedman himself pointed out over and over in various ways, incentives matter! If, hypothetically, burglary and murder are BOTH punished with “life without parole on first offense” AND murdering someone makes you less likely to be caught as a burglar, then murder-plus-burglary might be the crime that gets mimetically generated: the pair of crimes could be mimetically viable even when burglary alone is not… If someone were trying to use data science to tune punishments to suppress anti-social mimesis, they should really be tuning ALL the punishments and keeping careful and accurate track of the social costs of every anti-social act as part of the larger model.)
In reality, it does seem to me that mimesis is a BIG source of valid and useful rationality for getting along in life, especially for humans who never enter Piaget’s “Stage 4” and start applying formal operational reasoning to some things. It works “good enough” a lot of the time that I could imagine it being a core part of any organism’s epistemic repertoire?
Indeed, entire cultures seem to exist where the bulk of humans lack formal operational reasoning. For example, anthropologists who study such things often find that traditional farmers (which was basically ALL farmers, prior to the enlightenment) with very clever farming practices don’t actually know how or why their farming practices work. They just “do what everyone has always done”, and it basically works...
One keyword that offers another path here is one Piaget himself coined: “genetic epistemology”. This wasn’t meant in the sense of DNA, but rather in the sense of “generative”, like “where and how is knowledge generated”. I think stage 4 reasoning might be one real kind of generator (see: science and technology), but I think it is not anything like the most common generator, neither among humans nor among other animals.
I can see two senses for what you might be saying…
I agree with one of them (see the end of my response), but I suspect you intend the other:
First, it seems clear to me that the value of a philosophy early on is a speculative thing, highly abstract, oriented towards the future, and latent in the literal expected value of the actions and results the philosophy suggests and envisions.
However, eventually, the actual results of actual people whose hands were moved by brains that contain the philosophy can be valued directly.
Basically, the value of the results of a plan or philosophy screens off the early expected value of the plan or philosophy… not entirely (because it might have been “the right play, given the visible cards”, with the deal revealing low probability outcomes). However, bad results provide at least some Bayesian evidence of bad ideas without bringing more of a model into play.
So when you say that “the actual values of transhumanism” might be distinguished from less abstract “things done in the name of transhumanism” that feels to me like it could be a sort of category error related to expected value? If the abstraction doesn’t address and prevent highly plausible failure modes of someone who might attempt to implement the abstract ideas, then the abstraction was bad.
(Worth pointing out: The LW/OB subculture has plenty to say here, though mostly by Hanson, who has been pointing out for over a decade that much of medicine is actively harmful and exists as a costly signal of fitness as an alliance partner aimed at non-perspicacious third parties through ostensible proofs of “caring” that have low actual utility with respect to desirable health outcomes. Like… it is arguably PART OF OUR CULTURE that “standard non-efficacious bullshit medicine” isn’t “real transhumanism”. However, that part of our culture maybe deserves to be pushed forward a bit more right now?)
A second argument that seems like it could be unpacked from your statement, that I would agree with, is that well formulated abstractions might contain within them a lot of valuable latent potential, and in the press of action it could be useful to refer back to these abstractions as a sort of True North that might otherwise fall from the mind and leave one’s hands doing confused things.
When the fog of war descends, and a given plan seemed good before the fog descended, and no new evidence has arisen to the contrary, and the fog itself was expected, then sticking to the plan (however abstract or philosophical it may be) has much to commend it :-)
If this latter thing is all you meant, then… cool? :-)
Has someone been making bad criticisms of transhumanism lately?
In 2007, when this was first published, I think I understood which bravery debate this essay might apply to (/me throws some side-eye in the direction of Leon Kass et al), but in 2018 this sort of feels like something that (at least for a LW audience I would think?) has to be read backwards to really understand its valuable place in a larger global discourse.
If I’m trying to connect this to something in the news literally in the last week, it occurs to me to think about He Jiankui’s recent attempt to use CRISPR technology to give HIV-immunity to two girls in China, which I think is very laudable in the abstract but also highly questionable as actually implemented based on current (murky and confused) reporting.
Basically, December of 2018 seems like a bad time to “go abstract” in favor of transhumanism, when the implementation details of transhumanism are finally being seriously discussed, and the real and specific challenges of getting the technical and ethical details right are the central issue.
One thing to keep in mind is sampling biases in social media, which are HUGE.
Even if we just had pure date ordered posts from people we followed, in a heterogeneous social network with long tailed popularity distributions the “median user” sees “the average person they follow” having more friends than them.
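This is the “friendship paradox”, and it falls out of any network with a skewed degree distribution. A minimal simulation, using a preferential-attachment graph as a stand-in for a real social network (all sizes and parameters are illustrative):

```python
# The "your friends have more friends than you" effect: sampling a random
# neighbor is size-biased toward popular nodes, so the average neighbor
# has a higher degree than the average node.
import random

random.seed(0)

# Build a graph by preferential attachment: popular nodes get more popular.
edges = [(0, 1)]
targets = [0, 1]  # each node appears once per edge endpoint
for new in range(2, 2000):
    old = random.choice(targets)  # attach proportionally to degree
    edges.append((new, old))
    targets += [new, old]

degree = {}
for a, b in edges:
    degree[a] = degree.get(a, 0) + 1
    degree[b] = degree.get(b, 0) + 1

nodes = list(degree)
mean_degree = sum(degree.values()) / len(nodes)

# Average degree of a random *neighbor* (i.e. "someone you follow"):
neighbor_degrees = [degree[b] for a, b in edges] + [degree[a] for a, b in edges]
mean_neighbor_degree = sum(neighbor_degrees) / len(neighbor_degrees)

print(mean_degree, mean_neighbor_degree)  # the neighbor average is higher
```

So even a chronologically honest feed, with no ranking algorithm at all, would still show the median user a systematically unrepresentative sample of humanity.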
Also, posting behavior tends to also have a long tail, so sloppy prolific writers are more visible than slow careful writers. (Arguably Asimov himself was an example here: he was *insanely* prolific. Multiple books a year for a long time, plus stories, plus correspondence.)
Then, to make the social media sampling challenges worse, the algorithms surface content to mere users that is optimized for “engagement”, and what could be more engaging than the opportunity to tell someone they are “wrong on the Internet”? Unless someone is using social media very *very* mindfully (like trying to diagonalize what the recommendation engines think of them) they are going to gravitate toward whatever causes them to react.
I don’t know what is really happening to the actual “average mind” right now, but I don’t think many other people know either. If anyone has strong claims here, it makes me very curious about their methodology.
The newsfeed team at Facebook probably has the data to figure a lot of this out, but there is very little incentive for them to be very critical or tell the truth to the public. However, in my experience, the internal cultures of tech companies are often not that far below/behind the LW zeitgeist and I think engineering teams sometimes even go looking for things like “quality metrics” that they can try to boost (counting uses of the word “therefore” or the equivalent idea that uses semantic embedding spaces instead) as a salve for their consciences.
More deeply, like on historical timescales, I think that repeated low level exposure to lying liars improves people’s bullshit detectors.
By modern standards, people who first started listening to radio were *insanely gullible* in response to the sound of authoritative voices, both in the US and in Germany. Similarly for TV a few decades later. The very first ads on the Internet (primitive though they were) had incredibly high conversion rates… For a given “efficacy” of any kind of propaganda, more of the same tends to have less effect over time.
I fully expect this current media milieu to be considered charmingly simple, with gullible audiences and hamhanded influence campaigns, relative to the manipulative tactics that will be invented in future decades, because this stuff will stop working :-)
(You might think meta-iteration involves making the other player forget what it learned in iterated play so far, so that you can re-start the learning process, but that doesn’t make much sense if you retain your own knowledge; and if you don’t, you can’t be learning!)
If I was doing meta-iteration my thought would be to maybe turn the iterated game into a one-shot game of “taking the next step from a position of relative empirical ignorance and thereby determining the entire future”.
So perhaps make up all the plausible naive hunches that I or my opponent might naively believe (update rules, prior probabilities, etc), then explore the combinatorial explosion of imaginary versions of us playing the iterated game starting from these hunches. Then adopt the hunch(es) that maximizes some criteria and play the first real move that that hunch suggests.
This would be like adopting tit-for-tat in iterated PD *because that seems to win tournaments*.
After adopting this plan your in-game behavior is sort of simplistic (just sticking to the initial hunch that tit-for-tat would work) even though many bits of information about the opponent are actually arriving during the game.
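The “enumerate hunches, run the imagined tournament, commit to the winner” move can be sketched directly. Here the candidate hunches are a tiny pool of standard iterated Prisoner’s Dilemma strategies with the usual toy payoffs; notably, with such a small and naive pool the pure defector wins the tournament, which is itself a nice illustration that the meta-level answer depends entirely on which hunches you imagine populating the field (Axelrod’s tournaments had a much richer one):

```python
# "Meta-iterate, then commit": run an all-play-all tournament among
# candidate strategies, adopt whichever scores best, then just play it.
# Strategy pool and payoff matrix are the standard toy versions.

PAYOFF = {('C', 'C'): (3, 3), ('C', 'D'): (0, 5),
          ('D', 'C'): (5, 0), ('D', 'D'): (1, 1)}

def always_c(my_hist, their_hist): return 'C'
def always_d(my_hist, their_hist): return 'D'
def tit_for_tat(my_hist, their_hist): return their_hist[-1] if their_hist else 'C'
def grudger(my_hist, their_hist): return 'D' if 'D' in their_hist else 'C'

STRATEGIES = {'always_c': always_c, 'always_d': always_d,
              'grudger': grudger, 'tit_for_tat': tit_for_tat}

def play(s1, s2, rounds=100):
    h1, h2, score1, score2 = [], [], 0, 0
    for _ in range(rounds):
        m1, m2 = s1(h1, h2), s2(h2, h1)
        p1, p2 = PAYOFF[(m1, m2)]
        h1.append(m1); h2.append(m2)
        score1 += p1; score2 += p2
    return score1, score2

# The "zeroth move": pick the hunch that wins the imagined tournament.
totals = {name: 0 for name in STRATEGIES}
for n1, s1 in STRATEGIES.items():
    for n2, s2 in STRATEGIES.items():
        if n1 < n2:  # each pair plays once
            a, b = play(s1, s2)
            totals[n1] += a
            totals[n2] += b

best = max(totals, key=totals.get)
print(totals, best)
```

After this zeroth move the in-game behavior is exactly as described above: simplistic, ignoring the bits of information arriving during play, because the adaptation already happened at the meta level.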
If I try to find analogies in the real world here it calls to mind martial arts practice with finite training time. You go watch a big diverse MMA tournament first. Then you notice that grapplers often win. Meta-iteration has finished and then your zeroth move is to decide to train as a grappler during the limited time before you fight for the first time ever. Then in the actual game you don’t worry too much about the many “steps” in the game where decision theory might hypothetically inject itself. Instead, you just let your newly trained grappling reflexes operate “as trained”.
Note that I don’t think this is even close to optimal! (I think “Bruce Lee” beats this strategy pretty easily?) However, if you squint you could argue that this rough model of meta-iteration is what humans mostly do for games of very high importance. Arguably, this is because humans have neurons that are slow to rewire for biological rather than epistemic reasons...
However, when offered the challenge that “meta-iteration can’t be made to make sense”, this is what pops into my head :-)
When I try to think of a more explicitly computational model of meta-iteration-compatible gaming my attention is drawn to Core War. If you consider the “players of Core War” to be the human programmers, their virtue is high quality programming and they only make one move: the program they submit. If you consider the “players of Core War” to be the programs themselves their virtues are harder to articulate but speed of operation is definitely among them.
Paul, I love what you’re doing here, have been thinking about this a long time. I look forward to seeing an answer and would like to write a clarifying essay full of non answers :-)
By “get our attention” I mean: be interesting enough that we would already have noticed it and devoted some telescope time to looking in more detail at that part of the sky. (Once they have our attention it seems significantly cheaper to send a message.)
This suggests that we can list various anomalies that might have been thought to be extraterrestrials and already received attention, and then exclude them for various reasons.
1. For example, Tabby’s Star recently had me wondering/hoping/worrying for a good year or two.
It is only 1,280 light years from Earth, and I think it is plausible that we wouldn’t even be able to see similar stars on the far side of our own galaxy, which is a mere ~100k light years in diameter… it can’t count for this exercise because seeing it from other galaxies would be quite a trick.
HOWEVER, despite being an F type star (which shouldn’t be variable) that varies in very irregular ways, it was interesting enough to raise $100k on Kickstarter for telescope time, and to deserve its own feed. I think people are pretty sure it is natural at this point, with a probable case of “indigestion” from the star colliding with a metallic planet in the last 10k years or so.
However, the fact that it got our attention means someone might do that to one planet/star combo like clockwork, every 1000 years in a regularly spaced line of stars.
It could work as a local “we exist” signal whose clocklike timing would count as the signature of intentional planning and sort of function like an invitation to show up at the logical NEXT star in the timed “indigestion collision” sequence to watch the collision and parley with whoever else showed up…
However, I don’t think these events would be bright enough for the weird question?
(This does raise the question as to what counts as a “message” and what the bitrate of said message is allowed to be? Is a valid message just “this was intentionally created”, or “this was intentionally sent”, or “here is a place that will be interesting at a future time” or something even more than that? Also, what if the evidence of intentionality comes from a coincidence of timing spread across spans of time that requires detailed astronomical records for longer than humans seem to be able to maintain political or cultural or linguistic institutions?)
2. In 1967 pulsars caused people to be very excited for a short period of time, thinking that such regularity must be intentional. However, it was then worked out that pulsars were just spinning charged neutron star remnants left over from supernovas. Still, they are pretty great natural clocks ;-)
This might make them a great “medium” in which to encode intentionality, but it means you have to modulate or sculpt them somehow so that when alien astronomers get interested they can see a deviation from what’s natural.
Another problem is that they are highly directional, with most of the energy going out of their wobbling north and south poles (which when they wobble across your telescope is one of the pulses), so they don’t signal very widely.
Another problem is that they aren’t actually very bright. We see them in the Milky Way, and in our galactic neighbor the Large Magellanic Cloud, but finding an unusually bright pulsar 2 million light years away in Andromeda was newsworthy. In 2003 McLaughlin and Cordes tried to find very bright pulsars further afield and maaaaybe got a hit in M33 (aka “The Triangulum Galaxy”) which is only 3M light years away. But seeing these things from 8000M light years away is highly questionable.
Binary pulsars are more rare and more likely to get scientific attention.
The first binary pulsar, discovered in 1974, won the 1993 Nobel in physics for Taylor and Hulse. By 2005 there were 113 discovered. They are interesting because they modulate the “clock” dynamics inherent to singleton pulsars.
Binary pulsars tick faster when coming towards you and tick slower when moving away, so the orbital parameters of the system can be characterized precisely just from the timing of the ticks. These orbital parameters measurably change on the timescale of human lives, slowing down in a way that can be naturally interpreted as indirect proof that gravitational waves exist and are pulling energy out of such massive systems :-)
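To first order the “modulated clock” is just the non-relativistic Doppler formula: the observed pulse period is the intrinsic period stretched by the radial velocity, P_obs = P × (1 + v_r/c). A sketch with a Hulse–Taylor-like pulse period and an illustrative (made-up) orbital radial velocity:

```python
# Binary-pulsar clock modulation via first-order Doppler shift.
# The pulse period is roughly Hulse-Taylor-like; the radial velocity
# swing is an illustrative assumption.

C = 299_792_458.0  # speed of light, m/s

def observed_period(intrinsic_period_s, radial_velocity_ms):
    """Non-relativistic Doppler: positive velocity = receding = slower ticks."""
    return intrinsic_period_s * (1 + radial_velocity_ms / C)

P = 0.059  # pulse period in seconds

# Orbital radial velocity swinging between roughly +/- 200 km/s:
approaching = observed_period(P, -200_000)
receding = observed_period(P, +200_000)

print(approaching, receding)  # ticks faster approaching, slower receding
```

The fractional shift here is v/c ≈ 7×10⁻⁴, tiny but easily measurable given how stable pulsar clocks are, which is why the orbit can be solved from timing alone.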
If you wanted to catch someone’s attention you might construct or find a three star system that included a pulsar aimed the way you wanted to send a message, and then mess with the orbital parameters intentionally.
Non hierarchical three star systems are chaotic by default and well understood chaotic systems can be controlled with surprisingly little energy which might make something like this attractive.
A probable hierarchical trinary-with-a-pulsar (and so not necessarily chaotic) that includes a sun-like star was surveyed in 2006. The third star is not totally confirmed, and even if it exists the arrangement here is more like a binary system, where one of the binaries has a large planet/star/thing orbiting it alone (hence “hierarchical” and hence probably not chaotic).
There is another pulsar trinary that might be chaotic found in 2014. These things tend not to last however, because “chaos”.
Those are the only two I know of. I’m pretty sure the trinaries are being examined “because physics” but I’ve heard no peeps about unusual patterns of timing from them. But still, no matter how many neighbors pulsars have, they are fundamentally too dim and too directional to count as part of an answer to the weird question here I think...
3. The 234 stars that might be called “Borra’s Hundreds” can probably also be discounted directly because at best, if these are signaling extraterrestrials, then they are just using puny pulsed lasers with roughly our own planet’s industrial energy outputs, in more or less the visible spectrum (blockable by dust), which probably doesn’t count because it obviously can’t be seen from somewhere far away like the Sloan Great Wall.
The idea, initially articulated by Ermanno Borra in 2010 as I minimally understand it, is that a laser could shoot out light of nearly any frequency (frequency as given by the wavelength of individual photons). But if we or aliens could pulse the quantity of photons sent out fast enough, the pulsing would be visible to the typical methods for measuring the “frequency of light from a star” in standard spectrographic surveys, whose intentional goal is to figure out the atomic constituents of stars from the wavelengths (and hence the frequencies) of the specific photons they emit. The methods aren’t looking for very fast pulses of more and then fewer photons, but they could nonetheless see them by “accident”.
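The detection side of this is essentially a Fourier analysis of the recorded spectrum: a rapidly pulsed source imprints a periodic ripple across the spectrum, and a transform makes the ripple pop out as a sharp peak. A toy version with an injected ripple; the ripple period, amplitude, and noise level are all made up, and this is my sketch of the general technique, not Borra’s actual pipeline:

```python
# Toy Borra-style search: inject a small periodic modulation into a
# noisy "spectrum", then recover it with a plain DFT (O(N^2) is fine
# for a demo). All signal parameters are illustrative assumptions.
import cmath, math, random

random.seed(1)

N = 512
ripple_period = 16  # in spectral bins -- the injected "signal"
spectrum = [1.0 + 0.05 * math.cos(2 * math.pi * k / ripple_period)
            + random.gauss(0, 0.01)
            for k in range(N)]

mean = sum(spectrum) / N
power = []
for f in range(1, N // 2):
    z = sum((spectrum[k] - mean) * cmath.exp(-2j * math.pi * f * k / N)
            for k in range(N))
    power.append((abs(z), f))

strength, freq = max(power)  # the modulation stands far above the noise
print(freq, N / freq)        # strongest frequency and its period in bins
```

The point of the 2012 paper, as I read it, is exactly this cheapness: the survey data already exists, and the search is one extra transform per star.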
In 2012, Borra tried to explain it again and spelled out more of the connections to SETI, basically saying that formal SETI was doing one thing, but spectrographic star surveys were better funded and you could do SETI there too just by processing the exact same data through another filter to make the possible injected signals pop out.
Aliens seeking to be discovered would know anyone smart would do spectrographic surveys of the stars, so that would be an obvious place to try to put a signal.
Then in 2016 Borra published again, now with Trottier as a coauthor, saying that he’d gone ahead and looked at archival spectral data, and found 234 stars that seemed to be sending out “peculiar periodic spectral modulations” of the sort that he predicted… unless the recorded version of the data had frequency artifacts in it?
As summarized by Snopes (normally a good source) the claim is disregarded but all the criticisms are status attacks rather than attending to any kind of object level analysis of the math, the physics, or the collected data.
The BEST argument against Borra is one I’ve almost never seen leveled, which is that the data processing method involved complex math, and had error bars, and they analyzed 2.5 million stars and only found 234 results. This makes me instantly wonder: data mining artifact?
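The multiple-comparisons worry is easy to quantify: with millions of stars, even a tiny per-star false-positive rate manufactures hits. The rate below is a pure assumption chosen for illustration, not a claim about Borra’s actual pipeline:

```python
# The "data mining artifact?" worry, quantified. If each star has even a
# small chance of being flagged by noise alone, a 2.5-million-star survey
# produces a pile of spurious detections. The rate is a hypothetical.
n_stars = 2_500_000
false_positive_rate = 1e-4  # assumed: 1-in-10,000 stars flagged by chance

expected_false_hits = n_stars * false_positive_rate
print(expected_false_hits)  # 250.0 -- roughly the 234 actually reported
```

Which is exactly why someone would need to work through the error bars in the actual analysis before the 234 hits count as evidence of anything.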
But in that case you’d expect someone to make this argument seriously and explain in detail how the math went wrong somewhere? I don’t get it.
Maybe people think that lasers that blink with a terahertz frequency are impossible because of “laser physics” or something? But no one seems to have raised this objection. And it seems to me like it might be possible to do this just from having a normal continuous laser and then spin something very very fast that periodically blocks the light coming out of the laser? I’m not a laser engineer, I don’t know, it just seems weird to me that I’ve seen no speculation one way or another.
I’ve tried googling the coordinates of the stars Borra found and none of them have wikipedia pages, Google sends all the searches for the stellar coordinates back to Borra’s own paper. I don’t know how many light years away any of them are.
There’s no kickstarter. The normal SETI people at UC Berkeley eventually, in October of 2016, agreed to look at a few of Borra’s stars but you could see their heart wasn’t in it. There’s been no word since then.
However, despite humans being boring and uninterested in important things, what about a generalization of this method! :-)
(EDIT NOTE: In the first draft I had text here where I imagined Niven’s fictional Ringworld made out of an impossible super material and then suggested modifications to create a “flicker ring” that could spin around a star and make the star appear to blink at spectral frequencies from certain perspectives. My optical reasoning was ludicrously wrong in the first draft, built around how things would be seen from very close rather than very far. Even with the hypothetical magic substance “scrith” a flicker ring big enough and fast enough to look right at a vast distance would be impossible. The material would have to be many orders of magnitude more magical than scrith to work in this capacity.)
4. Hoag’s Object is pretty fascinating and fascinatingly pretty.
Sometimes I wonder if the only reason we don’t believe in aliens yet is some kind of social signaling equilibrium similar to plate tectonics.
In 1915 Wegener was like “Duh, the continents obviously line up like a jigsaw puzzle” and people were like “No way!” and then 50 years later they were like “Oh, yeah, I guess so, funny how this is obvious to kids now but wasn’t obvious to fancy scientists in 1890...”
If there are “Hoagians” shepherding all the stars in their galaxy into a pretty ring as a collective art project (or maybe just to prevent expensive damaging collisions?), that would be pretty epic.
In terms of the weird question however, the problem is that Hoag’s Object is only 9M light years away (vs Andromeda’s 2M), and that’s part of why we easily see it. Picking it out uniquely from 8000M light years away would be a totally other thing. Also, it is only visible if you see it from the poles rather than the edges, which is another reason it isn’t a very good universal signal.
5. Black hole collisions have never been attributed to aliens, to my knowledge. However, they are obviously big and awesome and get a lot of news. If you could survey moderately sized black holes in your galaxy and nudge them around in a controlled way you might have a partial solution? Timed collisions would be hard to deny were aliens I think. Imagine:
Chirp! (then wait 16.30 days)
Chirp! (2.32 days) Chirp! (then wait another 16.30 days)
Chirp! (2.32 days) Chirp! (2.32 days) Chirp!
You going to tell me that’s not an intentional “here I am!” signal? You can’t! :-P
From a long term signaling perspective (like to break through the Fermi Paradox by visibly declaring once and for all “intelligence existed!” before the Great Filter gets you) the problem here would be that this would be a one time signal that only communicates to a small shell of stars a precise distance away.
Many such events could have occurred before humans could hear them, and many might exist after we go extinct, with us none the wiser :-/
6. Gamma Ray Bursts are more usually associated with death than life. Basically they are so bright that they would probably cause mass extinctions in their home galaxies.
However, if you could figure out a way to cause them (not that hard? just crash neutron stars into each other in head on collisions?) and somehow survive a series of six-ish closely timed blasts then it could work like black holes, but way more obvious. No theory of relativity is even required to know to build a gravity wave detector! Black holes are still probably better in terms of style points, because their collisions don’t seem to cause mass extinctions :-P
Anyway, my point is that all of these are things that have already come to mainstream scientific human attention and caused lots of exploratory interest and analysis.
ALSO, all of them have been more or less dismissed by mainstream astronomers as being conclusive evidence of extraterrestrial civilizations.
ALSO, I don’t instantly see super obvious ways to twist any of these things around to function as a clean cut answer to the weird question where a short-lived Kardashev Type III species with our physics and material science (but better and more manufacturing capacity) could set something up, have it persist after the Great Filter gets them, and signal to everyone forever.
I’m sure this day will be remembered in history as the day that LessWrong became great again!
Your experimental results might be indicative of something other than problems merely within LW...
I decided to test the hypothesis that LessWrongers practice weak scholarship in regards to jargon. In particular, that for many important terms the true source of knowledge has not been transmitted to community members. [bold added]
The problem here is that a better reference group than “LessWrongers” might be “scientists”?
Or perhaps the group of “scholars” (understood as all the scientists, plus all the people “not doing real science” per whatever weird definition someone has for calling something “science”), or perhaps even the still larger category of “humans”?
There is a generalized problem with scholarship related cognition in the widespread failure of humans to remember the source of the contents of their minds. Photographs of events you weren’t even alive for become vague visual memories. Hearsay becomes eyewitness report. Fishy stories from people you know you shouldn’t trust become stories you don’t remember the source of… and then become things you weakly believe… basically: in general, by default, human minds are terrible at retaining auditable fact profiles.
But suppose that we don’t expect that much of generic humans, and only hold scientists to high intellectual standards?
Still a no go!
As per Stigler’s Law Of Eponymy there are almost no laws which were actually named after their (carefully searched for) originators! The general pattern is similar to art: “Good scientists borrow, great scientists steal.”
In practice, the thing that will be remembered by large groups of people is good popularization, especially when a well received version keeps things simple and vivid and doesn’t even bother to mention the original source.
If LW can fix this, it will be doing something over and above what science itself has accomplished in terms of scholarly integrity. (Whether this will actually help with technological advances is perhaps a separate question?)
For an example here, I know about “ugh fields” because I invented that term and know the details of its early linguistic history.
1. The coining in this case preceded the existence of the overcomingbias blog by a few years… it was coined in conversations in the 2001-2003 era in and around College of Creative Studies (CCS) seminars at UC Santa Barbara (UCSB) between me and friends, some of whom later propagated the term into this community.
My use of the term was aimed at describing the subjective experience of catastrophic procrastination along with some causal speculation. It seemed that mild anxiety over a looming deadline could cause mild diversion into a nominally anxiety ameliorating behavior like video games… which made the deadline situation worse… and thereby turned into a positive feedback loop of “ugh”. These ugh fields would feel as if they had an external source whose apparent locus is “the deadline”, with the amount of ugh increasing exponentially as the deadline gets closer and closer.
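The positive-feedback claim can be stated as a toy dynamic: anxiety drives avoidance, avoidance wastes a day, a worse situation drives more anxiety, so the level compounds multiplicatively toward the deadline. All constants here are made up for illustration; this is a cartoon of the subjective experience, not a psychological model:

```python
# Toy model of the "ugh field" feedback loop: each day of avoidance
# multiplies the next day's anxiety, producing exponential growth as
# the deadline approaches. The gain parameter is purely illustrative.

def ugh_trajectory(days_to_deadline, gain=1.4, anxiety=1.0):
    """Anxiety compounding multiplicatively as the deadline approaches."""
    levels = []
    for _ in range(days_to_deadline):
        levels.append(anxiety)
        anxiety *= gain  # avoidance makes tomorrow's situation worse
    return levels

levels = ugh_trajectory(10)
print(levels[0], levels[-1])  # roughly 20x growth over ten days
```

Any gain above 1.0 produces the exponential blow-up; the felt “field” is just the late, steep part of the curve.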
(I failed a class or two back then more or less because of this dynamic until I restructured my soul into a somewhat more platonically moderate pattern using Allan Bloom’s translation of The Republic as my inspiration. Basically: consciously locally optimized hedonism has potentially unrecoverable failure modes and should be used with caution, if at all. Make lists! Perhaps amortize hedonism over times equal to or greater than your personal budgeting cycle? Or maybe better yet try to slowly junk hedonism in favor of duty and virtue? Anyway. This is a WIP for me still...)
2. Two of my friends from UCSB (Anna and Steve) were part of the conversations about me failing classes at UCSB and working out a causal model thereof, and in roughly 2008 brought the term to “Benton House” (which was the first “rationalist house” wherein lived participants in “the visiting fellows program” of the old version of MIRI which was then called “the Singularity Institute for Artificial Intelligence (SIAI)”).
3. The term then propagated through the chalkboard culture of SIAI (and possibly into diaspora rationalist houses?) and eventually the concept turned into a LW post. The new site link for this post doesn’t work at the moment that I write this, but archive.org still remembers the 2010 article when I said of “ugh fields”:
It is a head trip to see a pet term for a quirk of behavior reflected back at me on the internet as an official name for a phenomenon.
4. And the term keeps rolling around. It basically has a life of its own now, accreting hypothetical mechanisms and stories and interpretations as it goes.
It would not surprise me if some academic (2 or 10 or 50 years from now) turns it into a law and the law gets named after them, in fulfillment of Stigler’s Law :-P
The core thing I’m trying to communicate is that humans in general can only think sporadically, and with great effort, and misremember almost everything, and especially misremember sources/credit/trust issues. The world has too many details, and neurons are too expensive. External media is required.
LessWrongers falling prey to attribution failures is to be expected by default, because LessWrong is full of humans. The surprising thing would be generally high performance in this domain.
My working understanding is that many of the original English-language Enlightenment folks were mindful of the problem and worked to deal with it by mostly distrusting words and instead constantly returning to detailed empirical observations (or written accounts thereof), over and over, at every event where it was hoped that true knowledge of the world might be “verbally” transmitted.
London, New York, and nine full-time employees in the NYT media orbit… updated!
I see below that you’re aiming for something like “fear in political situations”. This calls to mind, for me, things like the triangle hypothesis, the Richardson arms race model, and less rigorously but clearly in the same ambit also things like confidence building measures.
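(For concreteness, the Richardson arms race model is a standard pair of coupled linear equations; the sketch below uses a generic textbook form, with coefficients invented for illustration rather than calibrated to any real conflict.)

```python
# Richardson arms race model as coupled difference equations (Euler steps).
# All coefficients here are illustrative, not fitted to any real data.
def richardson_step(x, y, a=0.5, b=0.5, m=0.2, n=0.2, g=1.0, h=1.0, dt=0.1):
    """One step of Richardson's model: each side's arms level grows in
    response to the rival's level (a, b), decays under its own cost and
    fatigue (m, n), and is pushed by standing grievances (g, h)."""
    dx = a * y - m * x + g
    dy = b * x - n * y + h
    return x + dt * dx, y + dt * dy

x, y = 1.0, 1.0
for _ in range(100):
    x, y = richardson_step(x, y)
# With these coefficients, mutual reactivity (a*b) exceeds mutual
# restraint (m*n), so the race runs away: both x and y keep growing.
```

The model’s classic result is the stability condition m·n > a·b; below that threshold it predicts an unbounded spiral, which is one formal way of cashing out “fear in political situations”.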
These are tough topics and I can see how it might feel right to just “publish something” rather than sit on one’s hands. I have the same issue myself (minus the courage to just go for it anyway) which leads me mostly to comment rather than top post. My sympathy… you have it!
Uh… I can try to unroll the context and thinking I guess..
I think in my head I initially associated the name with childhood memories of a vaguely-remembered investigative TV news program that was apparently founded in 1986.
Also, it appears to be the name of an entire genre of magazines that includes things like the New Statesman, which makes it a bit tricky to google for details about the thing itself, rather than the category of the same name.
It seemed plausible to me, given the general collapse of the journalism industry, that the old 1990′s brand still existed, had moved to the Internet, mutated extensively, and was now reduced to taking potshots at people like Scott in order to drum up eyeballs?
(Plausibly the website could be co-branded with a TV version still eking out some sort of half life among the cable TV channels with 3 or 4 digit numbers, that could trace its existence back to 1986?)
None of what seemed plausible to me is actually true.
The old thing named Current Affairs apparently died in 1996, and was briefly revived in 2005 and then died again. The new thing started in 2015, and has nothing to do with the old thing.
Since I was surprised by the recency of the founding of the new incarnation of “something named Current Affairs” it seemed to me that other people might be confused too, so I linked to the supporting evidence.
Also, when Scott speaks indirectly of the callout, he makes a “request not to be cited in major national newspapers”. But the name here is so maddeningly generic that I have difficulty even Googling my way to reliable circulation numbers.
Is it actually major? Do they even have a paper print format? I’m still not sure, and don’t really care. Maybe Scott was fooled into thinking they matter too at first?
Basically, my model at this point, given the paucity of hard data, is that this new Current Affairs could easily be nothing like a “major national newspaper” but rather it could just be like two or three yahoos in a basement struggling to be professional journalists in an age when professional journalism is dying, and finding that they have to start trolling virtuously geeky bloggers to stir up drama and attract eyeballs to their website to make ends meet.
The circulation numbers and actual ambient reputation potentially matter, because if they are very low then who cares if some troll hasn’t read Scott’s old essay very carefully, but if many high quality eyeballs were reading the inaccurate summary and criticism, then the besmirching insinuations could hurt Scott.
In the meantime, maybe this will be the beginning of a beautiful friendship. When strangers get into fights in real life, it isn’t totally uncommon for them, years later, to end up great friends who know each other’s true measure :-)
I appreciate that you’re asking at a very “high level of meta” about a controversial topic.
Also, I appreciate that you helped me to know that something had even happened. I read Scott’s original article back when it was fresh, but the Robinson piece wasn’t on my radar until I searched for Scott’s rebuttal on the basis of the question and found a link back to it.
I’m still not sure if I understand all the ins and outs here, but I will say that this is a complex topic which I personally avoid writing about because in many ways I’m sort of a coward...
However Scott reads to me as grappling with complicated ideas, in public, against his own interests, in a basically admirable way, while Robinson reads to me as having had to push some content out on a deadline (with a larger goal of trying to get his readers to buy the topmost book in the image at the end of his article).
I sympathize with Scott having been dissed in a magazine whose name suggests falsely that it has a long history and thus having been put in a position to either (1) defend himself and give the upstart that is insulting him the attention which was probably the point of the attack or (2) not defend himself.
I think Scott’s move of not putting his rebuttal on his own main page, but just putting it where it can be searched for (so it comes up as a defense if people search for the topic specifically, but doesn’t move a lot of eyeballs) and running the URL through donotlink.it was quite smart. He appears to understand how he’s being trolled and is responding in a way that navigates it pretty well :-)
Cybernetic polytheism is hard to do right, because you have to have a strong sense of cybernetics first. You need to understand and explore the center and the edges of a large scale optimization dynamic, explore the empirical details it entails, and generally get a scientific understanding of it… then, for lulz, you might name it and personify it.
“Evolution” is a good example. This process is instantiated in biology. It operates over heritable patterns of deoxyribonucleic acid whose transcription into protein by living cells constructs new cells and agglomerations of cells in the shape of bacteria and macroscale organisms… each with basically the same DNA as before, but with minor variations. There is math here: punnett squares, fixation, etc.
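(To make “fixation” concrete: a minimal neutral Wright-Fisher drift simulation, with arbitrary parameters, illustrating the textbook result that a neutral allele’s chance of fixing roughly equals its starting frequency.)

```python
import random

# Minimal neutral Wright-Fisher drift simulation (parameters arbitrary).
def wright_fisher(pop_size=50, p0=0.2, max_gens=10_000, rng=None):
    """Track one neutral allele until it fixes (frequency 1) or is lost
    (frequency 0); each generation resamples the whole population."""
    rng = rng or random.Random(0)
    count = int(p0 * pop_size)
    for _ in range(max_gens):
        if count in (0, pop_size):
            break
        p = count / pop_size
        count = sum(rng.random() < p for _ in range(pop_size))
    return count == pop_size  # True if the allele fixed

# Classic result: under pure drift, P(fixation) is about the starting
# frequency p0, so roughly 20% of these runs should end in fixation.
runs = [wright_fisher(rng=random.Random(seed)) for seed in range(200)]
frac_fixed = sum(runs) / len(runs)
```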
Now we could just leave it at that. The science is good enough.
But not everyone has time for the biology, or has the patience to learn the math. Also, the existence of biological structures has been attributed by non-biologists to gods with narrative character that doesn’t really map that well to the biological principles.
Thus there is a strong temptation to perform a narrative correction and offer “better theology” to translate the science into something with more cogent emotional resonances.
Like… species were not created by a benevolent watch maker who loves us. That’s crazy.
Actually, if biological nature (or biological nature’s author) has any moral character, that character is at least half evil. This entity thinks nothing of parasitism or infanticide, except to promote them if these processes produce more copies of DNA and censor them if they produce fewer copies of DNA.
It tries countless redundant experiments (the same mutation over and over again) that lead to both misery and death, but even calling these experiments is generous… there is almost no intentional pursuit of knowledge (although HSP genes are pretty cool, and sort of related), no institutional review boards to ensure the experiments are ethical, no grant proposals arguing in favor of the experiments in terms of the value of the knowledge they might produce.
Evolution, construed as a god, is a god we should fear and probably a god we should fight.
We can probably do better than it does, and if we don’t do better it will have its terrible way with us. Those who worship this god without major elements of caution and hostility are scary cultists… they are sort of selling their great-great-grandchildren into slavery to something that won’t reward them, and can’t possibly feel gratitude. A narrative from old school horror or science fiction, that matches the right general tone, is Azathoth.
But you can’t just make up the name Azathoth and say that it is a god and coin a bunch of other weird names, and make up some symbolic tools for dealing with them, and mix it together willy-nilly, and not mention biology or evolution at all.
You have to start with the science and end with the science.
Back in 2004-2005 (in a time I look back fondly on, because I was an OK kid) I was basically a naive techno-optimist about computers and software and AI, but I got seriously worried about Peak Oil.
All the muggles had a “policy level” understanding that the consumer energy economy (and everything in general) would be basically fine, but everyone I could find with a “gears level” understanding of fossil fuel economics was predicting some kind of doom. The futures markets basically said “in 2005, 2009, and 2019 OPEC will politically control the price of oil, and it will be ~$39 per barrel” but that didn’t make any object level sense when you dug into the details.
I went kind of crazy, trying to reconcile these things, and read a lot of object level quantitative anthropology trying to figure out whether I was crazy or everyone else was.
What ended up happening is that the economic/technological solution arrived late (but more or less “before serious collapse”, like failures of supply chains or the dissolution of traditional constitutions) and also Obama was elected in the midst of a relatively mild “financial collapse” that included oil prices spiking to over $120 per barrel (plus food riots in poor countries).
Since Obama was tribally blue (and the obvious corrective policies were tribally red) and elected with a mandate to solve “the Great Recession” he could get energy extraction reform in a way a red politician could never get away with.
Blue establishment activists objecting to backroom deals like this would be disloyal (only “outsider” ideological leftists, like those involved in the Dakota/Bakken/Standing Rock protests could pragmatically object), and red establishment activist networks were happy to unshackle the frackers and toss a regulatory bone to shale oil. By 2010 things were much less scary, and by 2013 the trajectory of US oil production had totally and dramatically deviated from the predictions inherent to the Hubbert’s Peak model of historical oil production.
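(For concreteness, Hubbert’s model treats cumulative extraction as a logistic curve, so the annual production rate is the bell-shaped logistic derivative; the parameters below are made up for illustration, not fitted to US production data.)

```python
import math

# Hubbert's model: cumulative production follows a logistic curve, so the
# annual rate is its bell-shaped derivative. Parameters are illustrative.
def hubbert_rate(t, ultimate=2000.0, peak_year=1970, steepness=0.05):
    """Annual production rate at year t (units arbitrary)."""
    e = math.exp(-steepness * (t - peak_year))
    return ultimate * steepness * e / (1 + e) ** 2

rates = {year: hubbert_rate(year) for year in range(1900, 2051)}
# The curve is symmetric around the peak: the model's 1980 output
# mirrors its 1960 output.
```

That symmetric post-peak decline baked into the curve’s shape is exactly what actual US production broke away from once fracking and shale oil scaled up.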
I consider the 2004-2013 period to have been very personally educational from a “theory of history” perspective :-)
My pet name for the hypothetical field (coined by Michael Flynn in the late 1980′s) is “cliology” (named after Clio the Muse of History), and one of many barriers to creating a sociologically viable community of cliology researchers (I’m tempted to call it the “Fundamental Hypothesis of Cliology” as a joke?) is that most major insights in this field are inherently useful for guiding investment and are thus hoarded within the investing class as “one-off trade secrets”.
The memetic incentives for serious public knowledge production in this domain would be extremely tricky to set up, and are unlikely to happen except via “great man” or “great circle” interventions. The Fundamental Hypothesis of Cliology suggests that Elon Musk could maybe do it, or a new “thing like the Vienna Circle” might be able to do it, but that’s more or less what it would take. Also, even after the initial “boost” from this effort, public research would stall and/or devolve the moment any critical subset of people died, or got day jobs, or got headhunted by a hedge fund, or whatever. The memetic incentive patterns would probably continue to hold for each incremental addition to the field, more or less forever?
So in 2250 (assuming technology keeps advancing and yet there are still autonomous mortal human-shaped minds with their hands on the reins of history) they might very well think that the causality of our period of history was quite retrospectively straightforward… but they will be treating insights that help uniquely predict 2280 (or whatever their window of prediction is) as trade secrets.
I really like this comment!
I think I see you calling explicit attention to your model of cognition, and how your own volitional mental moves interact with seemingly non-volitional mental observations you become aware of.
Then you’re integrating this micro-experimental data into an explanatory framework that implicitly acknowledges the possibility that your own model of yourself might be wrong, and even if it is right other people might work differently or have different observations.
I think that to get any sort of genuine, reproducible, safe, inter-subjectively validated meditative science that knows general laws of subjective psychology, it will involve conversations in this mode :-)
Etymologically, “meditation” comes from the Latin meditari, “to study”.
To make a “science word” we switch to ancient Greek, where “meletan” means “to study or meditate”. The three original “Boeotian muses” were memory (Mnemosyne, who is often considered the mother of them all), song (Aoede), and meditation (Melete)… so if a science existed here it might be called “meletology”?
A few times I’ve playfully used the term “meletonaut” to describe someone whose approach to the field is more exploratory than scholarly or experimental.
If I hear you correctly, in your cognitive explorations, you find that you can page through memories while watching yourself for symptoms of high “adrenaline” (by which I mean often actual adrenaline, but also the general constellation of “arousal” including heart rate and sweaty skin and probably cortisol and so on).
And then maybe when you think of yourself as “aware of your feelings” that phrase could be unpacked to say that you have a basically accurate metacognitive awareness of which memories or images cause adrenaline spikes, without the active metacognitive awareness itself causing an adrenaline spike.
So if someone accuses you of “causing feelings” you can defend yourself by saying the goal is actually to help people non-emotionally know what “causes them to have emotions” without actually “experiencing the feelings directly” except as a means of gathering emotional data.
I think I understand the basis of such defense, and the validity of the defense in terms of the real value of using this technique for some people.
My personal pet name for specifically this exploratory technique (which can be performed alone and appears to occur in numerous sociological and religious contexts) is “engram dousing”.
The same basic process happens in the neuro-linguistic programming (NLP) community as one step of a process they might call something like “memory reconsolidation”.
It also happens in Scientology, where instead of self-reported adrenaline symptoms they use an “e-meter” (to measure sweaty palms electronically) and instead of a two-person birthday circle they formalize the process quite a bit and call it an “audit”. In Scientology it is pretty clear they noticed how great this is as an introductory step in acquiring blackmail material and gaining the unjustified trust of marks (prior to headfucking them) and optimized it for that purpose.
Which is not to say that circling is as bad as scientology!
Also, apostate scientologists regularly report that “the tech” of scientology (which is scientology’s jargon term for all their early well scripted psychological manipulations of new members) does in fact work and gives life benefits.
With dynamite, construction workers could suddenly build tunnels through mountains remarkably fast so that trains and roads could go places that would otherwise have been economically impossible. Dynamite used towards good ends, with decent safety engineering and skill, is great!
But if someone wants to turn a garbage can upside down, strap a chair to it, and have me sit in the chair while they put a smallish, roughly measured quantity of dynamite under it… even if the last person in the chair survived and thought it was a wild ride and wants to do it again… uh… yeah… I would love to watch from a safe distance, but I think I’d pass on sitting in the chair.
And more generally, as an aspiring meletologist and hobbyist in the sociology of religion, all I’m trying to say is that engram dousing (along with some other mental techniques) is like “cognitive nuclear technology”, and circling might not be literally playing with refined uranium, but “the circling community in general” appears to have some cognitive uranium ore, and they’ve independently refined it a bit, and they’re doing tricks with it.
That’s all more or less great :-)
But it sounds like they are not being particularly careful, and many of them might not realize their magic rocks are powered by more than normal levels of uranium decay, and if they have even heard of Louis Slotin then they don’t think he has anything to do with their toy (uranium) pellets.
Ideally, everyone would have the opportunity to explore vulnerability carefully, step by step, with a skilled therapist or something to turn to if things ever got dicey.
I think this is an essential line, and a core problem. For more than a half century the social capital of the average person in the US has been falling and falling and falling. A therapist is sort of just a person you pay to pretend to be a genuine friend, without you having to reciprocate friendship back at them. That it is considered reasonable or ideal (as the first thought) to go to a paid professional to get basic F2F friend services is historically weird.
Maybe it is the best we can do, but… like… it didn’t use to be this way, I don’t think, and that suggests that it could be like it was in the past if we knew what was causing it.
I’m pretty sure these people don’t think that what they are doing “borrows from” hypnosis or trance or suggestibility hacking or mesmerism or whatever words you want to use for it.
Their emotions are high, caused by skillful intentional actions, and involve a general dynamic of “playing along” with numerous secondary “critical cognitive faculties” seemingly disengaged. Their focus is on their own feelings, and how their feelings feel, and so on. It isn’t that they don’t notice what’s directly happening to (and inside) them, it is that they notice very little else.
Maybe that’s great. Being in religions seems empirically to be somewhat positive for people?
Maybe the preacher there has studied hypnosis and optimized things for trance states… but I don’t think that would have been required for him to be interacting with more or less the same basic mechanisms in people’s cognitive machinery.
Those mechanisms are not particularly exotic or hard to mess with, but they cut directly to “goal-content integrity” and so caution is appropriate.
The details remind me a lot of hypnosis, with thoughts about thoughts, instead of just thinking things directly.
Breath. Body attention. Meta. Listen to the voice. Respond and receive. Be open to the update. Body attention. Meta. Listen to the voice. Everyone trancing themselves and everyone else in a fuzzy haze...
Or how about, actually, NO!
How about instead we try to ramp up our critical faculties and talk about models and evidence?
I do not trust casual hypnosis because hypnosis can become “not casual” very fast.
Hypnosis is a power tool and basically it is one of those “things I won’t work with” unless it is wartime and my side is losing and it seems highly relevant to victory. And it probably wouldn’t be my side I’d be hypnotizing, it would be the bad guys.
“We broke the rules, Harry,” she said in a hoarse voice. “We broke the rules.”
“I...” Harry swallowed. “I still don’t see how, I’ve been thinking but—”
“I asked if the Transfiguration was safe and you answered me! ”
There was a pause...
“Right...” Harry said slowly. “That’s probably one of those things they don’t even bother telling you not to do because it’s too obvious. Don’t test brilliant new ideas for Transfiguration by yourselves in an unused classroom without consulting any professors.”
Except there are no decent professors in this subject. (There were crazy CIA mind control experiments, but instead of publishing their results, the records were mostly purged in 1973.)
I’ve thought a lot about iterated chicken, especially in the presence of agent variations.
I suspect the local long term iteration between a rememberable (sub-Dunbar?) number of agents leads to pecking orders, and widespread iteration in crowds of “similarly different” agents leads to something like “class systems”.
For example, in the US, I think every human knows to get out of the way of things that look like buses, because that class of vehicles expects to be able to throw its weight around. Relatedly, the only time a Google car has ever been in a fender bender where it could be read as “at fault” using local human norms was when it was nosing out into traffic and assumed a bus would either yield or swing wide because of the car’s positional priority.
What have you noticed about Chinese traffic patterns? :-)
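(A toy sketch of the pecking-order intuition above: two agents repeatedly play chicken, winners get bolder, losers get meeker. The update rule and all parameters are invented purely for illustration.)

```python
import random

# Toy model of iterated chicken producing a dare/yield pattern.
# The update rule and all parameters are invented for illustration.
def iterated_chicken(rounds=100, boldness=(0.6, 0.4), lr=0.1, seed=0):
    """Two agents repeatedly play chicken, each daring with probability
    equal to its current 'boldness'. A small initial asymmetry tends to
    amplify into a stable dare/yield pattern, i.e. a pecking order."""
    rng = random.Random(seed)
    b = list(boldness)
    for _ in range(rounds):
        dare = [rng.random() < b[i] for i in (0, 1)]
        for i in (0, 1):
            j = 1 - i
            if dare[i] and not dare[j]:
                b[i] = min(1.0, b[i] + lr)  # faced the other down: bolder
            elif dare[j] and not dare[i]:
                b[i] = max(0.0, b[i] - lr)  # backed down: meeker
            elif dare[i] and dare[j]:
                b[i] = max(0.0, b[i] - lr)  # crash: both back off
    return b

pecking_order = iterated_chicken()
```

Since the runs are stochastic, individual outcomes vary, but the general tendency of win/loss reinforcement to freeze an early asymmetry into a persistent hierarchy is the point of the sketch.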
If I understand correctly, the cognitive process/bias/heuristic/whatever of “sacredness” is relevant here.
Neither nails nor dollars are sacred so you’re free to trade dollars for nails.
A kidney is sacred, so you can’t trade that for dollars, but you can trade it for another kidney (although such trades still feel a bit weird).
Sacred things are often poorly managed in practice, and sacredness is easy to make fun of, but a decent defense of sacredness might be that it is one of the few widely installed psychological mechanisms in real life for managing the downsides of having markets in things. Thus, properly deployed sacredness might let you have “trade” in one area without ending up with “totalizing trade”?
In the smaller and hopefully lower stakes world of video games, I think the suggestion would be to have card classes with different trading characteristics.
The lowest class of very non-sacred things could be swapped with extremely low transaction costs within the class and also be tradeable directly for money.
Higher sacredness things would have a separate market, perhaps with transaction costs like needing a purchaseable delivery mechanism or imposing delays so that objects go into limbo after the trade is finalized while “being delivered”. The most sacred things would be “inalienable” so they can’t be traded or given away or perhaps not even be destroyed.
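(The tiering above could be sketched as a small rule table; all class names and rules here are hypothetical, just to show the shape of the design.)

```python
from enum import Enum

# Hypothetical sketch of tiered tradeability; names and rules invented.
class Sacredness(Enum):
    COMMON = 0       # freely swappable, including directly for money
    SACRED = 1       # tradeable only in kind, with deliberate friction
    INALIENABLE = 2  # cannot be traded, given away, or destroyed

# (can_sell_for_money, can_trade_in_kind, delivery_delay_in_days)
TRADE_RULES = {
    Sacredness.COMMON: (True, True, 0),
    Sacredness.SACRED: (False, True, 3),   # item sits "in limbo" 3 days
    Sacredness.INALIENABLE: (False, False, None),
}

def can_trade(tier, for_money):
    """Check whether an item of a given tier may change hands this way."""
    sellable, tradeable, _ = TRADE_RULES[tier]
    return sellable if for_money else tradeable
```

The delivery delay on the middle tier is one way to implement the “limbo while being delivered” friction described above without forbidding trade outright.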
Exactly where sacredness should be deployed in order to maximize fun seems like a deep and relatively unstudied problem.
One place in real life where the inalienability of something has large and substantive differences from jurisdiction to jurisdiction is the question of the rights of artistic creators to their artwork. In some jurisdictions, an artist cannot legally sell their right to veto the use of their artwork if deployed in artistically compromising ways (like use in advertising or political campaigns) after mere copyrights have been sold.
In the US artistic moral rights are not treated as very sacred, and the lack of sacredness in art production is probably part of the US’s cultural dominance a la Hollywood, but it has arguably also had large effects in the lives of artists, visibly so with people like Bill Watterson and Prince.