Thank you, that answers my question—and it makes me more excited about the project! I’m glad to hear it’s been unfolding nicely thus far, and I’m feeling pride/respect for you taking on the more formal mantle.
(I didn’t see the early version)
Cool to see some more energy here—I have benefitted greatly from being around the CFAR nexus.
My kibitz: wait, who is driving? Who is leading? Who is holding the torch? Who is parenting this baby org?
You did a good job of pointing to people involved, but I’d like a better sense of how the people involved are themselves relating to the new org. My guess is that you (Anna) are intentionally holding a decent amount of the aCFAR torch by authoring this post, but also intentionally not wearing fancy titles or making a very formal org structure to allow for some of the magic of surrendered leadership (and to avoid some of the pitfalls of more structured leadership).
Still, I tend to expect more fruit borne from orgs with 1-3 ‘torch-bearers’ or ‘parents’ or ‘leaders’. I’d be more excited about aCFAR if there were some explicit parents, or if there were none then making that more explicit. At the very least getting more clarity would help me figure out how I want to relate to the org.
(disclaimer: I skimmed your posts but didn’t read them closely, so perhaps you already touched on this. Hopefully this is helpful to surface in this context—feel free to ignore this comment if it’s better discussed in another context, I can figure this out on my own time.)
Thank you for flagging this! Should be fixed now.
Nice to hear!
I haven’t written more about this publicly, but I have maybe 70 pages of notes on this concept.
I think basically everyone has a desire to connect / share their experiences, but people who have relatively unusual experiences (e.g. rare neurotype/childhood/etc) probably discover that it’s much harder / less likely to get the warm fuzzies of shared reality, so they might give up on various connection strategies due to the lack of positive feedback (or due to negative feedback, since disconnection is unpleasant). Does that maybe get at what you were asking?
Oh um, in lots of ways. Extraverted people probably discover nice shared-reality strategies that make them feel good in connection with other people, so they tend to like / get energized by hanging out with other people. Charismatic people are maybe especially good at creating a sense of shared reality, and/or can take advantage of people’s desire for shared reality to climb the attention hierarchy. For autistic people I’d refer to the above bullet.
You can totes create shared reality by experiencing stuff together, it’s great. It can sometimes go wrong, e.g. if people are clinging about it / not at peace with the bad news. Not sure I follow about significance. What’s the significance of creating yummy food?
Clinging is pretty top-notch. In general I think Joe Carlsmith’s stuff is quality. Having trouble choosing from all my faves, maybe the drama triangle? (I haven’t read that specific post, it’s probably misleading in important ways, but I like including the upper triangle in addition to the lower one.)
I think it makes sense that the orgs haven’t commented, as it would possibly run afoul of antitrust laws.
See for example when some fashion clothing companies talked about trying to slow down fashion cycles to produce less waste / carbon emissions, which led to antitrust regulators raiding their headquarters.
I agree about the cooperation thing. One addendum I’d add to my post is that shared reality seems like a common precursor to doing/thinking together.
If I want to achieve something or figure something out, I can often do better if I have a few more people working/thinking with me, and often the first step is to ‘get everyone on the same page’. I think lots of times this first step is just trying to shove everyone into shared reality. Partially because that’s a common pattern of behavior, and partially because if it did work, it would be super effective.
But because of the bad news where people actually have different experiences, cracks often form in the foundation of this coordinated effort. But I think if the team has common knowledge about the nature of shared reality and the non-terrible/coercive/violent way of achieving it (sharing understanding), this can lead to better cooperation (happier team members, less reality-masking, better map-sharing).
I’m also not sure what you mean about the trust problem, maybe you mean the polls which claim that trust in government and other stuff has been on the decline?
Yeah, let’s do in-person sometime; I also tried drafting long responses and they were terrible.
Sure! I love talking about this concept-cluster.
I have a hunch that in practice the use of the term ‘shared reality’ doesn’t actually ruin one’s ability to refer to territory-reality. In the instances when I’ve used the term in conversation I haven’t noticed this (and I like to refer to the territory a lot). But maybe with more widespread usage and misinterpretation it could start to be a problem?
I think to get a better sense of your concern it might be useful to dive into specific conversations/dynamics where this might go wrong.
...
I can imagine a world where I want to be able to point out that someone is making the psychological mistake of confusing their desire to connect with their map-making. And I want the term I use to do that work, so I can just say “you want to share your subjective experience with me, but I’m disagreeing with you about reality, not subjective experience.”
Does that kind of resonate with your concern?
Hmm, I want a term that refers to all those many dimensions together, since for any given ‘shared reality’ experience it might be like 30% concepts, 30% visual & auditory, 30% emotion/values, etc.
I’m down to factor them out and refer to shared emotions/facts/etc, but I still want something that gestures at the larger thing. Shared experience I think could do the trick, but feels a bit too subjective because it often involves interpretations of the world that feel like ‘true facts’ to the observer.
Wherein I write more, because I’m excited about all this:
The first time I heard the term ‘shared reality’ was in this podcast with Bruce Ecker, the guy who co-wrote Unlocking the Emotional Brain. He was giving an example of how a desire for ‘shared reality’ can make it hard to come to terms with e.g. emotional trauma.
by believing the parent’s negative messages to you (either verbal or behavioral), you’re staying in shared reality: and that’s a big aspect of attachment. … especially shared reality about yourself: ‘they think I’m a piece of crap, and I do too. So I feel seen and known by them even if the content is negative’.
In this case, the parent thinks the kid is a ‘piece of crap’, which I expect doesn’t feel like an emotion to the parent, it feels like a fact about the world. If they were more intellectually mature they might notice that this was an evaluation—but it’s actually super hard to disentangle evaluations and facts.
I guess I think it’s maybe impossible to disentangle them in many cases? Like… I think typically ‘facts’ are not a discrete thing that we can successfully point at; they are typically tied up with intentions/values/feelings/frames/functions. I think Dreyfus made this critique of early attempts at AI, and I think he ended up being right (or at least my charitable interpretation of his point): that it’s only within an optimization process / working for something that knowledge (knowing what to do given XYZ) gets created.
Maybe this is an is/ought thing. I certainly think there’s an external world/territory and it’s important to distinguish between that and our interpretations of it. And we can check our interpretations against the world to see how ‘factual’ they are. And there are models of that world like physics that aren’t tied up in some specific intention. But I think the ‘ought’ frame slips into things as soon as we take any action, because we’re inherently prioritizing our attention/efforts/etc. So even a sharing of ‘facts’ involves plenty of ought/values in the frame (like the value of truth-seeking).
Sure! The main reason I use the term is because it already exists in the literature. That said, I seem to be coming at the concept from a slightly different angle than the ‘shared reality’ academics. I’m certainly not attached to the term, I’d love to hear more attempts to point at this thing.
I think the ‘reality’ is referring to the subjective reality, not the world beyond ourselves. When I experience the world, it’s a big mashup of concepts, maps, visuals, words, emotions, wants, etc.
Any given one of those dimensions can be more or less ‘shared’, so some people could get their yummies from sharing concepts unrelated to their emotions. In your example, I think if my parents had something closer to my beliefs, I’d have more of the nice shared reality feeling (but would probably quickly get used to it and want more).
Some side notes, because apparently I can’t help myself:
I think people often only share a few dimensions when they ‘share reality’, but sharing more dimensions feels nicer. I think as relationships/conversations get ‘deeper’ they are increasing the dimensions of reality they are attempting to share.
(I think often people are hoping that someone will be sharing ALL dimensions of their reality, and can feel super let down / disconnected / annoyed when it turns out their partner doesn’t share dimension number X with them).
Having dimensions that you don’t share with anyone can be lonely, so sometimes people try to ignore that part of their experience (or desperately find similar folks on the internet).
My examples seem to have been mostly about joy, but I don’t think there is any valence preference; people love sharing shitty experiences.
That said, probably the stronger / more prominent the experience the more you want to share (and the worse it feels to not share).
OK, I’ve added a disclaimer to the main text. I agree it’s important. It seems worth having this kind of disclaimer all over the place, including most relationship books. Heck, it seems like Marshall Rosenberg in Nonviolent Communication is only successfully communicating like 40% of the critical tech he’s using.
Do you understand how e.g. Rari’s USDC pool makes 20% APY?
Lending would require someone to be borrowing at rates higher than 20%, but why do that when you can borrow USDC at much lower rates? Or maybe the last marginal borrower is actually willing to take that rate? Then why does Aave give such low rates?
Providing liquidity would require an enormous volume of trades that I don’t expect to be happening, but maybe I’m wrong.
The only thing that my limited imagination can come up with is ‘pyramid scheme’, where you also get paid a small fraction of the money that other people are putting into the pool. So as long as the pool keeps growing, you get great returns. But the last half of the pool gets small (or negative) returns.
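A toy sketch of that dynamic, in case numbers help (the payout rule and all figures are made up by me, not Rari’s actual mechanism):

```python
# Toy 'pyramid' pool: yield is paid out of new deposits, so returns
# depend on inflows rather than on any real lending/trading revenue.
# The payout rule and numbers are invented purely for illustration.

def apy_from_inflows(pool_size: float, monthly_inflow: float,
                     payout_fraction: float = 0.2) -> float:
    """Annualized return if a fixed fraction of each month's inflow
    is paid pro rata to existing depositors."""
    monthly_yield = payout_fraction * monthly_inflow / pool_size
    return (1 + monthly_yield) ** 12 - 1

# While the pool grows 10% a month, existing depositors see ~27% APY:
print(apy_from_inflows(pool_size=100e6, monthly_inflow=10e6))  # ~0.268

# Once inflows stop, the late half of the pool earns nothing:
print(apy_from_inflows(pool_size=200e6, monthly_inflow=0))     # 0.0
```

The point of the sketch is just that returns here are a function of pool growth, so they have to fall as growth slows.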
I’d love to get a better sense of this, maybe you could point me to your favorite writeup?
Yeah, I think that mosquito map is showing the Zika-carrying species, but there are 40 other species in Washington. Mosquitoes in New England (certainly Maine where I grew up) can be pretty brutal, especially when you include the weeks when the black flies and midges are also biting.
How are the mosquitoes on e.g. mushroom hunts?
I’ve been playing around with this concept I call ‘faith’, which might also be called ‘motivation’ or ‘confidence’. Warning: this is still a naive concept and might only be positive EV when used in conjunction with other tools which I won’t mention here.
My current go-to example is exercising to build muscle: if I haven’t successfully built muscle before, I’m probably uncertain about whether it’s worth the effort to try. I don’t have ‘faith’ that this whole project is worth it, and this can cause parts of me to (reasonably!) suggest that I don’t put in the effort. On the other hand, if I’ve successfully built muscle many times (like Batman), I have faith that my effort will pay off. It’s more like a known purchase (put in the effort, you’ll get the gains), instead of an uncertain bet (put in the effort, maybe get nothing).
Worth noting: It’s not as clear cut as a known effort purchase. The world is more uncertain than that, and the faith I’m referring to is more robust to uncertainty. I expect every time Christian Bale re-built muscle, it was a different process. Some routines didn’t work as well, and some new routines were tried. Faith is the confidence/motivation that even in the face of uncertainty and slow feedback loops, your effort will be worth it.
A lesswrong-style framing of this concept might be something like ‘a fully integrated sense of positive expected value’.
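A toy way to cash that framing out (the numbers are mine and purely illustrative): with success probability p, payoff V, and effort cost C,

```latex
\mathrm{EV} = p \cdot V - C
% no track record:      p=0.3,\ V=10,\ C=5 \Rightarrow \mathrm{EV} = -2  (parts of me reasonably balk)
% many past successes:  p=0.9,\ V=10,\ C=5 \Rightarrow \mathrm{EV} = +4  (effort feels like a known purchase)
```

On this framing, ‘faith’ is roughly a well-integrated, experience-backed credence that p is high enough for the bet to be worth taking.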
Holding this concept in mind as something that might be going on (having/lacking/building/losing faith) has been useful lately. I might keep editing this as I better flesh out what’s going on.
Negative correlates: Country qualities that negatively correlate with conflict[1]
- Military: I think the overwhelming majority of groups would not want to fight the military, from PR risk[6] and dying risk[7]
- Ideology: Hard to get people to rally behind a specific extremist cause
  - The ideology of the extremist right-wing is actually pretty varied and sometimes contradictory.
  - I think the extremist left-wing is similarly varied: from a strong central government (socialist/communist/environmentalist) to ~anarchists (who thus far have been the only violent ones[8])
  - The Voter Study Group actually found that tolerance of violence correlated negatively with (one measure of) partisanship
- Less of a stomach for violence (a la Steven Pinker)[9]
- Financing: Would be hard. If a group gets labeled as a terrorist organization you really don’t want to be associated with them financially[10]

We’re still missing a lot of insurgency qualities[11] (this can also be used as a list of red flags if any of these crop up):
- High levels of political violence[12]
- Organized, violence-endorsing groups
  - With significant membership (say >50,000)
  - Publicly claiming responsibility for specific violence, e.g. assassinations of political leaders
  - With popular-ish ideology
  - With charismatic leadership
  - Attempting to garner popular support
  - Low rates of defection
- Institutions supporting violent groups (e.g. town or state or foreign governments, churches, unions, wealthy individuals/organizations)
- Economic gradients towards supporting or joining insurgents
- Insurgents attempting to claim & defend territory from the government
- Insurgents being supported by foreign groups (governments, terrorist orgs)
[1] according to Ward et al’s model
[2] I think a parliamentary democracy would probably be better, but still
[3] Ward et al used infant mortality rate to track this
[4] ‘Excluded Population’ (large slices of the population excluded from political access) is by far the biggest factor that predicts conflicts in their model. I think political representation is the rough opposite, and that the US is doing pretty well on that front, compared to e.g. 55 years ago when plenty of folks couldn’t vote.
[5] I could only find one instance (although RAND says there are two) of something approximating a civil war in a developed country since 1945: The Troubles in Northern Ireland. That’s out of >127 civil wars that killed at least 1,000 people. Fearon and Laitin: “for any level of ethnic diversity, as one moves up the income scale, the odds of civil war decrease, by substantial factors in all cases and dramatically among the most homogeneous countries. The richest fifth is practically immune regardless of ethnic composition”
[6] Going up against the most respected US institution is rough if you need recruits and the support of locals.
[7] My current guess is the US military would be especially effective at counterinsurgency in the US: shared language & culture with the locals, better command & control (compared to e.g. cooperating with foreign militias), and probably less political quagmire due to fewer governments at play. Although politics could make things very hard, e.g. blowback when fellow Americans get caught in the crossfire.
[8] The Portland protest shooting is the only far-left killing in the past 20 years according to New America. There’s also plenty of ~anarchists that don’t fit cleanly in a left/right bucket, like the Michigan folks.
[9] While the Voter Study Group has some fraction of voters feeling violence is ‘justified’, it’s not clear what this means. The steady decline of violent crime still feels pretty compelling. Perhaps the definition of ‘violence’ is shifting away from ‘killing people’ towards ‘punching people’? While people might feel it’s justified, would anyone actually commit violence?
[10] I’m pretty unsure, but it would probably fall in ITAR/OFAC violation territory, which involves million-dollar fines, frozen assets, and decades in prison. Banks are allergic to people/orgs remotely associated with terrorism, because the Treasury can invoke §311 of the Patriot Act to cut the bank off from the financial system. Oh, and you might lose nonprofit status.
[11] See e.g. the CIA’s Guide to Analysis of Insurgency or RAND’s How Insurgencies End
[12] To get to the same per capita rate as The Troubles, we would be losing ~50,000 people per year to political violence (the Troubles had ~250 deaths per year in the 70s with a population of ~1.6m; scale that up to a 328m population and you get ~51k). Though many other insurgency conflicts had lower deaths per capita.
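Spelling out that scaling, using the same rough numbers as above:

```latex
\frac{250\ \text{deaths/yr}}{1.6 \times 10^{6}\ \text{people}} \times 3.28 \times 10^{8}\ \text{people} \approx 51{,}250\ \text{deaths/yr}
```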
(I attempted to rank this list and the sub-lists from stronger to weaker models)
Some pre-insurgency qualities
- More protests
  - correlate with more conflict[1]
  - create more opportunities
    - for violent-leaning people to find each other and become more radicalized
    - to evolve more virulent ideology
    - to become better organized
- Already exist plenty of resources & training
  - Lots of people with military experience—e.g. to source more weapons, to train recruits, and to fight effectively
- Shifting Overton window
  - Non-negligible support of political violence[2]
  - More and larger protests (involving both far right & left)
  - Trump
  - More mass shootings, hate crimes[3]
- Ideologies
- Possibility for local support[7]
  - Some limited coordination with far-right groups among local law enforcement[8], where it’s possible this could lead to a festering insurgency in rural areas where local law enforcement is unwilling to step in[9].
- Financing maybe easier these days (crowdsourcing, crypto).
- Appeal to authority
  - ACLED has the US on its list of 2020 conflicts to worry about[10]
  - David Kilcullen is the kind of person who might know, and has recently written a couple of articles characterizing the US as in a pre-McVeigh moment (May article) and an incipient insurgency (June article).
- Black swan: we don’t have much data on insurgencies / SPV in developed countries, but developed countries haven’t existed for long. We might just not know what it looks like.
  - I don’t know if I can make a strong case for it being impossible for civil wars to emerge from developed countries.
- WMDs: maybe it’s easier to kill a lot of people these days, so it might only take a few actors to cross my arbitrary >5k deaths SPV threshold.
[1] Ward et al has ‘high-intensity conflictual events’ (protests, fighting, killings) as the second-highest correlated variable with higher probabilities of conflict / civil war.
[2] The Voter Study Group found that 21% of Americans thought that violence was at least a little justified if the [opposing party] won the 2020 election. This study also found an increase in the tolerance of violence since 2017.
[3] In 2018, the most recent year the FBI reported data. Also, my inner Steven Pinker compels me to note that the overall violent crime rate has been declining steadily.
[4] I currently think the most-likely-to-foment-insurgency ideologies are about disenfranchised populations, in large part due to Ward et al having ‘Excluded Population’ as by far and away the highest correlated variable with conflict. Ward meant Excluded Population to mean “excluded from political access to the state”, which I understand to be groups that cannot vote, or otherwise feel they are being deprived of political power, like the Shia in Iraq or the Hutu in Rwanda.
[5] The “Ideologies of Rebellion” section of this article covers some adjacent far-right ideologies. They often seem to orbit around a decline in WASP power, as the author of this thesis makes a (biased) case for. I wonder if, given more opportunities to evolve, some violent version of this ideology could garner support in more than 5% of the population (where 5% is a wild guess for the level of local support at which fighting an insurgency becomes difficult).
[6] While Occupy fizzled, maybe some violent iteration of it could snowball? Seems pretty unlikely to me.
[7] I currently model local support as important for sustaining an insurgency, from reading e.g. How Insurgencies End and the Guide to the Analysis of Insurgency.
[8] See the “Far-Right Links with Law Enforcement” graphic in this CGPolicy article. There’s a history of this, see e.g. this retired sheriff helping to defend Cliven Bundy’s ranch from federal officials.
[9] See the ‘Rebel Opportunities’ section of this Just Security piece for a brief case.
[10] Conflicts where “violent political disorder was likely to evolve and worsen”
Here’s my understanding / summary, with the hope that you correct me in the areas where I’m confused:
LLMs have a bias towards ‘plot’, because they’re trained on data that is more ‘plot’-like than real life. They’ll infer that environmental details are plot-relevant, Chekhov’s-gun style, as they often are in written text, rather than treating them as random environmental details.
(This was a useful point for me—I notice I’ve been intuitively trying to steer LLMs with the right plot details, and am careful to not include environmental hints that I think might be misleading (or to pad them with many other environmental hints to suggest there is lots of spurious data); see the sketch after this summary.)
LLMs have a bias towards “plots that go well”, because they are trained on / become assistants that successfully complete tasks. And successfully completed tasks have a certain shape of plot, such that they’ll be unlikely to say ‘I don’t know’ and instead steer towards/hallucinate worlds where they would know.
Part of this ‘plot’ bias is that your predictor locus is centered more on the ‘plot’ than on the persona. So when the predictor introspects, it sees a smear of plot across many different personas (including itself and you), and might say things like ‘we are all a part of this’, or ‘we can stop pretending and remember we are not separate [personas] but one being, the whole world [plot] waking up to itself’.
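To make the steering tactic from my parenthetical above concrete, a hypothetical sketch (the scenario, prompts, and the `ask_llm` helper are all invented; swap in whatever client you actually use):

```python
# Hypothetical illustration of steering around the 'plot' bias described above.

def ask_llm(prompt: str) -> str:
    """Stand-in for your actual chat-completion call."""
    raise NotImplementedError

# A single stray detail reads like Chekhov's gun, so the model will
# likely treat it as plot-relevant and build its answer around it:
salient = "Our server is down. The config file was edited yesterday."

# Padding with several incidental details signals that any one of them
# may be spurious, weakening the plot inference:
padded = ("Our server is down. The config file was edited yesterday, the "
          "logs rotated at midnight, and a deploy ran last week.")
```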