Yeah I think that mosquito map is showing the Zika-carrying species, but there are 40 other species in Washington. Mosquitos in New England (certainly Maine where I grew up) can be pretty brutal, especially when you include the weeks when the black flies and midges are also biting.
Shared reality: a key driver of human behavior
[Question] Models predicting significant violence in the US?
Faith
I’ve been playing around with this concept I call ‘faith’, which might also be called ‘motivation’ or ‘confidence’. Warning: this is still a naive concept and might only be positive EV when used in conjunction with other tools which I won’t mention here.
My current go-to example is exercising to build muscle: if I haven’t successfully built muscle before, I’m probably uncertain about whether it’s worth the effort to try. I don’t have ‘faith’ that this whole project is worth it, and this can cause parts of me to (reasonably!) suggest that I don’t put in the effort. On the other hand, if I’ve successfully built muscle many times (like Batman), I have faith that my effort will pay off. It’s more like a known purchase (put in the effort, you’ll get the gains), instead of an uncertain bet (put in the effort, maybe get nothing).
Worth noting: It’s not as clear cut as a known effort purchase. The world is more uncertain than that, and the faith I’m referring to is more robust to uncertainty. I expect every time Christian Bale re-built muscle, it was a different process. Some routines didn’t work as well, and some new routines were tried. Faith is the confidence/motivation that even in the face of uncertainty and slow feedback loops, your effort will be worth it.
A lesswrong-style framing of this concept might be something like ‘a fully integrated sense of positive expected value’.
Holding this concept in mind as something that might be going on (having/lacking/building/losing faith) has been useful lately. I might keep editing this as I better flesh out what’s going on.
I’ve been a bit confused about doubling rate. First, I noticed that many sources (e.g. Wikipedia) calculate how long it took to double, instead of projecting forward from e.g. yesterday’s increase. Early on this led to misleading numbers, but recently the US has been steady around 2-3 days using both methods.
However, I’m guessing that raw doubling rates depend a lot on testing, and that the US should expect to have a faster-than-actual doubling rate until our testing catches up. So I lean towards Trevor’s number of 5 days.
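To make the distinction between the two methods concrete, here’s a minimal sketch (my own toy code, using a made-up series of cumulative counts): the retrospective method looks backward for the day the count was half of today’s, while the projected method extrapolates from the most recent day-over-day growth.

```python
import math

def doubling_time_retrospective(counts):
    """Days since the cumulative count was at most half of today's."""
    today = counts[-1]
    for days_ago in range(1, len(counts)):
        if counts[-1 - days_ago] <= today / 2:
            return days_ago
    return None  # series too short to have doubled

def doubling_time_projected(counts):
    """Doubling time implied by the most recent day-over-day growth."""
    growth = counts[-1] / counts[-2]
    if growth <= 1:
        return float("inf")  # no growth, so it never doubles
    return math.log(2) / math.log(growth)

# Hypothetical cumulative counts growing roughly 30% per day:
counts = [100, 130, 170, 220, 290, 380, 500]
print(doubling_time_retrospective(counts))  # 3
print(doubling_time_projected(counts))      # ~2.5
```

When growth is steady the two methods agree, as here; early in an outbreak, when growth is accelerating, the retrospective number lags and overstates the doubling time.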
Hmm, I want a term that refers to all those many dimensions together, since for any given ‘shared reality’ experience it might be like 30% concepts, 30% visual & auditory, 30% emotion/values, etc.
I’m down to factor them out and refer to shared emotions/facts/etc, but I still want something that gestures at the larger thing. Shared experience I think could do the trick, but feels a bit too subjective because it often involves interpretations of the world that feel like ‘true facts’ to the observer.
Wherein I write more, because I’m excited about all this:
The first time I heard the term ‘shared reality’ was in this podcast with Bruce Ecker, the guy who co-wrote Unlocking the Emotional Brain. He was giving an example of how a desire for ‘shared reality’ can make it hard to come to terms with e.g. emotional trauma.
by believing the parent’s negative messages to you (either verbal or behavioral), you’re staying in shared reality: and that’s a big aspect of attachment. … especially shared reality about yourself: ‘they think I’m a piece of crap, and I do too. So I feel seen and known by them even if the content is negative’.
In this case, the parent thinks the kid is a ‘piece of crap’, which I expect doesn’t feel like an emotion to the parent, it feels like a fact about the world. If they were more intellectually mature they might notice that this was an evaluation—but it’s actually super hard to disentangle evaluations and facts.
I guess I think it’s maybe impossible to disentangle them in many cases? Like… I think typically ‘facts’ are not a discrete thing that we can successfully point at; they are typically tied up with intentions/values/feelings/frames/functions. I think Dreyfus made this critique of early attempts at AI, and I think he ended up being right (or at least my charitable interpretation of his point did): that it’s only within an optimization process / working toward something that knowledge (knowing what to do given XYZ) gets created.
Maybe this is an is/ought thing. I certainly think there’s an external world/territory and it’s important to distinguish between that and our interpretations of it. And we can check our interpretations against the world to see how ‘factual’ they are. And there are models of that world like physics that aren’t tied up in some specific intention. But I think the ‘ought’ frame slips into things as soon as we take any action, because we’re inherently prioritizing our attention/efforts/etc. So even a sharing of ‘facts’ involves plenty of ought/values in the frame (like the value of truth-seeking).
I think it makes sense that the orgs haven’t commented, as it would possibly run afoul of antitrust laws.
See for example when some fashion clothing companies talked about trying to slow down fashion cycles to produce less waste / carbon emissions, which led to antitrust regulators raiding their headquarters.
Sure! The main reason I use the term is because it already exists in the literature. That said, I seem to be coming at the concept from a slightly different angle than the ‘shared reality’ academics. I’m certainly not attached to the term, I’d love to hear more attempts to point at this thing.
I think the ‘reality’ is referring to the subjective reality, not the world beyond ourselves. When I experience the world, it’s a big mashup of concepts, maps, visuals, words, emotions, wants, etc.
Any given one of those dimensions can be more or less ‘shared’, so some people could get their yummies from sharing concepts unrelated to their emotions. In your example, I think if my parents had something closer to my beliefs, I’d have more of the nice shared reality feeling (but would probably quickly get used to it and want more).
Some side notes, because apparently I can’t help myself:
I think people often only share a few dimensions when they ‘share reality’, but sharing more dimensions feels nicer. I think as relationships/conversations get ‘deeper’ they are increasing the dimensions of reality they are attempting to share.
(I think often people are hoping that someone will be sharing ALL dimensions of their reality, and can feel super let down / disconnected / annoyed when it turns out their partner doesn’t share dimension number X with them).
Having dimensions that you don’t share with anyone can be lonely, so sometimes people try to ignore that part of their experience (or desperately find similar folks on the internet).
My examples seem to have been mostly about joy, but I don’t think there is any valence preference; people love sharing shitty experiences.
That said, probably the stronger / more prominent the experience the more you want to share (and the worse it feels to not share).
Here’s a paper (posted 25 Feb) outlining neurological symptoms in 214 Chinese hospital patients:
126 non-severe patients, 38 of whom had ‘neurologic symptoms’
3 with impaired consciousness
1 had an ischemic stroke
88 severe patients, 40 of whom had neurologic symptoms
13 had impaired consciousness
4 had an ischemic stroke, 1 cerebral hemorrhage
I don’t know how much this differs from base rates—like if I have hypertension and need to go to the hospital because I broke my wrist, how likely is it that my brain also goes haywire? Or if I get a fever?
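For reference, the raw proportions implied by the figures quoted above work out as follows (just arithmetic on the paper’s numbers, with no base-rate adjustment):

```python
# Proportion of patients with neurologic symptoms, per the quoted paper.
groups = {
    "non-severe": (38, 126),  # symptomatic / total
    "severe": (40, 88),
}
for name, (symptomatic, total) in groups.items():
    print(f"{name}: {symptomatic}/{total} = {symptomatic / total:.0%}")
# non-severe: 38/126 = 30%
# severe: 40/88 = 45%
```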
How are the mosquitos on e.g. mushroom hunts?
Good point—I’m thinking of political acts along the lines of violent protests, terrorism, and insurgencies. I can see how police shootings could be included there. The spirit of what I’m going for is how much change to expect, so e.g. deaths above and beyond what you would see in an average year.
Good point about LW affiliation—in addition I would add that results are highly dependent on how the survey is distributed. This makes broad predictions difficult, but more specific predictions (like >80% of LW-affiliated respondents will identify as atheist/agnostic) might be the way to go.
I’m still getting familiar with this community, but I suppose it’s a fun exercise so I’ve added some thoughts to the Excel sheet.
Yeah let’s do in-person sometime, I also tried drafting long responses and they were terrible
Nice to hear!
I haven’t written more about this publicly, but have maybe 70 pages of notes about this concept
I think basically everyone has a desire to connect / share their experiences, but people who have relatively unusual experiences (e.g. rare neurotype/childhood/etc), probably discover that it’s much harder / less likely to get the warm fuzzies of shared reality so might give up on various connection strategies due to the lack of positive feedback (or negative feedback, since disconnection is unpleasant). Does that maybe get at what you were asking?
Oh um, in lots of ways. Extraverted people probably discover nice shared-reality strategies that make them feel good in connection with other people, so they tend to like / get energized by hanging out with other people. Charismatic people maybe are especially good at creating a sense of shared reality, and/or can take advantage of people’s desire for shared reality to climb the attention hierarchy. For autistic people I’d refer to the above bullet.
You can totes create shared reality by experiencing stuff together, it’s great. Sometimes can go wrong, e.g. if people are clinging about it / not at peace with the bad news. Not sure I follow about significance. What’s the significance of creating yummy food?
clinging is pretty top notch. In general I think Joe Carlsmith’s stuff is quality. Having trouble choosing from all my faves, maybe the drama triangle? (I haven’t read that specific post, it’s probably misleading in important ways, but I like including the upper triangle in addition to the lower one)
Thank you for flagging this! Should be fixed now.
I agree about the cooperation thing. One addendum I’d add to my post is that shared reality seems like a common precursor to doing/thinking together.
If I want to achieve something or figure something out, I can often do better if I have a few more people working/thinking with me, and often the first step is to ‘get everyone on the same page’. I think lots of times this first step is just trying to shove everyone into shared reality. Partially because that’s a common pattern of behavior, and partially because if it did work, it would be super effective.
But because of the bad news where people actually have different experiences, cracks often form in the foundation of this coordinated effort. But I think if the team has common knowledge about the nature of shared reality and the non-terrible/coercive/violent way of achieving it (sharing understanding), this can lead to better cooperation (happier team members, less reality-masking, better map-sharing).
I’m also not sure what you mean about the trust problem, maybe you mean the polls which claim that trust in government and other stuff has been on the decline?
Sure! I love talking about this concept-cluster.
I have a hunch that in practice the use of the term ‘shared reality’ doesn’t actually ruin one’s ability to refer to territory-reality. In the instances when I’ve used the term in conversation I haven’t noticed this (and I like to refer to the territory a lot). But maybe with more widespread usage and misinterpretation it could start to be a problem?
I think to get a better sense of your concern it might be useful to dive into specific conversations/dynamics where this might go wrong.
...
I can imagine a world where I want to be able to point out that someone is doing the psychological mistake of confusing their desire to connect with their map-making. And I want the term I use to do that work, so I can just say “you want to share your subjective experience with me, but I’m disagreeing with you about reality, not subjective experience.”
Does that kind of resonate with your concern?
OK, I’ve added a disclaimer to the main text. I agree it’s important. It seems worth having this kind of disclaimer all over the place, including most relationship books. Heck, it seems like Marshall Rosenberg in Nonviolent Communication is only successfully communicating like 40% of the critical tech he’s using.
Do you understand how e.g. Rari’s USDC pool makes 20% APY?
Lending would require someone to be borrowing at rates higher than 20%, but why do that when you can borrow USDC at much lower rates? Or maybe the last marginal borrower is actually willing to take that rate? Then why does Aave give such low rates?
Providing liquidity would require an enormous amount of trades that I don’t expect to be happening, but maybe I’m wrong
The only thing that my limited imagination can come up with is ‘pyramid scheme’, where you also get paid a small fraction of the money that other people are putting into the pool. So as long as the pool keeps growing, you get great returns. But the last half of the pool gets small (or negative) returns.
I’d love to get a better sense of this, maybe you could point me to your favorite writeup?
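To illustrate the ‘pyramid scheme’ dynamic I’m gesturing at, here’s a toy simulation (entirely my own assumption, not Rari’s actual mechanism): each period, a fixed fraction of new deposits is paid out pro rata to existing depositors, so early cohorts earn a lot while late cohorts earn little or nothing once growth stops.

```python
# Toy model of growth-subsidized yield: a payout_fraction of each new
# deposit is distributed pro rata to everyone already in the pool.
# (A hypothetical mechanism for illustration, not any real pool's design.)
def returns_by_cohort(deposits_per_period, payout_fraction=0.1):
    pool = 0.0
    earned = []  # cumulative rewards per cohort
    stakes = []  # deposit size per cohort
    for new_money in deposits_per_period:
        reward = new_money * payout_fraction
        if pool > 0:
            # Distribute the reward pro rata among existing cohorts.
            for i, stake in enumerate(stakes):
                earned[i] += reward * stake / pool
        stakes.append(new_money)
        earned.append(0.0)
        pool += new_money
    # Total return per unit deposited, by cohort.
    return [e / s for e, s in zip(earned, stakes)]

# Equal deposits each period: early cohorts earn far more than late ones,
# and the final cohort earns nothing once new money stops arriving.
rates = returns_by_cohort([100.0] * 10)
print([round(r, 3) for r in rates])
```

The point of the toy model is just that headline yields can look great while the pool is growing, even if the final cohorts are guaranteed poor returns.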
Did you end up finding one besides the MIDAS network, or develop your own? I’m assembling a parameter doc for inputs to a rough model that accounts for ventilator & hospital bed capacity, since it seems like we’re lacking that.
I encourage folks to add parameters w/ citations to the doc, I’ll be active on it for the next few days.
If anyone knows of models that incorporate actual healthcare capacity, please share!