I feel like $100, or even $10, might work even better in practice. Trivial inconveniences and all that.
Self
(I at least suspect this is my comparative advantage. But I’m not good at communicating [insights], a skill that comes neither with <analytical rigor> nor with <high-res introspective access>.
It also seems like the <after controlling for situational factors, status psychology explains more than half of variance in human behavior> camp is essentially right, which colors most genuine discussion less pretty than most people would prefer, especially those with less introspective insight.
I (somewhat predictably, given my status incentives) hold that this is an important, central problem civilization has, because mutual information is the fundament of cooperation; or, expressed more concretely, the better we model each other, the easier it is to avoid common deception & adversity attractors.)
You [don’t] have to believe!
You know how high school sports coaches like to go on about how “You have to believe you will win!”? And how the standard rationalist response is “Nonsense, of course you don’t. Beliefs are supposed to track reality, not be wishful thinking. Believe what looks to be true, try your best, and find out if you win”?
The coach does have a point though, and there’s a reason he’s so adamant about what he’s saying. If you expect to lose—if you’re directing attention towards the experience of your upcoming loss—then you are intending to lose, and good luck winning if you aren’t gonna even try. The problem is that he’s expecting on the level of “Will we win this game?”, which, according to the data, isn’t looking like something we can control. He doesn’t know what else to do, and he doesn’t want to just give up, so of course he’s going to engage in motivated thinking. Fudging the data until he can expect success is the only way he can hope to succeed. It’s a load-bearing delusion.[8][9]
One way to do better is to deliberately trade correctness of expectation for effort without letting delusion spread to infect the rest of your thinking. “Yeah, I’m probably going to lose. I don’t care. I intend to win anyway”. Or, in other words “Do or do not. There is no ‘try’”. That means setting yourself up for failure, expecting success knowing that you aren’t likely to have that expectation realized. It’s not pleasant, and that gap between your expectations and the data coming from reality is what suffering is. But with suffering comes hope, and sometimes the tradeoff is worthwhile.
This post seems highly relevant.
It describes <a solution to this dilemma> that also is <a mental mechanism humans use natively>.
“Pretend the emotion is a person or cute animal who can talk” is a pretty great trick.
Huh. Tried this on my social media cravings.
Couldn’t visualize them as an animal, but managed <a stream of energy between me and my laptop screen>. Managed to make the stream talk in my mind.
This behaved like a “talking lens” laid over my perception. As if the craving itself was live-reacting to objects on my screen while I clicked and scrolled.
Informative via making the involved needs concrete.
Improved my intuitions, ty.
Keeps baffling me how much easier having a concept for something makes thinking about it.
What about this one:
“Hivemind” is best characterized as a state of zero adversarial behavior.
“Humanity becomes a hivemind” is the single least dystopic coherent image of the future.
Illustrative post. The downvotes confuse me.
Depression is a formidable cognitive specialization.
There may have been other, unmentioned optimization targets that also need eloquence
Predictions:
(75%) Groups who successfully[1] adopt trust technology will economically and politically outcompete the rest of their respective societies rather quickly (less than 10 years).
The efficiency gains feasibly up for grabs in the first 15 years, compared to the status quo, are over 100% (75%) or over 400% (50%).
(66%) Society-wide adoption of trust-building tech is a practical path / perhaps the only practical path towards sane politics in general and sane AI politics in particular.
The whole gestalt of why this is a huge affordance seems self-evident to me; it’s a cognitive weakness of mine to often not know which parts of my thinking need more words written out loud to be legible.
But one intuition is: Regular “natural” human cultures are accidental products sampled from environments where deception-heavy strategies are dominant, and this imposes large deadweight costs on all pursuits of value, including economic value, happiness, friendship, and morality. Explicitly: Most of our cognition goes into deceiving others, and the density of useful acts could be multiple times higher.
- ^
i.e. build mutual understandings at least to, but ideally surpassing, the point of family-like intimacy / feeling the others as extensions of oneself
I’m not eloquent enough to express how important I think this is.
I feel like such intuitions could be developed. - I’m more uncertain where I would use this skill.
Though given how OOD it is there could be significant alpha up for grabs
(Q: Where would X-Ray vision for cluster structures in 5-dimensional space be extraordinarily useful?)
Hmm. Yeah. It gets difficult to display points with the same XY coordinates and different RGB coordinates
With colors you can in principle display data in 5-dimensional space on a 2D medium without flattening.
Bottlenecks (cognitive):
- intuitively knowing the RGB values of colors you’re seeing
- intuitively perceiving color differences as 3-dimensional distances

Feasible? Useful?
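The encoding being discussed can be made concrete with a minimal sketch: map the first two dimensions of each point to screen position and min-max normalize the remaining three into [0, 1] as RGB. (`embed_5d` is a hypothetical helper name and the column assignment is arbitrary; this is an illustration, not a claim about any particular plotting tool.)

```python
def embed_5d(points):
    """Map a list of 5-tuples to (x, y) positions plus (r, g, b) colors.

    Dimensions 0-1 become position; dimensions 2-4 are min-max
    normalized into [0, 1] per channel and read as RGB.
    """
    xy = [(p[0], p[1]) for p in points]
    # per-channel min/max over dimensions 2-4
    chans = list(zip(*[p[2:5] for p in points]))
    lo = [min(c) for c in chans]
    hi = [max(c) for c in chans]
    rgb = [
        tuple((v - l) / (h - l) if h > l else 0.0
              for v, l, h in zip(p[2:5], lo, hi))
        for p in points
    ]
    return xy, rgb

# Two points sharing the same XY position but differing in the
# color dimensions: they overlap spatially yet stay distinguishable
# by color, which is exactly the overlapping-points case above.
pts = [(0.5, 0.5, 0.0, 0.25, 1.0),
       (0.5, 0.5, 1.0, 0.75, 0.0)]
xy, rgb = embed_5d(pts)
# xy[0] == xy[1] == (0.5, 0.5); rgb differs per point
```

A real renderer would then draw `xy` as scatter positions with `rgb` as per-point colors; the cognitive bottlenecks listed above are about whether a human can read the three color channels back out as distances.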
Latest in Shit Claude Says:
Credibility Enhancing Displays (CREDs)
Ideas spread not through their inherent quality but through costly displays of commitment by believers. Words are cheap; actions that would be irrational if the belief were false are persuasive.

Predictive angle: The spread of beliefs correlates more strongly with observable sacrifices made by believers than with evidence or argument quality.
Novel implication: Rationalists often fail to spread ideas despite strong arguments because they don’t engage in sufficient credibility enhancing displays. Effective belief transmission requires demonstration through personal cost[1].
The easiest way for rats to do this more may be “retain nonchalant confidence when talking about things you’re certain are true, even in the face of audience skepticism”
- ^
I think the “personal cost” angle is mistaken. Costly signaling only requires that the act would be costly if you didn’t possess the trait.
- ^
Aspies certainly seem to do this less!
You mean, like him as a blogger? Or as a person in real life?
The latter? Like, I subconsciously parse his blogging voice roughly as if it were a person in my tribal surroundings, and I like/admire/relate to that virtual person, and I think this is what causes some aspect of persuasion
I mean yes it’s embarrassing, but it’s what I see in myself and what seems to be most consistent with what everyone else is doing, certainly more consistent than what they claim they’re doing.
E.g. it seems rare for someone who actively dis-appreciates the sequences to not also dislike Eliezer, for what seem like vibes-based reasons more than content-based reasons
But then again, all models are false!
If I peer into my own past, where arguably I was more autistic than today, I can see that my standards for admiration seem to have been much stricter. I basically wouldn’t ever copy role models because there were no role models to copy. This may be the shape of an important caveat
They do, but the explanation proposed here matches everything I know most exactly and simply.
E.g. it became immediately clear that the sequences wouldn’t work nearly as well for me if I didn’t like Eliezer.
Or the way fashion models are of course not selected for attractiveness but for more mimetic-copying-inducing high-status traits like height/confidence/presence/authenticity
and others
And yeah not all of the Claude examples are good, I hadn’t cherrypicked
More thoughts that may or may not be directly relevant
What’s missing from my definition is that deception happens solely via “stepping in front of the camera”, i.e. via the regular sensory channels of the deceived optimizer; brainwashing or directly modifying memory is not deception
From this it follows that to deceive is to either cause a false pattern recognition or to prevent a correct one, and for this you indeed need familiarity with the victim’s perceptual categories
I’d like to say more re: hostile telepaths or other deception frameworks but am unsure what your working models are
Comparative advantage at work