This is a problem with profit-maximization, or large corporations, or something, not billionaires — Facebook and Walmart would face the same incentives and do those same things even if their ownership were more distributed.
I think the rest of these points are colorable and appreciate you saying them, but
politicians offending foreign countries is not, in any sense of the word, an exceptional situation that demands exceptional action. Reagan famously joked that he was about to nuke the USSR, reportedly prompting Soviet forces in the Far East to briefly go on heightened alert.
Threatening to take territory from an ally by force is far beyond “offending foreign countries”, without precedent to my knowledge, and very bad.
More recently, Tao on a different problem:
Recently, the application of AI tools to Erdos problems passed a milestone: an Erdos problem (#728 https://www.erdosproblems.com/728) was solved more or less autonomously by AI (after some feedback from an initial attempt), in the spirit of the problem (as reconstructed by the Erdos problem website community), with the result (to the best of our knowledge) not replicated in existing literature (although similar results proven by similar methods were located).
So it seems we now have a non-example, or something closer to one (depending on how you weight “similar results proven by similar methods”).
To elaborate on this, I think one of the arguments for verbal consent was along these lines: “Some women panic freeze and become silent when they’re uncomfortable, they aren’t capable of saying no in that state.”
This should be an argument for affirmative consent, which isn’t the same as verbal consent (like, you mention “waiting for… nonverbal makeout initiation”). I do see people conflate them or background-assume that the one must be the other, which I think makes these conversations a lot worse.
IME in this domain, in epistemology-for-humans terms that may or may not translate easily into Solomonoff/MDL, taking a compact generator of a complex phenomenon too seriously — like, concentrating probability too strongly on its predictions, not taking anomalies seriously enough or looking hard enough for them, insufficiently expecting there to be more to say that sounds different in kind — is asking to be wrong, and not doing that is a way to be less wrong.
(Looking for compact generators but not taking them too seriously is good, but empirically seems to require more skill or experience.)
You don’t need to single out a specific complex theory to say “this simple theory is concentrating probability too strongly”, or to expect there to be some complex theory that pays for itself.
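To put a toy number on the “concentrating probability too strongly” failure in MDL/log-loss terms (purely my own illustration, with made-up figures: 1,000 observations, a 1% anomaly rate, and two arbitrary confidence levels): extra confidence in the compact generator saves almost nothing on the observations it predicts well, while the rare anomalies it effectively rules out get very expensive.

```python
import math

# Toy setup (made-up numbers): 1,000 observations, 1% of which are
# "anomalies" the compact generator doesn't predict.
N_TOTAL = 1000
N_ANOMALIES = 10

def total_bits(p_predicted: float) -> float:
    """Code length (bits) paid by a model that assigns probability
    p_predicted to the generator's prediction on each observation and
    lumps everything else into the remaining 1 - p_predicted."""
    hits = N_TOTAL - N_ANOMALIES
    return -hits * math.log2(p_predicted) - N_ANOMALIES * math.log2(1 - p_predicted)

# Taking the generator "too seriously": 99.99% on its predictions.
overconfident = total_bits(0.9999)   # ~133 bits, almost all paid on the anomalies

# Hedged: reserve 5% for "there's more going on than this generator says".
hedged = total_bits(0.95)            # ~116 bits

print(f"overconfident: {overconfident:.0f} bits, hedged: {hedged:.0f} bits")
```

The gap grows with how hard the anomalies are ruled out (at 99.9999% the same ten anomalies cost about 200 bits on their own), and noticing it doesn’t require naming a specific better theory: the anomaly term alone shows the overconfident coding is losing.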
Look, I don’t like dealing with the sort of stuff I called “deep nonconsent” in this post. Sure, I’m quite kinky in bed, but in the rest of the mating process?
I believe you, but I also find the conjunction of [your kinks] and [choosing the ‘nonconsent’ frame/terminology] interesting, and would guess that it’s not a coincidence. In particular, I believe I’ve observed across people that [things like the former] are associated with [biases and blind-spots-biasing toward things like the latter].
(And I feel like ‘nonconsent’ as frame/terminology is strained at best, and… bad, muddled in a worrying way, in a way that rhymes with what t.g.t.a. is saying.)
Another I played with was e.g. “blame avoidance”, i.e. something-like-ladybrain really wants any dating/sex to happen in a way which is “not her fault”. That seems to mostly generate the same predictions.
Do you think it has some disadvantage, such that you didn’t choose to mention it at all in the OP?
So yeah, I am totally ready to believe there’s some other nearby generator, and if you have one which also better explains some additional things then please state it I want to know it.
A very nearby guess: women tending to prefer a ‘patient’ role in dating/mating, and/or tending to prefer men who take and are good at an ‘agent’ role — this has broader explanatory power for common male/female roles and attraction patterns.
(But also all the things you’re talking about, while anecdotally real, seem at least less broadly/strongly true to me than they do to you; women sending inexplicit-and-deniable-but-strong signals of interest seems more common and not a turnoff; flirting seems more symmetrical (not predicted by either of these models); etc.)
Reading this gave me the same feeling I used to get reading Brent’s stuff
FWIW, while I can see this if I try, it feels very different to me, in that I haven’t seen John do anything like being hostile to out-of-frame data once it’s presented, playing games to dismiss it, or sending what I read as signals that he will do so.
It’s not anti-inductive to believe that sets of phenomena in [some domain] are unlikely to have simple explanations at [some level of abstraction].
WRT social behavior, it does seem to me that ‘a whole lot of things pushing toward the same or similar results’ is remarkably common, and the OP is, relative to my priors, too dismissive of other explanations once it has hit on one. (At the same time, I totally think [simple theories that help explain significant sets of dynamics] are real.)
I’m also curious about other observations, from James or others who agreed. (I’m not against the claim, but haven’t noticed it, and can’t think of examples. Bad if true.)
(… I guess there is a social-and-memetic cluster around Aella that rubs me the wrong way, for reasons that kinda rhyme with this? But I wouldn’t describe it the way James did, and definitely don’t see that cluster on LW itself much or see it as LW consensus reality.)
The medical system is full of safeguards to make very sure that no procedure it does will ever hurt you.
I agree that the medical system often has valuable safeguards that DIYing a treatment doesn’t, but this strikes me as way too optimistic about medical error & incompetence & bad epistemology.
Is the “nonconsent” of rape(play) fantasies (‘anticonsent’?) the same as the “nonconsent” of the other dynamics here (‘aconsent’?)?
(Probably tangential but:)
even relatively simple precautions would drastically increase the likelihood you would survive
This seems wrong to me, for most people, in the event of a prolonged supply chain collapse, which seems a likely consequence of large-scale nuclear war. It could be true given significant probability on either a limited nuclear exchange, or quick recovery of supply chains after a large war.
Some people seemingly just have a detail-less prior that the USG is more powerful than anyone else in whatever domain (e.g. has broken all encryption algorithms); without further information I’d assume this is more of that.
So many people seem eager to rush to sell their souls, without first checking to see if the Devil’s willing to fulfill his end of the bargain.
In cases like this I assume the point is to prove one’s willingness to make the hard choice, not to be effective (possibly to the extent of being ineffective on purpose). This can be just proving it to oneself, out of fear of being the kind of person who’s not able to make the hard choice — if I’m not in favor of torturing the terrorist, that might be because I’m squeamish (= weak, or limited in what I can think (= unsafe)), so I’d better favor doing it without thought of whether it’s a good idea.
Plus
the media runs with exactly the sort of story you’d expect it to run with
I haven’t closely followed it, but my impression is that media coverage has been less unfavorable than one might have predicted / than this suggests, tending to recognize the Zizians as a rejected weird fringe phenomenon and sympathetically quote rationalists saying so.
I don’t know best practice here at all, but putting up the overall target actually seems reasonable to me; Lightcone did so using $1M (their first goal), and Oli might have thoughts.
Our old workshop had a hyper/agitated/ungrounded energy running through it: “do X and you can be cool and rational like HPMOR!Harry”; “do X and you can maybe help with whether we’ll all die.”
This also seems like an important factor in making it easier for alumni to get pulled into cults — upvoting an urgency/desperation to Fix Something ⇒ finding more appeal in questionable exotic offers of power. (Not unique to CFAR, of course — that urgency/desperation is a deeper thread in rationalist culture + something that people might come in with from the start — but I would think CFAR / this energy acted as a vector for it.)
… actually, the rest of that reply is a good comment on “ambiguous impact on health”:
That said, I think that the rationality project broadly construed has often fallen into a failure mode of trying to do radically ambitious stuff without first solidly mastering the boring and bog standard basics. This led to often undershooting, not just our ambitions, but the more boring baselines.
We aimed to be faster than science. But, in practice, I think we often didn’t meet the epistemic standards of a reasonably healthy scientific subfield.
If I invest substantial effort in rationality development in the future, I intend to first focus on doing the basics really well before trying for superhuman rationality.
Adele argued recently that a rationality curriculum worthy of the name would leave folks less vulnerable to psychosis, and that many current rationalists (CFAR alums and otherwise) are appallingly vulnerable to psychosis. After thinking about it some, I agree.
I want to quote (and endorse, and claim as important) the start of @Eli Tyre’s reply at the time:
For what it’s worth, I think this is directionally correct, and important, but I don’t necessarily buy it as worded.
Sometimes advanced techniques / tools allow power users to do more than they otherwise would be able to, but also break basic-level stuff for less advanced users. There are some people who are able to get a lot more out of their computers with a Linux install, and also, for most people, trying to use and work with Linux can totally interfere with pretty basic stuff that “just worked” when using Windows, or (if you do it wrong) just break your machine, without having the tools to fix it.
It’s correspondingly not that surprising to me if power tools for making big changes to people’s epistemologies sometimes have the effect of making some people worse at the basics. (Though obviously, if this is the case, a huge priority needs to be attending to and mitigating this dynamic.)
You wouldn’t notice a change in apparent gravity, but you would (at least I would) notice the angular acceleration, like when entering or exiting a banked turn.
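To put rough numbers on the banked-turn comparison (my own sketch, assuming a coordinated, level turn at a shallow bank angle): the apparent gravity stays aligned with the cabin floor, and its magnitude grows only as the load factor below, so at a 10° bank it sits about 1.5% above 1 g, which is very hard to feel, whereas the roll into and out of the bank is an angular acceleration the inner ear picks up directly.

```latex
% Load factor (apparent gravity, in units of g) in a coordinated level
% turn at bank angle \varphi -- illustrative numbers, not from the comment:
n = \frac{1}{\cos\varphi}, \qquad
n(10^\circ) \approx 1.015, \qquad
n(30^\circ) \approx 1.155
```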