Any thoughts on how we can help you be at peace?
hg00
Find someone to talk to thread
Complaining about people who cause problems is an undersupplied public service in our community. I appreciate Elo’s willingness to overcome the bystander effect. At the same time, gossiping about people on the internet should only be done with great care.
My understanding is, in the relationship between Katie and Andromeda, Andromeda wears the pants. And letting Andromeda wear the pants sucks up time and energy. Using rich person parenting styles has costs if you’re poor.
I’m generally sympathetic to parents who complain about unsolicited childrearing advice. But lots of people in the community have been helping Katie with Andromeda. This is admirable, and I think if these people have a hand in supporting a child, they deserve a voice in how it is raised.
Does anyone have thoughts about avoiding failure modes of this sort?
Especially in the “least convenient possible world” where some of the bullet points are actually true—like, if we’re disseminating principles for wannabe AI Manhattan Projects, and we’re optimizing the principles for the possibility that one of the wannabe AI Manhattan Projects is the real deal, what principles should we disseminate?
Most of my ideas are around “staying grounded”—spend significant time hanging out with “normies” who don’t buy into your worldview, maintain your sense of humor, fully unplug from work at least one day per week, have hobbies outside of work (perhaps optimizing explicitly for escapism in the form of computer games, TV shows, etc.) Possibly live somewhere other than the Bay Area, someplace with fewer alternative lifestyles and a stronger sense of community. (I think Oxford has been compared favorably to Berkeley with regard to presence of homeless people, at least.)
But I’m just guessing, and I encourage others to share their thoughts. Especially people who’ve observed/experienced mental health crises firsthand—how could they have been prevented?
EDIT: I’m also curious how to think about scrupulosity. It seems to me that team members for an AI Manhattan Project should ideally have more scrupulosity/paranoia than average, for obvious reasons. (“A bit above the population average” might be somewhere around “they can count on one hand the number of times they blacked out while drinking”—I suspect communities like ours already select for high-ish levels of scrupulosity.) However, my initial guess is that instead of directing that scrupulosity towards implementation of some sort of monastic ideal, they should instead direct that scrupulosity towards trying to make sure their plan doesn’t fail in some way they didn’t anticipate, trying to make sure their code doesn’t have any bugs, monitoring their power-seeking tendencies, seeking out informed critics to learn from, making sure they themselves aren’t a single point of failure, making sure that important secrets stay secret, etc. (what else should be on this list?) But, how much paranoia/scrupulosity is too much?
Thanks for saying what (I assume) a lot of people were thinking privately.
I think the problem is that Elon Musk is an entrepreneur not a philosopher, so he has a bias for action, “fail fast” mentality, etc. And he’s too high-status for people to feel comfortable pointing out when he’s making a mistake (as in the case of OpenAI). (I’m generally an admirer of Mr. Musk, but I am really worried that the intuitions he’s honed through entrepreneurship will turn out to be completely wrong for AI safety.)
I’m a right winger and I totally disagree with this comment.
For me, conservatism is about willingness to face up to the hard facts about reality. I’m just as cosmopolitan in my values as liberals are—but I’m not naive about how to go about achieving them. My goal is to actually help people, not show all my friends how progressive I am.
In practice I think US stability is extremely important for the entire world. Which means I’m against giving impulsive people the nuclear codes, and I’m also against Hillary Clinton’s “invade the world, invite the world” foreign policy.
Also: I don’t like Yudkowsky, but I would like him and the people in his circle to take criticism seriously, so… could we maybe start spelling his name correctly? It ends in a y. (I think Yudkowsky himself is probably a lost cause, but there are a lot of smart, rational people in his thrall who should not be. And many of them will take the time to read and seriously evaluate critical arguments if they’re well-presented.)
Universal Eudaimonia
LW was the first place I’ve been where women caring about their own interests is viewed as a weird inimical trait which it’s only reasonable to subvert, and I’m talking about PUA.
It seems like in the best case, PUA would be kind of like makeup. Lots of male attraction cues are visual, so they can be gamed when women wear makeup, do their hair, or wear an attractive outfit. Lots of female attraction cues are behavioral, so they can be gamed by acting or becoming more confident and interesting.
As one Metafilter user put it:
If you want to understand the appeal of the PUAs, you have to remember that it does work. Mixed in with the cod psychology and jargon are some boring but sensible tips. I would say the big four are:
1. Approach lots of women
2. Act confident
3. Have entertaining things to say
4. Dress and groom well
There are quite a few guys who haven’t really practiced those four things, which do take a bit of effort and experience. So when they start to follow the PUA movement, they absorb the nonsense, start doing the sensible, practical things, and find that they’re getting a whole lot more sex. So they conclude that the nonsense is absolutely true.
Do you have ethical problems with any of 1-4?
Ed. - It’s possible that when HughRistik said “not all PUA advice is like Roissy’s”, he meant “the PUA stuff we’re discussing on Less Wrong is Roissy-type stuff, and not all PUA stuff is like that”.
The community still seems in the middle of sensemaking around Leverage
Understanding how other parts of the community were similar/dissimilar to Leverage seems valuable from a sensemaking point of view.
Lots of parts of the post sort of implicitly present things as important, or ask you to draw conclusions without explicitly pointing out those conclusions.
I think you may be asking your reader to draw the conclusion that this is a dishonest way to write, without explicitly pointing out that conclusion :-) Personally, I see nothing wrong with presenting only observations.
Nice post. I think one thing which can be described in this framework is a kind of “distributed circular reasoning”. The argument is made that “we know sharing evidence for Blue positions causes harmful effects due to Green positions A, B, and C”, but the widespread acceptance of Green positions A, B, and C itself rests on the fact that evidence for Green positions is shared much more readily than evidence for Blue positions.
Thoughts on hacking aromanticism?
It’s not obvious to me that Ilya meant his comment as aggressively as you took it. We’re all primates and it can be useful to be reminded of that, even if we’re primates that go to space sometimes. Asking yourself “would I be responding similarly to how I’m responding now if I was, in fact, in a cult” seems potentially useful. It’s also worth remembering that people coded as good aren’t always good.
Your comment was less crass than Ilya’s, but it felt like you were slipping “we all agree my opponent is a clear norm violator” into a larger argument without providing any supporting evidence. I was triggered by a perception of manipulativeness and aggressive conformism, which put me in a more factionalistic mindset.
Yvain, #2 in all-time LW karma, has his own blog which is pretty great. The community has basically moved there and actually grown substantially… Yvain’s posts regularly get over 1000 comments. (There’s also Eliezer Yudkowsky’s facebook feed and the tumblr community.) Turns out online communities are hard, and without a dedicated community leader to tweak site mechanics and provide direction, you are best off just taking a single top contributor and telling them to write whatever they want. Most subreddits fail through Eternal September; Less Wrong is the only community I know of that managed to fail from the opposite effect of setting an excessively high bar for itself. Good online communities are an unsolved and nontrivial problem (but probably worth solving since the internet is where discussions are happening nowadays—a good solution could be great for our collective sanity waterline).
I haven’t visited Hacker News for a while, but it seemed like the leadership there was determined to create a quality community by whatever means possible, including solving Eternal September without oversolving it. I’ll bet there is a lot to learn from them.
Downvoted because I don’t want LW to be the kind of place where people casually make inflammatory political claims, in a way that seems to assume this is something we all know and agree with, without any supporting evidence.
Another perk that you didn’t mention: getting to work under Nate Soares, who I suspect is in the top 0.1% of the population where personal effectiveness is concerned. And the rest of the superlative MIRI team.
http://www.quora.com/What-are-some-alternatives-to-library-nu
Scribd, Oyster, and Kindle Unlimited all give you a “netflix for books” type experience where you pay a monthly fee of about $10 and read as many books as you want (not newer books, unfortunately). (Kindle Unlimited might be better if you have a non-Fire Kindle device it will work well with, but since publishers don’t like Amazon it will never have as good a selection as the other two.) Your local library may also have ebook lending options.
BTW, if you want papers rather than books, this browser extension or this thread (actually, use this more recent one) may be of interest, esp. this site or this site or this site or this site (some of these might be searching the same database) or http://reddit.com/r/scholar or the #icanhaspdf twitter hashtag or this Facebook group
Note that when using LibGen search engines, gwern writes: “I’ve noticed the Libgen search engines seem to have problems with long titles and/or colons,” so you may wish to strip those from your queries.
Someone else recommends searching on the Pirate Bay, especially when combined with “pdf”/other typical book file extensions.
Another cool site. reddit discussion of book piracy. List of LibGen mirrors. EBook search engine? Another list of sites. And another even longer one. Quora thread compiling sites. Another list. Another list. Haven’t tried this site yet. Or this.
Don’t forget about libraries either! https://www.worldcat.org
Reddit claims duckduckgo can be good for finding pirated stuff.
Advice about how to look better seems trivially useful and reputable… Overall, I find your claim that the intersection of palatable dating advice and useful dating advice is empty extremely implausible. What else would Clarisse Thorn’s “ethical PUA advice” be?
At the very least there should be some reasonably effective advice that’s only minimally unpalatable or whatever, like become a really good guitarist and impress girls with your guitar skillz.
Regarding PUA and evolutionary psychology: I don’t see how a self-selected population that’s under the influence of alcohol, and has been living with all kinds of weird modern norms and technology, has all that much in common with the EEA.
I think if someone has mild psychosis and you can guide them back to reality-based thoughts for a second, that is compassionate and a good thing to do in the sense that it will make them feel better, but also kind of useless because the psychosis still has the same chance of progressing into severe psychosis anyway—you’re treating a symptom.
If psychosis is caused by an underlying physiological/biochemical process, wouldn’t that suggest that e.g. exposure to Leverage Research wouldn’t be a cause of it?
If being part of Leverage is causing less reality-based thoughts and nudging someone into mild psychosis, I would expect that being part of some other group could cause more reality-based thoughts and nudge someone away from mild psychosis. Why would causation be possible in one direction but not the other?
I guess another hypothesis here is that some cases are caused by social/environmental factors and others are caused by biochemical factors. If that’s true, I’d expect changing someone’s environment to be more helpful for the former sort of case.
That seems like an accurate description to me. I’m inclined to think that if LW has any kind of creep problem, it’s more likely to be low-status creep problem, i.e. men who feel like social outcasts (possibly because they’re really smart and have always had a hard time finding people like them to make friends with) and have been programmed to alieve that as social outcasts, the only way they’re going to have sex is through creepy means.
And maybe part of the solution to this problem is to help men feel less like social outcasts. Group hug, everyone! I’m also in favor of discouraging creepy behavior verbally; I’m just suggesting this as an additional solution.
These claims seem rather extreme and unsupported to me:
“Lots of upper middle class adults hardly know how to have conversations...”
“the average workplace [is] more than 1/10th as damaging to most employees’ basic human capacities, compared to Leverage_2018-2019.”
I suggest that if you write a toplevel post, you search for evidence for and against them.
Elaborating a bit on my reasons for skepticism:
It seems like for the past 10+ years, you’ve been mostly interacting with people in CFAR-adjacent contexts. I’m not sure what your source of knowledge is on “average” upper middle class adults/workplaces. My personal experience is that normal people are comfortable having non-superficial conversations if you convince them you aren’t weird first, and normal workplaces are pretty much fine. (I might be overselecting on smaller companies where people have a sense of humor.)
A specific concrete piece of evidence: Joe Rogan has one of the world’s most popular podcasts, and the episodes I’ve heard very much seem to me like they’re “hitting new unpredictable thoughts”. Rogan is notorious for talking to guests about DMT, for instance.
The two observations seem a bit inconsistent, if you’ll grant that working class people generally have worse working conditions than upper middle class people—you’d expect them to experience more workplace abuse and therefore have more trauma. (In which context would an abusive boss be more likely to get called out successfully: a tech company or a restaurant?)
I’ve noticed a pattern where people like Vassar will make extreme claims without much supporting evidence and people will respond with “wow, what an interesting guy” instead of asking for evidence. I’m trying to push back against that.
I can imagine you’d be tempted to rationalize that whatever pathological stuff is/was present at CFAR is also common in the general population / organizations in general.