Honestly, I’m hardly solving this for myself. Just trying to shape the community in such a way that others are doing a bit better. I’d expect a lot of good to come from that. So let’s not get into the frame of emotionally supporting me. That’s not the outcome I’m looking for.
What does influence on the social environment look like to you?
It’s fuzzy, but it means not being left in the dark when you’re in need in some way. People maybe checking in if you’ve been feeling bad. People paying attention to your opinion when you think there’s something that needs to change, and actually changing their behavior accordingly if they find themselves agreeing.
I think a key concept is leverage.
I suspect major progress would be made if someone managed to define this better. I think it’s the Hamming problem of this issue.
I notice you don’t talk at all about the outcomes of the volunteering projects you did. What did you think of them, apart from the effect on status?
That’s a bit of a broad question. Not sure what you’re looking for. The project in question is this one. It’s moving forward, but quite a bit slower than anticipated.
Does it seem to you like the EA volunteer efforts are organized to allow for the flakiness you describe, or does it seem like they are being impacted negatively?
Except for organisational overhead, they’re relatively robust. Been running for a few months now, and this one guy has kept showing up, so that’s kept it going.
There’s a possibility for corruption here, as I briefly mentioned, if people get so deprived that they will sacrifice their other needs or values for the sake of status alone.
I considered that to be obvious in writing this. I’m not necessarily talking about the problem of getting status regardless of everything else. I’m also not talking about how to get status as an individual. I’m rather talking about getting the whole community a sense of status while keeping our other values intact.
“Focus on creating value” might be a great individual solution if you’re talented enough. People recognize you’re not Goodharting as much, and they promote you accordingly. But it doesn’t help everyone. It doesn’t scale. If it works for you, that just means you’ve been able to win these competitions so far. Good for you.
As for the collective version: judging from the fact that we’ve made some meaningful progress with this at LW Netherlands, there’s clearly more traction to be had.
Yes, yes. All of this.
On the other hand, it also means that there’s another sense in which “we can all be high-status”: within our respective local communities. I’m curious how you feel about that, because that was quite adequate for me for a long time, especially as a student.
This is what we’ve built with LessWrong Netherlands. We call it the Home Bayes and it’s a group of 15ish people with tight bonds and formal membership. It works like a charm.
On a broader level, one actionable idea I’ve been thinking about is to talk less about existential risk being “talent-constrained”, so that people who can’t get full-time jobs in the field don’t feel like they’re not talented. A more accurate term in my eyes is “field-building-constrained”.
I’m glad someone else had this idea.
Coming from my own startup, with plenty of talent around but so far not a lot of funding, I think the problem isn’t initiative; it’s getting the funding to the right initiatives. This is why 80K has listed grantmaking as one of their highest-impact careers: the money is there, but given the CEA assumption that a random cause has zero expected value, grantmakers have to single out the good ones, and that’s happening so slowly that a lot of ideas are stranded before they even get “whitelisted”.
I suspect there may actually be a function to that.
Yep. Let’s be wary of hubris. Let’s not dismiss things we don’t fully understand.
What do you mean by “ego”?
The definition is debated, but most people in EA agree it’s about utilitarianism, which is essentially just counting up the happiness of everyone together, including yourself. There are different versions of it, but as far as I know none of them ignore your own happiness.
So buying yourself an ice cream may not be “altruistic” in the common sense, but it is utilitarian.
As a community, organising yourself as a hierarchy might be utilitarian when, despite the suffering it may cause, it resolves more suffering outside of the community than it causes. This is probably true to some extent because hierarchies might cause a community to get more done, with the smartest people making the decisions.
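To make that accounting concrete, here’s a toy tally in Python. The numbers are purely my own invention for illustration; the only point is the shape of the comparison:

```python
# Toy utilitarian accounting with made-up numbers: a hierarchy is net-positive
# iff the suffering it relieves outside the community exceeds the suffering
# it creates inside it.
internal_cost = 10      # suffering the hierarchy causes its own members
external_benefit = 25   # extra suffering relieved because more gets done
net_utility = external_benefit - internal_cost

print(net_utility)      # 15
print(net_utility > 0)  # True -> utilitarian by this (very simplistic) tally
```

Of course, the hard part is estimating those two quantities in practice, not the subtraction.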
Lately when I’m confronted with extreme thought experiments that are repugnant on both sides, my answer has been “mu”. No I can’t give a good answer, and I’m skeptical that anyone can.
Balboa Park to West Oakland is our established world. We have been carefully leaning into its edge, slowly crafting extensions of our established moral code, adding bits to it and refactoring old parts to make it consistent with the new stuff.
It’s been a mythical effort. People above our level have spent their 1000-year-long lifetimes mulling over their humble little additions to the gigantic established machine that is our morality.
And this machine has created Mediocristan: a predictable world, with some predictable features, within which there is always a moral choice available. Without these features our moral programming would be completely useless. We can behave morally precisely because the cases in which there is no moral answer don’t happen very often.
So please, stop asking me whether I’d kill myself to save 1000 babies from 1000 years of torture. Both outcomes are repugnant and the only good answer I have is “get out of Extremistan”.
The real morality is to steer the world towards a place where we don’t need morality. Extend the borders of Mediocristan to cover a wider set of situations. Bolster it internally so that the intelligence required for a moral choice becomes lower—allowing more people to make it.
No morality is world-independent. If you think you have a good answer to morality, you have to provide it with a description of the worlds in which it works, and a way to make sure we stay within those bounds.
In our WEIRD culture, unilateral is probably better. But it also reinforces that culture, and I have my qualms with it. I think we’re choosing rabbit in a game of stag. You’re essentially advocating for rabbit (which may or may not be a good thing).
In a highly individualistic environment you can’t work things out *as a community* because there aren’t any proper coherent communities, and people aren’t going to sync their highly asynchronous lives with yours.
In a highly collectivist environment you can work things out alone, but it’s not as effective as moving in a coordinated fashion because you actually do have that strictly superior option available to you.
I believe the latter (coordinated action) has more upside potential, was the default in our ancestral environment, and has the ability to resolve equilibria of defection. The former (unilateral action) is more robust, because it’s resistant to entropic decay, scales beyond Dunbar’s number, and doesn’t rely on good coordinators.
So I would say “unilateral or GTFO” is a bit too cynical. I’d say “be aware of which options (unilateral or coordinated) are available to you”. In a low-trust corporate environment it’s certainly unilateral. In a high-trust community it is probably coordinated, and let’s keep it that way.
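For what it’s worth, the stag-hunt structure I keep pointing at can be made concrete with the classic payoff matrix. The numbers below are illustrative, not canonical; the point is that both mutual stag and mutual rabbit are self-reinforcing:

```python
# Classic stag-hunt payoffs (illustrative numbers): hunting stag together
# beats rabbit, but stag yields nothing without the other player's cooperation.
PAYOFF = {  # (my_move, their_move) -> my payoff
    ("stag", "stag"): 4, ("stag", "rabbit"): 0,
    ("rabbit", "stag"): 3, ("rabbit", "rabbit"): 3,
}

def best_response(their_move):
    """The move that maximizes my payoff, given the other player's move."""
    return max(["stag", "rabbit"], key=lambda m: PAYOFF[(m, their_move)])

print(best_response("stag"))    # "stag"   -> coordinated play sustains itself
print(best_response("rabbit"))  # "rabbit" -> so does mutual defection
```

That’s why which equilibrium you’re in (high-trust community vs. low-trust corporate environment) matters more than which move is “better” in the abstract.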
IMO this is a disagreement of topic, not a disagreement of style. Klein is answering the question “what social truth is convenient?” and Harris is answering the question “what natural truth is accurate?”. Seems like simply another failure of proper operationalisation.
Thank you for your criticism. We need more of that.
I am not aiming to get a formal diploma here, and I don’t think you plan on awarding me any.
A pipeline has two purposes: training people and identifying good students. We want to do the latter as much as the former. Not just for the sake of the institutions we ultimately wish to recommend candidates to, but also for the sake of the candidates themselves, who want to know whether they are up to the task. We recently ran a poll on Facebook asking “what seems to be your biggest bottleneck to becoming a researcher?”, and “I’m not sure I’m talented enough” was by far the most popular option (twice the next one).
I agree that it looks silly right now because we’re a tiny startup that uploaded 2 videos and a few guides to some textbooks, and it will probably be this small for at least a year to come. You got me to consider using something more humble in the meantime. I’ll bring it up in our next meeting.
LessWrong is a movement that seriously tries to better the world by a significant margin, not shying away from the most unconventional strategies. Most notably, we believe in the prime importance of securing AI Safety, and we subscribe to the values of transhumanism. Knowing that nature is not a fair enemy, we put in a great effort to grow as individuals and as a community, hoping to gather enough strength to live up to the task. We do this in various ways: applying epistemic standards at least as rigorous as those of science, thinking hard about recent advances in philosophy and how to put their lessons into practice, while keeping an open mind to the benefits of subjective wisdom like spirituality and our intuitions.
Would you share your model? My intuition is that there are no topics or opinions that should be shunned, because if tolerating a topic leads to bad outcomes, then you just have bad epistemics. In other words, it’s a band-aid solution for your average conflict-theorist internet community, one that I think the thoroughly mistake-theorist LW doesn’t need.
There would be honor in it if we could handle this.
Now I feel bad for going quiet. Still love you guys!
Appreciate your attempt to address a touchy subject. Do keep in mind that epistemic humility applies tenfold here. The subject is littered with blindspots and motivated reasoning, and I haven’t come across anyone with a remotely satisfying answer yet.
And it’s never enough; their appetite is endless.
That’s an assumption, and I think it’s wrong. I think apple eaters are satisficers, like everyone else. I, for one, don’t suffer from the brandishing. I’ve got access to enough apples.
My model is that it’s a problem of inequality. You see, apple holders get a large part of their status from which apple eater they associate with. Now when it comes to status, one naturally wants to be in the upper regions:
Imagine a world where, every few years, the 90% highest-status inhabitants are selected to replace the remaining 10%. If you’d want to remain in this world indefinitely, how much status would you need? Indeed, from the perspective of our genes, only the maximum is good enough.
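A toy simulation of that selection dynamic (parameters are my own invention): each round the bottom 10% is culled and replaced with fresh entrants, so the bar keeps ratcheting upward, and any fixed status level short of the very top eventually falls below it:

```python
import random

def rounds_survived(my_status, pop_size=1000, cull=0.10, max_rounds=200, seed=0):
    """How many culls a fixed status level survives when, each round, the
    bottom `cull` fraction of the population is removed and replaced by
    fresh uniform draws (so the cutoff keeps ratcheting upward)."""
    rng = random.Random(seed)
    pop = [rng.random() for _ in range(pop_size)]
    for r in range(max_rounds):
        cutoff = sorted(pop)[int(cull * pop_size)]
        if my_status < cutoff:
            return r  # culled this round
        survivors = [s for s in pop if s >= cutoff]
        pop = survivors + [rng.random() for _ in range(pop_size - len(survivors))]
    return max_rounds  # still standing when we stop counting

print(rounds_survived(0.50))   # middling status is culled within a handful of rounds
print(rounds_survived(0.999))  # near-maximal status outlasts the whole run
```

The cutoff never stops rising, which is the simulated version of “only the maximum is good enough”.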
Over the decades, inequality among apple eaters has greatly increased (another assumption). Compared to decades before, it’s a lot harder to find an apple eater who is truly on top of their shit. And so, apple holders are more reluctant to share their apples with someone of comparable (sexual) status, especially in the lower regions.
But it could be something else entirely. In any case, brandishing doesn’t have to be a problem for apple eaters.
As it stands now, I can’t accept this solution, simply because it doesn’t inform the right decision.
Imagine you were Beauty and q(y) was 1, and you were offered that bet. What odds would you take?
Our models exist to serve our actions. There is no such thing as a good model that informs the wrong action. Probability must add up to winning.
Or am I interpreting this wrong, and is there some practical reason why taking 1/2 odds actually does win in the q(y) = 1 case?
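I don’t know the exact q(y) setup, but here’s the standard per-awakening betting arithmetic (assuming the usual protocol: fair coin, Heads → 1 awakening, Tails → 2 awakenings, the same bet offered and settled at every awakening), which is what makes me doubt that 1/2 odds win:

```python
# Sketch of per-awakening betting in the standard Sleeping Beauty protocol
# (an assumption on my part, not necessarily the q(y) = 1 variant discussed):
# fair coin; Heads -> 1 awakening, Tails -> 2 awakenings.

def ev_per_experiment(stake, payout):
    """Expected value per experiment of risking `stake` on Heads (receiving
    `payout` if Heads) at every awakening, averaged over the coin flip."""
    heads_awakenings, tails_awakenings = 1, 2
    ev_if_heads = heads_awakenings * (payout - stake)  # bet wins once
    ev_if_tails = tails_awakenings * (-stake)          # bet loses twice
    return 0.5 * ev_if_heads + 0.5 * ev_if_tails

print(ev_per_experiment(1, 2))  # even odds (credence 1/2): -0.5, loses on average
print(ev_per_experiment(1, 3))  # 2:1 odds (credence 1/3): 0.0, break-even
```

Under this protocol, only odds matching a credence of 1/3 break even, which is the “probability must add up to winning” point in betting form.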