Did not, despite this offer carrying a quite large social reward. Seems like people aren’t interested.
Going forward, I think there’s a “revert to draft” feature! Or at least I noticed that option on the EA forum
This part feels underdefined:
A program P is more useful than Hugh for X if, for every project using H to accomplish X, we can efficiently transform it into a new project which uses P to accomplish X. The new project shouldn’t be much more expensive—it shouldn’t take much longer, use much more computation or many additional resources, involve much more human labor, or have significant additional side-effects.
Why quantify over projects? Why is it not sufficient to say that P is as useful as H if it can also accomplish X?
Seems like you want to say that P can achieve X in more ways, but I fail to see why that is obviously relevant. What even is a project?
Or is this some kind of built in measure to prevent side effects, by making P achieve X in a humanlike way? Still doesn’t feel obvious enough.
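For reference, here is one way to formalize the quoted definition as I read it (the notation is mine, not from the post): write Proj_H(X) for the set of projects that use H to accomplish X. The definition then quantifies over all of them, whereas the weaker condition I’m asking about only demands existence:

```latex
% My reading of the quoted definition (notation is mine, not from the post):
% P is "more useful" than H for X iff every H-project can be cheaply
% transformed into a P-project for the same task.
P \succeq_X H \;\iff\;
\forall \pi \in \mathrm{Proj}_H(X)\;
\exists\, T \text{ efficient}:\;
T(\pi) \in \mathrm{Proj}_P(X)
\;\wedge\; \mathrm{cost}(T(\pi)) \le \alpha \cdot \mathrm{cost}(\pi)

% The weaker condition would only demand existence of some P-project:
\exists\, \pi' \in \mathrm{Proj}_P(X)
```

If that reading is right, the universal quantifier is what would make P a drop-in replacement for H in whatever larger context H was embedded in, rather than merely able to do X somehow; whether that is the intended motivation is exactly what I’m asking.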
Honestly, I’m hardly solving this for myself. Just trying to shape the community in such a way that others are doing a bit better. I’d expect a lot of good to come from that. So let’s not get into the frame of emotionally supporting me. That’s not the outcome I’m looking for.
What does influence on the social environment look like to you?
It’s fuzzy, but it means not being left in the dark when you’re in need in some way. People maybe checking in if you’ve been feeling bad. People paying attention to your opinion when you think there’s something that needs to change, and actually changing their behavior accordingly if they find themselves agreeing.
I think a key concept is leverage.
I suspect major progress would be made if someone managed to define this better. I think it’s the Hamming problem of this issue.
I notice you don’t talk at all about the outcomes of the volunteering projects you did. What did you think of them, apart from the effect on status?
That’s a bit of a broad question. Not sure what you’re looking for. The project in question is this one. It’s moving forward, but quite a bit slower than anticipated.
Does it seem to you like the EA volunteer efforts are organized to allow for the flakiness you describe, or does it seem like they are being impacted negatively?
Except for the organisational overhead, they’re relatively robust. This one has been running for a few months now, and one guy has kept showing up, which has kept it going.
There’s a possibility for corruption here, as I briefly mentioned, if people get so deprived that they will sacrifice their other needs or values for the sake of status alone.
I considered that to be obvious in writing this. I’m not necessarily talking about the problem of getting status regardless of everything else. I’m also not talking about how to get status as an individual. I’m rather talking about getting the whole community a sense of status while keeping our other values intact.
“Focus on creating value” might be a great individual solution if you’re talented enough. People recognize you’re not Goodharting as much and promote you accordingly. But it doesn’t help everyone. It doesn’t scale. If it works for you, that just means you’ve been able to win these competitions so far. Good for you.
As for the collective version: judging from the fact that we’ve made some meaningful progress with this at LW Netherlands, there’s clearly more traction to be gained.
Yes, yes. All of this.
On the other hand, it also means that there’s another sense in which “we can all be high-status”: within our respective local communities. I’m curious how you feel about that, because that was quite adequate for me for a long time, especially as a student.
This is what we’ve built with LessWrong Netherlands. We call it the Home Bayes and it’s a group of 15ish people with tight bonds and formal membership. It works like a charm.
On a broader level, one actionable idea I’ve been thinking about is to talk less about existential risk being “talent constrained”, so that people who can’t get full-time jobs in the field don’t feel like they’re not talented. A more accurate term in my eyes is “field-building constrained”.
I’m glad someone else had this idea.
Coming from my own startup, with plenty of talent around but so far not a lot of funding, I think the problem isn’t initiative. It’s getting the funding to the right initiatives. This is why 80K has listed grantmaking as one of their highest-impact careers: the money is there, but given the CEA assumption that a random cause has zero expected value, grantmakers have to single out the good ones, and that’s happening so slowly that a lot of ideas are stranded before they even get “whitelisted”.
I suspect there may actually be a function to that
Yep. Let’s be wary of hubris. Let’s not dismiss things we don’t fully understand.
What do you mean by ego?
The definition is debated, but most people in EA agree it’s about utilitarianism, which is essentially just counting up the happiness of everyone together, including yourself. There are different versions of it, but as far as I know none of them ignore your own happiness.
So buying yourself an ice cream may not be “altruistic” in the common sense, but it is utilitarian.
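To make that counting-up concrete, here’s a toy sketch (the names and numbers are mine, purely hypothetical):

```python
# Toy total-utilitarian calculus: overall utility is just the sum of
# everyone's happiness, and "everyone" includes the agent themselves.

def total_utility(happiness):
    """Sum happiness over all people, the agent included."""
    return sum(happiness.values())

# Hypothetical numbers: buying yourself an ice cream raises your own
# happiness and leaves everyone else's unchanged.
before = {"you": 5, "alice": 5, "bob": 5}
after = {"you": 7, "alice": 5, "bob": 5}

# The act comes out utilitarian-positive even though it only benefits you.
assert total_utility(after) > total_utility(before)
```

The point of the sketch is just that your own term sits in the sum like anyone else’s, so purely self-directed acts can still raise the total.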
As a community, organising yourselves as a hierarchy might be utilitarian when, despite the suffering it may cause, it resolves more suffering outside the community than it creates. This is probably true to some extent, because hierarchies may help a community get more done, with the smartest people making the decisions.
Lately when I’m confronted with extreme thought experiments that are repugnant on both sides, my answer has been “mu”. No I can’t give a good answer, and I’m skeptical that anyone can.
Balboa Park to West Oakland is our established world. We have been carefully leaning into its edge, slowly crafting extensions of our established moral code, adding bits to it and refactoring old parts to make it consistent with the new stuff.
It’s been a mythical effort. People above our level have spent their 1000-year-long lifetimes mulling over their humble little additions to the gigantic established machine that is our morality.
And this machine has created Mediocristan: a predictable world, with some predictable features, within which there is always a moral choice available. Without these features our moral programming would be completely useless. We can behave morally precisely because the cases in which there is no moral answer don’t happen much.
So please, stop asking me whether I’d kill myself to save 1000 babies from 1000 years of torture. Both outcomes are repugnant and the only good answer I have is “get out of Extremistan”.
The real morality is to steer the world towards a place where we don’t need morality. Extend the borders of Mediocristan to cover a wider set of situations. Bolster it internally so that the intelligence required for a moral choice becomes lower—allowing more people to make it.
No morality is world-independent. If you think you have a good answer to morality, you have to provide it with a description of the worlds in which it works, and a way to make sure we stay within those bounds.
In our WEIRD culture, unilateral is probably better. But it also reinforces that culture, and I have my qualms with it. I think we’re choosing rabbit in a game of stag. You’re essentially advocating for rabbit (which may or may not be a good thing).
In a highly individualistic environment you can’t work things out *as a community* because there aren’t any proper coherent communities, and people aren’t going to sync their highly asynchronous lives with yours.
In a highly collectivist environment you can work things out alone, but it’s not as effective as moving in a coordinated fashion because you actually do have that strictly superior option available to you.
I believe the latter has more upside potential, was the default in our ancestral environment, and has the ability to resolve equilibria of defection. The former is more robust because it’s resistant to entropic decay, scales beyond Dunbar’s number, and doesn’t rely on good coordinators.
So I would say “unilateral or GTFO” is a bit too cynical. I’d say “be aware of which options (unilateral or coordinated) are available to you”. In a low-trust corporate environment it’s certainly unilateral. In a high-trust community it is probably coordinated, and let’s keep it that way.
IMO this is a disagreement of topic, not a disagreement of style. Klein is answering the question “what social truth is convenient?” and Harris is answering the question “what natural truth is accurate?”. Seems like simply another failure of proper operationalisation.
Thank you for your criticism. We need more of that.
I am not aiming to get a formal diploma here, and I don’t think you plan on awarding me any.
A pipeline has two purposes: training people and identifying good students. We want to do the latter as much as the former. Not just for the sake of the institutions we ultimately wish to recommend candidates to, but also for the sake of the candidates, who want to know whether they are up to the task. We recently ran a poll on Facebook asking “what seems to be your biggest bottleneck to becoming a researcher?”, and “I’m not sure I’m talented enough” was by far the most popular option (doubling the next one).
I agree that it looks silly right now because we’re a tiny startup that uploaded 2 videos and a few guides to some textbooks, and it will probably be this small for at least a year to come. You got me to consider using something more humble in the meantime. I’ll bring it up in our next meeting.