What’s the minimum set of powers (besides the ability to kick a user off the site) that would make being a Moderator non-frustrating? One-off feature requests as part of a “restart LW” focus seem easier than trying to guarantee tech support responsiveness.
“Strong LW diaspora writers” is a small enough group that it should be straightforward to ask them what they think about all of this.
Yes. This meetup is at the citadel.
My impression is that the OP says that history is valuable and deep without needing to go back as far as the big bang—that there’s a lot of insight in connecting the threads of different regional histories in order to gain an understanding of how human society works, without needing to go back even further.
The second, and most often already-implemented, way is to jump outside the system and change the game to a non-doomed one. If people can’t share the commons without defecting, why not portion it up into private property? Or institute government regulations? Or iterate the game to favor tit-for-tat strategies? Each of these changes has costs, but if the wage of the current game is ‘doom,’ each player has an incentive to change the game.
This is cooperation. The hard part is jumping out and getting the other person to change games with you, not whether or not better games to play exist.
Moloch has discovered reciprocal altruism since iterated prisoner’s dilemmas are a pretty common feature of the environment, but because Moloch creates adaptation-executors rather than utility maximizers, we fail to cooperate across social, spatial, and temporal distance, even if the payoff matrix stays the same.
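To make the iterated-prisoner’s-dilemma point concrete, here is a minimal sketch in Python. The payoff values (5 for temptation, 3 for mutual cooperation, 1 for mutual defection, 0 for being exploited) are the standard textbook numbers, assumed here for illustration rather than taken from anything above: defection dominates a single round, but once the game is iterated, mutual tit-for-tat outperforms mutual defection.

```python
# Minimal iterated prisoner's dilemma sketch. Payoffs are the standard
# illustrative T=5, R=3, P=1, S=0 values (an assumption for this example).

PAYOFF = {  # (my move, their move) -> my payoff; 'C' = cooperate, 'D' = defect
    ('C', 'C'): 3, ('C', 'D'): 0,
    ('D', 'C'): 5, ('D', 'D'): 1,
}

def always_defect(my_history, their_history):
    return 'D'

def tit_for_tat(my_history, their_history):
    # Cooperate first, then copy the opponent's previous move.
    return 'C' if not their_history else their_history[-1]

def play(strategy_a, strategy_b, rounds):
    """Return the two players' total payoffs over an iterated game."""
    hist_a, hist_b = [], []
    score_a = score_b = 0
    for _ in range(rounds):
        move_a = strategy_a(hist_a, hist_b)
        move_b = strategy_b(hist_b, hist_a)
        score_a += PAYOFF[(move_a, move_b)]
        score_b += PAYOFF[(move_b, move_a)]
        hist_a.append(move_a)
        hist_b.append(move_b)
    return score_a, score_b

print(play(always_defect, tit_for_tat, rounds=1))      # (5, 0): one-shot, defection wins
print(play(tit_for_tat, tit_for_tat, rounds=100))      # (300, 300): iterated cooperation
print(play(always_defect, always_defect, rounds=100))  # (100, 100): iterated mutual defection
```

The point of the sketch is just that the same payoff matrix supports a different equilibrium once moves can be conditioned on history, which is the sense in which iterating the game “favors tit-for-tat.”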
Even if you have an incentive to switch, you need to notice the incentive before it can get you to change your mind. Since many switches require all the players to cooperate and switch at the same time, it’s unlikely that groups will accidentally start playing the better game.
Convincing people that the other game is indeed better is hard when evaluating incentives is difficult. Add too much complexity and it’s easy to imagine that you’re hiding something. This is hard to get past, since getting past it requires trust, in a context where we may well be correct to distrust people: if only lawyers know enough law to write contracts, they should probably add loopholes that lawyers can find, or at least make the contracts complicated enough that only lawyers can understand them, so that you need to keep hiring lawyers to use your contracts. And in fact, contracts are generally complicated, full of loopholes, and basically require lawyers to deal with.
Also, most people don’t know about Nash equilibria, economics, game theory, etc., and it would be nice to be able to do things in a world with sub-utopian levels of understanding of incentives. Trying to explain game theory to people as a substep of getting them to switch to another game also runs into the same kind of justified mistrust as the lawyer example: if they don’t know game theory, and you’re saying that game theory says you’re right, and evaluating arguments is costly and noisy, and they don’t trust you at the start of the interaction, then it’s reasonable for them to distrust you even after the explanation, and not switch games.
I tend to think of downvoting as a mechanism to signal and filter low-quality content rather than as a mechanism to ‘spend karma’ on some goal or another. It seems that mass downvoting doesn’t really fit the goal of filtering content—it just lets you know that someone either is trolling LW in general or really doesn’t like a particular user, in a way that they aren’t articulating in a PM or a response to a comment/article.
That just means that the sanity waterline isn’t high enough that casinos have no customers—it could be the case that there used to be lots of people who went to casinos, and the waterline has been rising, and now there are fewer people who do.
Extending literally the worst part of most people’s lives for as long as you can, to the tune of over 20% of medical spending in the US.
I have the same, though it seems to be stronger when the finger is right in front of my nose. It always stops if the finger touches me.
Hobbes uses a similar argument in Leviathan—people are inclined towards not starting fights unless threatened, but if people feel threatened they will start fights. But people disagree about what is and isn’t threatening, and so (Hobbes argues) there needs to be a fixed set of definitions that all of society uses in order to avoid conflict.
See the point about why it’s weird to think that new affluent populations will work more on x-risk if current affluent populations don’t do so at a particularly high rate.
Also, it’s easier to move specific people to a country than it is to raise the standard of living of entire countries. If you’re doing raising-living-standards as an x-risk strategy, are you sure you shouldn’t be spending money on locating people interested in x-risk instead?
My guess is that Eli is referring to the fact that the EA community seems to largely donate wherever GiveWell says to donate, and that a lot of the discourse is centered around trying to figure out all of the effects of a particular intervention, weigh them against all other factors, and then come up with a plan of what to do. Said plan ends up incredibly sensitive to your being right about the prioritization, the facts of the situation, etc., in a way that will cause you to predictably fail to do as well as you could, due to factors like a lack of on-the-ground feedback suggesting other important areas, misunderstanding people’s values, errors in reasoning, and a lack of diversity in attempts to do something, so that if one of the parts fails, nothing gets accomplished.
I tend to think that global health is relatively non-controversial as a broad goal (nobody wants malaria! like, actually nobody) that doesn’t suffer from the “we’re figuring out what other people value” problem as much as other things, but I also think that that’s almost certainly not the most important thing for people to be dealing with now to the exclusion of all else, and lots of people in the EA community seem to hold similar views.
I also think that GiveWell is much better at handling that type of issue than people in the EA community are, but the community (at least the facebook group) is somewhat slow to catch up.
It seems that “donate to a guide dog charity” and “buy me a guide dog” are pretty different w/r/t the extent that it’s motivated cognition. EAs are still allowed to do expensive things for themselves, or even to ask for support in doing so.
Though, I can come up with a pretty convincing argument for the opposite.
Diseases only become drug-resistant as a result of natural selection in an environment where drugs that treat the disease are in use.
Third-world countries have issues with distributing drugs/treatments to everyone in the society, and so it is likely that diseases will not be completely eradicated, but will instead persist in an environment with drugs in use. Even in individuals, there are problems with consistently completing treatment, and so treatment is likely to put selection pressure on the disease without curing it.
On the other hand, diseases rarely become drug-resistant when they’re not exposed to the drugs.
Therefore, treating people in third-world countries increases the probability of producing drug-resistant strains of existing diseases.
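As a toy illustration of the selection argument above (a sketch under made-up parameters, not an epidemiological model), compare a strain under no drug pressure with one under partial drug pressure: with no treatment, resistance confers no advantage and stays a small minority; with partial coverage, the susceptible strain is held in check while the resistant one takes over, and the disease persists either way.

```python
# Toy model of selection under partial treatment coverage. All parameters
# (case counts, mutation rate, the crude carrying capacity) are illustrative
# assumptions, not epidemiological estimates.
import random

def simulate(coverage, generations=50, mutation_rate=1e-3, cap=100_000, seed=0):
    """Return (susceptible, resistant) case counts after `generations`.

    Each generation: every case infects ~2 new hosts, a few susceptible cases
    mutate to resistance, and treatment clears susceptible cases in a
    `coverage` fraction of hosts (resistant cases are unaffected).
    """
    rng = random.Random(seed)
    susceptible, resistant = 1_000, 0
    for _ in range(generations):
        susceptible *= 2
        resistant *= 2
        mutants = sum(1 for _ in range(susceptible) if rng.random() < mutation_rate)
        susceptible -= mutants
        resistant += mutants
        susceptible = int(susceptible * (1 - coverage))
        total = susceptible + resistant
        if total > cap:  # crude carrying capacity so counts stay bounded
            susceptible = int(susceptible * cap / total)
            resistant = int(resistant * cap / total)
    return susceptible, resistant

print(simulate(coverage=0.0))  # no drugs: disease persists, resistance stays a small minority
print(simulate(coverage=0.5))  # partial coverage: disease persists and resistance dominates
```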
It seems easier to evaluate “is trying to be relevant” than “has XYZ important long-term consequence”. For instance, investing in asteroid detection may not be the most important long-term thing, but it’s at least plausibly related to x-risk (and would be confusing for it to be actively harmful), whereas third-world health has confusing long-term repercussions, but is definitely not directly related to x-risk.
Even if third world health is important to x-risk through secondary effects, it still seems that any effect on x-risk it has will necessarily be mediated through some object-level x-risk intervention. It doesn’t matter what started the chain of events that leads to decreased asteroid risk, but it has to go through some relatively small family of interventions that deal with it on an object level.
Insofar as current society isn’t involved in object-level x-risk interventions, it seems weird to think that bringing third-world living standards closer to our own will lead to more involvement in x-risk intervention without some sort of more widespread availability of object-level x-risk interventions.
(Not that I care particularly much about asteroids, but it’s a particularly easy example to think about.)
Social feedback is an incentive, and the bigger the community gets the more social feedback is possible.
Insofar as utilitarianism is weird, negative social feedback is a major reason to avoid acting on it, and so early EAs must have been very strongly motivated to implement utilitarianism in order to overcome it. As the community gets bigger, utilitarianism is less weird and there is more positive support, and so it’s less of a social feedback hit.
This is partially good, because it makes it easier to “get into” trying to implement utilitarianism, but it’s also bad because it means that newer EAs need to care about utilitarianism relatively less.
It seems that saying that incentives don’t matter as long as you remove social-approval-seeking ignores the question of why the remaining incentives would push people towards actually trying.
It’s also unclear what’s left of the incentives holding the community together after you remove the social incentives. Yes, talking to each other probably does make it easier to implement utilitarian goals, but at the same time it seems that the accomplishment of utilitarian goals is not in itself a sufficiently powerful incentive, otherwise there wouldn’t be effectiveness problems to begin with. If it were, then EAs would just be incentivized to effectively pursue utilitarian goals.
My guess is just that the original reason was that there were societal hierarchies pretty much everywhere in the past, and they wanted some way to have nobles/high-status people join the army and be obviously distinguished from the general population, and to make it impossible for them to be demoted far enough to end up on the same level. Armies without the officer/non-officer distinction just didn’t get any buy-in from the ruling class, and so they wouldn’t exist.
I think there’s also a pretty large difference in training—becoming an officer isn’t just about skills in war, but also involves socialization to the officer culture, through the different War Colleges and whatnot.
You would want your noticing that something is bad to indicate, in some way, what would make the thing better. You want to know what in particular is bad and can be fixed, rather than the less informative “everything”. If your classifier triggers on everything, it tells you less on average about any given thing.
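To put a rough number on that last point (the probabilities below are made up for illustration), you can compute how much information a “this is bad” alarm carries about whether a given thing is actually bad; an alarm that fires on everything carries exactly zero bits.

```python
# Mutual information (in bits) between "the alarm fires" and "the thing is bad".
# All probabilities are illustrative assumptions.
from math import log2

def entropy(p):
    """Entropy of a binary event with probability p."""
    return 0.0 if p in (0.0, 1.0) else -(p * log2(p) + (1 - p) * log2(1 - p))

def alarm_information(p_bad, p_fire_given_bad, p_fire_given_good):
    """I(alarm; bad) = H(alarm) - H(alarm | bad), in bits."""
    p_fire = p_bad * p_fire_given_bad + (1 - p_bad) * p_fire_given_good
    h_fire_given_bad = (p_bad * entropy(p_fire_given_bad)
                        + (1 - p_bad) * entropy(p_fire_given_good))
    return entropy(p_fire) - h_fire_given_bad

# A selective alarm tells you something about any given thing it fires on.
print(alarm_information(p_bad=0.2, p_fire_given_bad=0.9, p_fire_given_good=0.1))  # ~0.36 bits
# An alarm that triggers on everything tells you nothing.
print(alarm_information(p_bad=0.2, p_fire_given_bad=1.0, p_fire_given_good=1.0))  # 0.0 bits
```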
My personal experience (going to Harvard, talking to students and admissions counselors) suggests that at least one of the following is true:
Teacher recommendations and the essays that you submit to colleges are also important in admissions, and are the main channel through which human capital not particularly captured by grades, along with personal development, gets signaled.
There are particular known-to-be-good schools from which colleges disproportionately admit students, and for slightly different reasons than they admit students from other schools.
I basically completely ignored signalling while in high school, and often prioritized taking more interesting non-AP classes over AP classes, and focused on a couple of extracurricular relationships rather than diversifying and taking on many. My grades and standardized test scores also suffered as a result of my investment in my robotics team.
I think that “crux” is doing a lot of work in that it forces the conversation to be about something more specific than the main topic, and makes it harder to move the goalposts partway through the conversation. If you’re not talking about a crux then you can write off a consideration as “not really the main thing” after talking about it.