«Boundaries», Part 2: trends in EA’s handling of boundaries

This is Part 2 of my «Boundaries» Sequence on LessWrong, and is also available on the EA Forum.

Summary: Here I attempt to constructively outline various helpful and harmful trends I see in the EA and rationality communities, which I think arise from a tendency to ignore the boundaries of social systems (rather than a generic tendency to violate norms). Taken too far, this tendency can manifest as a ‘lack of respect’ for boundaries, by which I mean a mixture of

  1. not establishing boundaries where they’d be warranted,

  2. crossing boundaries in clumsy/harmful ways, and

  3. not following good procedures for deciding whether to cross a boundary.

I propose that if some version of «respecting boundaries» were a more basic tenet of EA — alongside or within the principles of importance, neglectedness, and tractability — then EA would have fewer problems and do more good for the world.

The trends

Below are the (good and bad) trends I’d like to analyze from the perspective of respecting boundaries. Each of these trends has also been remarked on by at least a dozen self-identified EAs I know personally:

  1. expansive thinking: lots of EAs are good at ‘thinking outside the box’ about how to do good for the world, e.g., expanding one’s moral circle (example: Singer, 2011).

  2. niche-finding: EA helps people find impactful careers that are a good fit for their own strengths and weaknesses (example: 80k, 2014).

  3. work/life balance: EA has sometimes struggled with people working so hard for their cause areas that they harm themselves as individuals, in unsustainable ways.

  4. romances at work: people dating their co-workers is an example of professional boundaries being crossed by personal affairs.

  5. social abrasiveness: EA culture, and perhaps more so rationalist culture, is often experienced by newcomers or outsiders as abrasive or harsh. (Hypocrisy flag: I think I’ve been guilty of this, though hopefully less over the past few years as I’ve gotten older and reflected on these topics.)

  6. pivotal acts: numerous EAs seriously consider pivotal acts — i.e., potential unilateral acts by a powerful, usually AI-enabled, actor to make the world permanently safer — as the best way to do good for humanity (as opposed to pivotal processes carried out multilaterally).

  7. resistance from AI labs: well-established AI labs are resistant to adopting EA culture as a primary guiding influence.

  8. thought experiments: EA ethical discourse is full of thought experiments, many of which involve boundary violations.

I’m going to analyze each of the above trends in terms of boundaries, partly to illustrate the importance of the boundary concept, partly to highlight some cool things the EA movement seems to have done with boundaries, and partly to help address the more problematic trends above.

1. Expansive thinking

Consider a fictional person named Alex. A “job” for Alex is a scope of affairs (features of the world) that Alex is considered responsible for observing and handling. Alex might have multiple roles that we can think of as jobs, e.g., “office manager”, “husband”, “neighbor”.

Alex probably thinks about the world beyond the scope of his job(s). But usually, Alex doesn’t take actions outside the scope of his job(s).

The Effective Altruism movement has provided a lot of discourse and social context that helps people extend their sense of “job” to include important and neglected problems in the world that might be tractable to them personally, e.g., global poverty (tractable via donations to GiveDirectly).

In other words, EA has helped people to expand both their circle of compassion and their scope of responsibility to act. See also “The Self-Expansion Model of Motivation in Close Relationships” (Aron, 2023).

2. Niche-finding

Identifying areas of competence and vulnerability is important for scoping out Alex’s job(s). This is just standard career advice of the kind that 80,000 Hours might share to help people find a good job (Todd, 2014; Todd, 2021), but I’d like to think about it from the perspective of boundaries, so let me spell out some nuances.

By competence I just mean the ability to do a good job with things from the perspective of the relevant stakeholders (which might depend on the stakeholders). By vulnerabilities, I mean ways in which a person’s functioning or wellbeing can be damaged or harmed. A person is, after all, a physical system, and physical systems can pretty much always be damaged in some way; the damage needn’t be limited to the body. For Alex, we can imagine instances of:

  • (high vulnerability, low competence) Alex is bad at managing his finances. His friend Betty helps him pay his taxes every year and plan for retirement. If Betty messes up, Alex could end up poor or in trouble with the government. So, for Alex this is an area of high vulnerability and low competence.

  • (high vulnerability, high competence) Alex is good at keeping his car maintained. This is also a point of vulnerability, because if his car breaks down on the highway, he could get hurt. But he’s competent in this area, so he can take care of it himself.

  • (low vulnerability, high competence) Alex is also good at graphic design work. And when he occasionally designs graphics that people turn out not to like, he’s totally fine. This is an area of low vulnerability and high competence.

In an ideal world, Alex’s job has him engaging with affairs that he’s good at handling, and that won’t harm him if mishandled, i.e., areas of high competence and low vulnerability.

3. Work/life balance

More recently, leaders in EA have been trying to talk more about boundary-setting (Freedman, 2021; GWWC). I think a big reason this discussion needed to happen was that, early on, EA was very much about expanding and breaking down boundaries, which may have led some people to dissolve or ignore what would otherwise be “work/life” or “personal/professional” boundaries that protect their wellbeing as individuals. In other words, people started to burn out (‘Elizabeth’, 2018; Toner, 2019).

Since jobs change over time, Alex may need to establish boundaries — in this case, socially reinforced constraints — to protect vulnerable aspects of his personal life from the day-to-day dealings of his work.

Whichever career path turns out to be better for Alex in the end, the boundaries he sets or doesn’t set will be a major factor determining his experience.

4. Romances at work

Effective altruism’s “break lots of boundaries” attitude may also have contributed to — or arisen from — a breakdown of professional/social distinctions, which might have led to the frequent intertwining of romantic relationships with work (Wise, 2022) and/or dating across power differentials despite taboos to the contrary (Wise, 2022). Whatever position one takes on these issues, at a meta level we can observe that the topic is about where boundaries should or should not be drawn between people.

5. Social abrasiveness

EAs can sometimes come across as “abrasive”, which, taken literally, basically means “creating friction between boundaries”. Stefan Schubert writes, in his post “Naive Effective Altruism and Conflict” (2020):

… people often have unrealistic expectations of how others will react to criticism. Rightly or wrongly, people tend to feel that their projects are their own, and that others can only have so much of a say over them. They can take a certain amount of criticism, but if they feel that you’re invading their territory too much, they will typically find you abrasive. And they will react adversely.

Notice the language about ‘territory’, a sense of boundary that is being crossed. This observation isn’t just about an arbitrary social norm; it’s about a sense of space where something is being protected and, despite that, invaded. Irrespective of whether EA should be more or less observant of social niceties, a sense of boundaries is key to the pattern he’s describing.

Relatedly, Duncan Sabien recently wrote a LessWrong post entitled “Benign Boundary Violations”, in which he argues that some harmless boundary violations are actively healthy for social and cultural dynamics. Again, irrespective of the correctness of the post, the «boundary» concept is crucial to the social dynamics in question (and to the post’s title!).

6. Pivotal Acts

(This section is more characteristic of rationalist discourse than of EA discourse, but EA is definitely heavily influenced by rationalist memes.)

I recently wrote about a number of problems that arise from the intention to carry out a “pivotal act”, i.e., a major unilateral act by a single agent or institution that makes the world safer (–, 2022). I’ll defer to that post for a mix of observations and value judgements on that topic. For this post, it suffices to say that such a “pivotal act” plan would involve violating a lot of boundaries.

7. Resistance from AI labs

A lot of EAs have the goal of influencing AI labs to care more about EA principles and objectives. Many have taken jobs in big labs to push or maintain EA as a priority in some way. I don’t have great written sources on this, but it seems to me that those people often end up frustrated that they can’t convince their employers to be more caring and ambitious about saving the world. At the very least, one can publicly observe that

  • DeepMind and OpenAI both talk about trying to do a lot of good for humanity, and

  • both have employed numerous self-identified EAs who intended to promote EA culture within the organization, but

  • neither lab has publicly espoused the EA principles of importance, tractability, and neglectedness.

  • Facebook AI Research has not, as far as I know, employed any self-identified EAs in a full-time capacity (except maybe as interns). I expect this to change at some point, but for now EA representation in FAIR is weak relative to OpenAI and DeepMind.

There are many reasons why individual institutions might not take it on as their job to make the whole world safe, but I posit that a major contributing factor is the sense that doing so would violate a lot of boundaries. (This is kind of the converse of the observation about ‘pivotal acts’.) By contrast, in my personal experience I’ve found it easier to argue for people to play a part in a pivotal process, i.e., a distributed process whereby the world is made safer but where no single institution or source of agency has full control to make it happen.

8. Thought experiments

EAs think a lot about thought experiments in ethics, and many of these involve norm violations that are, more specifically, boundary violations; for a recent example, see “Consequentialists (in society) should self-modify to have side constraints” (R. Cotton-Barratt, 2022).

What to do?

Proposing big changes to how EA should work is beyond the scope of this post. I’m mostly just advocating for more thinking about boundaries as important determinants of what happens in the world and how to do good.

What’s the best way for EA to accommodate this? I’m not sure! Perhaps in the trio of “importance, neglectedness, and tractability”, we could replace “tractability” with “approachability”, to highlight that social and sociotechnical systems need to be “approached” in a way that somehow handles their boundaries, rather than simply being “treated” (the root of ‘tractable’) like illnesses.

Irrespective of how to implement a change, I do think that “boundaries” should probably be treated as first-class objects in our philosophy of do-gooding, alongside and distinct from both “beliefs” (as treated in the epistemic parts of the LessWrong sequences) and “values|objectives|preferences”.

Recap

In this post, I described some trends for which thinking about boundaries could be helpful in understanding and improving the EA movement: patterns of boundary expansion and violation in expansive thinking, niche-finding, work/life balance, romances at work, social abrasiveness, pivotal act intentions, AI labs’ apparent resistance to EA rhetoric and ideology, and boundary-violating thought experiments. I haven’t done much to clarify what, if anything, should change as a result of these observations, although I’m fairly confident that making «boundaries» a more central concept in EA discourse would be a good idea, such as by replacing “tractability” with “approachability” or another term more evocative of spatial metaphor.

This was Part 2 of my «Boundaries» Sequence on LessWrong, and is also available on the EA Forum.