My off-the-cuff, high-level response to the GiveWell independence section + final conclusions (without having fully digested them) is:
Ben seems to be arguing that GiveWell should either become much more independent from Good Ventures and OpenPhil (and probably move to a separate office), so that it can actually present the average donor with unbiased, relevant information (rather than information entangled with Good Ventures’ goals/models)
or
The other viable option is for GiveWell to give up for now on most public-facing recommendations and become a fully-funded branch of Good Ventures, to demonstrate to the world what GiveWell-style methods can do when applied to a problem where it is comparatively easy to verify results.
I can see both of these as valid options to explore, and I agree that going to either extreme would probably maximize particular values.
But it’s not obvious that either of those maximizes area-under-the-curve-of-total-values.
There’s value in people with deep models being able to share those models. Bell Labs worked by having people be able to bounce ideas off each other, casually run into each other, and explain things to each other iteratively. My current sense is that I wish there were more opportunity for people in the EA landscape to share models deeply with each other on a casual, day-to-day basis, rather than less (while still sharing as much as possible with the general public, so that they can also get engaged).
This does come with tradeoffs: it neither maximizes independent judgment, nor maximizes output, nor most easily avoids particular epistemic and integrity pitfalls. But it’s where I expect the most total value to lie.
There’s value in people with deep models being able to share those models. Bell Labs worked by having people be able to bounce ideas off each other, casually run into each other, and explain things to each other iteratively. My current sense is that I wish there were more opportunity for people in the EA landscape to share models deeply with each other on a casual, day-to-day basis, rather than less (while still sharing as much as possible with the general public, so that they can also get engaged).
Trying to build something kind of like Bell Labs would be great! I don’t see how it’s relevant to the current discussion, though.
Right now, we (maybe? I’m not sure) have something like a few different mini-Bell-Labs, each of which has its own paradigm (and specialists within that paradigm).
The world where GiveWell, Good Ventures, and OpenPhil share an office is more Bell-Labs-like than one where they all have separate offices. (FHI and UK CEA are in a similar situation, as are CFAR/MIRI/LW.) One of your suggestions in the blog post was specifically that they split up into different, fully separate entities.
I’m proposing that Bell-Labs-ness exists on a spectrum, that sharing office space is a mechanism for being more Bell-Labs-like, and that generally being more Bell-Labs-like is better (at least in a vacuum).
(My shoulder Benquo now says something like “but if your models are closely entangled with those of your funders, don’t pretend like you are offering neutral services.” Or maybe “it’s good to share office space with people thinking about physics, because that’s object level. It’s bad to share office space with the people funding you.” Which seems plausible, but not overwhelmingly obvious given the other tradeoffs at play.)
People working at Bell Labs were trying to solve technical problems, not marketing or political problems. Sharing ideas across different technical disciplines is potentially a good thing, and I can see how FHI and MIRI in particular are a little bit like this, though writing white papers is a very different activity, even within a technical field, from figuring out how to make a thing work. But it doesn’t seem like any of the other orgs substantially resemble Bell Labs at all, and the benefits of collocation for nontechnical projects are very different from the benefits for technical projects: they have more to do with narrative alignment (checking whether you’re selling the same story), and less to do with opportunities to learn things of value outside the context of a shared story.
Collocation of groups representing (others’) conflicting interests represents increased opportunity for corruption, not for generative collaboration.
Okay. I’m not sure whether I agree precisely, but I agree that’s a valid hypothesis, one I hadn’t considered before in quite these terms, and it updates my model a bit.
Collocation of groups representing (others’) conflicting interests represents increased opportunity for corruption, not for generative collaboration.
The version of this that I’d more obviously endorse goes:
Collocation of groups representing conflicting interests represents increased opportunity for corruption.
Collocation of people who are building models represents increased opportunity for generative collaboration.
Collocation of people who are strategizing together represents increased opportunity for working on complex goals that require shared complex models, and/or shared complex plans. (Again, as said elsethread, I agree that plans and models are different, but I think they are subject to a lot of the same forces, with plans being subject to some additional forces as well.)
These are all true, and indeed in tension.
I also think “sharing a narrative” and “building technical social models” are different, although easily confused (both from the outside and the inside – I’m not actually sure which confusion is easier). But you do actually need social models if you’re tackling social domains, and those models do actually benefit from interpersonal generativity.
My shoulder Benquo now says something like “but if your models are closely entangled with those of your funders, don’t pretend like you are offering neutral services.” Or maybe “it’s good to share office space with people thinking about physics, because that’s object level. It’s bad to share office space with the people funding you.”
I think these are a much stronger objection jointly than separately. If Cari Tuna wants to run her own foundation, then it’s probably good for her to collocate with the staff of that foundation.