Right now, we (maybe? I’m not sure) have something like a few different mini Bell Labs, each with its own paradigm (and specialists within that paradigm).
The world where GiveWell, Good Ventures, and OpenPhil share an office is more Bell-Labs-like than one where they all have different offices. (FHI and UK CEA are a similar situation, as are CFAR/MIRI/LW.) One of your suggestions in the blog post was specifically that they split up into different, fully separate entities.
I’m proposing that Bell-Labs-ness exists on a spectrum, that sharing office space is a mechanism for being more Bell-Labs-like, and that being more Bell-Labs-like is generally better (at least in a vacuum).
(My shoulder Benquo now says something like “but if your models are closely entangled with those of your funders, don’t pretend you are offering neutral services.” Or maybe “it’s good to share office space with people thinking about physics, because that’s object-level. It’s bad to share office space with the people funding you.” Which seems plausible, but not overwhelmingly obvious given the other tradeoffs at play.)
People working at Bell Labs were trying to solve technical problems, not marketing or political problems. Sharing ideas across different technical disciplines is potentially a good thing, and I can see how FHI and MIRI in particular are a little bit like this, though writing white papers, even within a technical field, is very different from figuring out how to make a thing work. But it doesn’t seem like any of the other orgs substantially resemble Bell Labs at all, and the benefits of collocation for nontechnical projects are very different from the benefits for technical projects: they have more to do with narrative alignment (checking whether you’re selling the same story), and less to do with opportunities to learn things of value outside the context of a shared story.
Collocation of groups representing (others’) conflicting interests represents increased opportunity for corruption, not for generative collaboration.
Okay. I’m not sure whether I agree precisely, but I agree that’s a valid hypothesis, one I hadn’t considered before in quite these terms, and it updates my model a bit.
Collocation of groups representing (others’) conflicting interests represents increased opportunity for corruption, not for generative collaboration.
The version of this that I’d more obviously endorse goes:
Collocation of groups representing conflicting interests represents increased opportunity for corruption.
Collocation of people who are building models represents increased opportunity for generative collaboration.
Collocation of people who are strategizing together represents increased opportunity for working on complex goals that require shared complex models, and/or shared complex plans. (Again, as said elsethread, I agree that plans and models are different, but I think they are subject to a lot of the same forces, with plans being subject to some additional forces as well.)

These are all true, and indeed in tension.
I also think “sharing a narrative” and “building technical social models” are different, although easily confused (both from the outside and the inside – I’m not actually sure which confusion is easier). But you do actually need social models if you’re tackling social domains, and building them does benefit from interpersonal generativity.
My shoulder Benquo now says something like “but if your models are closely entangled with those of your funders, don’t pretend you are offering neutral services.” Or maybe “it’s good to share office space with people thinking about physics, because that’s object-level. It’s bad to share office space with the people funding you.”
I think these are a much stronger objection jointly than separately. If Cari Tuna wants to run her own foundation, then it’s probably good for her to collocate with the staff of that foundation.