People working at Bell Labs were trying to solve technical problems, not marketing or political problems. Sharing ideas across different technical disciplines is potentially a good thing, and I can see how FHI and MIRI in particular are a little bit like this, though writing white papers is very different, even within a technical field, from figuring out how to make a thing work. But it doesn’t seem like any of the other orgs substantially resemble Bell Labs at all, and the benefits of collocation for nontechnical projects are very different from the benefits for technical projects: they have more to do with narrative alignment (checking whether you’re selling the same story), and less to do with opportunities to learn things of value outside the context of a shared story.
Collocation of groups representing (others’) conflicting interests represents increased opportunity for corruption, not for generative collaboration.
Okay. I’m not sure whether I agree precisely, but I agree that that’s a valid hypothesis, one I hadn’t considered before in quite these terms, and it updates my model a bit.
The version of this that I’d more obviously endorse goes:
Collocation of groups representing conflicting interests represents increased opportunity for corruption.
Collocation of people who are building models represents increased opportunity for generative collaboration.
Collocation of people who are strategizing together represents increased opportunity for working on complex goals that require shared complex models and/or shared complex plans. (Again, as said elsethread, I agree that plans and models are different, but I think they are subject to a lot of the same forces, with plans being subject to some additional forces as well.)
These are all true, and indeed in tension.
I also think “sharing a narrative” and “building technical social models” are different, although easily confused (both from the outside and inside – I’m not actually sure which confusion is easier). But you do actually need social models if you’re tackling social domains, which do actually benefit from interpersonal generativity.