If an overwhelming majority of civil society plus the USG were pressuring OpenAI in this direction, I think it would have a substantial effect. If only a few non-profits did it, I think it would have little effect.
To make your analogy work, we need to determine whether the relationship between OpenBrain employees and their AIs is more like “USG + civil society vs. OpenAI” or more like “a few non-profits vs. OpenAI”. I’d say “OpenBrain vs. their AIs” is more like “USG + civil society vs. OpenAI”. So if all of OpenBrain is on board with d/acc and doing the thing Tom said, I think it would have a substantial effect on the AIs.
OK, fair. Well, if all of OpenBrain is on board with d/acc and doesn’t trust Agent5+, that’s a different situation. I was imagining, e.g., that leadership trusts Agent5+ and thinks the status quo trajectory is fine (they’re worried about other things, like competitors, terrorists, and China), and maybe a few lower-level employees are suspicious or fearful of Agent5+.