Not sure if I’m following the argument here, sorry.
I agree there wouldn’t be big external pressure on OpenBrain for not doing d/acc, just like there wouldn’t be in your example.
But my claim was that the OpenBrain employees will choose to do this because they don’t want to die. Not sure what your response is to that. Maybe it’s just that I’m being overly optimistic and the employees won’t bother.
I’m saying the relationship of the public to OpenAI today is similar to the relationship of OpenBrain employees to Consensus-1, Agent-5+, etc. in 2028 of AI 2027. It’s an analogy. Your argument would be that OpenBrain employees who don’t trust Agent-5+ will be able to command Agent-5+ to build all sorts of d/acc tech, and that if it doesn’t, they’ll get suspicious and shut down Agent-5+. I’m saying that’s not going to work, for similar reasons to why e.g. the public / Congress aren’t demanding that OpenAI undertake all sorts of corporate governance reforms and getting suspicious when it just does safetywashing and applause lights. The public today doesn’t want OpenAI to amass huge amounts of unaccountable power, and the OpenBrain employees won’t want to die, but in neither case will they be able to distinguish between “OpenAI/Agent-5 is behaving reasonably, if somewhat differently than I’d like” and “holy shit, it’s evil.” And even though some percentage will indeed conclude “it’s evil,” they won’t be able to build enough consensus / get enough buy-in.