I don’t think discussing whether someone really wants to do good or whether there is some (possibly unconscious?) status-optimization process is going to help us align AI.
Two comments:
1. [wanting to do good] vs. [one's behavior being affected by an unconscious optimization for status/power] is a false dichotomy.
2. Don't you think that unilateral interventions within the EA/AIS communities to create/fund for-profit AGI companies, or to develop/disseminate AI capabilities, could have a negative impact on humanity's ability to avoid existential catastrophes from AI?
First point: by "really want to do good" (the "really" is important here) I mean someone who would be fundamentally altruistic and would have no desire for status or power, even subconsciously.
I don't think Conjecture is an "AGI company": everyone I've met there cares deeply about alignment, and their alignment team is a decent fraction of the entire company. Plus, they're funding the incubator.
I also think it's a misconception that this was a unilateralist intervention. They talked to other people in the community before starting it; it was not a secret.
> First point: by "really want to do good" (the "really" is important here) I mean someone who would be fundamentally altruistic and would have no desire for status or power, even subconsciously.
Then I'd argue the dichotomy is vacuously true, i.e. it does not generally apply to humans. Humans are the result of human evolution, and having a brain that (unconsciously) optimizes for status/power has likely been very adaptive.
Regarding the rest of your comment, this thread seems relevant.