I am confused about how what you said is related to what you quoted. I don’t take any of this to be speaking to employees of the AGI companies, since none of them have any agency with respect to making AI go well.
AGI companies exist to create AGI, and their structures don’t allow for voluntary courses of action where they intentionally do that more slowly or not at all. Whether they have access to reasonable-sounding justifications for doing so is not a causal factor, and whether individual employees care about making a good human future is not a causal factor in whether that occurs.
Even assuming this is true (it’s of course not 100% true):

1. They may not view it that way, so Greenblatt’s comment and/or the OP could still be addressing them and their decision-making.
2. They decided to work there and continue to decide to work there, which is itself a causal path through which they exercise agency.
This also applies to, e.g., people who support or fund these companies for purported X-risk-reduction reasons.