This runs into the "assumes powerful AI will be low/non-agentic" fallacy,
or the "assumes AIs that can massively assist in long-horizon alignment research will be low/non-agentic" fallacy.
They can be low/non-agentic, because current ones are. I'm not seeing the fallacy.