Separately, I don’t think the anticipation of possible second-order benefits, like using AIs for human augmentation so that humans can solve alignment, is worth letting labs continue either.
I don’t generally think “should labs continue” is very cruxy from my perspective, and I don’t think of myself as trying to argue about this. I’m trying to argue that marginal effort directed toward the broad hope I’m painting substantially reduces risk.