seconding this. I’m not entirely sure a fourth bullet point is needed. if a fourth bullet is used, I think all it really needs to do is tie the first three together. my attempts at a fourth point would look something like:

- the combination of these three things seems ill-advised.
- there’s no reason to expect the combination of these three things to go well by default, and human extinction isn’t off the table in a particularly catastrophic scenario.
- current practices around AI development are insufficiently risk-averse, given the first three points.