I like this compression but it felt like it sort of lost steam in the last bullet. It doesn’t have very much content, and so the claim feels pretty wooly. I think there’s probably a stronger claim that’s similarly short, that should be there.
Here’s a different attempt...
Minds with their own goals will compete with humans for resources, and minds much better than humans will outcompete humans for resources totally and decisively.
Unless the AIs explicitly allocate resources for human survival, this will result in human extinction.
...which turns out to be a bit longer, but maybe it can be simplified down.
seconding this. I’m not entirely sure a fourth bullet point is needed. if a fourth bullet is used, i think all it really needs to do is tie the first three together. my attempts at a fourth point would look something like:
the combination of these three things seems ill-advised.
there’s no reason to expect the combination of these three things to go well by default, and human extinction isn’t off the table in a particularly catastrophic scenario.
current practices around ai development are insufficiently risk-averse, given the first three points.