What about an AI which cares a tiny bit about keeping humans alive, but still kills lots of people in the process of taking over? It could care about this terminally or could be paid off via acausal trade.
This is quite believable in the short term (a few generations of humans), but much less so in the long term. I model “caring about a lot of things” as an equilibrium among competing reasons for action, and almost no “care a little bit” motive is likely to stay stable in that equilibrium.
I suppose “care a lot about maintaining a small quantity of biologicals” is also possible, for inscrutable AI reasons, but that doesn’t bring any civilization back.