You might argue that each individual service must be dangerous, since it is superintelligent at its particular task. However, because each service optimizes for a bounded task, it will not run a long-term planning process [...]
Does this assume that we'll be able to build generally intelligent systems (e.g. the service-creating service) that nonetheless optimize only for a bounded task?
Is there a more recent writeup on the history of AI safety anywhere?