I don’t think that anyone but insane (or dumb) people are thinking about the scenario of “Superintelligent AI contained in a computer, unable to interact with the outside world beyond being given inputs and outputting simple text/media”.
The real risk comes when you have loads of systems built by thousands of agents controlling everything from nukes, to drones, to all the text, video, and audio anyone on Earth is reading, to cars, to power plants, to judges, to police and armed forces deployment… which is kind of the current case.
Even in that case I’d argue the “takeoff” idea is stupid, and the danger is posed by humans with unaligned incentives, not the systems they built to accomplish their goals.
But the “smartest” systems in the world are and will be very much connected to a lot of physical potential.