A third possibility is that AGI becomes the next big scare.
There’s always a market for the next big scare, and a market for people who claim that putting them in control will save us from it.
Having the evil machines take over has always been a scare. When AI gets more embodied and starts working together autonomously, people will be more likely to freak, IMO.
Getting beat on Jeopardy is one thing; watching a fleet of autonomous quadcopters doing their thing is another. It made me a little nervous, and I’m quite pro AI. When people see machines that seem like they’re alive, that appear to think, communicate among themselves, and cooperate in action, many will freak, and others will be there to channel and make use of that fear.
That’s where I disagree with EY. He’s right that a smarter talking box will likely just be seen as a nonthreatening curiosity. Watson 2.0, big deal. But embodied intelligent things that communicate and take concerted action will press our base primate “threatening tribe” buttons.
“Her” would have had a very different feel if all those AI operating systems had bodies, and got together in their own parallel and much more quickly advancing society. Kurzweil is right in pointing out that with such advanced AI, Samantha could certainly have a body. We’ll be seeing embodied AI well before any human level of AI. That will be enough for a lot of people to get their freak out on.
Yeah, this becomes plausible if some analogue of Chernobyl happens. Maybe self-driving cars cause some kind of horrible accident due to algorithms behaving unexpectedly.
I imagine someone puts an autonomous mosquito-zapping laser (to fight malaria) on a drone.
In that scenario, most people who are surprised to meet one in action, and many who hear of it, cannot help but wonder “how long till we’re the mosquitoes?”
That’d be a smart move to secure some MIRI funding...
Besides just being freaking awesome. Imagine that floating around the backyard barbecue. Pew. Pew. Pew.
That’s how I’ve always viewed SIAI/MIRI, at least in terms of a significant subset of those who send them money...
Perhaps, but AFAIK, minus the “putting them in control” part.
MIRI literally claims to want to build an overlord for the universe and has actively solicited donations with that goal explicitly stated. I’d call that a variant on the theme where they get money via people who think they’re putting them in charge.
We’ll be seeing embodied AI well before any human level of AI.

By “embodied” do you mean “humanoid”? It seems like there’s more demand for humanoid robots in some parts of the world (Japan) than in others (US). Or by “embodied” do you mean detached autonomous robots like the Roomba?
The reference to autonomous fleets of quadcopters would suggest the latter.
Yes, by “embodied” I simply meant physical: capable of motion and physical action.
Reactions will depend on the physical capabilities and on culture too. For me, I’ve never liked bugs that fly, so the quadcopters press my buttons. Watching Terminator may have something to do with it too.