I think there are many people who worry about AI in one form or another. Their worrying may not be very informed, and they may be anthropomorphising, but they still worry, and that might be harnessable. See Stephen Hawking on AI.
SIAI's emphasis on the singularity aspect of the possible dangers of AI is unfortunate, because it requires people to get their heads around the singularity first. It alienates the people who just worry about a robot uprising, about their jobs being stolen, or about being outcompeted evolutionarily.
So let's say that instead of SIAI you had IRDAI (Institute to Research the Dangers of AI). It could look at each potential AI and assess the various risks each architecture posed. It could practice on things like feed-forward neural networks and say what types of danger they might pose (job stealing, being rooted and used by a hacker, or going FOOM), based on their information-theoretic ability to learn from different information sources, their security model, and the care being taken to make sure human values are embedded in them. In the process of doing that, it would have to develop theories of FAI in order to say whether a system was going to hold human-like values stably.
The emphasis placed upon a very hard takeoff just makes the project less approachable and makes it look wackier to the casual observer.
Safe robots have nothing whatsoever to do with FAI; saying otherwise would be incompetent, or a lie. I believe that there need not be an emphasis on hard takeoff, but likely for reasons not related to yours.
Agreed. My dissertation is on moral robots, and one of my early tasks was examining SIAI and FAI and determining that the work was pretty much unrelated (I presented a pretty bad conference paper on the topic).
Apart from the fact that they both need a fair amount of computer science to predict their capabilities and dangers?
Call your research institute something like the Institute for the Prevention of Advanced Computational Threats, and have separate divisions for robotics and FAI. Gain the trust of the average scientist or technology-aware person by doing a good job on robotics, and they are more likely to trust you when it comes to FAI.
I recently shifted to believing that pure mathematics is more relevant for FAI than computer science.
A truly devious plan.
That’s interesting. What’s your line of thought?
In FAI, the central question is what a program wants (which is a certain kind of question about what the program means), not what a program does.
Computer science will tell you a lot about what various programs can do and how, and about how to construct a program that does what you need, but less about what a program means (the sort of computer science that does is already a fair distance toward mathematics). This is also a problem with statistics and machine learning, and the reason they are not particularly useful for FAI: they teach certain tools, and how those tools work, but the understanding they provide isn't portable enough.
Mathematical logic, on the other hand, contains a lot of wisdom in the right direction: what kinds of mathematical structures can be defined and how, which structures a given definition picks out, which concepts are definable at all, and so on. And to understand the concepts themselves one needs to go further still.
Unfortunately, I can’t give a good positive argument for the importance of math; that would require a useful insight (arrived at through the use of mathematical tools). At the least, I can attest to finding a lot of confusion in my past thinking about FAI with each “level up” in my understanding of mathematics, and that counts for something.
I think that’s a clever idea that deserves more eyeballs.
“Nothing whatsoever” is a bit strong. About as much as preventing tiger attacks has to do with fighting malaria, perhaps?
Saving tigers from killer robots.