I’m pretty sure usable suggestions for improvement are welcome. About ten years ago there was only the irrational version of Eliezer, who had only recently understood that the problem existed; right now we have some non-crazy introductory and scholarly papers, and a community that understands the problem. The progress seems to be in the right direction.
If you asked the same people about the idea of FAI fifteen years ago, say, they’d label it crazy just the same. SIAI gets labeled automatically, by association with the idea. Perceived craziness is the default we must push the public perception away from, not something initiated by actions of SIAI (you’d need to at least point out specific actions to attempt this argument).
Good point—I will write to SIAI about this matter.
I actually agree that up until this point progress has been in the right direction. My thinking is that SIAI has attracted a community consisting of a very particular kind of person, may have achieved near-saturation within this population, and that consequently SIAI as presently constituted may have outlived the function that you mention. This is the question of room for more funding.
Agree with:
Perceived craziness is the default we must push the public perception away from, not something initiated by actions of SIAI (you’d need to at least point out specific actions to attempt this argument).
There are things that I have in mind but I prefer to contact SIAI about them directly before discussing them in public.
I think there are many people who worry about AI in one form or another. They may not do very informed worrying and they may be anthropomorphising, but they still worry, and that might be harnessable. See Stephen Hawking on AI.
SIAI’s emphasis on the singularity aspect of the possible dangers of AI is unfortunate, as it requires people to get their heads around the singularity first. It alienates the people who just worry about the robot uprising, about their jobs being stolen, or about being outcompeted evolutionarily.
So let’s say that instead of SIAI you had IRDAI (Institute to Research the Dangers of AI). It could look at each potential AI and assess the various risks each architecture posed. It could practice on things like feed-forward neural networks and say what types of danger they might pose (job stealing, being rooted and used by a hacker, or going FOOM), based on their information-theoretic ability to learn from different information sources, their security model, and the care being taken to make sure human values are embedded in them. In the process of doing that it would have to develop theories of FAI in order to say whether a system was going to have human-like values stably.
The emphasis placed upon a very hard takeoff just makes the field less approachable and makes it look wackier to the casual observer.
Safe robots have nothing whatsoever to do with FAI. Saying otherwise would be incompetent, or a lie. I believe that there need not be an emphasis on hard takeoff, but likely for reasons not related to yours.
Agreed. My dissertation is on moral robots, and one of the early tasks was examining SIAI and FAI and determining that the work was pretty much unrelated (I presented a pretty bad conference paper on the topic).
Apart from the fact that they both need a fair amount of computer science to predict their capabilities and dangers?
Call your research institute something like the Institute for the Prevention of Advanced Computational Threats, and have separate divisions for robotics and FAI. Gain the trust of the average scientist or technology-aware person by doing a good job on robotics, and they are more likely to trust you when it comes to FAI.
Apart from the fact that they both need a fair amount of computer science to predict their capabilities and dangers?
I recently shifted to believing that pure mathematics is more relevant for FAI than computer science.
In FAI, the central question is what a program wants (which is a certain kind of question about what the program means), and not what a program does.
Computer science will tell you a lot about which programs can do what, and how, and about how to construct a program that does what you need, but less about what a program means (the sort of computer science that does is already a fair distance towards mathematics). This is also a problem with statistics/machine learning, and the reason they are not particularly useful for FAI: they teach certain tools, and how those tools work, but the understanding they provide isn’t portable enough.
Mathematical logic, on the other hand, contains lots of wisdom in the right direction: what kinds of mathematical structures can be defined, and how; which structures a given definition defines; what concepts are definable at all; and so on. And to understand the concepts themselves one needs to go further.
Unfortunately, I can’t give a good positive argument for the importance of math; that would require a useful insight (arrived at through use of mathematical tools). At the least, I can attest to finding a lot of confusion in my past thinking about FAI as a result of each “level up” in my understanding of mathematics, and that counts for something.
A truly devious plan.
That’s interesting. What’s your line of thought?
I think that’s a clever idea that deserves more eyeballs.
“Nothing whatsoever” is a bit strong. About as much as preventing tiger attacks has to do with fighting malaria, perhaps?
Saving tigers from killer robots.