Not all publicity is good publicity. The majority of people I’ve met outside of Less Wrong who have heard of SIAI think that the organization is full of crazy people. Many of these people are smart; some have science Ph.D.s from top-tier universities.
I think that SIAI should be putting far more emphasis on PR, networking within academia, etc. This is in consonance with a comment by Holden Karnofsky here:
To the extent that your activities will require “beating” other organizations (in advocacy, in speed of innovation, etc.), what are the skills and backgrounds of your staffers that are relevant to their ability to do this?
I’m worried that SIAI’s poor ability to make a good public impression may poison the cause of existential risk in the mind of the public and dissuade good researchers from studying existential risk. There are some very smart people whom it would be good to have working on Friendly AI but who, despite their capabilities, care a lot about their status in broader society. I think that it’s very important that an organization that works toward Friendly AI at least be well regarded by a sizable minority of people in the scientific community.
In my experience, academics often cannot distinguish between SIAI and Kurzweil-related activities such as Singularity University. With its $25k tuition for two months, SU is viewed as some sort of scam, and Kurzweilian ideas of exponential change are seen as naive. People hear about Kurzweil, SU, the Singularity Summit, and the Singularity Institute, and assume that the latter is behind all those crazy singularity things.
We need to make it easier to distinguish the preference and decision theory research program, which is an attempt to solve a hard problem, from the larger cluster of singularity ideas, which, even in the intelligence-explosion variety, are not essential to it.
Agreed. I’m often somewhat embarrassed to mention SIAI’s full name, or the Singularity Summit, because of the term “singularity” which, in many people’s minds—to some extent including my own—is a red flag for “crazy”.
Honestly, even the “Artificial Intelligence” part of the name can misrepresent what SIAI is about. I would describe the organization as just “a philosophy institute researching hugely important fundamental questions.”
Agreed about the “singularity” red flag; I’ve had similar thoughts. Given recent popular coverage of the various things called “the Singularity”, I think we need to accept that it’s pretty much going to become a connotational dumping ground for every cool-sounding futuristic prediction that anyone can think of, centered primarily around Kurzweil’s predictions.
I disagree somewhat with that description. SIAI’s ultimate goal is still to create a Friendly AI, and all of its other activities (general existential risk reduction and forecasting, Less Wrong, the Singularity Summit, etc.) are, at least in principle, carried out in service of that goal. Its day-to-day activities may not look like what people might imagine when they think of an AI research institute, but that’s because FAI is a very difficult problem with many prerequisites that have to be solved first, and I think it’s fair to describe SIAI as still being fundamentally about FAI (at least to anyone who’s adequately prepared to think about FAI).
Describing it as “a philosophy institute researching hugely important fundamental questions” may give people the wrong impressions, if it’s not quickly followed by more specific explanation. When people think of “philosophy” + “hugely important fundamental questions”, their minds will probably leap to questions which are 1) easily solved by rationalists, and/or 2) actually fairly silly and not hugely important at all. (“Philosophy” is another term I’m inclined toward avoiding these days.) When I’ve had to describe SIAI in one phrase to people who have never heard of it, I’ve been calling it an “artificial intelligence think-tank”. Meanwhile, Michael Vassar’s Twitter describes SIAI as a “decision theory think-tank”. That’s probably a good description if you want to address the current focus of their research; it may be especially good in academic contexts, where “decision theory” already refers to an interesting established field that’s relevant to AI but doesn’t share with “artificial intelligence” the connotations of missed goals, science fiction geekery, anthropomorphism, etc.
Ah, I think I can guess who you are. You work under a professor called Josh and have an umlaut in your surname. Shame that the others in that great research group don’t take you seriously.
I’m pretty sure usable suggestions for improvement are welcome. About ten years ago there was only the irrational version of Eliezer, who had only just understood that the problem existed, while right now we have some non-crazy introductory and scholarly papers, and a community that understands the problem. The progress seems to be in the right direction.
If you asked the same people about the idea of FAI fifteen years ago, say, they’d label it crazy just the same. SIAI gets labeled automatically, by association with the idea. Perceived craziness is the default we must push the public perception away from, not something initiated by actions of SIAI (you’d need to at least point out specific actions to attempt this argument).
Good point—I will write to SIAI about this matter.
I actually agree that up until this point progress has been in the right direction. I guess my thinking is that SIAI has attracted a community consisting of a very particular kind of person, may have achieved near-saturation within this population, and that consequently SIAI as presently constituted may have outlived the function that you mention. This is the question of room for more funding.
Agree with:
Perceived craziness is the default we must push the public perception away from, not something initiated by actions of SIAI (you’d need to at least point out specific actions to attempt this argument).
There are things that I have in mind but I prefer to contact SIAI about them directly before discussing them in public.
I think there are many people who worry about AI in one form or another. They may not do very informed worrying, and they may be anthropomorphising, but they still worry, and that might be harnessable. See Stephen Hawking on AI.
SIAI’s emphasis on the singularity aspect of the possible dangers of AI is unfortunate, as it requires people to get their heads around the singularity first. It alienates the people who just worry about a robot uprising, about their jobs being stolen, or about being outcompeted evolutionarily.
So let’s say that instead of SIAI you had IRDAI (Institute to Research the Dangers of AI). It could look at each potential AI and assess the various risks each architecture posed. It could practice on things like feed-forward neural networks and say what types of danger they might pose (job stealing, being rooted and used by a hacker, or going FOOM), based on their information-theoretic ability to learn from different information sources, their security model, and the care being taken to make sure human values are embedded in them (a toy sketch of such an assessment follows below). In the process of doing that it would have to develop theories of FAI in order to say whether a system would stably have human-like values.
The emphasis placed upon very hard takeoff just makes the problem less approachable and more wacky-looking to the casual observer.
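Purely as an illustration, here is a minimal Python sketch of the kind of per-architecture report such an institute might produce. The class names, risk categories, fields, and thresholds are all placeholders of my own invention, not a real methodology:

```python
from dataclasses import dataclass
from enum import Enum


class Risk(Enum):
    # The three danger types named above; the labels are mine.
    JOB_STEALING = "economic displacement"
    ROOTED = "compromised and used by a hacker"
    FOOM = "hard takeoff via recursive self-improvement"


@dataclass
class ArchitectureAssessment:
    """One row of a hypothetical risk report for a single AI architecture."""
    name: str
    learning_capacity: float  # crude stand-in for information-theoretic learning ability
    has_security_model: bool  # any serious thought given to compromise?
    values_embedded: bool     # any care taken to embed human values?

    def assess(self) -> dict:
        # Toy thresholds for illustration only; justifying real ones would
        # itself require FAI theory, which is the point made above.
        return {
            Risk.JOB_STEALING: "high" if self.learning_capacity > 1.0 else "low",
            Risk.ROOTED: "low" if self.has_security_model else "high",
            Risk.FOOM: "high" if (self.learning_capacity > 100.0
                                  and not self.values_embedded) else "low",
        }


# A feed-forward net might rate "high" on job stealing and being rooted,
# but "low" on FOOM.
print(ArchitectureAssessment("feed-forward neural network",
                             learning_capacity=2.0,
                             has_security_model=False,
                             values_embedded=False).assess())
```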
Safe robots have nothing whatsoever to do with FAI. Saying otherwise would be incompetence, or a lie. I believe that there need not be an emphasis on hard takeoff, but likely for reasons not related to yours.
Agreed. My dissertation is on moral robots, and one of the early tasks was examining SIAI and FAI and determining that the work was pretty much unrelated (I presented a pretty bad conference paper on the topic).
Apart from the fact that they both need a fair amount of computer science to predict their capabilities and dangers?
Call your research institute something like the Institute for the Prevention of Advanced Computational Threats, and have separate divisions for robotics and FAI. Gain the trust of the average scientist or technology-aware person by doing a good job on robotics, and they will be more likely to trust you when it comes to FAI.
I recently shifted to believing that pure mathematics is more relevant for FAI than computer science.
A truly devious plan.
That’s interesting. What’s your line of thought?
In FAI, the central question is what a program wants (which is a certain kind of question about what the program means), and not what a program does.
Computer science will tell you a lot about which programs can do what, and how, and about how to construct a program that does what you need, but less about what a program means (the sort of computer science that does is already a fair distance toward mathematics). This is also a problem with statistics and machine learning, and the reason they are not particularly useful for FAI: they teach certain tools, and how those tools work, but the understanding they provide isn’t portable enough.
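To make that concrete, here is a toy illustration (the example is mine, purely illustrative): two programs with identical behavior on a test domain, only one of which is defined by the concept it computes.

```python
# Two programs that *do* the same thing on a test domain but *mean*
# different things: only the first is defined by the concept
# "smallest factor"; the second merely memorizes agreeing answers.

def smallest_factor(n: int) -> int:
    """Defined by intent: search for the smallest divisor greater than 1."""
    for d in range(2, n + 1):
        if n % d == 0:
            return d
    return n  # only reached for n < 2


LOOKUP = {4: 2, 6: 2, 9: 3, 15: 3, 25: 5}

def smallest_factor_table(n: int) -> int:
    """Behaviorally identical on LOOKUP's domain, yet the concept of a
    'factor' appears nowhere in its definition."""
    return LOOKUP[n]


# Extensionally equal where both are defined...
assert all(smallest_factor(n) == smallest_factor_table(n) for n in LOOKUP)
# ...but only the first generalizes, because only the first embodies
# the concept: observing behavior alone underdetermines meaning.
assert smallest_factor(49) == 7
```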
Mathematical logic, on the other hand, contains lots of wisdom in the right direction: what kinds of mathematical structures can be defined and how, which structures a given definition defines, what concepts are definable, and so on. And to understand the concepts themselves one needs to go further.
Unfortunately, I can’t give a good positive argument for the importance of math; that would require a useful insight (arrived at through use of mathematical tools). At the least, I can attest to finding a lot of confusion in my past thinking about FAI as a result of each “level up” in my understanding of mathematics, and that counts for something.
I think that’s a clever idea that deserves more eyeballs.
“Nothing whatsoever” is a bit strong. About as much as preventing tiger attacks has to do with fighting malaria, perhaps?
Saving tigers from killer robots.