Kill all those birds with one stone: Work on understanding and preventing the risks. Here’s why that stone kills each bird:
Re: Work on some aspect of building an AI
Even the Sequences argue against this: "Do not propose solutions until the problem has been discussed as thoroughly as possible." Working on risk should come first. Also:
Somebody needs to judge the safety of AIs.
Including any that you guys would make. If not SIAI, who will do this at all?
SIAI can’t do both safety and production; that would be perverse.
The people who MAKE the AI should NOT be the same people who JUDGE the AI, for the same reason that I would not purchase medical treatments from a doctor who claimed to have tested them himself but had not done a scientific study. You cannot peer review, independently test, or check and balance your own project. Those who want to make an AI should plan to be part of a different organization, either now or in the future.
You’ll get speed and quality advantages this way.
If you build the AI first, you will certainly notice safety problems that hadn’t been obvious before, because you’ll be in the thick of all those details and they’ll give you new ideas. But those can always be added to the list of safety guidelines at that time; there is no reason to do that part first. If you make safety guidelines first, you can build the AI with safety in mind from the ground up. As you know, reprogramming something that has a flaw in a critical spot can be very, very time-consuming. By focusing on safety first, you will gain a speed advantage while coding as well as a quality advantage: others will make dangerous AIs and be forced to recall them and start over.
Re: “improve laws and institutions”
You need to understand the risks thoroughly before you will be able to recommend good laws, and before people will listen to you and push for any legislation. Once you understand the risks, you’d need to work on improving laws so that when interested people go to build their AI, there’s a legal framework in place.
Re: “computer security”
This should be included under “risk research and prevention” because it’s definitely a risk. There are likely to be interactions between security and the other risks that you’d want to know about while working on security; it’s all connected, and you may not discover these interactions if you don’t think about them at the same time.
Re: “stop worrying about the Singularity and work on more mundane goals”
Considering the personalities, capabilities, and prior investments of those involved, this simply isn’t likely to happen. They need to be ambitious. Ambitious people need the assistance of others who are more specialized in mundane tasks and would be happy to help, so that the ambitious can focus on their ambitions: we all specialize.
Focusing on risk research and prevention is also the first step to everything else:
How will you get funding for something people see as risky?
How will you develop AI in a world destroyed by others’ AI projects that SIAI didn’t take the time to stop?
How will SIAI develop credibility and trust if it doesn’t prove it’s capable of intellectual rigor by doing a thorough job of risk prevention? This entire industry has no trust, and as just another AI project, you’ll have no trust for that same reason.
How will SIAI prove it is effective in the world if it doesn’t accomplish something before making an AI, such as changing some laws and doing risk prevention?
Who is going to be there to independently test your AI project if you choose to do that instead?
I don’t think the solution is “Do some, not others.” I think it is “Do them in the right order.” And as for which type of AI project to choose, wouldn’t it be safer to decide AFTER you research the risks as thoroughly as possible?
Additionally, if SIAI chooses to dedicate itself to risk research and prevention and agrees that the AI-building activities should be split off into a different group, I’d be interested in doing some volunteer work for the risk research and prevention group, especially regarding preventing an AGI arms race. I think the ideas I explain there, or similar ones, would be a really good way for SIAI to prove it is capable of actually doing something, which addresses a common objection to funding SIAI.
See any way to break the above line of reasoning and argue for a different route? If so, I will attempt to resolve those conflicts also.