An open source project might prevent this problem, not because having an open source AGI is safe, but because 1.) open source projects are open, so anybody can influence them, including people who are knowledgeable about the risks, and 2.) the people involved in open source projects probably tend to have a pretty strong philanthropic streak, and they're more likely to heed warnings about dangers than a risk-taking capitalist. The reason it may stop companies is this: if an open source project gets there first, AGI won't be seen as a juicy target for capitalists anymore. It will be a niche that's already filled for free. If they wanted to make an AGI, they'd have to make one so much better than the existing one that charging for it makes sense, or fail at business.
Making an open source AGI in order to compete with a business might cause the open source programmers to rush. However, imagine what would happen if customers got the following message around the time the closed source AGI was going to be released: "If you wait a while longer, an AGI will come out for free. Plus, the open source AGI is going to be thoroughly tested to discover dangers before you run it. The closed source AGI is very risky." That would deter a lot of people from buying, which would at least reduce exposure to the closed source AGI, and the open source group would not have to release their AGI until they had tested it thoroughly. If, during the course of their tests, they discovered hideous risks, the findings could serve as warnings about AGI in general, make those risks feel real, and discourage people from running risky AGIs. This assumes the open source project had good PR and advertising / public education campaigns.
Why open source might have a competitive advantage:
Open source people may be more willing to merge their efforts, especially if our future depends on it, whereas companies tend to behave in self-interested ways and mostly work separately. They're already divided, so open source could conquer them.
I was told by a Microsoft employee that he thought Linux would eventually win. Considering the influence that corporate culture can have on software design (the rushing to meet deadlines, which results in technical debt), I don't disagree with him one bit. One concept here that could turn out to be really important: any company working on AGI that does not put safety first may also have a short-term culture, which means it might actually take much, much longer to release its project, or suffer recalls that force it to start over, than an organization of programmers that is allowed to do things the right way. An open source project has that potential benefit on its side.
People who work on open source projects are probably more altruistic. They may be persuadable that working on AGI is so much more important to the future of humanity that they will jump out of their current open source project and get involved.
For those three reasons, I think an open source project has a good chance of getting there first.
The obvious argument against this would be: "An open source AGI?! Won't bad people write their own versions?" My counterargument is this: in a world where pirates routinely crack software within days of its release, and corporate espionage is a real possibility for a target this juicy, what makes you think the code won't get stolen the very next day? In that event, the best tool to save us from rogue AGIs would be for every open source programmer to have access to editable copies of a friendly AGI, don't you think?
Analogizing AGI mainly to existing software projects probably isn't a good starting point for a useful contribution. The big problems are mostly tied to the unique features an actual AGI would have, not to making a generic software project with some security implications work out right.
For a different analogy, think about a piece of software that fits on a floppy disk and somehow turns any laptop into an explosive device with nuclear-bomb-level yield (maybe it turns out you can set up a very specific oscillation pattern in a multicore CPU's silicon that triggers a localized false vacuum collapse). I'm not sure I'd be happy to settle for "the code gets stolen anyway, so let's make sure everyone gets access to it." An actual working AGI could be extremely weaponizable, both very cheaply and into something much more dangerous than any software engineering analogy gives reason to suppose, and significantly less useful as a defensive measure than as an offensive one.
For a different analogy, think about a piece of software that fits on a floppy disk and somehow turns any laptop into an explosive device with nuclear-bomb-level yield.
Okay. I get that AGI would be this powerful. What I don't get is that the code for it would fit onto a floppy disk. When you say I am making a mistake in analogizing AGI to existing software projects, what precisely do you mean? Is it that it really wouldn't need very many programmers? Is it that problems with sloppy, rushed coding would be irrelevant? I'm not sure exactly how this counters my point.
I'm not sure I'd be happy to settle for "the code gets stolen anyway, so let's make sure everyone gets access to it."
I’m not happy with it. I think it’s better than the alternative. See next point.
An actual working AGI could be extremely weaponizable, both very cheaply and into something much more dangerous than any software engineering analogy gives reason to suppose, and significantly less useful as a defensive measure than as an offensive one.
Agreed. That is precisely why everyone should have it: because it's "the one ring." They say "absolute power corrupts absolutely" because there are a billion examples of humans abusing power throughout history. You can't trust anybody with that much power. It will ruin the checks and balances between governments and the people they're supposed to serve, ruin the checks and balances between the branches of those governments, and turn hackers, spies, and any criminal or criminal organization capable of stealing the software (terrorists, the mafia, gangs, corrupt government leaders, cult leaders, etc.) into superpowers.
To check and balance that power, there needs to be a mutually-assured-destruction-type threat between each of the following:
The people and the governments that are supposed to serve them.
Each branch of a government and that government's other branches.
The pirates, hackers, spies, and criminals, and the good people in the world.
The reason the US government was set up the way it was, with the right to bear arms and with checks and balances between branches of government, is that power corrupts, and mutually assured destruction keeps humans accountable; this type of accountability is necessary to keep the system healthy. In a world where AGI exists, the right to bear arms needs to include AGI, or power imbalances will probably ruin everything.
We can't assume the AGIs will all be friendly. Even if we succeed at the incredibly hard task of making sure every AGI released is initially friendly, that won't guarantee they can't be hacked or fooled into being unfriendly. To think that there's a way to ensure they won't be hacked is foolish.
What would solve the problem of the power of AGI corrupting people if not checks and balances?
Help good guys beat the race:
Please provide constructive criticism.
An even faster solution: How just the threat of having to compete with a massive open source project may stop them.
See also: "Sabotage would not work."