The idea of an international collaboration reminds me of an article I read a while ago about the difficulties of coordinating international efforts to create nuclear fusion: http://www.newyorker.com/magazine/2014/03/03/a-star-in-a-bottle As a software developer, I tend to think that the best software is produced by small teams of elite developers who know each other well, work together well, have been working together for a long time, work out of a single office, and are all native or extremely fluent speakers of the same language. (English is the best language by a wide margin, because almost all programming languages are based on it and the majority of tool documentation is written in it, especially for the most cutting-edge development tools and libraries.) This is roughly the model you see in Silicon Valley, and it seems to have won out over alternatives like outsourcing half your team to a foreign country where developers are not extremely fluent in English and hiring managers aren't ruthlessly obsessed with finding the most brilliant and qualified people possible. (There are a few deviations from this ideal: Silicon Valley workers change jobs rather often, and Silicon Valley companies are now being forced to hire people who aren't quite as brilliant or fluent as they would like. But I think I've described the type of team that many or most of the best CTOs in the valley would like to have.)
An international collaboration pattern-matches to one of those horror stories you read about in a book like The Mythical Man-Month: a project that takes far longer than expected, goes far over budget, and might succeed in delivering a poorly designed, bug-ridden piece of software if it isn't cancelled or restarted from scratch first. Writing great software is a big topic that I don't feel very qualified to speak on, but it does worry me that Bostrom's plan doesn't pass my sniff test; it makes me worry that he spent too much time theorizing from first principles and not enough time in discussion with domain experts.
Either way, I think this discussion might benefit from surveying the literature on software development best practices, international research collaborations, safety-critical software development, etc. There might be some strategy besides an international collaboration that accomplishes the same thing, e.g. a core development team in a single location writing all of the software, with external teams monitoring its development, taking the time to understand it, and checking for flaws. This would both give those external teams domain expertise in producing AGIs (useful if AGIs turn out to be only very powerful rather than extremely powerful) and add a further layer of safety checks. (To provide proper incentives, perhaps any monitoring team that succeeded in identifying a bug in the main team's work would be rewarded by having the prestige of writing the AI revert to it. Apparently a similar adversarial structure works for a company writing safety-critical space shuttle software: http://www.fastcompany.com/28121/they-write-right-stuff )
Another idea I've been toying with recently is that some people who are concerned with AI safety should go off and start a company that writes safety-critical AI software now, say for piloting killer drones. That would give them the opportunity to develop the soft skills and expertise necessary to write really high-quality, bug-free AI software. In the ideal case, they might spend half their time writing code and the other half improving processes to reduce the incidence of bugs. Then we'd have a team in place to build FAI when it became possible.