Note: This is about the IDE extension software that runs on your computer, not about the backend service running behind the API.
Context
I’m a developer at Microsoft. Copilot’s internals have been reverse-engineered before, but the code remains closed source. I support this decision on the grounds of “differentially advance safety/alignment over capabilities”, but there was a thread on Twitter arguing otherwise.
Ask
I don’t have a rigorous model of the impact on x-risks. If someone has thought about this in more depth, I would appreciate help reducing my confusion. I looked at previous discussions but couldn’t find a thorough analysis of the costs and benefits.
On one hand, better code-writing assistants accelerate the self-improvement loop, which shortens timelines. On the other, open-sourcing just the frontend of the interface would let other engineers inspect its internals and add extra safety guards or testing.
Short answer: Yes.
One of the key strengths of open source code is that it can (and will) be reviewed by thousands of extra pairs of eyes compared with its proprietary counterpart. Each reviewer brings a slightly different approach and philosophy. As a result, deeper and more obscure issues are naturally exposed (and therefore made available for correction) sooner in open source than in any program whose code cannot be freely examined.
The point you make is not wrong, but it is swamped by stronger effects. One of those stronger effects is that making it easier to create and maintain complex software artifacts tends to shorten how much time humanity has until AI research wipes us out (because the dangerous kinds of AI research programs entail creating and maintaining complex software), so Microsoft should not open-source the extension. (Faster hardware and better compilers have the same effect.)
That effect is in turn swamped by still stronger ones, such as effective regulation of AI research or creating conditions that discourage bright young people from becoming AI researchers, but as far as I can tell the decision we are contemplating here has no bearing on those.
https://www.lesswrong.com/posts/u5Lydbd5JWPbmE2bQ/in-favor-of-accelerating-problems-you-re-trying-to-solve?commentId=mTYHuRNiA4TDriFRL