Good article.
I think a good follow-up article could be one that continues the analogy by examining software development concepts that have evolved to address the “nobody cares about security enough to do it right” problem.
I’m thinking of two things in particular: the Rust programming language, and capability-oriented programming.
The Rust language is designed to remove entire classes of bugs and exploits (with some caveats that don’t matter too much in practice). This does add constraints to how you can build your program; for some developers, that’s a dealbreaker, so Rust adoption isn’t an automatic win. But many developers (I don’t have the numbers to quantify this better) thrive within those limitations, and even find them helpful for structuring their programs.
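To make that concrete, here’s a toy sketch (my own illustration) of a use-after-free, the kind of memory bug that C happily compiles and that Rust rejects outright. The code below builds as written; uncommenting the last line makes it fail at compile time rather than crash, or get exploited, at runtime:

```rust
fn main() {
    let reference;
    {
        let data = String::from("hello");
        reference = &data;
        println!("{}", reference); // fine: `data` is still alive here
    } // `data` is dropped at the end of this scope
    // println!("{}", reference); // compile error: `data` does not live long enough
}
```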
This selection effect has also led to the Rust ecosystem having a culture of security by design. For example, a pentest team auditing the rustls crate “considered the general code quality to be exceptional and can attest to a solid impression left consistently by all scope items”.
Capability-oriented programming is a more general idea. The concept is pretty old, but still sound: you give a program only as many resources as it plausibly needs to do its job. If your program’s job is to take some text and, e.g., count the number of words in it, you give it only an input channel and an output channel; if the program tries to open a network socket or a file you didn’t hand it, the attempt automatically fails.
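At the language level, this can be as simple as passing channels in instead of letting code open them itself. A minimal sketch of that word-count example (my own illustration, plain standard library, not any particular capability framework):

```rust
use std::io::{BufRead, BufReader, Read, Write};

// The word-count logic receives its input and output as arguments.
// It has no way to open files or sockets it was never handed.
fn count_words(input: impl Read, mut output: impl Write) -> std::io::Result<()> {
    let reader = BufReader::new(input);
    let mut count = 0usize;
    for line in reader.lines() {
        count += line?.split_whitespace().count();
    }
    writeln!(output, "{count} words")
}

fn main() -> std::io::Result<()> {
    // The caller decides which capabilities to grant; here, just stdin and stdout.
    count_words(std::io::stdin(), std::io::stdout())
}
```

A real capability-oriented OS enforces this at the process boundary rather than relying on programmer discipline, but the shape of the idea is the same: authority flows in through explicit handles, and anything you weren’t given, you can’t touch.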
Capability-oriented programming has the potential to greatly reduce a system’s vulnerability, because now, to leverage a remote code execution exploit, an attacker also needs a capability escalation / sandbox escape exploit. That means the capability system must be sound (with all the testing and red-teaming that implies), but “the capability system” is a much smaller attack surface than “every program on your computer”.
There hasn’t really been a popular OS that was capability-oriented from the ground up. Similar concepts have been used in containers, WebAssembly, app permissions on mobile OSes, and some package formats like Flatpak. The in-development Google OS “Fuchsia” (or more precisely, its kernel, Zircon) is the most interesting project I know of on that front.
I’m not sure what the equivalent would be for AI. I think there was a LW article mentioning a project the author had to build a standard “AI sandbox”? As AI develops, toolboxes that carve out a “safe” subset of AIs, ones that can be used without risking side effects while still capturing the economic benefits of “free” AIs, might also be promising.