I understand the arguments, but I’m not sure the suggested solutions make sense from the perspective of security:
As your prototypical nerd, I used to be really into FOSS, the EFF, blob-free GNU/Linux distros, XKeyscore, ECHELON, INDECT, PRISM, and other names most of us have long since forgotten the meaning of. Then, like most people, I gradually stopped caring, and now I’m leaving a trail of personal data wherever I go.
I guess it’s time for me to go back to my pre-2015 technoparanoia.
Very popular, well-vetted open-source software can be more secure than closed software, but it by no means has to be. And volunteer-maintained software can’t necessarily react as quickly to newly discovered security vulnerabilities. Using a privacy-conscious FOSS browser, for example, might improve your privacy but leave you temporarily more exposed to newly published exploits, and the overall sign of that trade-off is unclear to me.
Or: if secure code is sufficiently hard to write, the AI task of crawling the web to profile individual users seems far harder and less useful than unleashing exploit-finding AIs on GitHub. Can secure open-source software even exist in such a world?
I agree that open-source software doesn’t have to be more secure. My understanding is that it is less likely to send user data to third parties, since its developers aren’t trying to make ad money (or you could simply remove that part from the source).
As for exploit-finding AIs, I can only hope that the white hats will outnumber the black hats.