I listened to This Is How They Tell Me the World Ends by Nicole Perlroth, a book about cybersecurity and the zero-day market. It describes in detail the early days of bug discovery, and the social dynamics and moral dilemmas of bug hunting.
(It was recommended to me by some EA-adjacent guy very worried about cyber, but the title is mostly bait: the tone of the book is alarmist, but there is very little content about potential catastrophes.)
My main takeaways:
Vulnerabilities used to be dirt-cheap (~$100); they cost much more now, but are still relatively cheap (~$1M even for big zero-days);
If you are very good at cyber and extremely smart, you can hide vulnerabilities in 10k-line programs in a way that less smart specialists will have trouble discovering even after days of examination—code generation/analysis is not really defense-favored;
Bug bounties are a relatively recent innovation, and it felt very unnatural to tech giants to reward people trying to break their software;
A big lever companies have on the US government is the threat that overseas competitors will be favored if the US gov meddles too much with their activities;
The main effect of a market being underground is not making transactions harder (people find ways to exchange money for vulnerabilities by building trust), but making it much harder to figure out what the market price is and reducing the effectiveness of the overall market;
Being the target of an autocratic government is an awful experience, and you have to be extremely careful if you put anything they dislike on a computer. And because of the zero-day market, you can’t assume your government will suck at hacking you just because it’s a small country;
It’s not that hard to reduce the exposure of critical infrastructure to cyber-attacks by just making companies air-gap their systems more—Japan and Finland have relatively successful programs, and Ukraine is good at defending against such attacks in part because they have been trying hard for a while—but it’s a cost companies and governments are rarely willing to pay in the US;
Electronic voting machines are extremely stupid, and the federal gov can’t dictate how the (red) states should secure their voting equipment;
Hackers want lots of different things—money, fame, working for the good guys, hurting the bad guys, having their efforts acknowledged, spite, and more—and they sometimes look irrational (e.g. they sometimes get frog-boiled).
The US government has a good number of people who are freaked out about cybersecurity and have good warning shots to support their position. The main difficulty in pushing for more cybersecurity is that voters don’t care about it.
Maybe the takeaway is that it’s hard to build support for the prevention of risks that 1. are technical/abstract, 2. fall on the private sector rather than on individuals, and 3. have a heavy right tail. Given these challenges, organizations that find prevention inconvenient often succeed in lobbying themselves out of costly legislation.
Overall, I don’t recommend this book. It’s very light on details compared to The Hacker and the State despite being longer. It targets a non-technical and very scope-insensitive audience, and is light on actual numbers, technical details, realpolitik considerations, estimates, and forecasts. It is wrapped in an alarmist journalistic tone I really disliked, covers stories that do not matter for the big picture, and is focused on finding who is in the right and who is to blame. I gained almost no evidence either way about how bad it would be if the US and Russia entered a no-holds-barred cyberwar.
If you are very good at cyber and extremely smart, you can hide vulnerabilities in 10k-line programs in a way that less smart specialists will have trouble discovering even after days of examination—code generation/analysis is not really defense-favored;

Do you have concrete examples?

I remembered mostly this story:

[...] The NSA invited James Gosler to spend some time at their headquarters in Fort Meade, Maryland in 1987, to teach their analysts [...] about software vulnerabilities. None of the NSA team was able to detect Gosler’s malware, even though it was inserted into an application featuring only 3,000 lines of code. [...]
[Taken from this summary of this passage of the book. The book was light on technical detail; I don’t remember hearing more detail than that.]
I didn’t realize this was so early in the history of the NSA; maybe this anecdote teaches us nothing about the current state of the attack/defense balance.

The full passage is in this tweet thread (search for “3,000”).
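To give a flavor of what such a planted bug can look like, here is a small C sketch of my own (not from the book, and not Gosler’s actual exercise): a bounds check that reads like an honest mistake, because the length is signed and a negative value sails past the check before being converted to a huge unsigned size.

```c
#include <string.h>

#define BUF_SIZE 64

/* Hypothetical example of a deniable bug: `len` is signed, so a negative
 * value passes the `len <= BUF_SIZE` check and is then converted to a huge
 * size_t in memcpy, writing far past the end of `out`. It reads like an
 * honest oversight, which is what makes this kind of plant hard to call out. */
int copy_field(char *out, const char *in, int len) {
    if (len <= BUF_SIZE) {            /* missing `len < 0` check */
        memcpy(out, in, (size_t)len);
        return 0;
    }
    return -1;
}
```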
One example, found by browsing aimlessly through recent high-severity CVEs, is CVE-2023-41056. I picked it because it sounded bad and was in a project with a reputation for clean, well-written, well-tested code, backed by a serious organization. You can see the diff that fixed the CVE here. I don’t think the commit that introduced the vulnerability was intentional… but it totally could have been, and nobody would have caught it despite the Redis project doing pretty much everything right, and there being a ton of eyes on the project.
As a note, CVE stands for “Common Vulnerabilities and Exposures”. The final number in a CVE identifier (41056 in CVE-2023-41056) increments sequentially through the year. This should give you some idea of just how frequently vulnerabilities are discovered.
The dirty open secret in the industry is that most vulnerabilities are never discovered, and many of the vulns that are discovered are never publicly disclosed.
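As an illustration of how an innocent-looking commit can introduce a serious vulnerability in C code, here is a generic sketch of my own (not the actual Redis code or its fix): a size computation wraps around, the allocation ends up too small, and the subsequent writes overflow the heap buffer.

```c
#include <stdint.h>
#include <stdlib.h>

/* Generic sketch of an integer-overflow bug class (not taken from Redis):
 * if `count` is attacker-controlled and larger than UINT32_MAX / 4, the size
 * computation wraps modulo 2^32, malloc returns an undersized buffer, and the
 * loop below writes far past its end -- a heap overflow. */
uint32_t *copy_items(const uint32_t *items, uint32_t count) {
    uint32_t total = count * 4u;            /* wraps if count > UINT32_MAX / 4 */
    uint32_t *buf = malloc(total);          /* too small after the wrap */
    if (buf == NULL)
        return NULL;
    for (uint32_t i = 0; i < count; i++)    /* still writes `count` elements */
        buf[i] = items[i];
    return buf;
}
```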
Maybe the takeaway is that it’s hard to build support for the prevention of risks that 1. are technical/abstract, 2. fall on the private sector rather than on individuals, and 3. have a heavy right tail. Given these challenges, organizations that find prevention inconvenient often succeed in lobbying themselves out of costly legislation.
Which is also something of a problem for popularising AI alignment. Some aspects of AI (in particular AI art) do have their detractors already, but that won’t necessarily result in policy that helps vs. x-risk.

Same for governments, afaik most still don’t have bug bounty programs for their software (66%). Nevermind, a short google shows multiple such programs, although others have been hesitant to adopt them.
If you are very good at cyber and extremely smart, you can hide vulnerabilities in 10k-line programs in a way that less smart specialists will have trouble discovering even after days of examination—code generation/analysis is not really defense-favored
I think the first part of the sentence is true, but “not defense-favored” isn’t a clear conclusion to me. I think that backdoors work well in closed-source code, but are really hard in widely used open-source code: just look at the amount of effort that went into the recent xz / liblzma backdoor, and the fact that we don’t know of any other backdoor in widely used OSS.
The main effect of a market being underground is not making transactions harder (people find ways to exchange money for vulnerabilities by building trust), but making it much harder to figure out what the market price is and reducing the effectiveness of the overall market
Note this doesn’t apply to all types of underground markets: the ones that regularly get shut down (like darknet drug markets) do have a big issue with trust.
Being the target of an autocratic government is an awful experience, and you have to be extremely careful if you put anything they dislike on a computer. And because of the zero-day market, you can’t assume your government will suck at hacking you just because it’s a small country
This is correct. As a matter of personal policy, I assume that everything I write down somewhere will get leaked at some point (with a few exceptions, like, hopefully, disappearing Signal messages).
The reason the xz backdoor was discovered is increased latency, which is a textbook side channel. If the attacker had more points in the security-mindset skill tree, it wouldn’t have happened.
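For what it’s worth, the backdoor was reportedly noticed because SSH logins became measurably slower and used more CPU than expected. As a rough illustration of using latency as a signal (my own sketch, not the actual detection method), the following C program times repeated runs of an operation and flags a jump against a previously recorded baseline; the baseline value and the dummy workload are made up.

```c
#include <stdio.h>
#include <time.h>

/* Stand-in for the operation being monitored (e.g. a login or handshake). */
static void operation_under_test(void) {
    volatile unsigned long x = 0;
    for (unsigned long i = 0; i < 1000000UL; i++)
        x += i;
}

int main(void) {
    const int runs = 50;
    const double baseline_ms = 2.0;   /* hypothetical previously recorded average */
    double total_ms = 0.0;

    for (int i = 0; i < runs; i++) {
        struct timespec t0, t1;
        clock_gettime(CLOCK_MONOTONIC, &t0);
        operation_under_test();
        clock_gettime(CLOCK_MONOTONIC, &t1);
        total_ms += (t1.tv_sec - t0.tv_sec) * 1000.0 +
                    (t1.tv_nsec - t0.tv_nsec) / 1e6;
    }

    double avg_ms = total_ms / runs;
    printf("average: %.2f ms (baseline %.2f ms)\n", avg_ms, baseline_ms);
    /* A sudden, unexplained jump over the baseline is the kind of anomaly
     * that prompts a closer look at what extra work is being done. */
    if (avg_ms > 2.0 * baseline_ms)
        printf("latency regression detected: something is doing extra work\n");
    return 0;
}
```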