Known unknowns - zero-days in the wild
This past week Google’s Project Zero disclosed an unfixed security issue in Microsoft’s Edge browser. This is not the first time Microsoft has failed to patch an issue within Project Zero’s disclosure timeline. Disclosures like this produce strong feelings in the information security community, generally falling into one of three categories:
- praising Google’s vulnerability research
- criticizing Microsoft’s response
- criticizing Google for publicly disclosing the vulnerability before it was patched
The debate over the correct vulnerability disclosure policy is not a new one. Often it focuses on weighing the risk of handing new attackers a vulnerability against how well defenders can protect themselves once they know about it. But this framing ignores a third group: attackers who are already aware of the vulnerability and are actively exploiting it.
Read any online discussion of vulnerability disclosure policy and someone will ask, “What’s the urgency around this vulnerability? There’s no evidence of use in the wild.” I believe this is both a common sentiment and one based on a dangerous misunderstanding of vulnerability exploitation: it fundamentally assumes that we have good visibility into which vulnerabilities are being exploited.
We don’t. While mass malware like WannaCry necessarily makes itself visible, targeted attacks like Trident can go undetected for many years – the Trident malware was discovered attacking iOS 9, but disassembly showed that it was designed to work at least as far back as iOS 7.
For sophisticated attackers, such as national intelligence agencies, stealth is a feature. They put as much effort into keeping their attacks undetected as they do into making them work in the first place. As a result, they’re probably pretty good at it. RAND’s “Zero Days, Thousands of Nights” report, based on interviews with numerous exploit developers, found that “None we spoke to believed that their vulnerabilities or exploits died or were discovered due to use by a customer in some operational campaign, or by information leakage”. As an industry we need to evolve our thinking beyond mass malware if we want to protect users against sophisticated attackers. We must adapt to the idea that many governments have exploits that go from malicious website to kernel code execution against every major browser and operating system, and that we have no visibility into how they’re used.
When we occasionally do get insight into what targeted exploitation looks like in the real world, it generally confirms this perspective: when the attacker is concerned about stealth, vulnerabilities and exploits can be used in the wild for long periods without detection. Both the Trident exploit and the exploits found in the Hacking Team and ShadowBrokers dumps demonstrate this.
One of the other ways we learn about these dynamics is bug collisions: cases where two researchers independently discover the same bug. If we look across the entirety of a browser, there are so many vulnerabilities that collisions are unlikely. However, Project Zero’s research methodology specifically focuses on “high contention” attack surfaces, the things everyone writing an exploit will need, such as sandbox escapes or ways to turn a heap buffer overflow into arbitrary code execution. That’s exactly what the bug in question is: a bypass for Arbitrary Code Guard, a security feature in Edge. Any attacker looking to exploit Edge will need something like this (or will have to encode their entire payload as ROP), so it’s significantly more likely than usual that another attacker is already aware of this vulnerability, which supports the idea that disclosure is the right move.
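To make the collision intuition concrete, here is a minimal back-of-the-envelope sketch. All the numbers in it (the bug pool sizes and the number of independent researchers) are hypothetical assumptions chosen purely for illustration, not figures from Project Zero, Microsoft, or RAND; the point is only how sharply collision odds rise as the pool of useful bugs shrinks.

```python
from math import prod

def collision_probability(pool_size: int, researchers: int) -> float:
    """Probability that at least two researchers, each independently
    finding one bug chosen uniformly at random from a pool of
    `pool_size` distinct bugs, land on the same bug (birthday problem)."""
    p_no_collision = prod(
        (pool_size - i) / pool_size for i in range(researchers)
    )
    return 1.0 - p_no_collision

# Hypothetical numbers, for illustration only:
# - an entire browser with ~2000 findable bugs, vs.
# - a narrow "high contention" surface (e.g. ACG bypasses) with ~15
for label, pool in [("entire browser", 2000), ("high-contention surface", 15)]:
    p = collision_probability(pool, researchers=5)
    print(f"{label:>25}: P(collision among 5 researchers) ~ {p:.2f}")
```

Under these assumed numbers, the chance that two independent attackers hold the same bug goes from a rounding error across a whole browser to roughly a coin flip on a narrow, high-contention surface.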
We shouldn’t design our approach to security around the assumption that we will know when and how a vulnerability is being exploited. When we find out about bugs, we should fix them promptly; if anything, Project Zero’s 90-day disclosure timeline may be too long, because we never know who might already be exploiting them.