A vulnerability by any other name

Heartbleed, POODLE, Shellshock. Giving vulnerabilities names may be controversial, but there’s no doubt it’s effective. These vulnerabilities, and many others, attracted widespread attention and drove an enormous amount of work to improve ecosystem security. Heartbleed drew attention to OpenSSL’s small team of maintainers and spurred funding and code quality improvements. POODLE led to SSLv3 being disabled on clients and servers nearly overnight. Shellshock directed researchers’ attention to bash and resulted in a series of further vulnerabilities being discovered.

If one vulnerability can cause this much damage and drive this much industry change, surely a problem which generates thousands of vulnerabilities should be able to galvanize our industry around much-needed improvements?

The most recent macOS release fixed 32 vulnerabilities caused by memory unsafety. The most recent Google Chrome release fixed 10 vulnerabilities caused by memory unsafety (not including ones found by Google’s internal fuzzing and auditing efforts). The most recent Firefox release fixed 38 vulnerabilities caused by memory unsafety. Google’s OSS-Fuzz has found more than 750 memory unsafety vulnerabilities in popular open source projects. Google’s Project Zero has found more than 1,000 vulnerabilities, many of them caused by memory unsafety (70 of the 86 critical vulnerabilities are memory corruption).

It’s time to admit we have a problem.

Memory unsafety is a scourge plaguing our industry. But because it results in thousands of vulnerabilities instead of one flashy one, we don’t give it nearly the attention it deserves. This is, of course, entirely backwards: if we were to perform a root cause analysis on all of these bugs, we’d find the same thing over and over again. Just because it doesn’t have a name doesn’t make it any less devastating.
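
To make that root cause concrete, here is a minimal sketch of the archetypal bug behind these reports: an out-of-bounds write. I’ve written it in Rust so the unsafety is explicit; the equivalent C or C++ compiles without complaint and silently corrupts adjacent memory.

```rust
fn main() {
    let mut buf = [0u8; 4];
    let i = 7; // an attacker-influenced index, past the end of `buf`

    // Safe Rust refuses this: the line below would abort with a
    // bounds-check panic at runtime rather than corrupt memory.
    // buf[i] = 42;

    // The C/C++ behavior is only reachable by opting in to `unsafe`.
    // Writing past the end of `buf` is undefined behavior, the root
    // cause of a classic stack buffer overflow.
    unsafe {
        *buf.as_mut_ptr().add(i) = 42;
    }
}
```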

The last few years have produced significant new tools, such as AFL and libFuzzer, which make fuzzing more accessible, and ASan (AddressSanitizer), which makes detecting memory corruption easier. But none of these address the root cause: the near impossibility of writing C and C++ code which does not have these vulnerabilities, and the extreme popularity of these languages in security-sensitive contexts.
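
For a sense of what these tools look like in practice, here’s a minimal sketch of a fuzz target, written against the libfuzzer-sys crate that cargo-fuzz uses. The parse_packet function is a hypothetical stand-in for whatever code in your project handles untrusted input:

```rust
// Fuzz target sketch: libFuzzer repeatedly calls this with generated
// and mutated inputs; building with a sanitizer such as ASan makes
// any memory corruption it triggers fail loudly instead of silently.
#![no_main]
use libfuzzer_sys::fuzz_target;
use std::convert::TryInto;

fuzz_target!(|data: &[u8]| {
    // Hypothetical parser under test.
    let _ = parse_packet(data);
});

fn parse_packet(data: &[u8]) -> Option<u32> {
    // Placeholder logic: read a 4-byte big-endian length header.
    let header: [u8; 4] = data.get(..4)?.try_into().ok()?;
    Some(u32::from_be_bytes(header))
}
```

The same kind of harness works for C and C++ via libFuzzer directly, but that’s exactly the point: the tooling finds these bugs after the fact, it doesn’t prevent them.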

So what should we do? If you’re working on an OS kernel or core service, a web browser, or a network server: start gradually porting your code to a memory-safe language. My personal bet is on Rust, and I’m extremely proud that my employer has been investing both in the language and in porting parts of Firefox to it. If you’re starting a new project, build it in a memory-safe language from day one. The programming languages community should invest in research into how to build more accessible and ergonomic memory-safe languages. And people are welcome to continue researching how to make C/C++ safer; but two decades after “Smashing the Stack for Fun and Profit”, a stack buffer overflow in Chrome is still a critical vulnerability, so I’m not holding my breath.
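
Mechanically, gradual porting means rewriting one security-sensitive routine at a time in Rust and exposing it over a C ABI, so the rest of the codebase keeps calling it unchanged. Here’s a sketch; the function name and signature are hypothetical:

```rust
use std::slice;

// A bounds-checked replacement for a C memcpy-style helper. Existing
// C/C++ callers link against this exactly as they would a C function.
#[no_mangle]
pub extern "C" fn checked_copy(
    dst: *mut u8,
    dst_len: usize,
    src: *const u8,
    src_len: usize,
) -> bool {
    // The only unsafe code is at the FFI boundary, where we must
    // trust the caller's pointers and lengths; everything after this
    // point is bounds-checked safe Rust.
    let (dst, src) = unsafe {
        (
            slice::from_raw_parts_mut(dst, dst_len),
            slice::from_raw_parts(src, src_len),
        )
    };
    if src.len() > dst.len() {
        return false; // refuse, rather than overflow the destination
    }
    dst[..src.len()].copy_from_slice(src);
    true
}
```

Each routine ported this way shrinks the attack surface immediately, without waiting for a rewrite of the whole project.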

The first step in solving a problem is admitting you have one. The developers who build macOS, Firefox, Chrome, and other C/C++ codebases (both open and closed) are not dumb, and they’re not bad developers; they have a bad tool. There are still large portions of our industry who believe C and C++ can be safe if developers are simply smarter and work harder. I think the data makes clear this isn’t true. We must address the root cause and migrate our critical projects to other, safer programming languages. This does not need to happen overnight; each module or library ported is a big win. But it does need to happen.