Risky Business

There are a variety of professions whose job is to help businesses manage risk: information security, accounting, and legal, for example. We generally expect those professions to reduce risk. But we also know that businesses take risks all the time, because while these professions reduce risk, and may even eliminate some risks, they don’t eliminate all of them. What is the responsibility of a professional in one of these fields when a business is taking a risk in their domain? There are, broadly, two possible answers: their job could be descriptive, or it could be normative.

If your theory is that these roles are supposed to be descriptive, then it’s the job of a lawyer to provide their views on how much risk some action involves, and how that compares to the risks involved in other courses of action. If your theory is normative, then you believe that it’s the job of a lawyer to tell people what they cannot do because it is too risky or illegal. While not universal, the conventional wisdom is that good risk practitioners are descriptive: they help a business understand its risks, and give it the tools to decide which risks the business wants to take.

In some sense, this is obviously correct: neither security nor accounting nor legal owns the business, and their ability to stop anything goes only as far as the business allows. In a larger sense, however, the idea that practitioners should only ever be descriptive of risks is incomplete, because it incorrectly implies that all risks are equal. Specifically, it’s correct that when facing a risk decision on the margin, a risk practitioner’s job is descriptive. But not all risk decisions are on the margin, and when they’re not, risk practitioners have a responsibility to speak normatively and prescriptively.

Risk management is an investment of resources now to reduce the possibility of substantially worse outcomes later. And so we can integrate over time and say, with varying degrees of precision, what the costs and benefits of some risk management approach will be. However, this also means that on a short time scale, almost any risk management investment has a poor return on investment. A business may therefore believe it has an incentive to ignore a risk, and behave in a way that’s irresponsible on even a modestly longer time horizon.
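The short-horizon versus long-horizon trade-off can be sketched with a toy model. All of the numbers below are hypothetical, chosen purely for illustration: an upfront and recurring cost for a mitigation, versus the expected loss from leaving a risk unaddressed, accumulated over time.

```python
# Toy model (all numbers hypothetical): compare the cumulative cost of a
# security investment against the expected loss from an unmitigated risk.
UPFRONT_COST = 500_000        # one-time cost to build the mitigation
ANNUAL_COST = 50_000          # recurring cost to operate it
INCIDENT_PROBABILITY = 0.05   # assumed annual chance of the bad outcome
INCIDENT_COST = 5_000_000     # assumed loss if the bad outcome occurs

def cost_with_mitigation(years: int) -> float:
    """Cumulative cost of making the investment, over a time horizon."""
    return UPFRONT_COST + ANNUAL_COST * years

def expected_loss_without_mitigation(years: int) -> float:
    """Expected loss from the risk, accumulating linearly over time."""
    return INCIDENT_PROBABILITY * INCIDENT_COST * years

for years in (1, 5, 10):
    invest = cost_with_mitigation(years)
    expected = expected_loss_without_mitigation(years)
    print(f"{years:>2} yr: invest {invest:>9,.0f} vs expected {expected:>9,.0f}"
          f" -> {'worth it' if invest < expected else 'poor ROI'}")
```

With these illustrative numbers, the investment looks like a loss at a one-year horizon but is clearly worthwhile at five or ten years, which is exactly the incentive gap that tempts a business to ignore the risk.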

Choosing to launch a product that will store substantial amounts of sensitive personal information with no MFA (multi-factor authentication) for administrative users, and no plan to introduce it, may allow you to launch faster, but on even a moderate time scale it’s an indefensible amount of risk-taking with people’s personal data. On the flip side, whether CAA (Certification Authority Authorization) records need to be in place for all domains is a decision that sits on the frontier of the cost/benefit trade-off, even on a longer timescale.

To analogize to another profession, a lawyer advising a client on risks related to a merger might tell the client that there’s a 50% chance a competition agency would try to block the merger, advise them on their likelihood of prevailing in court, and allow the business to decide if the merger still makes sense given that risk. But, the same lawyer would advise their client that deleting responsive emails after receiving a preservation order is categorically unacceptable and they must not do it. The idea that a lawyer might tell the client, “here are the risks if you do delete the emails, it’s your call” is anathema to any notion of professional responsibility.

Those of us who deal in information security risks must develop a similar understanding: sometimes our job is to describe risks, and other times it’s our job to declare certain practices unacceptably risky or dangerous. Recognizing which scenario is which is, of course, the kind of thing one gets better at with experience (ideally, we’d also have a strong empirical literature to base our risk decisions on, but while we have data in a few areas¹, in far too many areas we don’t). It should also be said that declaring a risk far from the margin isn’t the end of a professional’s responsibility: they then need to help the business identify alternate paths that do not carry unacceptable risks.

Security engineers need to take their role in shaping what practices are and are not acceptable more seriously. In part this means accepting that “let the business decide” is only applicable to decisions on the margin: sometimes a risk is simply unacceptable. However, doing that also means that security engineers need to be right about which things are in that category. I have seen far too many security teams, once empowered with a veto, become lazy and ineffectual. A veto insulated them from needing to justify their policies, so the requirements they imposed on the organization became increasingly inefficient at improving security. Being able to effectively describe risks for businesses to make decisions about is a necessary precondition to having the credibility to say, “this is an unacceptable risk with our users' data, we must not do it”.

Finally, the best tool we nearly always have at our disposal to improve security is finding Pareto-optimal solutions: ways to improve security and reduce costs at the same time. While at the limit there are often trade-offs between security and cost, in our industry today we are almost never operating at the limit. Finding improvements that don’t have trade-offs lets us move the cost-benefit frontier, shifting risk decisions away from the margin. We need to find those opportunities and make them ubiquitous.


  1. For example, memory safety and phishing-resistant MFA.