Many of the models used to quantify information risk are based on estimates of models of estimates. Clearly, precision is a challenge, and there is a school of thought that the true risk to an organization cannot be quantified, because what ends up truly hurting us is unknown (or else we'd have an answer for it).
Of course, you are in the financial business, so you are skeptical. You are not convinced you can truly quantify information risk, but you also need convincing that you can't.
Riddle me this: Why haven't the big brains with the big models been able to predict things like the popping of the Internet bubble, the unwinding of the private equity/cheap credit fiesta, or the collapse of the sub-prime mortgage market? For one thing, negativity is bad for business. But even the traders, who have their own metrics, were caught flat-footed. We are crummy at predicting things, so counting things meant to feed predictions is a fruitless exercise.
Yes, I've read The Black Swan, and a lot of what it discusses, such as the nature of uncertainty, really resonates with me regarding the discussion of security metrics. In the absence of a truly relevant predictive model, practitioners have spent a lot of time counting all sorts of things. The good news is that there are lots of things to count, but that doesn't mean we should count them.
That isn't an indictment of sophisticated risk modeling and quantification approaches like FAIR. I do think that for very advanced organizations that have significant data and understand what they are trying to protect, models like this can be useful. But to be clear, I think fewer than 5% of financial institutions fall into that category. Sorry to burst your respective bubbles, but before you get into PhD-level risk models, a little blocking and tackling is a good thing to focus on.
I break up the idea of useful metrics into three categories:
- Relevance to the business
- Responding to incidents
- Tracking operational effectiveness
Relevance to the business
These metrics are more qualitative than truly quantitative, but they help senior executives understand how and where you are spending your time. The idea is to use these metrics to gain credibility and show you are in control of the security program. It's not about tracking operational excellence; we'll get to that later.
Metrics in this bucket include downtime due to security issues, number of devices rebuilt, percentage of application code that has been reviewed, etc. Remember that senior executives don't want detail, unless they need it. They want you to highlight for them what you've been doing and where the areas of concern are.
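Business-facing metrics like these roll up naturally into the one-line-per-item summary executives expect. A minimal sketch, assuming hypothetical metric names and illustrative numbers (none of which come from the article):

```python
# Hypothetical roll-up of business-facing security metrics for an
# executive summary; field names and numbers are illustrative only.
business_metrics = {
    "downtime_hours_from_security_issues": 3.5,
    "devices_rebuilt": 12,
    "application_code_reviewed_pct": 64,
}

def executive_summary(metrics):
    """One readable line per metric: the level of detail executives want."""
    return [f"{name.replace('_', ' ')}: {value}"
            for name, value in metrics.items()]

for line in executive_summary(business_metrics):
    print(line)
```

The point of the sketch is the shape, not the numbers: a handful of headline figures, each answerable in one sentence, with the detail held in reserve for when they ask.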
Responding to incidents
This is where the rubber meets the road, since how a security professional responds to an incident has everything to do with whether they have a job tomorrow. We all know that incidents happen, but our job is to contain the damage and reduce the liability the organization is saddled with. Of course, you can't really count anything in this bucket until something bad happens, but you need to figure out what you'll do when your number comes up and what you'll count to show value and responsiveness, and to make the case that you did the best possible job, given the situation.
Metrics here include mean time to resolve an incident, average cost per incident, etc. From a trending standpoint, you'd like to see the mean time and average cost going down as you gain experience handling incidents. Of course, you'd also like to have fewer incidents, but, if anything, financial professionals are realists.
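Mean time to resolve and average cost are simple enough to compute once you keep a record per incident. A minimal sketch, assuming a hypothetical record format of (opened, resolved, direct cost); the function names and sample data are illustrative:

```python
from datetime import datetime
from statistics import mean

# Hypothetical incident records: (opened, resolved, direct cost in dollars).
# Fields and figures are illustrative, not from any standard or real data.
incidents = [
    (datetime(2024, 1, 3, 9, 0),  datetime(2024, 1, 5, 17, 0), 42_000),
    (datetime(2024, 4, 10, 8, 0), datetime(2024, 4, 11, 12, 0), 18_000),
    (datetime(2024, 7, 2, 14, 0), datetime(2024, 7, 3, 9, 0),  9_500),
]

def mean_time_to_resolve(records):
    """Mean time to resolve, in hours, across all incidents."""
    return mean((resolved - opened).total_seconds() / 3600
                for opened, resolved, _ in records)

def average_cost(records):
    """Average direct cost per incident."""
    return mean(cost for _, _, cost in records)

print(f"MTTR: {mean_time_to_resolve(incidents):.1f} hours")
print(f"Average cost: ${average_cost(incidents):,.0f}")
```

Run this per quarter rather than over the whole history and the trend line falls out: both numbers should drift down as the response process matures.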
Tracking operational effectiveness
Ultimately, a large part of being a security professional is doing the stuff you know works: deploying devices with secure configurations, patching within a reasonable amount of time, and monitoring networks and systems. These metrics are operational in nature and lend themselves to counting, trending, and improving over time.
Just understand that this set of metrics is for you, not for them. Your management doesn't really care whether it took two or three days to patch the servers, as long as nothing was exploited. These metrics are useful to improve your performance and efficiency. Since we have to do this stuff, we may as well do it well.
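Counting and trending an operational metric like patch latency takes very little machinery. A minimal sketch, assuming hypothetical per-month figures for days-to-deploy (the variable names and numbers are invented for illustration):

```python
from statistics import mean

# Hypothetical data: days from patch release to deployment, by month.
# Purely illustrative numbers to show the counting/trending idea.
patch_days = {
    "2024-01": [14, 21, 9],
    "2024-02": [12, 10, 8],
    "2024-03": [7, 9, 5],
}

# Average latency per month.
monthly_mean = {month: mean(days) for month, days in patch_days.items()}
for month, avg in sorted(monthly_mean.items()):
    print(f"{month}: {avg:.1f} days average patch latency")

# A crude trend check: is the latest month better than the first?
months = sorted(monthly_mean)
improving = monthly_mean[months[-1]] < monthly_mean[months[0]]
print("Trending down" if improving else "Needs attention")
```

This is exactly the kind of number that belongs on your own dashboard, not the executive deck: it tells you whether your process is getting faster, which is all it needs to do.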
So how do you sell this kind of metrics hierarchy to the senior executives? Don't they want to see sophisticated models like the ones the traders have? How can you get by with high-level, business-oriented metrics that present just the basics of what your security program is about?
It's based on credibility. You have to be credible in their eyes, and you do that by doing what you say you are going to do and not screwing up. You may end up counting a lot of silly stuff for a little while (since your predecessor did), but as you gain credibility, you'll be able to evolve your metrics program to count the things that are important, not just the things that can be counted.
About the author:
Mike Rothman is president and principal analyst of Security Incite, an industry analyst firm in Atlanta, and the author of The Pragmatic CSO: 12 Steps to Being a Security Master. Get more information about the Pragmatic CSO at http://www.pragmaticcso.com, read his blog at http://blog.securityincite.com, or reach him via e-mail at mike.rothman (at) securityincite (dot) com.