Your Boss Doesn’t Care About Vulnerability Counts
... And if your boss is asking for vulnerability counts, their boss doesn't actually care about how many problems you're finding. Crafting metrics is the fine art of distilling the information decision-makers need to make a decision. If your metrics spark more questions than insights, you're probably not reporting metrics: you're reporting data, like the raw count of problems your tools are churning out.
Good metrics make the ineffable understandable. The trouble with cybersecurity is that it’s just so darn abstract. You can’t walk up to a Mean-Time-To-Remediate and poke at it with a finger. It’s difficult to communicate abstract concepts like the residual risk carried across an application portfolio, and decision-makers struggle to make the right call in the face of unknown unknowns like that. Good metrics take all that abstract, ethereal goodness, badness, progress, and trouble and communicate it in a way that even the most technophobic can grasp. Armed with these newfound insights, decision-makers get to make decisions.
What types of decisions should you be feeding with your metrics? Short-term risk-management decisions and longer-term strategic ones. Good metrics also play a couple of supporting roles: summarizing how things are going or progressing, and raising awareness or winning buy-in to help solve problems. When compiling data and building reports, it helps to be deliberate and ask “Why?” of every data point that’s included.
Want to start reporting better metrics? Here are the steps:
Decide if what’s measured actually matters to people. The most common failure mode of metrics is taking the canned reports from a tool and reporting those numbers and graphics up as if they mean something. Nine times out of ten, default metrics dashboards are built to provide information to tool operators. If someone’s not operating that tool, they don’t care what metrics get reported out. Shift to measuring and reporting information that decision-makers need to make informed decisions.
Build a narrative that feeds a decision. Humans are storytellers; we communicate via narratives and cause-and-effect threads. It's folly to assume that when someone is promoted to a certain level, they get read into the secret society of risk analyzers and gain the special ability to consume raw data and output human-understandable risk information. As the ground-level technical expert, you're the one management relies on to say whether things are bad or good, worsening or improving. In my experience, the best way to build metrics is the same way you get a toddler to go to bed. Begin with a story. "Once upon a time, there was a bar chart, and when that bar chart went up..." What decision inputs or context does this metric provide? Was it good? Was it bad? Is the bar chart going up because you're deploying tooling and getting more visibility into risk? That could be a good thing. What happens when the bar chart reaches a certain threshold? Does someone need to step in and change how things are operating? Does there need to be a freeze or a halt?
Collect data from the appropriate data sources. When measuring how a cross-country road trip is going, people don’t report on how the tires are wearing. Tire wear is tangentially related data, because the farther a car has driven, the more worn the tires are, but it’s not actually helpful when phoning ahead to let people know whether you’ll be late for dinner. Just because a metric exists and is ready to report doesn’t mean it’s worth reporting. Remember how the default tool metrics might only be useful to tool operators? On the flip side, the data that does matter may not be immediately available. Some metrics might require drafting new scripts that periodically poll an API and stash the data away somewhere so trend-line and time-based metrics can be reported up.
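For the sake of having something concrete to poke at, here’s a minimal sketch of that kind of collector script. The endpoint, token, and response shape are made up for illustration, not any real product’s API; swap in whatever your scanner actually exposes and wherever you actually want to stash history.

```python
#!/usr/bin/env python3
"""Poll a scanner API and append a daily snapshot of open findings to a CSV.

Hypothetical sketch: SCANNER_API, SCANNER_TOKEN, and the response shape are
assumptions for illustration, not a real product's interface.
"""
import csv
import datetime
import os

import requests  # pip install requests

SCANNER_API = os.environ.get("SCANNER_API", "https://scanner.example.com/api/v1/findings")
API_TOKEN = os.environ["SCANNER_TOKEN"]
SNAPSHOT_FILE = "findings_history.csv"


def take_snapshot() -> None:
    """Fetch current open findings and append one timestamped row per severity."""
    resp = requests.get(
        SCANNER_API,
        headers={"Authorization": f"Bearer {API_TOKEN}"},
        params={"status": "open"},
        timeout=30,
    )
    resp.raise_for_status()
    findings = resp.json()  # assumed: a list of {"severity": ..., ...} dicts

    # Roll the raw findings up into a count per severity.
    counts = {}
    for finding in findings:
        sev = finding.get("severity", "unknown")
        counts[sev] = counts.get(sev, 0) + 1

    today = datetime.date.today().isoformat()
    write_header = not os.path.exists(SNAPSHOT_FILE)
    with open(SNAPSHOT_FILE, "a", newline="") as fh:
        writer = csv.writer(fh)
        if write_header:
            writer.writerow(["date", "severity", "open_findings"])
        for sev, count in sorted(counts.items()):
            writer.writerow([today, sev, count])


if __name__ == "__main__":
    take_snapshot()
```

Run something like this from cron or a scheduled pipeline once a day and the trend lines come for free.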
Math it out. How is all the data going to be combined and transformed from raw numbers to knowledge and decision inputs? Build baselines, thresholds, and targets to provide context about what’s normal and what needs intervention. When the numbers cross a threshold or stray too far from the baseline, do something about it. When targets are achieved, pop some bubbly and have a celebration. Have a guide that explains all the data sources a metric relies on, what calculations are made, what important thresholds shouldn’t be crossed, and what each metric means in case everyone forgets why a certain graph is being sent up.
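As a rough sketch of what “math it out” can look like in practice, here’s one way to turn those daily snapshots into a baseline and a threshold check. It assumes the CSV produced by the collector sketch above, and the 30-day window and 2x multiplier are placeholder choices you’d tune for your own environment, not recommendations.

```python
import csv
import statistics
from collections import defaultdict

SNAPSHOT_FILE = "findings_history.csv"
BASELINE_WINDOW = 30    # days of history that define "normal" (placeholder choice)
ALERT_MULTIPLIER = 2.0  # intervene when today's count is 2x the baseline (placeholder choice)


def check_thresholds() -> None:
    """Compare the latest per-severity counts against a rolling baseline."""
    history = defaultdict(list)  # severity -> counts in date order (rows are appended daily)
    with open(SNAPSHOT_FILE, newline="") as fh:
        for row in csv.DictReader(fh):
            history[row["severity"]].append(int(row["open_findings"]))

    for severity, counts in history.items():
        latest = counts[-1]
        # Baseline is the mean of the recent window, excluding today's measurement.
        window = counts[-BASELINE_WINDOW - 1:-1] or counts[:-1] or [latest]
        baseline = statistics.mean(window)
        if latest > baseline * ALERT_MULTIPLIER:
            print(f"{severity}: {latest} open vs. baseline {baseline:.1f} -- time to intervene")
        else:
            print(f"{severity}: {latest} open vs. baseline {baseline:.1f} -- within normal range")


if __name__ == "__main__":
    check_thresholds()
```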
Combine the metrics into the whole picture. While simple metrics can be represented as a single number, more complex metrics may need context built into the graph. That context can take the form of past measurements to show trend lines; thresholds and baselines to provide a quick check on whether things are going well; and related metrics that might have a causal relationship. Is there security training that educates developers on how to avoid SQL injection? Put that training-consumption metric next to SQL injection trends in teams that have completed the training vs. those that haven't. Different audiences need different views: engineering might need one dashboard, developers another, and leadership a third. Take their needs and decisions into account when providing information to them.
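Here’s a small sketch of that training-vs.-SQL-injection comparison. The team names, training roster, and finding counts are all invented for illustration; in practice the two inputs come from your training platform and your scanner.

```python
from collections import defaultdict

# Hypothetical inputs: which teams completed secure-coding training, and open
# SQL injection findings per team per month.
TRAINED_TEAMS = {"payments", "identity"}
SQLI_FINDINGS = [
    # (month, team, open_sqli_findings)
    ("2024-01", "payments", 9), ("2024-01", "identity", 7), ("2024-01", "storefront", 8),
    ("2024-02", "payments", 5), ("2024-02", "identity", 4), ("2024-02", "storefront", 9),
    ("2024-03", "payments", 2), ("2024-03", "identity", 3), ("2024-03", "storefront", 10),
]


def sqli_trend_by_training_status() -> dict:
    """Average open SQL injection findings per month, split by training completion."""
    buckets = defaultdict(lambda: defaultdict(list))  # month -> cohort -> [counts]
    for month, team, count in SQLI_FINDINGS:
        cohort = "trained" if team in TRAINED_TEAMS else "untrained"
        buckets[month][cohort].append(count)

    return {
        month: {cohort: sum(vals) / len(vals) for cohort, vals in cohorts.items()}
        for month, cohorts in sorted(buckets.items())
    }


if __name__ == "__main__":
    for month, cohorts in sqli_trend_by_training_status().items():
        print(month, cohorts)  # the two trend lines you'd plot side by side
```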
Iterate. Metrics should be part of a conversation between the sender and the receiver. Over time, continually improve the metrics that are reported to better reflect the real world.
From here, whenever rolling out something new, treat it like a science experiment. Build a hypothesis, collect data, run trials, and see if the effort bears fruit. Or compile metrics that tell a scary story, and use that story to get buy-in for fixing just how bad things already are.
There are always more words to spend on a topic like this one, but I've hit my budget for now. Stay secure, and never forget the humans.