This is part one of a four-part series about measuring security at a startup. The others include: Measuring Capacity in Security, Security Frameworks Explained and Assessed For a Startup, and How We Measure Security at CircleCI.
“I need metrics to report up the chain,” my VP said. “Security is notoriously hard to measure. We still need to do it.”
Our team was growing, spending more money and ingesting more people/services than anyone could plausibly keep an eye on. It was a reasonable request. I wanted better insight too.
During the first half of 2020, I tried to squeeze in Google searches on measuring security. I asked a few folks what they did. I poked around our Jira board, but all I learned is our data wasn’t catalogued in any meaningful way. (Not yet…)
Several people suggested reading How To Measure Anything In Cybersecurity Risk. It’s an insightful book, but I struggled to apply anything to my environment. It felt like a master’s curriculum for conducting the San Francisco Symphony when my job leading security at a startup growing from 50 → 400 people felt more like managing a dive bar. I could appreciate a gent like Mr. Bayesian Statistics, but my customers like 2FA, unencrypted laptops and procurement would sock him in the nose before he had time to order a Roy Rogers.
Like any good journalist would do, I closed the book without finishing it and reached out to one of the authors, Richard Seiersen. He was happy to talk. That turned into moderating a three-person panel sponsored by the Bay Area OWASP Chapter on October 21, 2020. It was amazing.
Not only did my friend Leif Dreizler pull off the Zoom IT goalie position without a single hitch, but over a hundred folks showed up and spun off an equal number of ideas. More than any single person could remember. I took notes, downloaded the Zoom chat discussion and wrote most of it below in the Ideas section.
So Where Should Measuring Security Start?
Ultimately the best idea didn’t come from Rich’s book. It didn’t come from any blogs. Nor did it come from the panelists or audience members. My VP brought it up the day after the panel.
Him: “How did it go?”
Me: “I have pages of notes and lots of new contacts. So many it’s gonna be hard to distill.”
Him: “That’s great. I’ve got the recording open in a tab. But you know what happens to tabs like that and we need something asap. Just write down what you already measure.”
Me: Why didn’t I already do that?
The first mistake I made when hearing his original ask for security metrics was to assume he wanted more metrics, new metrics, different metrics or better metrics. That wasn’t the case. Our team at CircleCI, which is SOC 2 certified and FedRAMP authorized, already tracks a ton of stuff for audits. Among other things, we’re required to upload monthly vulnerability scan results to our regulator, conduct quarterly access controls audits and burn down an annual Risk Assessment.
On our audits, are there more findings this year than last? Are any of them Criticals or Highs? How long did it take to resolve them? What about the results of our quarterly penetration tests and the speed we fix issues? How about the different types of security incidents we’ve had over the years? Etc.
It took ten minutes to transfer every existing metric from my head into a spreadsheet. Turns out I wasn’t starting at zero. That was something he could report up and it served as a launching pad for what came next.
Assess That Inventory
With my existing inventory, I assessed each item based on two questions:
- How is this helping me make a decision about the future?
- Does this validate an earlier decision or demonstrate that something needs to change?
This exercise triggered a lot of ideas so I started a note in my phone titled Things I Wish I Knew About The Ranch.
Coming from Wyoming, I think of our environment as a ranch. Living beings roam around doing things, a barbed wire fence keeps some things in while allowing others to pass through, data streams pass through, etc. Then there are the far edges of the property that require a 4x4 to reach, under water in the middle of the fishing pond, mineral rights below the surface and air rights above, or even disputed territory with the Hatfields and McCoys (I’m looking at you IT and HR…) on either side of us.
I wrote all that down, what I didn’t understand about each one, and a series of statements that would make me feel confident I did understand them.
Try Out Ideas
With an assessed inventory and set of questions to answer, things felt better. It also gave me an opinion when I examined this unvarnished list of panel ideas, frameworks, questions, topics, metrics, and uncategorized stuff from my own follow-up research. Most of these will be garbage to you because they were tailored to my situation. That’s okay! A few of these were helpful to me, and hopefully they’ll be helpful to others. There are no right or wrong answers… though everyone should patch their stuff!
- If you spend money on something, measure it. However, as Ryan McGeehan writes, “We should keep risk measurement separate from performance measurement…” because the issues are infrequent, unexpected, indirect, and unobservable. I totally agree with that.
- Which is a better use of company cash: Spending $100,000 a year on a security analyst or on an extra $20M in cyber insurance? Seiersen said he concluded that buying more insurance produced more value. Breaches are inevitable and another junior analyst won’t stop that. Extra insurance, though, can help the company continue operating after a breach.
- What work should be tracked Kanban style (operations?) vs. measured (new security features) via Agile?
- Within Jira, add a drop-down menu requirement to identify the internal team that a ticket submitter belongs to. This will identify people/teams who proactively reach out to security (awesome) and surface which teams that don’t (boo).
- This is not confirmed, but someone in the panel said Slack increased payouts for their bug bounties to see if that resulted in better results. A review after 18 months found that higher bounties only increased the number of low-quality submissions, which in turn created more operational work to review them. (If you worked on this at Slack, reach out. I’d love to learn more.)
- Caroline Wong said one of her previous companies developed a security NPS score where they surveyed every non-security engineer about their sense of security. The scores started at 80 and, based on subsequent security work, follow-up surveys rose to 88 and 93.
- BSIMM, MITRE, FAIR, and BOOM frameworks made a lot more sense after creating an existing inventory. Caroline’s book, the YouTube video Navigating Office Politics about the adversarial relationship between Product and Security teams, and the talk Security At The Center of DevOps are also good to poke through.
- The number of security incidents isn’t particularly important, but pay attention to the number in any specific class (is there a strange number of XSS attacks?) that are repeated or defy your expectations of what is likely to happen at your company.
- Every security incident should include time to detect/discover, time to respond, time to contain and time to resolve. Track that.
- Seiersen repeatedly brought up capability and how to measure it. It’s a big enough subject that I’ll break it out in another post. My initial thought went to statements like: our time to respond to incidents is rising, therefore we need to invest more in incident response. Or: our backlog is growing.
- Avoid shoehorning fancy security terms and acronyms onto your internal customers. Use the business terms already in existence within the company.
- Create segmented backlogs and chart their progression. Is it growing or shrinking? That should demonstrate with data that more headcount is needed or, much better, that your program is succeeding.
- Dig into third-party tools like AWS, GCP, Azure, Google Dashboard, GitHub, Slack, and Jira. All of them have some great measurements specific to security.
- If someone forced you to turn a threat model into a visual representation, what would that look like?
- Take the number of development teams at your company, ask each one to grade itself on five security marks based on a 1–10 ranking, task your security team to assess those same teams using the same questions, average the scores and focus on the high-scoring teams.
- Why is it so hard for security leaders and executives to see eye to eye? The first step to solving a problem is properly identifying it. What do executives not understand about security, specifically?
- It’s easy to be fooled by vanity metrics. These are the metrics we take for granted and use because everyone else does. How do you use metrics to ensure your security capabilities are working? What would you see occurring that lets you say, “Yes, we are keeping up with the speed of DevOps”? More to the point, how do you know your capabilities are scaling, accelerating or slowing? If you can’t answer these questions, are you even doing modern metrics that matter?
- If your CEO came to you with an acquisition target she wanted to close in two weeks, how would you assess that from a security and risk perspective? What would be a deal-killer finding that would cause you to say absolutely no way?
- What is a good risk management model to use for software vendors within customer environments? For example, hypervisors, thick clients, admin products, even hardware, routers, switches etc.
- As a security professional, what 3–5 KPIs/metrics do you personally pay attention to the most when thinking of the health of your SaaS company security program? If you can’t translate those up to management, keep trying. That’s what matters.
- What are your SRE or operations teams using for KPIs/metrics and how could those apply to your domain?
- How do you measure the success or impact of non-automatable cultural or human practices, like the adoption of threat modeling or company-wide security training?
- Identify the number of tickets marked Escalation, the number marked Operations and the number that wind up on Backlogs. That should help with staffing.
- Does your Software Development Lifecycle (SDLC) include security? If so, how could you measure whether people are abiding by the requirements?
- At a company with fewer than 500 employees, do metrics need to be short-lived rather than long-lived (like patching)?
- What are some specific metrics that take shape above 500 employees? 1,000? Above?
- What measurements have you used to drive hiring a new person?
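One of the more actionable ideas above, tracking time to detect, respond, contain and resolve for every incident, boils down to computing durations from timestamps you probably already have in a ticketing system. A minimal sketch in Python; the incident records and field names here are hypothetical placeholders, not pulled from any specific tool:

```python
from datetime import datetime
from statistics import mean

# Hypothetical incident records. In practice these timestamps would come
# from your ticketing system (e.g. a Jira export).
incidents = [
    {"opened": "2020-09-01T08:00", "detected": "2020-09-01T09:30",
     "responded": "2020-09-01T10:00", "contained": "2020-09-01T14:00",
     "resolved": "2020-09-02T11:00"},
    {"opened": "2020-10-12T13:00", "detected": "2020-10-12T13:20",
     "responded": "2020-10-12T13:45", "contained": "2020-10-12T18:00",
     "resolved": "2020-10-13T09:00"},
]

def hours_between(start: str, end: str) -> float:
    """Elapsed hours between two ISO-8601 timestamps."""
    fmt = "%Y-%m-%dT%H:%M"
    delta = datetime.strptime(end, fmt) - datetime.strptime(start, fmt)
    return delta.total_seconds() / 3600

# Each phase is measured from the end of the previous one.
phases = [("detect", "opened", "detected"),
          ("respond", "detected", "responded"),
          ("contain", "responded", "contained"),
          ("resolve", "contained", "resolved")]

for name, start_field, end_field in phases:
    avg = mean(hours_between(i[start_field], i[end_field]) for i in incidents)
    print(f"mean time to {name}: {avg:.1f} hours")
```

Charting those four averages quarter over quarter gives exactly the kind of trend line that supports (or challenges) a claim like “our time to respond is rising, so we need more investment in incident response.”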