
How To Lean Away From Lagging Indicators In Application Security

Published: December 4, 2023

By Laura Bell Main

The most commonly asked (and infrequently answered) questions faced by application security leaders and CISOs are: “How do we measure whether this is working? How do we know if the money, time and people we invest in cybersecurity are changing anything? Are we more secure as a result?”

Unlocking The Desire For Certainty

We hold onto those questions—the longing for certainty with specific yes or no answers—because it comforts us. It makes us feel better.

Executive teams expect this level of decisive evaluation. They want the certainty to act appropriately and strategically to manage risk. They are time-poor and often have many competing concerns; “it depends” rarely makes their job easier.

Sadly, cybersecurity is one of those areas where almost every answer we give starts with “it depends.”

Common Application Security Metrics

Many of us use secondary indicators and metrics to communicate our risk.

These include the number of vulnerabilities found by:

• Penetration Testing

• Software Composition Analysis (SCA)

• SAST and DAST

We often look at a combination of the number of results and their severity, aiming for a reduction over time.

While these are important to measure and track, they do not answer the original question and they cannot confirm the absence of security vulnerabilities—only assess the reduction in known vulnerabilities over that period.
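As a concrete illustration of the tracking described above, here is a minimal sketch of how a team might compute a severity-weighted count of findings per scan period. The findings data and severity weights are hypothetical, invented for this example; they are not an industry standard or output from any particular tool.

```python
# Hypothetical scan findings as (scan period, severity) pairs — a stand-in
# for whatever your SAST/DAST/SCA tooling actually exports.
findings = [
    ("2023-Q3", "critical"), ("2023-Q3", "high"), ("2023-Q3", "high"),
    ("2023-Q3", "medium"),
    ("2023-Q4", "high"), ("2023-Q4", "medium"), ("2023-Q4", "low"),
]

# Illustrative severity weights (an assumption, not a standard scale).
WEIGHTS = {"critical": 10, "high": 5, "medium": 2, "low": 1}

def severity_score(period: str) -> int:
    """Sum the severity weights of all findings in one scan period."""
    return sum(WEIGHTS[sev] for p, sev in findings if p == period)

print(severity_score("2023-Q3"))  # 22
print(severity_score("2023-Q4"))  # 8
```

The weighted score falling from 22 to 8 shows exactly the limitation the article describes: it measures only the reduction in *known* findings, and says nothing about vulnerabilities the tool's rules cannot detect.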

Looking Retrospectively: Helpful But Not Predictive

While tooling can be incredibly useful in securing your software, these tools rely on comprehensive rules and logic to find software weaknesses. These rules must be continually updated as new threats emerge.

This is such a well-acknowledged problem that the software testing community has a name for it: the “pesticide paradox,” in which testing mechanisms lose effectiveness over time unless they are continually updated or made context-relevant.

As a result, your tools can only detect known attacks. They retroactively look for known indicators of weakness in your current codebase.

As your tooling is not context-specific, you should treat these tools and their findings as a foundation for measuring your maturity rather than a complete measure of it. They establish a baseline for generic, known weaknesses.

These are “lagging indicators”: assessments that become apparent only after a contributing event has already occurred.

In this case, your contributing event is the discovery and publication of a vulnerability. Your tools can only indicate vulnerability after this point; they cannot predict future unknown vulnerabilities.

Lagging indicators from security assessment tools can give you a false sense of security.

If you see your number of weaknesses going down over time, you may believe that your security is improving. While your security against known public weaknesses is improving, it does not mean you are prepared for or protected against context-specific vulnerabilities or novel attacks.

Moving Toward Leading Indicators

So, what are your alternatives? How can you move away from lagging indicators to measure the likelihood of your software and systems remaining secure?

Originally from economics, leading indicators are measures of the consistency and quality of the inputs into a situation. By looking at the data going into a situation or event, you can often forecast its outcome.

Let’s take an example of that from the world of physical security.

How would you measure that your home is secure?

If you apply the same logic as in software development, you might look for a measure to compare over time. For example, how many times did your home alarm trigger this week?

Say you saw that your alarm triggered three times this month but four times the month before. The number of threats to your home has decreased by 25% over that period.

This measures the number of times the alarm is triggered; it does not assess the effectiveness of the alarm. Has a would-be thief found a way to bypass the sensors? Have you stopped setting the alarm on Tuesdays and Thursdays?

In these cases, these lagging indicators are misleading. They don’t mean that the threat has reduced or your security has increased.

In this case, the leading indicators may look quite different:

• The alarm is installed in every location in your house, so nobody can move through it without being detected by the alarm sensors.

• The alarm is turned on every time the house is empty.

• The alarm code is known to only a few people and is hard to guess.

While ensuring all of these leading indicators are satisfactory will not prevent every attack on your house, in combination they reduce the likelihood of an attack and increase its difficulty and opportunity cost.

Leading Indicators For Secure Software

If you want to move toward a more leading indicator-driven approach to application security in your organization, you may want to consider the following as a starting point:

• The percentage of the software team participating in security

• The percentage of the software included in security projects

• The percentage of software teams that conduct regular threat assessments

• The number of exceptions included in tooling configurations

• Delta security debt change (the difference between the number of vulnerabilities found and the number fixed)

• The percentage of applications that are centrally logged and monitored

While none of these measures is proof of security, examining each of them over time lets you establish whether your team is taking every possible step to identify new and novel security issues and address them.
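Two of the indicators listed above — participation percentage and delta security debt — reduce to simple arithmetic. The sketch below shows one possible way to compute them; the field names and figures are assumptions invented for illustration, not a standard schema.

```python
from dataclasses import dataclass

@dataclass
class TeamQuarter:
    """Hypothetical per-quarter inputs for two leading indicators."""
    engineers: int                  # total engineers on software teams
    engineers_in_security: int      # engineers participating in security work
    vulns_found: int                # vulnerabilities discovered this quarter
    vulns_fixed: int                # vulnerabilities remediated this quarter

def participation_pct(q: TeamQuarter) -> float:
    """Percentage of the software team participating in security."""
    return 100.0 * q.engineers_in_security / q.engineers

def security_debt_delta(q: TeamQuarter) -> int:
    """Found minus fixed: positive means security debt is growing."""
    return q.vulns_found - q.vulns_fixed

q = TeamQuarter(engineers=40, engineers_in_security=28,
                vulns_found=31, vulns_fixed=35)
print(participation_pct(q))    # 70.0
print(security_debt_delta(q))  # -4 (fixing faster than finding)
```

Tracked quarter over quarter, a rising participation percentage and a negative debt delta tell the story of consistent inputs, which is precisely what leading indicators are meant to capture.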

Changing Your Perspective And Culture, Not Your Tools

In software security, the time has come to embrace these leading indicators and reduce the focus on lagging indicators such as tool or penetration test results. While these metrics can be helpful, they don’t allow you to answer the fundamental question, “Are we doing enough to reduce the risk of software compromise?”

By shifting your focus to leading indicators, you will assess how you apply cybersecurity practices throughout your SDLC and software teams and as part of every project. Your metrics can then tell a story of how you are continuously working to identify problems and how consistently you are working to address those issues.


https://www.forbes.com/sites/forbestechcouncil/2023/11/30/how-to-lean-away-from-lagging-indicators-in-application-security/?sh=38a5df51327f

