
Application Security Is In A Rut; Time To Shake Things Up?

Published: March 4, 2025

By Shahar Man

As technological advancements such as AI-driven tools become more prevalent, application security (AppSec) faces unprecedented challenges. Application security isn’t new—it’s been around for decades, with a singular goal: to ensure that applications don’t present risks that could be exploited by attackers.

But over time, a lot has changed. Many of today’s security tools were built 15 to 20 years ago, in an era when software development followed the slow, linear waterfall method. Back then, companies would release software maybe once or twice a year. The process allowed time for thorough security testing before deployment. But with the rise of Agile development and DevOps, software is now released at an unprecedented pace—sometimes multiple times a day. Security teams no longer have the luxury of pressing “pause” to check for vulnerabilities.

Another major shift has been the explosion of open-source software. Today, about 85% of the code in modern applications comes from open-source libraries. While open-source offers advantages, it also introduces security risks. Since the code is public, vulnerabilities are well-documented, making it easier for attackers to exploit them. Once a Common Vulnerabilities and Exposures (CVE) is disclosed, attackers quickly develop scripts to take advantage of it.

AI adds a whole new level of complexity to an already dynamic situation. AI-assisted coding is introducing unprecedented security challenges while the security tools in use today have evolved piecemeal, often failing to address these modern complexities efficiently.

For example, application security is still split into two distinct segments: static application security testing (SAST) for custom code and software composition analysis (SCA) for open-source code. But code is code, and virtually every application uses both, intermingled. Why should we need separate tools? The reason is historical rather than practical, and in the seams between these tools, vulnerabilities arise that attackers are all too ready to exploit.

Qualifying—Not Quantifying—Threats

A decade ago, companies aimed for zero known vulnerabilities. Today, bluntly, that’s completely unrealistic. Instead, organizations must focus on risk management—prioritizing which vulnerabilities to fix based on actual exposure and impact.

But how can security teams determine which of the many possible vulnerabilities warrant their attention? That’s an area where the industry is evolving. Traditionally, vulnerabilities were ranked by severity scores, but those don’t account for an organization’s specific environment. Now, we’re seeing more refined approaches, like reachability analysis—determining whether an application actually loads or executes a vulnerable package.

More advanced analysis involves a type of assessment we refer to as “triggerability.” This means examining not just whether a vulnerable component is used by the application, but whether it can actually be exploited in a given environment. This approach is based on advanced code modeling and analysis, which provide deep insights without requiring runtime monitoring. (Other approaches, like runtime reachability, rely on production agents to track which parts of the code are actively used, but these come with operational overhead.)
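In spirit, reachability analysis reduces to a graph search: given a call graph extracted by static analysis, ask whether any path leads from the application’s entry points to the vulnerable package. The sketch below illustrates the idea; the module names and graph are hypothetical, and real tools work at much finer granularity (functions, not modules).

```python
from collections import deque

def is_reachable(call_graph, entry_points, vulnerable_module):
    """Breadth-first search from the application's entry points to
    determine whether a vulnerable module is ever loaded or invoked."""
    seen = set()
    queue = deque(entry_points)
    while queue:
        module = queue.popleft()
        if module == vulnerable_module:
            return True
        if module in seen:
            continue
        seen.add(module)
        queue.extend(call_graph.get(module, []))
    return False

# Hypothetical call graph derived from static analysis. The manifest
# declares "unused_util" (which ships a known CVE), but no executed
# code path ever imports it.
call_graph = {
    "app": ["framework"],
    "framework": ["json_parser"],
}

print(is_reachable(call_graph, ["app"], "json_parser"))  # True: on an executed path, prioritize
print(is_reachable(call_graph, ["app"], "unused_util"))  # False: declared but unreachable, deprioritize
```

Triggerability goes a step further than this filter, asking whether the vulnerable code can actually be driven with attacker-controlled input in the given environment; reachability is only the first cut.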

Fundamentally, the focus is shifting from counting vulnerabilities to understanding actual risk exposure. The raw number of vulnerabilities isn’t a useful metric anymore. What matters is whether a certain vulnerability poses a real threat in a given environment. The goal now is to change the conversation from volume to impact and to help teams focus on the vulnerabilities that truly matter.

Reestablishing Trust

That key question—which of these vulnerabilities actually matters?—is where the conversation between security teams and developers needs to change, because today it is a source of overwhelming frustration on both sides.

When it comes to remediating vulnerabilities in code, developers have to do the work. Security teams can suggest fixes, but they can’t implement them. Developers either need to update their software or modify lines of code, which takes time. This creates friction within organizations because—from developers’ perspective—it’s a massive time drain. If we send developers on a wild goose chase to fix vulnerabilities that don’t pose a risk, we not only waste their time but also do a disservice to the organization. They could be focusing on more critical vulnerabilities or simply writing new code.

It’s imperative that we reestablish trust between these two teams, which simply cannot continue working at odds (as they do in so many organizations). There are several aspects to this:

1. Determine what’s necessary. First, developers need to trust that when security teams ask them to remediate something, it’s truly necessary—not just box-checking or bureaucratic oversight. How do we build that trust? One way is to establish agreement on how much can realistically be fixed. We can’t ignore the number of developers available or their capacity to address vulnerabilities. There has to be a balance.

2. Settle on acceptable risk. Boards are also shifting their perspective. There’s an interesting Gartner report noting that many organizations today are showing a higher tolerance for risk, prioritizing innovation over absolute security. There needs to be alignment between application teams (the business side) and security teams about what constitutes unacceptable versus acceptable risk.

3. Provide evidence. Developers shouldn’t just be treated as remediation contractors—they own the applications. They need to understand why a vulnerability matters. That means not just flagging a severe vulnerability but showing its actual impact on the specific application. With capabilities like reachability and triggerability analysis, it is possible to demonstrate whether a vulnerability is truly exploitable. Adding business context strengthens the message. Instead of saying, “There’s a severe vulnerability in JavaScript package v5.1.3,” it’s far more impactful to say, “There’s a severe vulnerability in the shopping cart application of our main e-commerce site.”

4. Streamline remediation processes. If we can provide clear, actionable steps instead of making developers research solutions, it saves time and effort. Achieving trust, delivering the right context, understanding business impact and setting realistic remediation expectations all contribute to a more effective security process.

Old Problems, New Directions

The problem we’re discussing isn’t new; application security leaders have been dealing with it for years, and many are fatigued by vendors promising the same solutions, such as reducing CVE counts or filtering vulnerabilities differently. What’s new today is that advanced technology, powered by AI, allows us to analyze code in ways that weren’t possible before. Meanwhile, the old problem isn’t going away—rather, it’s growing exponentially as AI-generated code accelerates development.

Application security isn’t like other areas of cybersecurity. There’s no trophy for stopping an attack, no dramatic movie moment where an attacker is caught red-handed. It’s just the ongoing task of cleaning up code, all within a highly inefficient system. That lack of excitement can lead to cynicism and fatigue. But organizations simply cannot afford to become complacent in the face of existential threats. As technology transforms the industry, vulnerabilities will only become more severe, and the path to relative security ever more difficult to find.


https://www.forbes.com/councils/forbestechcouncil/2025/02/24/application-security-is-in-a-rut-time-to-shake-things-up/