Creatively Scaling Application Security Coverage and Depth

Author: Prithvi Bisht, Senior Manager, Adobe Secure Software Engineering
Date Published: 18 February 2021

Editor’s note: The following is a sponsored blog post from Adobe.

One of the biggest challenges and opportunities for an Application Security (AppSec) team is to scale effectively. The “shift-left” recommendation for security in the software development life cycle (SDLC) emphasizes early course correction to help bake in security controls and to reduce the potential cost of changes introduced later in the SDLC (Figure 1). Shifting left then entails finding potential security concerns and the need for security controls by reviewing artifacts produced in the requirements, architecture, design and coding phases.

Figure 1: Incorporating security earlier (left) in the SDLC reduces costs. Early stages of the SDLC describe intended systems (e.g., design) as opposed to reality (e.g., code), and automation friendliness decreases as we move left (e.g., artifacts describing design/requirements are free-text documents).

Unfortunately, adding security in the phases before coding is mostly a manual activity. This limits security coverage and the depth of exploration of products, often manifesting as blind spots in product portfolios (Figure 2). As we move left through the SDLC, the artifacts describe “intended” system functionality that may behave differently once implemented. The divergence in translating intentions (e.g., requirements/design) into reality (e.g., code) is how many bugs, including security bugs, are introduced. Finally, the most up-to-date representation of a workflow is the code executing in production, as architecture and design documents and their threat models struggle to keep pace.


Figure 2: Typical coverage of a manual-review-based AppSec program. Within the pyramid representing all products, green indicates AppSec coverage and white indicates blind spots.


Figure 3: Reducing blind spots through automation projects and improving depth of pre-existing engagements.

To scale AppSec creatively, we have adopted an “improve the left by learning from the right” mantra: we augment the threat modeling of requirements, architecture and design documents for early course correction with automation focused on code, configuration and logs. Automation helps with two types of coverage: the number of projects touched (breadth) and the type of knowledge gained (depth). “Shifting left” also tends to move manual focus earlier in the cycle at the expense of manual testing on the right; automation helps ensure the testing phase is not abandoned when human focus shifts left.

As shown in Figure 3, automation projects detect a specific weakness/property across all workflows in the pyramid. Apart from catching divergences in translating intentions to reality, this is an effective way to initiate a dialog and correct the security posture of products that have yet to engage with the security team. While automation may not cover all of the security areas a typical manual security review does, it applies to a larger number of workflows and scales much better. Automation projects can also improve security coverage over time by exploring one security area at a time. Creativity in balancing the topics that automation explores greatly helps in bringing pragmatic value to the business, and automation complements the depth offered by manual AppSec reviews with issue- or topic-specific coverage at scale.

Automation projects should explore topics related to prioritized risks to the business. Such risks are typically identified through speculative exercises (e.g., threat modeling) and data-driven exercises (e.g., past incidents, security issues). Not all prioritized risks are good candidates for automation. Typically, risks that manifest as identical or similar variants across many workflows are ideal for pragmatic automation projects. For example, eliminating secrets from code repositories largely boils down to matching a collection of regular expressions against the code, making it a good candidate for automation, whereas finding inadequate authorization checks may require setup and user accounts that vary across workflows, making automation at scale cumbersome.
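
To illustrate, a minimal sketch of such a regex-based secret scan might look like the following. The patterns and repository path are assumptions for illustration only; a production scanner would use a much richer, regularly tuned rule set and combine it with entropy-based detection:

```python
import re
from pathlib import Path

# Illustrative patterns only -- a real scanner maintains a far larger rule set.
SECRET_PATTERNS = {
    "aws_access_key": re.compile(r"AKIA[0-9A-Z]{16}"),
    "private_key_header": re.compile(r"-----BEGIN (RSA|EC|OPENSSH) PRIVATE KEY-----"),
    "generic_password": re.compile(r"(?i)(password|passwd|secret)\s*[:=]\s*['\"][^'\"]{8,}['\"]"),
}

def scan_repository(repo_root: str):
    """Yield (file, line number, rule name) for every suspected secret in a checkout."""
    for path in Path(repo_root).rglob("*"):
        if not path.is_file():
            continue
        try:
            text = path.read_text(errors="ignore")
        except OSError:
            continue
        for lineno, line in enumerate(text.splitlines(), start=1):
            for rule, pattern in SECRET_PATTERNS.items():
                if pattern.search(line):
                    yield str(path), lineno, rule

if __name__ == "__main__":
    # "./my-service" is a hypothetical local checkout of one repository.
    for finding in scan_repository("./my-service"):
        print(finding)
```

Running a check like this across every repository in an organization is what turns one well-understood risk into breadth of coverage.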

Both dynamic (live traffic, logs) and static (code, configuration) artifacts are well-suited for automation projects, and the way a risk manifests indicates which is the better fit. For example, a missing security header (say, “Strict-Transport-Security”) can be detected both from live traffic and from code, whereas an insecurely embedded credential or the use of a weak crypto library can only be found by analyzing code.
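
On the dynamic side, such a check can be very simple. A minimal sketch, assuming a hypothetical inventory of URLs (in practice the list would come from an asset-discovery system), might look like this:

```python
import requests

def missing_hsts(url: str, timeout: float = 10.0) -> bool:
    """Return True if the live response lacks a Strict-Transport-Security header."""
    response = requests.get(url, timeout=timeout, allow_redirects=True)
    return "Strict-Transport-Security" not in response.headers

if __name__ == "__main__":
    # Hypothetical inventory of web properties to check.
    for site in ["https://example.com", "https://api.example.com"]:
        if missing_hsts(site):
            print(f"{site}: Strict-Transport-Security header missing")
```
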
Automation should work hand-in-hand with manual reviews. For example, if weak secret management is prevalent, a manual AppSec review can highlight the need to handle secrets correctly – storing in an approved platform, defining/enforcing access control, securely deploying in production, rotation, etc. A tactical automation project could then find secrets in source code, thus identifying concrete instances at scale where either the AppSec engagement needs to improve or the affected products need to adopt stronger security controls and engage with the security team.

As another example, collections of configurations can be lucrative targets. Such collections may encode security properties for many workflows (e.g., configuration files for services at API gateways may capture authentication/authorization intents). Automation over such files allows a security team to reason about desired behaviors across all workflows the collection represents. Special care is needed to limit false positives reported by automation projects, as they directly affect a security team’s credibility and ability to get work done. A few additional runtime checks go a long way in weeding out false positives. For example, a simple POST request could verify that a backend implements its own authentication rather than relying on an API gateway. While these additional steps may reduce the speed at which automation delivers, in the long run such discipline keeps the business focused on actual risks and is a win-win for product and security teams.
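
A minimal sketch of this pattern, pairing a configuration scan with a runtime check, might look like the following. The configuration format, field names and URLs are hypothetical; real gateway configurations will differ:

```python
import requests
import yaml  # PyYAML

def routes_without_gateway_auth(config_path: str):
    """Flag routes whose (hypothetical) gateway config does not require authentication."""
    with open(config_path) as fh:
        config = yaml.safe_load(fh)
    for route in config.get("routes", []):
        if not route.get("require_auth", False):
            yield route

def backend_enforces_auth(backend_url: str) -> bool:
    """Runtime check: an unauthenticated POST should be rejected if the backend
    enforces its own authentication instead of relying on the gateway."""
    response = requests.post(backend_url, json={}, timeout=10)
    return response.status_code in (401, 403)

if __name__ == "__main__":
    # "gateway.yaml" is a hypothetical configuration collection for many services.
    for route in routes_without_gateway_auth("gateway.yaml"):
        url = route.get("backend_url", "")
        if url and not backend_enforces_auth(url):
            print(f"Potential missing authentication: {route.get('path')} -> {url}")
```

The runtime check is what keeps the report focused on routes that are actually exposed, rather than every route the configuration merely fails to annotate.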

Finally, and perhaps most importantly, security teams ought to keep their eyes on the real prize: effective risk reduction. While automation identifies risks, it must be followed up effectively to actually improve the security posture (a topic for another day, perhaps). Suffice it to say, in the most effective AppSec programs the responsibility of automation does not stop at identifying risks. For example, if a human needs to verify and close every ticket created by automation projects, the scalability challenge for the AppSec team resurfaces in a different form. We are dealing with this problem by adding intelligence to automation-generated Jira tickets, paving the way for auto-verification and closure of tickets (sketched below). More generally, we are thinking about the entire lifecycle of automation projects, from inception to end of life, and identifying and addressing as many scale challenges as we can.

Over the last several years, the Adobe security team has invested in building core capabilities that greatly increase the effectiveness and depth of security automation. For example, the Security Automation Framework (SAF) we developed allows us to run specific checks against web properties, while Marinus allows us to list all publicly accessible Adobe web properties. Such capabilities go a long way toward enabling automation projects and also create new opportunities (e.g., correlation and prioritization). Last but not least, effective collaboration with product teams and periodic bursts of innovation have helped the AppSec program immensely.
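
As a minimal, hypothetical sketch of the auto-verification idea mentioned above: re-run the original check against the affected asset and, if the finding no longer reproduces, transition the ticket automatically. The JQL query, custom field and transition name below are assumptions for illustration (using the open-source Python jira client); they are not the specifics of our internal setup:

```python
import requests
from jira import JIRA  # pip install jira

def finding_still_reproduces(url: str) -> bool:
    """Re-run the original check; here, whether the HSTS header is still missing."""
    response = requests.get(url, timeout=10)
    return "Strict-Transport-Security" not in response.headers

def auto_close_fixed_tickets(jira_url: str, user: str, token: str) -> None:
    client = JIRA(server=jira_url, basic_auth=(user, token))
    # Hypothetical JQL: open tickets filed by the automation account.
    for issue in client.search_issues('reporter = "appsec-automation" AND status != Done'):
        target = issue.fields.customfield_10042  # hypothetical field holding the affected URL
        if target and not finding_still_reproduces(target):
            # Transition name depends on the project's workflow configuration.
            client.transition_issue(issue, "Done")
            client.add_comment(issue, "Auto-verified: finding no longer reproduces.")

if __name__ == "__main__":
    auto_close_fixed_tickets("https://jira.example.com", "bot-user", "api-token")
```
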

In summary, an AppSec program needs to find the right balance of focus across all stages of the software development life cycle. While focus on the left allows early course correction, creatively tapping into the right improves coverage and the depth of exploration into risks to the business. Influencing the left by learning from the right allows course correction of existing workflows as an added benefit and paves the way to avoiding similar mistakes in the future.