How secure is your software? Pondering this question can provoke uncertainty (or even fear), depending on the experience level of your software team and the maturity of your control processes. It does not need to be this way for most software teams, however.
Software security is not easy, but neither is it exclusively the domain of thought leaders, CISSPs, and elite hackers. This post provides an overview of four best practices that any development team can implement immediately to increase the security of their code.
What is the goal?
Any software security initiative must start with a clear target state. The fundamental goal of any software security initiative is to provide software stakeholders the ability to make informed decisions about managing risk in the code they ship. The means to accomplish this can look different for each team, but all teams must appropriately leverage finite resources, namely time and expertise, to address a potentially infinite number of risks and threats.
Successful implementations will convert risks you don’t know about (unmanaged) into risks you do know about (managed). Once the unmanaged risks become managed risks, teams can make data-driven decisions about which threats need to be addressed urgently and which threats can be dealt with later, if ever.
The rest of this post focuses primarily on best practices for threat identification. What happens after the threats are identified is a product of business risk appetite, data sensitivity, application exposure, and an array of other concerns that are out of scope for this article. Likewise, your business environment should determine your approach: work backward from the goal rather than forward from the tools to identify which of these practices are practical and relevant for your team.
Best Practice #1: Threat Modeling
OWASP provides a decent definition of threat modeling that we will borrow here:
Threat modeling works to identify, communicate, and understand threats and mitigations within the context of protecting something of value.
In our context, the “something of value” is your application, and the work to protect it can be as simple as a conversation between team members who understand the application architecture.
The first step in threat modeling is typically to create an application data flow diagram (broadly, level 2 in the C4 model). This can be a quick diagram on a whiteboard, a Visio diagram, or anything that accurately models the various connection points in the application architecture. Once the diagram exists, teams are invited to gather around the diagram and participate in a critical conversation about how these various connection points might be abused by a threat actor.
- Do any of the application components face the public internet? If yes, then techniques like input sanitization and secure HTTP are first-order concerns.
- Do any parts of the application deal with sensitive data? If yes, then more of your development energy should be directed toward appropriate access controls in the sensitive parts of the application.
- Does the application contain secrets like passwords and API keys? If yes, then appropriate controls and best practices should be established to ensure that these secrets are not committed to source control and hence propagated forever in a centralized, searchable repository.
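On that last point, a lightweight pre-commit check can catch many obvious secrets before they ever reach source control. The sketch below uses two illustrative regular expressions; dedicated scanners such as gitleaks or truffleHog ship far larger and better-tested rule sets, so treat this as a demonstration of the idea rather than a usable scanner.

```python
import re

# Illustrative patterns only -- dedicated secret scanners ship far more.
SECRET_PATTERNS = {
    "aws_access_key": re.compile(r"AKIA[0-9A-Z]{16}"),
    "generic_api_key": re.compile(
        r"(?i)api[_-]?key['\"]?\s*[:=]\s*['\"][A-Za-z0-9]{16,}['\"]"
    ),
}

def find_secrets(text):
    """Return (pattern_name, matched_text) pairs for suspected secrets."""
    return [
        (name, match.group(0))
        for name, pattern in SECRET_PATTERNS.items()
        for match in pattern.finditer(text)
    ]

# A config snippet that should never be committed as-is:
sample = 'config = {"api_key": "abcd1234efgh5678ijkl"}'
print(find_secrets(sample))
```

In practice, a check like this would run from a pre-commit hook or CI step against staged files, blocking the commit when it returns any findings.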
Producing the initial, comprehensive threat model can be a heavy lift, but once it exists it only needs to be updated during the design phase of any new feature or application connection point. The ongoing updates are a comparatively lightweight process.
This does not need to be a directionless practice. There are many publicly available threat modeling frameworks that put guard rails around the process. One very common approach is the STRIDE model. STRIDE was invented at Microsoft but is broadly applicable to just about any application architecture. Microsoft also provides the freeware Microsoft Threat Modeling Tool, which implements the STRIDE framework within a diagramming application similar to Visio. Teams can use this tool to build out a diagram of their application architecture, and the tool will automatically apply the STRIDE methodology to the various connection points, which can then be reviewed to identify hot spots that require particular attention during the development cycle.
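To make the framework concrete, here is a minimal sketch of "STRIDE-per-element" in code. The element types and the threat categories applied to each loosely follow the defaults used by the Microsoft Threat Modeling Tool; treat the mapping as a reasonable starting point for discussion, not a standard.

```python
# The six STRIDE threat categories.
STRIDE = {
    "S": "Spoofing",
    "T": "Tampering",
    "R": "Repudiation",
    "I": "Information disclosure",
    "D": "Denial of service",
    "E": "Elevation of privilege",
}

# Which categories typically apply to each data-flow-diagram element type.
# This is a simplified version of common STRIDE-per-element defaults.
ELEMENT_THREATS = {
    "external_entity": ["S", "R"],
    "process": ["S", "T", "R", "I", "D", "E"],
    "data_store": ["T", "R", "I", "D"],
    "data_flow": ["T", "I", "D"],
}

def threats_for(element_type):
    """List the STRIDE categories to review for a given diagram element."""
    return [STRIDE[code] for code in ELEMENT_THREATS[element_type]]

# Every arrow on the data flow diagram prompts at least these questions:
print(threats_for("data_flow"))
```

Walking the diagram element by element and asking only the applicable questions is what keeps the conversation focused instead of open-ended.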
Interested teams should start with OWASP for further information on threat modeling.
Best Practice #2: Defensive Development
All developers program defensively. When we fix a bug (especially one we wrote ourselves), we are inoculated against writing similar bugs in the future or, at the very least, we learn how to fix them quickly. We then write future code in a way that avoids those bugs in the first place. This reflex already exists in most development teams and company cultures; it simply needs to be expanded to cover security concerns as well.
The most efficient way to do this is to familiarize yourself and your team with the best practices that eliminate large cross-sections of potential security vulnerabilities. Stated differently, it is far easier to learn a handful of good habits than it is to study the nearly infinite list of potential pitfalls and try to avoid them.
One of the most helpful steps toward building a culture of defensive development is to ensure that all developers are aware of the OWASP Proactive Controls. Implementing these controls will mitigate the entire OWASP Top 10 along with many, many other vulnerabilities along the way.
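As an example of one such habit, parameterized queries eliminate the entire class of SQL injection bugs rather than defending against individual attack strings. A minimal sketch using Python's built-in sqlite3 module (the table and data are invented for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin'), ('bob', 'user')")

def find_user(name):
    # Good habit: the driver binds ? placeholders, so attacker-controlled
    # input is always treated as data, never as SQL.
    return conn.execute(
        "SELECT role FROM users WHERE name = ?", (name,)
    ).fetchall()

print(find_user("alice"))        # [('admin',)]
print(find_user("' OR '1'='1"))  # [] -- the injection attempt is inert
```

The habit is the same in every language and driver: never build queries by concatenating user input, and the whole SQLi pitfall list becomes irrelevant.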
Best Practice #3: Software Composition Analysis
One of the most important concepts for development teams to understand in the age of public, centralized package repositories is that the overwhelming majority of the code we ship is someone else’s code. Any problem a team tries to solve that can be even remotely generalized usually has a corresponding library sitting in a package repository that can be brought in as a dependency and reused throughout the codebase.
The unfortunate corresponding reality of using other people’s code is that, if it ships with your app, it is now your responsibility. If your software solution discloses sensitive information to an unauthorized user, for example, even the most generous of customers/users are not going to care that the responsible code was not written by your organization. It is the responsibility of the company shipping the product to ensure the quality of all code, including third-party dependencies.
Fortunately, there are tools that automate the identification of vulnerabilities in third-party code. This avoids the essentially impossible task of manually researching each and every dependency in an application’s manifest for known vulnerabilities and then keeping that research up to date from one release to the next. Most of these tools boil down to a handful of similar operations:
- Identify application dependencies by scanning for dependency manifest files like package.json or .csproj.
- Generate a bill of materials for the application rolling up all discovered dependencies.
- Compare the bill of materials with public vulnerability disclosure databases to identify any vulnerable dependencies.
- Present the results to the user in the form of a report or feedback within the CI interface.
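The four steps above can be sketched in a few lines. The manifest and the vulnerability "database" here are hand-written stand-ins (real tools parse actual package files and query feeds like the NVD); the requests advisory shown is a real CVE, used purely for illustration.

```python
# Toy versions of the four SCA steps, with hand-written stand-ins for real
# package manifests and vulnerability feeds.
manifest = {"requests": "2.19.0", "flask": "2.3.2"}             # step 1: scan

bom = [{"name": n, "version": v} for n, v in manifest.items()]  # step 2: BOM

# Step 3: a tiny stand-in disclosure database; real tools query CVE/NVD data.
known_vulns = {("requests", "2.19.0"): "CVE-2018-18074"}

findings = [
    {**comp, "advisory": known_vulns[(comp["name"], comp["version"])]}
    for comp in bom
    if (comp["name"], comp["version"]) in known_vulns
]

for finding in findings:                                        # step 4: report
    print(f"{finding['name']} {finding['version']}: {finding['advisory']}")
```

Real tools add version-range matching, transitive dependency resolution, and severity scoring on top of this core loop, but the shape of the pipeline is the same.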
The advantage of these tools is twofold: they identify publicly disclosed vulnerabilities in dependent code, and they flag outdated dependencies. Armed with the reports, development teams can make informed decisions about which components can or should be updated for a given release.
One such tool that is a good on-ramp for this practice is OWASP Dependency-Check. It is open source and well maintained. As with many open-source projects, there is no commercial support, and development teams may find its occasional rough edges a little too much to deal with. Other paid tools like Snyk provide support and, presumably, a smoother user experience.
Whichever tool you choose, the key to a successful implementation is to make sure it scans on some regular frequency (every build, every PR, every release, etc.) and that someone, somewhere in your organization is reviewing and triaging the results.
Best Practice #4: Static Analysis Security Testing (SAST)
SAST tends to be the software security practice that teams are most familiar with, even if they do not use it. SAST tools generally function as follows:
- Read and parse the static source code text files.
- Convert the parsed code to an abstract syntax tree (AST) in memory.
- Apply a set of pre-defined rules to the AST.
- Produce some form of file/line number report indicating locations in the codebase that matched a rule.
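These steps are easy to see in miniature using Python's standard ast module. The rule below, which flags calls to eval(), is deliberately trivial; commercial tools ship thousands of rules plus data-flow analysis, but the parse/walk/match/report loop is the same.

```python
import ast

# A toy SAST rule: flag calls to eval(), mirroring the four steps above.
RULE_ID = "PY-EVAL-001"

def scan(source):
    """Return (rule, line) findings for each eval() call in the source text."""
    tree = ast.parse(source)                 # steps 1-2: parse source to an AST
    findings = []
    for node in ast.walk(tree):              # step 3: apply the rule
        if (isinstance(node, ast.Call)
                and isinstance(node.func, ast.Name)
                and node.func.id == "eval"):
            findings.append((RULE_ID, node.lineno))
    return findings                          # step 4: line-number report

code = "x = 1\ny = eval(input())\n"
print(scan(code))  # [('PY-EVAL-001', 2)]
```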
The rules are generally written to target known vulnerable coding patterns, such as missing or insufficient output encoding in web responses (an XSS vulnerability) or unsanitized user input that is later used to assemble a SQL query (a SQLi vulnerability). The most useful tools in this space integrate with developer IDEs, build systems, bug trackers, and so on.
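The remediation such a rule points you toward is often a one-liner. For the output-encoding case, a sketch using Python's standard html module (the render_comment helper is invented for illustration):

```python
import html

def render_comment(comment):
    # Encode untrusted text so the browser treats it as inert data,
    # not markup -- the standard fix for reflected/stored XSS findings.
    return f"<p>{html.escape(comment)}</p>"

# A classic XSS probe is neutralized by the encoding:
print(render_comment("<script>alert(1)</script>"))
# -> <p>&lt;script&gt;alert(1)&lt;/script&gt;</p>
```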
These tools tend to be the most useful for identifying code vulnerabilities early and often; however, they also tend to be the most expensive. Teams interested in pursuing a paid solution should consult their application threat model first, because the correct answer is often that such tools are unnecessary for their risk profile. For applications in regulated industries, or applications that deal with sensitive data like PII, these tools can be a significant boon.
If you decide to use a paid tool, there are a few considerations to keep in mind:
- If possible, find a tool that is written in the language you wish to scan with it. Tools tend to be much better at scanning software written in the same language.
- Ensure that the tool integrates readily with your company’s chosen SDLC management platform/bug tracker.
- Look for features like CI integration, IDE integration, and other opportunities to make sure the scan results are readily available to your developers.
The free tool space for SAST is sparser, but the most prominent tool is SonarQube. SonarQube executes a much shallower scan than most paid tools, meaning it will find fewer "things," but the results it does produce tend to be of good quality. It also has a more comprehensive rule set for non-security code quality issues, and it integrates readily with a plethora of build systems, IDEs, bug trackers, and other software security tools.
If you wish to pursue SAST, it is important to consider the following:
- The tools must be tuned to your application to be useful. Familiarize yourself with the assorted levers and switches that can be used to fine-tune the results to your environment, and use them.
- You must build processes around the tools to ensure that your teams actually use them. There is no way to introduce a software security tool without adding some degree of friction to your process. Take a developer-first approach: implement processes that put the scan results in front of developers and minimize the work it takes to get a fix in place.
What is next?
There is certainly much more that can be said about each one of these practices. Indeed, some of these practices have entire teams dedicated to them in larger enterprises and are a career field unto themselves. While such a corporate structure is infeasible for smaller organizations, making thoughtful choices about which practices make the most sense in your environment and implementing them thoroughly can go a long way in bolstering the security of your software.