SSDL Touchpoints: Code Review (CR)

The overall goal of the Code Review practice is quality control. Those performing code review must ensure the detection and correction of security bugs. The SSG must enforce adherence to standards and the reuse of approved security features.

SSDL TOUCHPOINTS: CODE REVIEW
Use of code review tools, development of customized rules, profiles for tool use by different roles, manual analysis, ranking/measuring results.
| ID    | Objective                                             | Activity                                                                                       | Level |
|-------|-------------------------------------------------------|------------------------------------------------------------------------------------------------|-------|
| CR1.1 | know which bugs matter to you                         | create top N bugs list (real data preferred) (T: training)                                     | 1 |
| CR1.2 | review high-risk applications opportunistically       | have SSG perform ad hoc review                                                                 | 1 |
| CR1.4 | drive efficiency/consistency with automation          | use automated tools along with manual review                                                   | 1 |
| CR1.5 | find bugs earlier                                     | make code review mandatory for all projects                                                    | 1 |
| CR1.6 | know which bugs matter (for training)                 | use centralized reporting to close the knowledge loop and drive training (T: strategy/metrics) | 1 |
| CR2.2 | drive behavior objectively                            | enforce coding standards                                                                       | 2 |
| CR2.5 | make most efficient use of tools                      | assign tool mentors                                                                            | 2 |
| CR2.6 | drive efficiency/reduce false positives               | use automated tools with tailored rules                                                        | 2 |
| CR3.2 | combine assessment techniques                         | build a factory                                                                                | 3 |
| CR3.3 | handle new bug classes in an already scanned codebase | build capability for eradicating specific bugs from entire codebase                            | 3 |
| CR3.4 | address insider threat from development               | automate malicious code detection                                                              | 3 |

CR Level 1: Use manual and automated code review with centralized reporting. The SSG must make itself available to others to raise awareness of, and demand for, code review. It must perform code reviews on high-risk applications whenever it can get involved in the process and must use the knowledge gained to inform the organization of the types of bugs being discovered. Management must make code review mandatory for all software projects. The SSG must enforce centralized reporting of tool results to capture knowledge about recurring bugs and push that information into strategy and training.

CR1.1

Create a top N bugs list (real data preferred). The SSG maintains a list of the most important kinds of bugs that need to be eliminated from the organization's code and uses it to drive change. The list helps focus the organization's attention on the bugs that matter most. A generic list could be pulled from public sources, but a list is much more valuable when it is specific to the organization and built from real data gathered from code review, testing, and actual incidents. The SSG can periodically update the list and publish a "most wanted" report. (For another way to use the list, see [T1.6 Create and use material specific to company history].) Some firms use multiple tools and real codebase data to build top N lists, rather than constraining themselves to a particular service or tool. One potential pitfall with a top N list is the problem of "looking for your keys only under the street light": the OWASP Top Ten, for example, rarely reflects an organization's actual bug priorities. Likewise, simply sorting the day's bug data by number of occurrences does not produce a satisfactory top N list, because the data change too often.
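
As a concrete illustration of ranking from real data, here is a minimal sketch that aggregates exported findings into a weighted top N list. Everything specific in it is an assumption for illustration: the JSON field names (cwe, source, date), the source weights, and the recency decay are placeholders for whatever the organization's own data and priorities dictate.

```python
"""Sketch: weighted top N bug list built from exported findings.

Hypothetical input: a JSON array of records such as
  {"cwe": "CWE-79", "source": "incident", "date": "2015-04-01"}
"""
import json
from collections import Counter
from datetime import date, datetime

# Illustrative weights: an actual incident counts for more than a
# single scanner hit; real weights are a policy decision.
SOURCE_WEIGHT = {"code_review": 1.0, "testing": 2.0, "incident": 5.0}

def top_n_bugs(findings_path: str, n: int = 10) -> list[tuple[str, float]]:
    scores: Counter = Counter()
    with open(findings_path) as f:
        for record in json.load(f):
            found = datetime.strptime(record["date"], "%Y-%m-%d").date()
            age_days = (date.today() - found).days
            # Decay old data instead of zeroing it, so the list stays
            # stable even though day-to-day numbers fluctuate.
            recency = max(0.25, 1.0 - age_days / 365)
            scores[record["cwe"]] += SOURCE_WEIGHT.get(record["source"], 1.0) * recency
    return scores.most_common(n)

if __name__ == "__main__":
    for cwe, score in top_n_bugs("findings.json"):
        print(f"{cwe}\t{score:.1f}")
```

Weighting real incidents above raw scanner hits is one way to avoid the "street light" problem described above.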

CR1.2

Have SSG perform ad hoc review. The SSG performs ad hoc code reviews of high-risk applications in an opportunistic fashion. For example, the SSG might follow up the design review for a high-risk application with a code review. At higher maturity levels, this ad hoc targeting is replaced with a systematic approach. SSG review may involve the use of specific tools and services, or it may be manual.

CR1.4

Use automated tools along with manual review. Incorporate static analysis into the code review process to make reviews more efficient and more consistent. The automation does not replace human judgment, but it does bring definition to the review process and security expertise to reviewers who are not security experts. A firm may use an external service vendor as part of a formal code review process for software security. This service should be explicitly connected to a larger SSDL applied during software development, not used simply to "check the security box" on the path to deployment.

CR1.5

Make code review mandatory for all projects. Code review is a mandatory release gate for all projects under the SSG's purview; a missing code review or unacceptable results will stop the release train. While all projects must undergo code review, the review process might differ for different kinds of projects: the review for low-risk projects might rely more heavily on automation, while the review for high-risk projects might place no upper bound on the time spent by reviewers. In most cases, a code review gate with a minimum acceptable standard forces projects that do not pass to be fixed and re-evaluated before they ship.
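
A minimal sketch of such a gate follows, assuming a hypothetical JSON findings report whose records carry a "severity" field; the thresholds are illustrative, not a BSIMM-prescribed standard.

```python
"""Sketch: a code review release gate over a hypothetical JSON report
whose records carry a "severity" field. Thresholds are illustrative."""
import json
import sys

# Illustrative minimum acceptable standard: zero high-severity
# findings, at most five medium-severity findings.
MAX_ALLOWED = {"high": 0, "medium": 5}

def gate(report_path: str) -> int:
    counts = {sev: 0 for sev in MAX_ALLOWED}
    with open(report_path) as f:
        for finding in json.load(f):
            if finding.get("severity") in counts:
                counts[finding["severity"]] += 1
    for sev, limit in MAX_ALLOWED.items():
        if counts[sev] > limit:
            print(f"GATE FAILED: {counts[sev]} {sev}-severity findings (limit {limit})")
            return 1  # nonzero exit stops the release train in CI
    print("Gate passed.")
    return 0

if __name__ == "__main__":
    sys.exit(gate(sys.argv[1]))
```

Returning a nonzero exit code is what lets an automated pipeline hold the release until the project is fixed and re-evaluated.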

CR1.6

Use centralized reporting to close the knowledge loop and drive training. The bugs found during code review are tracked in a centralized repository that makes summary and trend reporting possible for the organization. The SSG can use the reports to demonstrate progress and to drive the training curriculum. (See [SM2.5 Identify metrics and use them to drive budgets].) Code review information can be incorporated into a CSO-level dashboard that includes feeds from other parts of the security organization. Likewise, it can be fed into a development-wide project tracking system that rolls up a number of diverse software security feeds (for example, penetration tests, security testing, black-box testing, and white-box testing). Don't forget that individual bugs make excellent training examples.
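
The sketch below uses SQLite as a stand-in for a real centralized repository; the schema, field names, and sample record are assumptions for illustration only.

```python
"""Sketch: centralized bug tracking with trend reporting, using SQLite
as a stand-in for a real repository. Schema and fields are assumed."""
import sqlite3

SCHEMA = "CREATE TABLE IF NOT EXISTS findings (project TEXT, cwe TEXT, found_on TEXT)"

def record_finding(con: sqlite3.Connection, project: str, cwe: str, found_on: str) -> None:
    # Every feed (code review, penetration tests, security testing)
    # writes into the same table, enabling organization-wide rollups.
    con.execute("INSERT INTO findings VALUES (?, ?, ?)", (project, cwe, found_on))

def trend_report(con: sqlite3.Connection) -> None:
    # Month-by-month counts per bug type; a rising line for one CWE is
    # a strong hint about what the next training module should cover.
    rows = con.execute(
        """SELECT strftime('%Y-%m', found_on) AS month, cwe, COUNT(*) AS hits
           FROM findings GROUP BY month, cwe ORDER BY month, hits DESC""")
    for month, cwe, hits in rows:
        print(f"{month}  {cwe:10}  {hits}")

if __name__ == "__main__":
    con = sqlite3.connect("findings.db")
    con.execute(SCHEMA)
    record_finding(con, "portal", "CWE-79", "2015-06-02")  # illustrative record
    con.commit()
    trend_report(con)
```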


CR Level 2: Enforce standards through the code review process. The SSG must guide developer behavior by enforcing coding standards with automated tools and tool mentors. The SSG must combine automated assessment techniques with tailored rules to find problems efficiently.

CR2.2

Enforce coding standards. A violation of the organization's secure coding standards is sufficient grounds for rejecting a piece of code. Code review is objective: it does not devolve into a debate over whether bad code is exploitable. The enforced portion of the standard could start out as simple as a list of banned functions. In some cases, coding standards are published as developer guidelines specific to technology stacks (for example, guidelines for C++ or Spring) and then enforced during the code review process or directly in the IDE. Note that guidelines can be positive ("do it this way") as well as negative ("do not use this API").
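
A banned-function list is easy to enforce mechanically. The sketch below is a naive, illustrative checker for C sources; the banned list and the suggested replacements are examples, and a real implementation would live in the static analysis tool or the IDE rather than in a regex script.

```python
"""Sketch: mechanically enforcing a banned-function list on C sources.
The list, replacements, and regex approach are illustrative; a real
implementation belongs in the static analysis tool or the IDE."""
import re
import sys
from pathlib import Path

# Negative guideline ("do not use this API") paired with the positive
# one ("do it this way") as the suggested replacement.
BANNED = {"strcpy": "strlcpy or strncpy", "sprintf": "snprintf", "gets": "fgets"}
CALL = re.compile(r"\b(%s)\s*\(" % "|".join(BANNED))

def check(root: str) -> int:
    violations = 0
    for path in Path(root).rglob("*.c"):
        for lineno, line in enumerate(path.read_text(errors="ignore").splitlines(), 1):
            for match in CALL.finditer(line):
                func = match.group(1)
                print(f"{path}:{lineno}: banned function {func}(); use {BANNED[func]}")
                violations += 1
    return violations

if __name__ == "__main__":
    # Any violation is sufficient grounds for rejecting the code.
    sys.exit(1 if check(sys.argv[1] if len(sys.argv) > 1 else ".") else 0)
```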

CR2.5

Assign tool mentors. Mentors are available to show developers how to get the most out of code review tools. If the SSG is most skilled with the tools, it could use office hours to help developers establish the right configuration or get started interpreting results. Alternatively, someone from the SSG might work with a development team for the duration of the first review they perform. Centralized use of a tool can be distributed into the development organization over time through the use of tool mentors.

CR2.6

Use automated tools with tailored rules. Customize static analysis to improve efficiency and reduce false positives: use custom rules to find errors specific to the organization's coding standards or custom middleware, and turn off checks that are not relevant. The same group that provides tool mentoring will likely spearhead the customization. Tailored rules can be explicitly tied to proper usage of technology stacks (the positive sense) and to avoidance of errors commonly encountered in the firm's codebase (the negative sense).
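
As one illustration of a tailored rule, the sketch below uses Python's ast module to encode a hypothetical in-house standard: dynamically built SQL must not be passed straight to execute(). Both the rule and the "approved query builder" it points developers toward are assumptions for illustration.

```python
"""Sketch: a tailored rule written against Python's ast module. The
standard it encodes (no dynamically built SQL passed straight to
execute(); use an approved query builder) is a hypothetical example."""
import ast
import sys

class RawQueryRule(ast.NodeVisitor):
    """Flags cursor.execute(...) calls whose first argument is built
    with an f-string (ast.JoinedStr) or concatenation/formatting
    (ast.BinOp), a common SQL injection pattern."""

    def __init__(self, filename: str):
        self.filename = filename
        self.findings: list[str] = []

    def visit_Call(self, node: ast.Call) -> None:
        is_execute = isinstance(node.func, ast.Attribute) and node.func.attr == "execute"
        if is_execute and node.args and isinstance(node.args[0], (ast.JoinedStr, ast.BinOp)):
            self.findings.append(
                f"{self.filename}:{node.lineno}: dynamic SQL passed to execute(); "
                "use the approved query builder")
        self.generic_visit(node)

if __name__ == "__main__":
    for fname in sys.argv[1:]:
        with open(fname) as f:
            tree = ast.parse(f.read(), filename=fname)
        rule = RawQueryRule(fname)
        rule.visit(tree)
        for finding in rule.findings:
            print(finding)
```

Note how the rule is negative (flag the banned pattern) and positive (name the approved alternative) at the same time.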


CR Level 3: Build an automated code review factory with tailored rules. The SSG must build a capability to find and eradicate specific bugs from the entire codebase.

CR3.2

Build a factory. Combine assessment results so that multiple analysis techniques feed into one reporting and remediation process. The SSG might write scripts to invoke multiple detection techniques automatically and combine the results into a format that can be consumed by a single downstream review and reporting solution. Analysis engines may combine static and dynamic analysis. The tricky part of this activity is normalizing vulnerability information from disparate sources that use conflicting terminology; in some cases, a CWE-like approach can help with nomenclature. Combining multiple sources helps drive better-informed risk mitigation decisions.
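
The normalization step might look like the sketch below, where findings from two hypothetical tools, each using its own terminology, are mapped onto CWE identifiers and merged into one report. The tool output formats and mapping tables are invented for illustration.

```python
"""Sketch: normalizing findings from two hypothetical tools into one
CWE-keyed report. Output formats and mapping tables are invented."""
from dataclasses import dataclass

@dataclass
class Finding:
    cwe: str       # shared nomenclature across all analysis techniques
    location: str
    source: str    # which technique reported it

# Each tool speaks its own dialect; per-tool tables translate to CWE.
STATIC_MAP = {"tainted-sql": "CWE-89", "buffer-overrun": "CWE-120"}
DYNAMIC_MAP = {"SQL Injection": "CWE-89", "Reflected XSS": "CWE-79"}

def normalize(static_results: list[dict], dynamic_results: list[dict]) -> list[Finding]:
    merged = [Finding(STATIC_MAP.get(r["rule"], "CWE-unmapped"), r["file"], "static")
              for r in static_results]
    merged += [Finding(DYNAMIC_MAP.get(r["type"], "CWE-unmapped"), r["url"], "dynamic")
               for r in dynamic_results]
    return merged

if __name__ == "__main__":
    # The same underlying bug, reported in two dialects, lands under
    # one CWE, which is what makes combined risk decisions possible.
    for f in normalize([{"rule": "tainted-sql", "file": "db.py:42"}],
                       [{"type": "SQL Injection", "url": "/search"}]):
        print(f"{f.cwe}  {f.location}  ({f.source})")
```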

CR3.3

Build a capability for eradicating specific bugs from the entire codebase. When a new kind of bug is found, the SSG writes rules to find it and uses those rules to identify all occurrences of the bug throughout the entire codebase. This makes it possible to eradicate the bug type entirely, without waiting for every project to reach the code review portion of its lifecycle. A firm with only a handful of software applications will have an easier time with this activity than a firm with a very large number of large applications.
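
A codebase-wide sweep for a newly identified bug class might look like the following sketch. The repository list, the regex rule, and the clone layout are all hypothetical; in practice the rule would live in the firm's customized static analysis tooling (see [CR2.6]).

```python
"""Sketch: sweeping every repository for a newly identified bug class
instead of waiting on each project's lifecycle. The repository list,
the rule, and the clone layout are hypothetical."""
import re
import subprocess
from pathlib import Path

# The new bug class captured as a rule; in practice this would be a
# custom rule in the firm's static analysis tool (see [CR2.6]).
NEW_BUG_RULE = re.compile(r"pickle\.loads?\(")  # deserializing untrusted data

REPOS = [  # assumed inventory; a real sweep reads the firm's repo catalog
    "https://git.example.com/payments.git",
    "https://git.example.com/portal.git",
]

def sweep(workdir: Path) -> None:
    workdir.mkdir(parents=True, exist_ok=True)
    for repo in REPOS:
        dest = workdir / repo.rsplit("/", 1)[-1].removesuffix(".git")
        subprocess.run(["git", "clone", "--depth", "1", repo, str(dest)], check=True)
        for path in dest.rglob("*.py"):
            for lineno, line in enumerate(path.read_text(errors="ignore").splitlines(), 1):
                if NEW_BUG_RULE.search(line):
                    print(f"{path}:{lineno}: new bug class: {line.strip()}")

if __name__ == "__main__":
    sweep(Path("/tmp/bug-sweep"))
```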

CR3.4

Automate malicious code detection. Automated code review is used to identify dangerous code written by malicious in-house developers or outsource providers. Examples of malicious code that could be targeted include backdoors, logic bombs, time bombs, nefarious communication channels, obfuscated program logic, and dynamic code injection. Although out-of-the-box automation might identify some generic malicious-looking constructs, custom static analysis rules that codify acceptable and unacceptable code patterns in the organization's codebase will quickly become a necessity. Manual code review for malicious code is a good start, but it is insufficient on its own to complete this activity.
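
The sketch below shows the flavor of such automation with a few toy regex rules aimed at constructs named above (time bombs, backdoors, dynamic code injection). The patterns are illustrative only; real detection depends on custom rules codifying what is and is not acceptable in the organization's own codebase.

```python
"""Sketch: a few toy detection rules for the malicious constructs named
above. The patterns are illustrative; real detection requires custom
rules codifying what is acceptable in the organization's codebase."""
import re
import sys
from pathlib import Path

SUSPICIOUS = [
    (re.compile(r"if\s+.*date.*[<>]=?\s*['\"]?\d{4}-\d{2}-\d{2}"),
     "possible time bomb: logic gated on a hardcoded date"),
    (re.compile(r"==\s*['\"](letmein|backdoor|supersecret)['\"]", re.I),
     "possible backdoor: comparison against a hardcoded credential"),
    (re.compile(r"\b(exec|eval)\s*\(\s*(base64|codecs|zlib)\."),
     "possible dynamic code injection: executing decoded data"),
]

def scan(root: str) -> None:
    for path in Path(root).rglob("*.py"):
        for lineno, line in enumerate(path.read_text(errors="ignore").splitlines(), 1):
            for pattern, label in SUSPICIOUS:
                if pattern.search(line):
                    print(f"{path}:{lineno}: {label}")

if __name__ == "__main__":
    scan(sys.argv[1] if len(sys.argv) > 1 else ".")
```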