Building Security In: Tools and Techniques for Reducing Software Vulnerabilities

By definition, secure software is high-quality software, but the reverse is not always true.

Announcements in late 2013 and early 2014 of worldwide security breaches revealed that the personal information of more than 100 million individuals had been stolen. In the United States, a staggering 43 million of these were accounted for by two breaches alone. In December 2013, retail giant Target Corp announced the loss of credit and debit card information for 40 million customers, and in April 2014, nationwide arts and crafts chain Michaels announced that the credit and debit card information of 3 million customers had been lost. Both incidents were attributed to malware attacks. With the announcement of each new breach, awareness of the vulnerability of critical infrastructures connected to the Internet increases, as does the need for secure system components that are immune to cyber-attack.

Many of the devices compromised in the recently announced attacks were embedded devices, such as payment card processing terminals. Vulnerable embedded devices are not just limited to in-store terminals either, as more and more credit card transactions are being made via mobile devices, such as smart phones or in-vehicle entertainment systems. Increasingly, the function of these devices depends on Internet connectivity, demonstrating the growing need to build security into software to protect against currently known and future vulnerabilities. This article will look specifically at the best practices, knowledge and tools available for building secure software that is free from vulnerabilities.

Secure Software
Defining what is and isn’t secure software is not easy: a piece of software created decades ago may yield security vulnerabilities when executed on a secure platform today, and a secure application written today may be vulnerable because it runs on a legacy operating system that no longer receives security patches.

In his book The CERT C Secure Coding Standard, Robert Seacord points out that there is currently no consensus on a definition for the term software security. For the purposes of this article, the definition of secure software will follow that provided by the U.S. Department of Homeland Security (DHS) Software Assurance initiative in Enhancing the Development Life Cycle to Produce Secure Software: A Reference Guidebook on Software Assurance. The DHS maintains that software, to be considered secure, must exhibit three properties:

  1. Dependability — Software that executes predictably and operates correctly under all conditions.
  2. Trustworthiness — Software that contains few, if any, exploitable vulnerabilities or weaknesses that can be used to subvert or sabotage the software’s dependability.
  3. Survivability (also referred to as “Resilience”) — Software that is resilient enough to withstand attack and to recover as quickly as possible, and with as little damage as possible from those attacks that it can neither resist nor tolerate.

There are many sources of software vulnerabilities, including coding errors, configuration errors, architectural flaws and design flaws; however, most vulnerabilities result from coding errors. In a 2004 review of the National Vulnerability Database for their paper Can Source Code Auditing Software Identify Common Vulnerabilities and Be Used to Evaluate Software Security?, presented at the 37th Hawaii International Conference on System Sciences, Jon Heffley and Pascal Meunier found that 64% of the vulnerabilities resulted from programming errors. Given this, it makes sense that the primary objective when writing secure software must be to build security in.

Building Security In
Most software development focuses on building high-quality software, but high-quality software is not necessarily secure software. Consider the office, media playing or web browsing software that we all use daily; a quick review of the Mitre Corporation’s Common Vulnerabilities and Exposures (CVE) dictionary will reveal that vulnerabilities in these applications are discovered and reported on an almost weekly basis. The reason is that these applications were written to satisfy functional, not security, requirements. Testing is used to verify that the software meets each requirement, but security problems can persist even when the functional requirements are satisfied. Indeed, software weaknesses often arise from unintended system functionality.

Building secure software requires adding security concepts to the quality-focused software development lifecycle where security is considered a quality attribute of the software under development. Building secure code is all about eliminating known weaknesses (Figure 1), including defects, so by necessity secure software is high-quality software.

Figure 1. Building secure code by eliminating known weaknesses

Security must be addressed at all phases of the software development lifecycle, and team members need a common understanding of the security goals for the project and the approach that will be taken to do the work.

The starting point is understanding the security risks associated with the domain of the software under development, and this understanding is established by a security risk assessment. The assessment ensures that the nature and impact of a potential security breach are evaluated prior to deployment, making it possible to identify the security controls necessary to mitigate any identified impact. These security controls then become system requirements.

Adding a security perspective to software requirements ensures that security is included in the definition of system correctness, which then permeates the development process. A specific security requirement might be to validate all user string inputs, ensuring that they do not exceed a maximum length; a more general one might be to withstand a denial of service (DoS) attack. Wherever on this spectrum a requirement falls, it is crucial that evaluation criteria are identified for its implementation.
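As a sketch of how such a specific requirement might be implemented (the function name and the 64-byte limit are hypothetical, chosen for illustration):

```c
#include <stddef.h>
#include <string.h>

/* Returns 1 if the input is NUL-terminated within max bytes, 0 otherwise.
 * memchr() scans at most max + 1 bytes, so an unterminated buffer can
 * never trigger an out-of-bounds read here. */
int input_length_ok(const char *input, size_t max)
{
    if (input == NULL)
        return 0;
    return memchr(input, '\0', max + 1) != NULL;
}
```

Making the check a single, reusable function gives the verification activities a concrete artifact to test against the requirement.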

When translating requirements into design, it is prudent to consider security risk mitigation via architectural design. This can be in the choice of implementing technologies or by inclusion of security-oriented features, such as handling untrusted user interactions by validating inputs and/or the system responses by an independent process before they are passed on to the core processes.

The most significant impact on building secure code is the adoption of secure coding practices, including both static and dynamic assurance measures. The biggest return on investment stems from the enforcement of secure coding rules via static analysis tools. With the introduction of security concepts into the requirements process, dynamic assurance via security-focused testing is then used to verify that security features have been implemented correctly.

Creating Secure Code with Static Analysis
The leading cause of software vulnerabilities is software defects, many of which stem from common weaknesses in code. An ever-increasing number of dictionaries, standards and rules have been created to help highlight these common weaknesses so that they may be avoided.

In November 2013, ISO/IEC published the most recent set of rules regarding the creation of secure code in the C programming language. Entitled C Secure Coding Rules (ISO/IEC 17961:2013 or secureC), it is the latest in a growing field of secure coding standards that can form the basis of a secure software development process. Standards such as secureC, the Common Weakness Enumeration (CWE) dictionary from the Mitre Corporation and the CERT-C Secure Coding Standard from the Software Engineering Institute at Carnegie Mellon help software development teams to build security into new software.

Static analysis tools can enforce secure coding standards, so even novice secure software developers can benefit from the experience and knowledge the standards encapsulate.

Using coding standards to eliminate ambiguities and weaknesses in the code under development has proved extremely successful in the creation of high-reliability software. The practice of following the Motor Industry Software Reliability Association (MISRA) Guidelines for the use of the C language in critical systems is one example of the way coding standards have been employed successfully. The same method can be used to similar effect in the creation of secure software.

For example, secureC echoes many of the commonly exploited software vulnerabilities catalogued in the Mitre Corporation’s Common Vulnerabilities and Exposures (CVE) dictionary and in the CWE and CERT-C Secure Coding standards, such as undefined behavior in the C programming language and the use of tainted data. The rules in each of these can be enforced by static analysis tools, which help to eliminate both known and unknown vulnerabilities while also eliminating latent errors in code. As an illustration, the screenshot in Figure 2 shows the detection of a buffer overflow vulnerability due to improper data types.

Figure 2. LDRA TBvision screenshot showing improper data type sign usage resulting in a buffer overflow vulnerability
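To illustrate this defect class (a hypothetical sketch, not the code from the figure): a length received as a signed int can pass a naive bounds check, because any negative value satisfies len < BUF_SIZE, and then wrap to a huge size_t inside memcpy, overflowing the destination buffer. The fix is to reject negative lengths before any signed-to-unsigned conversion takes place:

```c
#include <stddef.h>
#include <string.h>

#define BUF_SIZE 16

/* Copies len bytes of src into a BUF_SIZE buffer and NUL-terminates.
 * A check of only (len < BUF_SIZE) would accept negative lengths,
 * which convert to enormous size_t values in memcpy. */
int copy_checked(char *dst, const char *src, int len)
{
    if (len < 0 || len >= BUF_SIZE)  /* reject negatives explicitly */
        return -1;
    memcpy(dst, src, (size_t)len);
    dst[len] = '\0';
    return 0;
}
```

This is exactly the kind of signed/unsigned conversion defect that a static analysis tool can flag automatically before the code is ever executed.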

Static software analysis tools assess the code under analysis without actually executing it. They are particularly adept at identifying coding standard violations. In addition, the range of metrics these tools offer can be used to assess and improve the quality of the code under development. One example is the cyclomatic complexity metric: it identifies unnecessarily complex software that is difficult to test.

When using static analysis tools for building secure software, the primary objective is to identify potential vulnerabilities in code. Example errors that static analysis tools identify include:

  • Use of insecure functions
  • Array overflows
  • Array underflows
  • Incorrect use of signed and unsigned data types
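The first category is perhaps the easiest to picture. For example, gets() performs no bounds checking and was removed from the language in C11, so static analysis tools flag it as an insecure function; the bounded fgets() is the conventional replacement (the wrapper below is an illustrative sketch):

```c
#include <stdio.h>
#include <string.h>

/* Reads one line into buf, never writing more than size bytes.
 * Returns 0 on success, -1 on end-of-file or error. Unlike gets(),
 * fgets() cannot overflow the destination buffer. */
int read_line(char *buf, size_t size, FILE *stream)
{
    if (fgets(buf, (int)size, stream) == NULL)
        return -1;
    buf[strcspn(buf, "\n")] = '\0';  /* strip the trailing newline */
    return 0;
}
```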

Since secure code must, by nature, be high-quality code, static analysis tools can be used to bolster the quality of the code under development. The objective here is to ensure that the software under development is easy to verify. The typical validation and verification phase of a project can take up to 60% of the total effort, while coding typically only takes 10%. Eliminating defects via a small increase in the coding effort can significantly reduce the burden of verification, and this is where static analysis can really help.

Ensuring that code never exceeds a maximum complexity value helps to enforce the testability of the code. In addition, static analysis tools identify other issues that affect testability, such as having unreachable or infeasible code paths or an excessive number of loops.
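A small, contrived example of an infeasible path that a static analyzer would report:

```c
/* The two conditions below are exhaustive over all int values, so the
 * final return can never execute. A static analyzer reports it as
 * unreachable code, and no test case can ever cover it -- dead weight
 * that hurts both testability and coverage figures. */
int classify(int x)
{
    if (x > 0)
        return 1;
    if (x <= 0)
        return 0;
    return -1;  /* unreachable */
}
```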

By eliminating security vulnerabilities, identifying latent errors and ensuring the testability of the code under development, static analysis tools help ensure that the code is of the highest quality, and is secure not only against current threats, but also against unknown threats.

Fitting Tools into the Process
Tools that automate the process of static analysis and enforcement of coding standards such as secureC, CWE or the CERT C Secure Coding guidelines ensure that a higher percentage of errors are identified in less time. In addition, the cost and effort of achieving a given Evaluation Assurance Level (EAL) can be significantly reduced by using static analysis to document that the assurance requirements have been met for the target EAL.

This rigor is complemented by additional tools for:

  • Requirements traceability — a good requirements traceability tool is invaluable to the building-security-in process. The ability to trace requirements from their source through all of the development phases, right down to the verification activities and artifacts, helps ensure the highest quality, secure software.
  • Unit testing — the most effective and cheapest way of ensuring that the code under development meets its security requirements is via unit testing. Creating and maintaining the test cases required for this, however, can be an onerous task. Unit testing tools that assist in the test-case generation, execution and maintenance, streamline the unit testing process, easing the unit testing burden and reinforcing unit test accuracy and completeness.
  • Dynamic analysis — analyses performed while the code is executing provide valuable insight into the code under analysis that goes beyond test case execution. Structural coverage analysis, one of the more popular dynamic analysis methods, has proved invaluable for ensuring that the verification test cases execute all of the code under development. This helps ensure that there are no hidden vulnerabilities or defects in the code.

While these various capabilities can be pieced together from a number of suppliers, some companies, such as LDRA, offer an integrated tool suite that facilitates the building-security-in process, providing all of the solutions described above.

Figure 3. Secure Coding in the Iterative Lifecycle

It is not surprising that the processes for building security into software echo the high-level processes required for building quality into software. Adding security considerations into the process from the requirements phase onwards is the best way of ensuring the development of secure code, as described in Figure 3. High-quality code is not necessarily secure code, but secure code is always high-quality code.

An increased dependence on Internet connectivity is driving the demand for more secure software. With the bulk of vulnerabilities being attributable to coding errors, reducing or eliminating exploitable software security weaknesses in new products through the adoption of secure development practices should be achievable within our lifetime.

By leveraging the knowledge and experience encapsulated within secureC, the CERT-C Secure Coding guidelines and the CWE dictionary, static analysis tools help make the objective of building security into software both practical and cost effective. Combine this with the improved productivity and accuracy of requirements traceability, unit testing and dynamic analysis, and the elimination of exploitable software weaknesses becomes an achievable goal.

Deepu Chandran is a Senior Technical Consultant with LDRA’s India office. Deepu specialises in the development, integration and certification of mission- and safety-critical systems in avionics, nuclear, industrial safety and security. With a solid background in development and testing tools, Deepu guides organisations in selecting, integrating, and supporting their embedded systems from development through certification.
