Take a Multi-Layered Approach to Automotive Application Security



There’s no magic spot where “useful connected system” and “absolutely secure connected system” overlap.

If a car built in 1978 is lovingly and expertly maintained, there is every reason to believe that it will remain as safe as when it rolled off the assembly line. Of course, a brand-new car that comes with sophisticated safety features such as advanced driver assistance systems (ADAS) and automatic crash response will still be far safer than that 40-year-old classic. At least, it starts out that way. Unlike the classic car, the current model will retain its admirable safety features only for as long as its software remains impervious to attack.

Figure 1: Deploying separation technology to help optimize security

Today’s connected cars give security a new significance. Of course, application code is just one component of a secure embedded system, but in developing that code, many of the techniques familiar to safety-critical developers can be deployed with a stronger security perspective. Although the automotive standard ISO 26262:2011 “Road vehicles – Functional safety” does not explicitly discuss software security, there is a clear obligation for automotive developers to ensure that insecure software cannot compromise safety. Part of that obligation is to recognize that no connected automotive system is ever going to be both useful and absolutely impenetrable, and that no single defense can guarantee impenetrability. It therefore makes sense to protect the system in proportion to the risk involved if it were compromised. That means applying multiple levels of security, so that if one level fails, others are standing guard.

Examples of these levels might include:

  • Secure boot to make sure that the correct image is loaded (see the sketch after this list)
  • Domain separation to defend critical parts of the system
  • MILS (Multiple Independent Levels of Security) design principles, such as least privilege, to minimize vulnerability
  • Minimization of attack surfaces
  • Secure coding techniques
  • Security-focused testing
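
To make the first of these layers more concrete, here is a minimal sketch of the kind of check a secure boot stage performs before transferring control. The image header layout and the verify_signature() primitive are hypothetical; a production implementation would be anchored in an immutable, hardware-protected root of trust.

    #include <stdbool.h>
    #include <stddef.h>
    #include <stdint.h>

    /* Hypothetical image header; real formats are platform-defined. */
    typedef struct {
        uint32_t magic;          /* Identifies a candidate image */
        uint32_t length;         /* Payload length in bytes      */
        uint8_t  signature[64];  /* Signature over the payload   */
    } image_header_t;

    /* Assumed to be provided by the platform's cryptographic
       support, keyed from the root of trust. */
    extern bool verify_signature(const uint8_t *data, uint32_t len,
                                 const uint8_t sig[64]);

    /* Boot proceeds only if the image authenticates; otherwise
       the system remains in a known-safe state. */
    bool secure_boot_check(const image_header_t *hdr, const uint8_t *payload)
    {
        if ((hdr == NULL) || (payload == NULL)) {
            return false;
        }
        if (hdr->magic != 0x494D4721u) {   /* 'IMG!' */
            return false;
        }
        return verify_signature(payload, hdr->length, hdr->signature);
    }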

But should every one of these precautions be implemented on every occasion? And if not, how should the decisions be made as to what applies, and when? To address those questions, we’ll look at the relationship between two critical layers: domain separation and secure coding.

First, Identify High-Risk Areas
It makes little commercial sense to apply every possible security precaution to every possible software architecture and application. That’s especially true when, for example, a head unit’s Linux-based OS is involved, complete with massive footprint and unknown software provenance. Where, then, should you focus your attention? While most studies in this area address enterprise computing, the basic premise of identifying and focusing attention on the components of the system most at risk is a sound one. According to the “Threat Analysis and Risk Assessment” process described in SAE J3061 “Surface Vehicle Recommended Practice: Cybersecurity Guidebook for Cyber-Physical Vehicle Systems,” examples of high-risk areas are likely to include:

  • Files from outside of the network
  • Backwards-compatible interfaces with other systems, such as old protocols, old code and libraries, and multiple versions that are hard to maintain and test
  • Custom APIs, protocols, etc. that may involve errors in design and implementation
  • Security code, such as anything to do with cryptography, authentication, authorization (access control) and session management

Domain Separation
Consider a system that deploys domain-separation technology—in this case, a separation kernel or hypervisor (Figure 1).

It is easy to find examples of high-risk areas specific to this scenario. For instance, consider the gateway virtual machine. How secure are its encryption algorithms? How well does it validate incoming data from the cloud? How well does it validate outgoing data to the different domains?
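
To make those questions concrete, the sketch below shows the shape such validation might take for data arriving from the cloud. The message framing, the size limit, and the whitelist of permitted message IDs are all hypothetical.

    #include <stdbool.h>
    #include <stddef.h>
    #include <stdint.h>

    #define MAX_PAYLOAD 256u   /* Hypothetical payload size limit */

    /* Hypothetical framing for a message arriving from the cloud. */
    typedef struct {
        uint16_t id;        /* Message type            */
        uint16_t length;    /* Declared payload length */
        uint8_t  payload[MAX_PAYLOAD];
    } cloud_msg_t;

    /* Only message types the gateway expects are allowed through. */
    static const uint16_t permitted_ids[] = { 0x0010u, 0x0020u, 0x0031u };

    bool gateway_validate(const cloud_msg_t *msg)
    {
        if (msg == NULL) {
            return false;
        }
        /* Reject anything whose declared length exceeds the buffer. */
        if (msg->length > MAX_PAYLOAD) {
            return false;
        }
        /* Reject message types that are not on the whitelist. */
        for (size_t i = 0u;
             i < (sizeof(permitted_ids) / sizeof(permitted_ids[0])); i++) {
            if (msg->id == permitted_ids[i]) {
                return true;
            }
        }
        return false;
    }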

Then there are data endpoints (Figure 2). Is it feasible to inject rogue data? How is the application code configured to ensure that doesn’t happen?

Figure 2: Automotive attack surfaces and untrusted data sources

Another potential vulnerability arises when systems need to communicate across domains. For example, central locking of a vehicle generally belongs to a fairly benign domain, but in an emergency situation after an accident it becomes imperative that the doors are unlocked, implying communication with a more critical domain. However communication between these virtual machines is implemented, it must be secure.
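
One way to keep such a cross-domain channel defensible is to restrict it to a small, fixed command set, to authenticate every request, and to protect against replay. The sketch below illustrates the idea; the message layout and the mac_is_valid() primitive are assumptions, not a prescribed design.

    #include <stdbool.h>
    #include <stddef.h>
    #include <stdint.h>

    /* The only commands the critical domain will accept from the
       benign domain; anything else is rejected. */
    typedef enum {
        CMD_LOCK_ALL   = 1,
        CMD_UNLOCK_ALL = 2   /* e.g., post-crash emergency unlock */
    } lock_cmd_t;

    typedef struct {
        uint8_t  cmd;       /* One of lock_cmd_t                */
        uint32_t counter;   /* Monotonic counter against replay */
        uint8_t  mac[16];   /* Message authentication code      */
    } lock_request_t;

    /* Assumed primitives: a shared-key MAC check and the last
       accepted counter value, both owned by the critical domain. */
    extern bool mac_is_valid(const lock_request_t *req);
    extern uint32_t last_counter;

    bool accept_lock_request(const lock_request_t *req)
    {
        if (req == NULL) {
            return false;
        }
        if ((req->cmd != (uint8_t)CMD_LOCK_ALL) &&
            (req->cmd != (uint8_t)CMD_UNLOCK_ALL)) {
            return false;   /* Unknown command: drop it   */
        }
        if (req->counter <= last_counter) {
            return false;   /* Stale or replayed request  */
        }
        if (!mac_is_valid(req)) {
            return false;   /* Authentication failed      */
        }
        last_counter = req->counter;   /* Record accepted counter */
        return true;
    }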

Secure Coding
With these high-risk software components identified, attention can be focused on the code associated with them. The result is a system in which secure code does not just provide an additional line of defense, but actively reinforces the weak points of the underlying architecture.

Optimizing the security of this application code requires a number of contributing factors, mirroring the multi-faceted approach taken to the security of the system as a whole: minimizing the attack surface, maximizing the separation of the outward-facing attack vectors from the safety applications they serve, and ensuring that application code and operating systems are developed with security as a priority.

Adopting a secure coding standard is a critical first step, and there are a number of sources to choose from. CERT C is a coding standard designed for the development of safe, reliable and secure systems that takes an application-centric approach to the detection of issues. MISRA C:2012 offers another option, despite a common misconception that it is designed for safety-related, not security-related, projects. Its suitability as a secure coding standard was further enhanced by the introduction of MISRA C:2012 Amendment 1 and its 14 additional guidelines.
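
As a flavor of what such standards address, consider string handling, the subject of CERT C rule STR31-C (guarantee that storage for strings has sufficient space for the character data and the null terminator). The VIN-copying example below is illustrative only.

    #include <stddef.h>
    #include <string.h>

    #define VIN_LEN 17u   /* A VIN is 17 characters */

    /* Non-compliant: no guarantee that 'src' fits in 'vin'. */
    void copy_vin_bad(char vin[VIN_LEN + 1u], const char *src)
    {
        strcpy(vin, src);   /* Possible buffer overflow */
    }

    /* Compliant: the length is checked before copying, and the
       destination is always null-terminated. */
    int copy_vin_good(char vin[VIN_LEN + 1u], const char *src)
    {
        if ((src == NULL) || (strlen(src) != VIN_LEN)) {
            return -1;      /* Reject malformed input */
        }
        (void)memcpy(vin, src, VIN_LEN);
        vin[VIN_LEN] = '\0';
        return 0;
    }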

Static analysis tools vary in their ability to identify subtle violations of these standards, and the more sophisticated analyses can seem slower because of the additional processing required. A sensible approach is to choose tools that can initially run in a “lightweight” mode, and to apply more complete analysis as development progresses.

The CERT division of the Software Engineering Institute (SEI), which grew out of the original Computer Emergency Response Team, has nominated 12 key secure coding practices. Here, we consider how four of them relate to the code for the automotive system outlined in Figure 2.

1. Heed compiler warnings
“Compile code using the highest warning level available for your compiler…use static and dynamic analysis tools”

Many developers tend to deal only with compiler errors during development and to ignore warnings. CERT’s recommendation is to set warnings at the highest level available and to ensure that every one of them is addressed. Static analysis tools are designed to identify additional, more subtle concerns.
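
As a simple illustration, the fragment below compiles without complaint at default settings with some toolchains, yet a high warning level (for example, gcc with -Wall -Wextra) flags both the accidental assignment and the signed/unsigned comparison. The function itself is hypothetical.

    /* Accidental assignment: '=' where '==' was intended;
       flagged by -Wall (-Wparentheses). */
    int check_level(int level, unsigned int limit)
    {
        if (level = 0) {
            return -1;
        }
        /* Signed/unsigned comparison, flagged by -Wextra
           (-Wsign-compare): 'level' is converted to unsigned,
           so a negative value would wrongly pass this test. */
        if (level < limit) {
            return 0;
        }
        return 1;
    }

    /* Corrected version: the warnings disappear and the logic
       matches the original intent. */
    int check_level_fixed(int level, unsigned int limit)
    {
        if (level == 0) {
            return -1;
        }
        if ((level > 0) && ((unsigned int)level < limit)) {
            return 0;
        }
        return 1;
    }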

2. Architect and design for security policies
“Create a software architecture and design your software to implement and enforce security policies”

Those familiar with the development processes promoted by ISO 26262 know that requirements must be established, and that bi-directional traceability between those requirements, software design artifacts, source code, and tests is a must. Designing for security means extending those principles to include requirements for security alongside requirements for safety. Tools can help ease the administrative headaches associated with traceability (Figure 3).

Figure 3: Automating requirements traceability with the TBmanager component of the LDRA tool suite
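
Tool support aside, traceability ultimately has to reach the source code itself. One common approach is to reference requirement identifiers at the point of implementation so that code and tests can be linked back to them; the tag format and identifiers below are purely illustrative.

    #include <stdbool.h>
    #include <stddef.h>

    /* Implements: SEC-REQ-042 "Unlock requests that fail
       authentication shall be rejected."
       Verified by: TEST-SEC-042-01, TEST-SEC-042-02 */
    extern bool request_is_authentic(const void *req, size_t len);

    bool handle_unlock_request(const void *req, size_t len)
    {
        if (req == NULL) {
            return false;   /* SEC-REQ-042: reject by default */
        }
        return request_is_authentic(req, len);
    }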

3. Keep it simple
“Keep the design as simple and small as possible”

There are many complexity metrics to help developers evaluate their code, and static analysis tools can evaluate those metrics automatically (Figure 4).

Figure 4: Using the TBvision component of the LDRA tool suite to assess code complexity
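
As a small, hypothetical example of the principle, the table-driven version below replaces a chain of conditions with data, cutting the cyclomatic complexity of the decision logic to a single branch.

    /* Before: a chain of conditions, each branch adding to the
       cyclomatic complexity of the function. */
    int speed_limit_before(int zone)
    {
        if (zone == 1)      { return 30; }
        else if (zone == 2) { return 50; }
        else if (zone == 3) { return 80; }
        else if (zone == 4) { return 120; }
        else                { return 0; }
    }

    /* After: the same mapping expressed as data; the function
       now has a single decision point. */
    static const int limits[] = { 0, 30, 50, 80, 120 };

    int speed_limit_after(int zone)
    {
        if ((zone < 1) || (zone > 4)) {
            return 0;
        }
        return limits[zone];
    }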

4. Use effective quality assurance techniques
“Good quality assurance techniques can be effective in identifying and eliminating vulnerabilities”

The traditional approach to testing in the security market is largely reactive: the code is developed in accordance with relatively loose guidelines and then subjected to performance, penetration, load and functional testing to identify and address any vulnerabilities. Although it is clearly preferable to ensure that the code is secure “by design” through the processes championed by ISO 26262, the techniques used in the traditional reactive model, such as penetration testing, still have a place, if only to confirm that the system is indeed secure.

Unit test tools provide a targeted “robustness test” capability by automatically generating test cases that subject the application code to conditions such as null pointers and upper and lower boundary values (Figure 5). Static analysis tools clearly lend themselves to secure code auditing.

Figure 5: Using the TBeXtreme component of the LDRA tool suite to test code robustness
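
A hand-written equivalent of what such tools generate might look like the harness below: the function under test is driven with a null pointer and with values at and around its boundaries. The function and its valid range are hypothetical.

    #include <assert.h>
    #include <stddef.h>

    /* Hypothetical function under test: scales a raw sensor
       reading to 0..100, returning -1 for invalid input.
       Valid input range is 0..1023. */
    int scale_reading(const int *raw)
    {
        if ((raw == NULL) || (*raw < 0) || (*raw > 1023)) {
            return -1;
        }
        return (*raw * 100) / 1023;
    }

    int main(void)
    {
        int v;

        assert(scale_reading(NULL) == -1);           /* Null pointer      */

        v = -1;   assert(scale_reading(&v) == -1);   /* Below lower bound */
        v = 0;    assert(scale_reading(&v) == 0);    /* Lower bound       */
        v = 1023; assert(scale_reading(&v) == 100);  /* Upper bound       */
        v = 1024; assert(scale_reading(&v) == -1);   /* Above upper bound */

        return 0;
    }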

Take a Multi-Layered Approach to the Endless Security Battle
No connected system is ever going to be both useful and absolutely impenetrable. It makes sense to protect it in proportion to the risk involved if it were compromised, and that means applying multiple levels of security so that if one level fails, others are standing guard.

Domain separation and secure application code provide two examples of these levels. The effort required to create a system that is sufficiently secure can be optimized by identifying high-risk elements of the architecture and applying best-practice secure coding techniques to the application code associated with those elements.

It would be very expensive to apply state-of-the-art security techniques to every element of every embedded system. It is, however, important to specify security requirements and then to architect and design the software to enforce them appropriately for each element of a system; that is perhaps the most important lesson to take from CERT’s “Secure Coding Practices.” In terms of the coding itself, risk assessment provides important pointers as to where the system as a whole will most benefit from the application of static and dynamic analysis techniques.

Fortunately, as MISRA’s analysis has shown, many of the most appropriate quality assurance techniques for secure coding are already proven in the field of functional safety. These techniques include static analysis to ensure the appropriate application of coding standards, dynamic code coverage analysis to check for any excess “rogue code,” and the tracing of requirements throughout the development process.

Given the dynamic nature of the endless battle between hackers and developers, optimizing security is not merely a good idea. Should the unthinkable happen and with it a need to defend a connected system in court, there are very real advantages to being able to provide evidence of the application of the best available practices.

Mark Pitchford has over 25 years’ experience in software development for engineering applications. He has worked on many significant industrial and commercial projects in development and management, both in the UK and internationally. Since 2001, he has worked with development teams looking to achieve compliant software development in safety and security critical environments, working with standards such as DO-178, IEC 61508, ISO 26262, IIRA and RAMI 4.0.

Pitchford earned his Bachelor of Science degree at Trent University, Nottingham, and he has been a Chartered Engineer for over 20 years. He now works as Technical Specialist with LDRA Software Technology.