Software Validation Tools Give an Edge to Satellite and Spacecraft Development
The space industry can realize increased productivity, reliability and cost savings by employing the software certification process used by military and commercial avionics.
As the space industry continues to mature, software validation tools represent a best practice and an excellent starting point for process improvement. In fact, the use of tools may be mandated, particularly in circumstances where satellites interact with commercial airspace, private space vehicles carry human crews, and space vehicles operate in and re-enter public airspace. In many of these circumstances, private vehicles will require commercial licenses from the FAA, which may force some or all of the processes in DO-178C to be applied. But any company building components in these contexts should consider the use of certified software validation tools as a way to establish an edge for the future.
The space industry shares many avionics industry characteristics, including safety-critical system requirements as well as cost and schedule challenges. But while the avionics industry is governed by software development process methodologies such as DO-178C, the space industry has typically employed manual methods, using tools ranging from Microsoft PowerPoint to IBM Rational DOORS, which do not provide a direct connection to source code and test results. By introducing the same techniques used by commercial and military avionics, the space industry could significantly improve the likelihood of mission success and reduce costs. These tools—already proven in the military and avionics communities—provide a direct link between system engineering tasks, software development, and test artifacts. The end result is improved organization and increased reliability to protect lives and missions. The following examples illustrate how these tools apply specifically to space vehicle and satellite system design processes.
Space Vehicle Design
Consider a space vehicle with hundreds of subsystems designed by dozens of suppliers. The complexity of certifying this vehicle for flight can be alleviated by using a consistent set of tools. For example, in major space programs, the prime contractor as well as all subcontractors use LDRA tools for verification reports. This enforces a consistent process across suppliers and allows for uniform evaluation of project milestones.
Figure 1 offers a typical Requirements Traceability Matrix (RTM) for this scenario. First, high-level program requirements connect to lower-level requirements. These lower-level requirements dictate the specifics addressed by the spacecraft flight software and have system test and code review verification tasks associated with them. The specific system test requirements correspond to Modified Condition/Decision Coverage (MC/DC) criteria, which ensure that input parameters don't mask decisions made by the software.
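To make the MC/DC criterion concrete, here is a minimal sketch (not drawn from any real flight program) for the decision "a and (b or c)". Each test case pairs with another that differs in exactly one condition, and that single toggle flips the decision outcome—demonstrating that each condition independently affects the result:

```python
def decision(a, b, c):
    # Toy guard condition: e.g., fire thruster only if armed AND
    # (an automatic or a manual command is present). Names are invented.
    return a and (b or c)

# Minimal MC/DC test set: n+1 = 4 cases for 3 conditions.
mcdc_tests = [
    ((True,  True,  False), True),   # baseline
    ((False, True,  False), False),  # toggling a (vs. baseline) flips the outcome
    ((True,  False, True),  True),   # c alone keeps the decision true
    ((True,  False, False), False),  # toggling b (vs. baseline) or c (vs. row 3) flips it
]

for inputs, expected in mcdc_tests:
    assert decision(*inputs) == expected
```

Exhaustive truth-table testing would need 2^n cases; MC/DC achieves the same masking guarantee with only n+1.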
This documentation tree provides a high degree of confidence that the software performs as expected. In cases where application failure can cause a loss of life, documentation review proves process reliability, illustrating whether subcontractors rigorously used the RTM to track their requirements and testing.
In the following examples, two high-level requirements and two lower-level requirements are shown. The lower-level requirements lead to unit test and system test verification plans, and a third section gives the unit test and system test verification results. Connections between the levels link artifacts such as requirements, code, and tests.
In real life, the connections between the documents are more complicated, although the concepts of covering links and percentage coverage remain the same. Consider the case of a typical satellite system, with a system requirements specification that covers an interface control document and a software requirements specification. From right to left, you can trace the connections represented in the requirements traceability matrix.
The requirements on the left (column 1) link to individual requirements (column 2) and to the source code (column 3), so it is easy to see how each part of the code maps to specific requirements. A software verification plan must cover both the source code and any requirements-based test cases (column 4). Again, this whole matrix of relationships, all the way down to the software verification report, can be represented via an automated tool chain that offers integrated artifacts from code review, code coverage, unit testing, and system testing.
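The column structure above can be sketched as a small data model. This is a hypothetical fragment—requirement IDs, code units, and test names are invented—showing how percentage coverage of a high-level requirement falls out of the links:

```python
# Hypothetical RTM fragment: high-level requirement -> low-level
# requirements -> code units and verification tasks.
rtm = {
    "HLR-1":   {"low_level": ["LLR-1.1", "LLR-1.2"]},
    "LLR-1.1": {"code": ["attitude_ctrl.c:update_gains"],
                "tests": ["UT-014", "ST-102"]},
    "LLR-1.2": {"code": ["attitude_ctrl.c:limit_rate"],
                "tests": []},  # no verification task yet: a gap in the report
}

def coverage(rtm, hlr):
    """Percentage of a high-level requirement's children with at least one test."""
    children = rtm[hlr]["low_level"]
    covered = sum(1 for llr in children if rtm[llr]["tests"])
    return 100.0 * covered / len(children)

print(coverage(rtm, "HLR-1"))  # 50.0 -- LLR-1.2 still lacks a verification task
```

A commercial tool chain maintains these links automatically from the artifacts themselves; the point of the sketch is only that coverage status is derivable from the link graph rather than from manual bookkeeping.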
The use of software certification validation tools bridges the gap between the software engineer, the tester, and the project manager. Even in cases where there are thousands of requirements, these tools scale to represent all of the requirements and verification results. At any point, the project manager can take a look at overall status or use filters to isolate subsystems and look at one subsystem at a time. A test engineer can look solely at the connections between software and test cases, while the project manager can validate test cases in the context of code coverage. The end result of using tools is increased productivity and reliability.
The space industry can realize increased productivity, reliability, and cost savings by employing the software certification process used by military and commercial avionics. Rigorous links are established between various parts of a program, establishing a better quality process where automated tools replace error-prone human efforts. Most importantly, the software validation tools ultimately decrease the margin of error and save lives.
SOFTWARE CERTIFICATION TOOLS SUPPORT, ENFORCE, AND AUTOMATE QUALITY PROCESSES
STATIC DATA FLOW ANALYSIS
Static data flow analysis connects data to code, verifying that the data modified by a procedure is the data you expect the procedure to modify. In many process standards, such as DO-178B/C, use of a coding standard is a key part of enforcement. Used in these environments, static data flow analysis makes sure that when code executes, it links and runs with the correct data, and it connects requirements to code. Armed with requirements traceability, developers can verify that the software does what it was specified to do, and these connections become proof points for project management and process auditing purposes.
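As a minimal illustration of the idea (not how a commercial tool implements it), a script can statically list which module-level names a procedure writes, so a reviewer can confirm the procedure modifies only the data the specification allows. The function and variable names below are invented:

```python
import ast

# Source under analysis: vent_tank writes two globals, one of which
# may be a side effect the specification does not allow.
source = """
fuel_margin = 100.0
telemetry_ok = True

def vent_tank(amount):
    global fuel_margin, telemetry_ok
    fuel_margin -= amount
    telemetry_ok = False
"""

tree = ast.parse(source)
writes = {}
for node in ast.walk(tree):
    if isinstance(node, ast.FunctionDef):
        # Collect the global names each function declares for writing.
        declared = {name for g in ast.walk(node)
                    if isinstance(g, ast.Global) for name in g.names}
        writes[node.name] = sorted(declared)

print(writes)  # {'vent_tank': ['fuel_margin', 'telemetry_ok']}
```

Comparing the reported set against the set the requirement permits is exactly the check that static data flow analysis automates at scale.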
DYNAMIC ANALYSIS (CODE COVERAGE)
Dynamic analysis evaluates the effectiveness of test plans, providing a high level of assurance that the subsystem on a spacecraft or satellite has been adequately tested before integration. Dynamic analysis tools automate these processes and document what modules, lines, or conditions in the code have executed. More importantly, the tools trace what part of the code is executed by each part of the test plan, helping developers examine the effectiveness of the test plan. When portions of the code are not executed, the tool identifies "unreachable code," which in most certified systems should be eliminated, as well as "infeasible code," which generally requires further examination.
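The mechanism behind line coverage can be sketched in a few lines using Python's `sys.settrace` hook (real tools instrument compiled code, but the principle is the same). The flight-mode function and its inputs are invented for illustration:

```python
import sys

def flight_mode(altitude_km):
    # Toy decision logic; the negative-altitude error path is never
    # exercised by the tests below, so coverage flags it.
    if altitude_km < 0:
        return "error"
    if altitude_km < 100:
        return "ascent"
    return "orbit"

executed = set()

def tracer(frame, event, arg):
    # Record every source line executed inside flight_mode.
    if event == "line" and frame.f_code.co_name == "flight_mode":
        executed.add(frame.f_lineno)
    return tracer

sys.settrace(tracer)
flight_mode(50)    # exercises the "ascent" branch
flight_mode(400)   # exercises the "orbit" branch
sys.settrace(None)

# Any line of flight_mode absent from `executed` is an untested branch,
# here the 'return "error"' path.
print(sorted(executed))
```

Coverage tools do the same bookkeeping per test case, which is what lets them map each part of the test plan to the code it exercises.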
Unit test tools automate the creation of test case drivers. Tools used in certification environments automatically create stubs and stub out global variables, which allows developers to examine code at the module and function level. Developers can then test code before hardware is fully available and run regression testing, which verifies that the results are the same every time. To achieve better quality testing, unit test tools are used in conjunction with code coverage tools. When combined, the developer chooses a representative sample of input cases to fulfill code coverage requirements, rather than randomly choosing a set of cases that is convenient. These tools map between portions of code and the expected and actual inputs and outputs to clearly document that inputs and outputs map as expected, prove higher-level requirements are adequate, and confirm test cases ran as designed.
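The stubbing idea can be shown with a minimal hand-written example (commercial tools generate this scaffolding automatically; the gyro function and limits here are invented). The hardware read is replaced by a stub so the logic runs, and rerums identically, on a desktop with no flight hardware:

```python
def read_gyro():
    # Placeholder for a hardware register read; unavailable on the bench.
    raise RuntimeError("hardware not present on the test bench")

def rate_ok(read=read_gyro, limit=5.0):
    """True if the measured body rate (deg/s) is within the allowed limit."""
    return abs(read()) <= limit

# Stubbed regression cases: deterministic inputs, so the results must be
# identical on every run.
assert rate_ok(read=lambda: 3.2) is True
assert rate_ok(read=lambda: -9.9) is False
```

Injecting the reader as a parameter is one simple way to make the stub point explicit; generated harnesses typically achieve the same effect by linking stub implementations in place of the real drivers.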
Requirement traceability is the critical component that ties this all together, but achieving requirement traceability manually is an error-prone and inadequate process. In fact, research has shown that 85% of software errors stem from requirement traceability failures. Tools that directly parse requirements in a requirement management system and automatically connect them to test plans and test results save time in system engineering processes and support rigorous enforcement.
Jay Thomas, Director, Field Engineering for LDRA Technology has worked on embedded controls simulation, processor simulation, mission- and safety-critical flight software, and communications applications in the aerospace industry. His focus on embedded verification implementation ensures that LDRA clients in aerospace, medical, and industrial sectors are well grounded in safety-, mission-, and security-critical processes.