Testing Under Time Pressure



Avionics test systems rely on the configurability of computer systems.

In the future, test benches will be made up of subsystems manufactured by specialized suppliers. It is essential that these test benches remain available throughout the complete development cycle, including after the commissioning of the aircraft type. The solution is therefore to use configurable computer systems as the basic architecture.

Aircraft are complex systems, and their development phase is a major cost driver. This complexity stems from the high performance demanded of the individual electronic modules together with the chosen technology.

Figure 1: In modern avionic systems, software functions are distributed over multiple LRUs. (All image rights Vector Informatik GmbH)

Modern avionic systems use a highly component-based approach in which several different aircraft functions—such as ventilation control, air conditioning and oxygen supply in the cabin—may well be handled by a single physical electronic module. To prevent interference between these functions, the software modules for the individual functions are executed in different partitions. Each partition has its own memory area and is processed at a different time. A redundant Ethernet network connects the Line Replaceable Units (LRUs) with one another. Intelligent switches ensure that the individual network participants exchange data only over preconfigured connections (Figure 1).
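The partitioning idea described above can be sketched in a few lines of Python. The partition names, window lengths, and the toy scheduler are all illustrative, not taken from a real avionics platform (which would follow a standard such as ARINC 653):

```python
# Minimal sketch of time-partitioned execution: each function runs in its
# own fixed window of a repeating major frame, with its own memory area.
# Names and window lengths are invented for illustration.

from dataclasses import dataclass, field

@dataclass
class Partition:
    name: str
    window_ms: int                               # time slice in the major frame
    memory: dict = field(default_factory=dict)   # private memory area

def run_major_frame(partitions):
    """Execute each partition in its fixed window. Partitions never touch
    each other's memory, mirroring the isolation described in the text."""
    trace, t = [], 0
    for p in partitions:
        p.memory["last_start_ms"] = t            # writes only its own memory
        trace.append((p.name, t, t + p.window_ms))
        t += p.window_ms
    return trace

parts = [Partition("ventilation", 20), Partition("air_conditioning", 30),
         Partition("oxygen_supply", 10)]
print(run_major_frame(parts))
# [('ventilation', 0, 20), ('air_conditioning', 20, 50), ('oxygen_supply', 50, 60)]
```

Interference between functions is prevented here by construction: each partition only ever writes to its own `memory` dict during its own window.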

Test activities are very extensive throughout the development process. Considerable delays can occur because tests cannot be performed until the systems under test have been produced. Although hardware and software are developed in parallel (Figure 2) in accordance with the guidelines in DO-178C/ED-12 and DO-254/ED-80, the integration test is not performed until the end of development.

To solve this problem, developers are increasingly using virtual integration platforms that can execute simulated models of aircraft functions at a very early stage. Available in different expansion stages and targeting different application scenarios, these integration platforms range from systems that simply map physical environmental models and run on a single computer to virtualized electronics that run the target system's software on a CPU emulator. The advantages of a virtual integration platform are obvious: test activities start much earlier and, most importantly, can be performed in parallel. To do this, it is simply necessary to replicate the integration platforms.

Virtual aircraft LRUs are another type of module, intended specifically for software development (Figure 3). They provide a virtual runtime platform, virtual bus interfaces, and digital and analog inputs and outputs. The runtime platforms are available in different variants, from CPU virtualization to operating system virtualization. Data buses and the inputs/outputs are mapped to User Datagram Protocol (UDP) connections and transferred over a network. This standardized mechanism (EuroCAE ED-247) allows virtual aircraft LRUs to be connected to one another. The most complex expansion stage is therefore a system that maps all the LRUs in an aircraft in virtual form.
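Carrying a bus signal over UDP, in the spirit of the ED-247 mapping described above, can be sketched as follows. The packet layout (a 16-bit channel id followed by one 32-bit bus word) is a simplification invented for this example, not the actual ED-247 wire format:

```python
# Sketch of mapping a bus word onto a UDP connection between two virtual
# LRUs. Channel id 7 and the payload value are arbitrary demo values.

import socket
import struct

def send_signal(sock, addr, channel_id, word):
    # one 16-bit channel id followed by one 32-bit bus word, network byte order
    sock.sendto(struct.pack("!HI", channel_id, word), addr)

def recv_signal(sock):
    data, _ = sock.recvfrom(64)
    return struct.unpack("!HI", data)

rx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
rx.bind(("127.0.0.1", 0))                 # ephemeral port, demo only
tx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

send_signal(tx, rx.getsockname(), 7, 0xCAFE0001)
result = recv_signal(rx)
print(result)                             # (7, 3405643777)
```

Because both ends agree on the packed layout, a virtual LRU on one computer can exchange bus words with a virtual LRU on another using nothing more than the network.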

Figure 2: Application area for the virtualization of electronic modules in the aviation sector.

From Model to Aircraft LRU
When the first modules or subsystems have been integrated and are available, the virtual integration platforms must also be able to integrate real systems. To use virtual and real LRUs together in a system, it is particularly important to address the real interfaces—for example, databuses and inputs/outputs—individually.

To commission a real module or subsystem, it is also necessary to simulate the physical environment, such as air pressure, temperature or speed. This is done using special environmental models, which differ, depending on the requirements, in the precision with which they simulate the environment. The resulting hybrid test systems can be very large, with installations that fill whole rooms or even halls. To date, test systems have been intended for a specific aircraft program and are usually developed and built for a single purpose—for example, for one subsystem. They rely on special systems built from the relevant components of the test system supplier and contain substantial manufacturer-specific know-how. While the closed systems built this way offer high performance, they are difficult to extend or modify. Many test benches must remain available for decades, even after the end of the actual development cycle, making it necessary to address system obsolescence and other issues.

This demands a versatile approach that supports an evolutionary development sequence from a virtual to a hybrid test environment. Such an approach makes it possible for test procedures already used in a virtual integration platform to be reused for parts of real systems, and it must also be possible to extend the system. That is why a sponsored project was conducted with an aircraft manufacturer and other project partners to develop an architecture that is scalable and permits very flexible system construction. The central concept is a high-performance, data-driven communication structure whose core is based on the “Data Distribution Service” (DDS).

DDS is a standard developed by the Object Management Group (OMG) that provides access to globally distributed data via a publish-subscribe concept (Figure 4). Subscribers define their data transfer requirements, such as data lifetime and the reliability of transfer, via Quality of Service (QoS) parameters. The resulting global data pool contains all the information that a test bench needs: variables in simulation models as well as data exchanged with real system components over buses or via inputs and outputs.
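The publish-subscribe idea behind DDS can be illustrated with a minimal in-process sketch. A real DDS implementation adds discovery, network transport and QoS enforcement; only the topic and subscriber mechanics are shown here, and the topic names are invented:

```python
# Minimal in-process sketch of a publish-subscribe data pool, in the
# spirit of DDS. Topic names and values are illustrative only.

from collections import defaultdict

class DataPool:
    def __init__(self):
        self._subscribers = defaultdict(list)
        self._latest = {}                      # last published value per topic

    def subscribe(self, topic, callback):
        """Register interest in a topic; the callback receives new values."""
        self._subscribers[topic].append(callback)

    def publish(self, topic, value):
        """Update the global data pool and notify all subscribers."""
        self._latest[topic] = value
        for callback in self._subscribers[topic]:
            callback(value)

pool = DataPool()
received = []
pool.subscribe("cabin/pressure_hpa", received.append)
pool.publish("cabin/pressure_hpa", 812.5)      # delivered to the subscriber
pool.publish("cabin/temperature_c", 21.0)      # no subscriber yet, pool keeps it
print(received, pool._latest["cabin/temperature_c"])   # [812.5] 21.0
```

The decoupling is the point: publishers do not know who consumes a value, so simulation models and real bus interfaces can feed the same pool interchangeably.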

Figure 3: Set-up of a virtualized LRU in the aircraft environment

Architecture Supporting a Versatile Test Bench Structure
In addition to the introduction of a data-driven communication concept, other elements also help create a versatile test bench structure. Here, the end-to-end configuration and the control of the complete test bench are particularly important.

The core elements of the architecture are computer systems that execute simulation models and support the bus interfaces necessary for accessing real aircraft systems (Figure 5). The results of the research project showed that, even without additional mechanisms, it is possible to access most electronics modules using the ARINC 664 (AFDX®), ARINC 429, and ARINC 825 (CAN) data buses used in the aviation industry as well as the corresponding digital and analog inputs and outputs. These components are referred to as X-modules and are also able to execute models that are based on the Functional Mock-up Interface (FMI) standard or a manufacturer-specific standard. This second scenario was tested in the research project.

Simulation models are usually present as source code. The model’s input and output values, parameter names, and data types are defined in a separate file. To execute such a model, it is translated into code that can run on the target platform. The runtime environment on an X-module makes sure that a model’s input and output values are available to the entire test bench via the global DDS data pool. The flexible distribution of the simulation models to the X-modules in the test bench opens up completely new possibilities for test bench design: it is now easy to scale the computer systems to meet the required load or to form function clusters for use in different scenarios. The distribution of the simulation models to the individual X-modules is then determined solely by the test bench configuration.
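How a runtime environment might wrap such a model can be sketched as follows. The interface dictionary stands in for the separate description file; the model body, the variable names, and the plain dict standing in for global DDS storage are all hypothetical:

```python
# Sketch of a runtime wrapper around a simulation model whose interface
# is declared separately from its code. Everything here is illustrative.

interface = {                       # stands in for the description file
    "inputs":  {"valve_cmd": float},
    "outputs": {"flow_lpm": float},
}

def model_step(inputs):
    """Toy model body: flow is proportional to the valve command."""
    return {"flow_lpm": 42.0 * inputs["valve_cmd"]}

def run_step(pool, prefix):
    """Read declared inputs from the pool, step the model, write the
    declared outputs back so the whole test bench can see them."""
    ins = {name: pool[f"{prefix}/{name}"] for name in interface["inputs"]}
    for name, value in model_step(ins).items():
        pool[f"{prefix}/{name}"] = value

pool = {"oxygen_valve/valve_cmd": 0.5}   # stands in for global DDS storage
run_step(pool, "oxygen_valve")
print(pool["oxygen_valve/flow_lpm"])     # 21.0
```

Because the wrapper only consults the declared interface, the same model can run on any X-module; only the prefix in the pool, i.e. the configuration, changes.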

The connection to real LRUs is established via digital data buses as well as via digital and analog inputs and outputs. In this case, the X-module must possess the corresponding hardware interfaces and make these available via DDS. While it is relatively easy to perform this mapping for the inputs and outputs, accessing the data buses is more difficult. To do this, the user data is extracted from packets received from the bus and mapped as individual data elements in DDS. Transmission, by contrast, requires assembling the message packets based on predefined rules, and the X-module must comply strictly with the timing rules for bus communication.
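Splitting a received frame into individual data elements, and reassembling it for transmission, might look like this; the signal names and the frame layout (two signed 16-bit values and one 32-bit status word) are purely illustrative:

```python
# Sketch of extracting individual data elements from a bus frame payload
# and reassembling it for transmission. Layout and names are invented.

import struct

SIGNALS = [("cabin_alt_ft", "!h"), ("rate_fpm", "!h"), ("status", "!I")]

def unpack_frame(payload):
    """Return each signal by name so it can be published individually."""
    values, offset = {}, 0
    for name, fmt in SIGNALS:
        (values[name],) = struct.unpack_from(fmt, payload, offset)
        offset += struct.calcsize(fmt)
    return values

def pack_frame(values):
    """Reassemble the frame from individual data elements for transmission."""
    return b"".join(struct.pack(fmt, values[name]) for name, fmt in SIGNALS)

frame = pack_frame({"cabin_alt_ft": 8000, "rate_fpm": -300, "status": 1})
print(unpack_frame(frame))
# {'cabin_alt_ft': 8000, 'rate_fpm': -300, 'status': 1}
```

What this sketch cannot show is the hard part mentioned in the text: on a real bus, the packed frame must also be sent at exactly the right times.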

Figure 4: The DDS communication concept with publisher and subscriber

A central control instance monitors the individual test bench components, permitting targeted access to X-modules and the simulation modules running on them. The central controller supervises test bench start-up and provides convenient access to the entire system’s logging information. While the control commands are coded as XML and transferred via HTTP (XML-RPC), the current states of all the X-modules present are available via DDS. During operation, the test bench components are controlled using just a few simple commands such as INIT, LOAD, RUN and HOLD.
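Python’s standard library is enough to sketch such an XML-RPC control channel. The command names follow the article; the single `command` method, the port handling and the internal state model are assumptions made for this example:

```python
# Sketch of an XML-RPC control channel for an X-module. The method name
# "command" and the state dict are invented; the commands are from the text.

import threading
from xmlrpc.server import SimpleXMLRPCServer
from xmlrpc.client import ServerProxy

state = {"mode": "IDLE"}

def command(cmd):
    """Accept one of the simple test bench commands and update the mode."""
    if cmd not in ("INIT", "LOAD", "RUN", "HOLD"):
        raise ValueError(f"unknown command: {cmd}")
    state["mode"] = cmd
    return state["mode"]

server = SimpleXMLRPCServer(("127.0.0.1", 0), logRequests=False)
server.register_function(command, "command")
threading.Thread(target=server.serve_forever, daemon=True).start()

# the central controller drives the module via plain HTTP/XML
proxy = ServerProxy(f"http://127.0.0.1:{server.server_address[1]}")
print(proxy.command("INIT"), proxy.command("RUN"))   # INIT RUN
server.shutdown()
```

In the architecture described above, this channel carries only commands; the resulting module states are published back over DDS rather than polled via RPC.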

The individual modules are synchronized with one another to coordinate the execution time of the simulations distributed across the X-modules. Depending on the required precision, synchronization is performed via either Network Time Protocol (NTP) or Precision Time Protocol (PTP) (IEEE 1588).

Continuous Configuration from a Single Source
The test bench receives its configuration data via a central configuration instance. The data is present in a configuration file (Figure 6) in XML format. This file describes which X-modules belong to the test bench, how the simulation modules are distributed across them, which data buses are available, and how the test bench variables are defined and linked to one another in global DDS storage. In this way, the logical test bench structure is mapped to real hardware resources. During the test bench start-up phase, the individual participants read the configuration file, which is accessed using the WebDAV network protocol.

To set up a configuration, it is necessary to take into account not only the different technical scenarios but also the different sources of information. The first requirement is the configuration information for the real LRUs in an aircraft and/or the participating subsystems. The so-called Aircraft Interface Control Documents (ICDs) describe the data connections used by each networked electronic device in the aircraft: the number and names of the data buses and of the inputs and outputs, the addressing, and the data contents of the digital data buses. Because the format and scope of these descriptions are manufacturer- and aircraft-specific, a new input format was defined for entering the data into the test bench.
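A configuration file of the kind described might look like the following excerpt. The element and attribute names are invented for illustration, since the actual schema is not published here:

```xml
<!-- Hypothetical excerpt of a test bench configuration file.
     All element and attribute names are illustrative. -->
<TestBench name="oxygen_demo">
  <XModule id="xm1" host="192.168.1.10">
    <!-- which simulation model this X-module executes -->
    <Model name="oxygen_valve" source="oxygen_valve_model"/>
    <!-- which real bus interfaces it makes available via DDS -->
    <Bus type="ARINC429" channel="RX1"/>
  </XModule>
  <!-- linking of test bench variables in global DDS storage -->
  <Link publisher="xm1/oxygen_valve/flow_lpm"
        subscriber="cockpit_display/o2_flow"/>
</TestBench>
```

Swapping such a file for another one changes which X-module runs which model and how the variables are linked, without touching any test scripts.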

Figure 5: Test bench architecture for the joint use of virtual and real LRUs

The required Aircraft ICDs are then combined into an overall description that is available as a Meta ICD (MICD) in a standardized format for creating configurations. This MICD is usually available to the aircraft manufacturer and contains only the subset of information that the test bench needs. The simulation models employed are an important part of the overall system, and configuration information must be integrated for them as well. The supplier of a model delivers not only the code itself but also an electronic description of each model’s interfaces. This makes it possible to do without a real aircraft LRU in a test bench set-up; instead, testers can work with a functionally equivalent simulation model executed in an X-module. The system integrator captures the logical relations within the test bench in a configuration file known as the “Test Bench User Input” (TBUI). It defines which X-module executes which model and how the data buses, inputs, and outputs are distributed to the available hardware resources; it also resolves name conflicts and defines additional variable connections. All this information is merged into an overall configuration that all system participants use at system start.

Figure 6: Role-based configuration concept for the test bench

Therefore, it is very easy to use different configuration files to create different system set-ups without having to adapt any test scripts. The defined test procedures interact with the test bench via the variables in global DDS storage and offer the tester direct access to real inputs and outputs. They also make it possible to modify input values at the individual simulation models. Thanks to this approach, test sequences that were defined for a virtual LRU at an early development stage can also be used, with few restrictions, for the real LRU.

Conclusion
The use of standardized components for configuring and executing tests in large-scale test bench installations greatly simplifies the set-up of these systems. In a joint research project with an aircraft manufacturer and other partners, Vector has specified a system of this type and implemented it in practice. The functional capability of the test bench architecture was successfully demonstrated in a highly realistic oxygen supply application, which made use of seven X-modules executing 27 simulation modules. Vector contributed an X-module based on its proven CANoe development and test tool to the demonstrator.


Mr. Tischer has worked in product development and product management at Vector Informatik GmbH since 1997. As Head of Group, he is responsible for developing analysis, simulation and test tools for communication networks in the aerospace sector.

Notes on the Sponsored Program

The project on which this report is based was supported by funding from the German Federal Ministry for Economic Affairs and Energy under funding code 20Y1301S. The author is responsible for the content of this publication.

 
