Creating a Strategy Around Data Integrity: Q&A with HCC Embedded’s David Brook



A call for embedded engineers to develop strategies for securely and reliably managing embedded IoT data.

Our thanks to David Brook, director of sales and marketing at HCC Embedded, which develops reusable embedded software components for flash, file systems, and communications. Brook recently shared his insights on security, the IoT, and related topics.

EECatalog: What should the infrastructure look like for an organization that has adopted a software life-cycle process appropriate to the risk the Internet of Things has introduced to smart metering (to cite one example)?

David Brook

David Brook, HCC Embedded: Process is the key to all development that targets quality. The only method engineering knows for reducing the risk of failure and the number of software defects is to adopt a risk-appropriate software life-cycle process. This has been proven in aerospace, industrial, medical, and transport applications for decades. Without process, quality is partly left to luck, and given the number of variables in software development, the chances of success are fairly low. Process needs to govern all aspects of development and must take the product from conception to end of life. That means having both a development process and a maintenance process. The development process is characterized by specifying the requirements and writing test cases that map to all of those requirements: every test case must exist for a requirement, and every requirement must be covered by a test case. This can operate at a high level, for example "the security of customers' data must be preserved," all the way down to the detailed design level. All levels need to be traceable and auditable. Of course this is too much for some product developments, but it is important to define the risk and then use a process appropriate to that risk. Without this step, the risks are obvious.

For example, to an embedded engineer a smart meter looks conceptually similar to other embedded applications: it has an embedded processor running input and output functions to collect information and control an application, flash memory to store subscriber and usage data, and some kind of communications interface. However, the risk assessment of such a device and its components needs to ensure that the data, which has real value, is stored in a fail-safe manner and protected from unauthorized access. If the meter data were exposed or hacked, energy bills could be tampered with and customers' personal data could be open to identity theft. Such risks of data loss and exposure are not addressed simply by "careful" development or software testing. The lesson of data-centric failures such as Heartbleed and Apple's SSL "goto fail" bug is that ultimate responsibility lies with adopting a software life cycle appropriate for such valuable data.
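
The fail-safe storage requirement Brook describes can be made concrete. The sketch below shows one common pattern in plain C: each committed record is written to alternating A/B slots with a sequence number and a CRC, so a power loss mid-write can never destroy the last good copy. The flash primitives, record layout, and addresses are hypothetical illustrations, not HCC's actual file-system API.

/*
 * Minimal fail-safe storage sketch for meter data, assuming a hypothetical
 * flash driver (flash_read/flash_write) and record layout. Each commit goes
 * to the slot that does NOT hold the newest valid record, so an interrupted
 * write never corrupts the last committed copy.
 */
#include <stdint.h>

#define SLOT_A_ADDR 0x0000u
#define SLOT_B_ADDR 0x0100u

typedef struct {
    uint32_t seq;        /* monotonically increasing commit counter  */
    uint32_t kwh_total;  /* example payload: cumulative energy usage */
    uint32_t crc;        /* CRC-32 over seq and payload              */
} meter_record_t;

/* Platform-specific flash primitives, assumed to exist elsewhere. */
extern int flash_read(uint32_t addr, void *buf, uint32_t len);
extern int flash_write(uint32_t addr, const void *buf, uint32_t len);

static uint32_t crc32_calc(const void *data, uint32_t len)
{
    /* Straightforward bitwise CRC-32 (polynomial 0xEDB88320). */
    const uint8_t *p = (const uint8_t *)data;
    uint32_t crc = 0xFFFFFFFFu;

    while (len--) {
        crc ^= *p++;
        for (int i = 0; i < 8; i++)
            crc = (crc & 1u) ? (crc >> 1) ^ 0xEDB88320u : (crc >> 1);
    }
    return ~crc;
}

static int record_valid(const meter_record_t *r)
{
    return crc32_calc(r, sizeof(*r) - sizeof(r->crc)) == r->crc;
}

/* Find the newest valid record; returns its slot (0 = A, 1 = B) or -1. */
static int newest_slot(meter_record_t *out)
{
    meter_record_t a, b;
    int a_ok, b_ok;

    flash_read(SLOT_A_ADDR, &a, sizeof(a));
    flash_read(SLOT_B_ADDR, &b, sizeof(b));
    a_ok = record_valid(&a);
    b_ok = record_valid(&b);

    if (!a_ok && !b_ok)
        return -1;                      /* nothing committed yet */
    if (a_ok && (!b_ok || a.seq > b.seq)) {
        *out = a;
        return 0;
    }
    *out = b;
    return 1;
}

/* Commit a new reading without touching the newest valid record. */
int meter_store(uint32_t kwh_total)
{
    meter_record_t cur, next;
    int slot = newest_slot(&cur);

    next.seq       = (slot < 0) ? 1u : cur.seq + 1u;
    next.kwh_total = kwh_total;
    next.crc       = crc32_calc(&next, sizeof(next) - sizeof(next.crc));

    return flash_write(slot == 0 ? SLOT_B_ADDR : SLOT_A_ADDR,
                       &next, sizeof(next));
}

A real device would layer authentication, encryption, and wear management on top of this, but the same principle, never overwriting the only valid copy, underlies most fail-safe storage schemes.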

Figure 1: The secure storage and communication of embedded IoT data is critical in order to protect these devices from becoming vulnerable points in larger networks. While current security protocols, such as TLS, are regarded as secure if used within their design parameters, adopting a risk-appropriate software life-cycle process should be a requirement.

EECatalog: If the use of formal verification methods "catches on" beyond the sectors where it is already used (aerospace, for example), could finding enough individuals to perform such verification, or to judge how well machines are performing it, become a problem?

David Brook, HCC Embedded: Normal economic processes would take over: if you need more quality engineers, then investments will have to be made. That would not be a bad thing in the current climate, where very little of the revenue from all the products produced finds its way back to software engineering. The economics have meant that silicon vendors want software to be free, and as a result many low-quality software solutions have been dumped on the market, devaluing software engineering in general. For the well-being of the world, it would be good to invest more of the income generated by IoT products back into engineering the products themselves. The logical conclusion is that you would get better quality, which sounds like a healthy feedback loop, but in the current market this is not happening.

EECatalog: How do you see the insurance industry influencing the actions enterprises take to protect data that is stored locally or communicated over the Internet?

David Brook, HCC Embedded: I am not sure it is possible for a company to insure against a security breach or leakage of data. A more likely driver of change would be U.S.-style class-action suits that demonstrate corporations have not taken the handling of personal data seriously. Proving this, in the context of most IT systems, would not be difficult. At a simple level, the vast majority of system administrators responsible for installing both hardware and software have little to no awareness of the processes that would be required to verify that their systems are fit for the intended purpose. Do most system administrators receive from management a detailed paper describing the high-level requirements of the IT system and the standards they must employ for the type of work the company is doing? Does it specify things like, "We are protecting customers' personal data, so any system that communicates this data must reach X level of quality"? I would consider this to be the basics.

EECatalog: What will be the thorniest non-technical problems that could thwart system-level collaboration and the reining in of freestyle coding practices, and how can those problems be addressed?

David Brook, HCC Embedded: While you cannot rein in freestyle coding (it is the basis of 99 percent of everything we have today), creating a strategy around data integrity will require developers to challenge conventional practices, and the engineering must demand collaboration at the system level.

For good reason, freestyle coding is not used in applications such as flight control. Freestyle coding should not be looking after our personal data either. Companies must look beyond the short term and be willing to invest upfront to do things properly. A risk assessment of the impact of data loss or damage must become part of the development process. Some might argue that it is too expensive or unreasonable to implement such formal methods, but this makes no sense in the face of the almost incalculable financial costs from recent data scandals and the requirement that you assess the risk to your users. Also, as in banking, there is an economic race to the bottom that can only be controlled by legislation: people have to be made legally responsible for handling personal data.

EECatalog: What is the role of government in all this, if any?

David Brook, HCC Embedded: There is an economic race to the bottom in all of this, a race to provide services as cheaply as possible in which you lose if you do not, similar to the mortgage disasters that occurred. This can only be controlled with legislation like that which exists for automobiles, airplanes, nuclear power, and so on, which mandates that certain standards be met before companies can trade in those markets. Otherwise the traditional corporate method of stumbling from one crisis to the next is inevitable, as the security failures and breaches we are seeing on a weekly basis prove.

EECatalog: What can IT learn about preventing loss and harm from the embedded development that has taken place in some vertical markets?

David Brook, HCC Embedded: "IT" is a huge generalization, but in general IT departments have very little knowledge of process. The fundamentals of process are roughly the same everywhere:

  • Define the requirements of the system
  • Specify the implementation
  • Implement
  • Have a test plan that maps back to the requirements: every requirement should have a test case, and every test case should have a requirement (a minimal sketch of such a mapping follows this list).
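
To make the requirement-to-test mapping tangible, here is a minimal C sketch in which every test case carries the ID of the requirement it verifies. The requirement IDs, test names, and functions under test are invented for illustration and are not drawn from any particular standard or product.

/*
 * Illustrative traceability harness: each test case records the requirement
 * it covers, so the mapping can be printed and audited automatically.
 */
#include <stdio.h>

typedef struct {
    const char *req_id;   /* hypothetical requirement ID, e.g. "REQ-SEC-012" */
    const char *name;     /* test case name                                  */
    int (*run)(void);     /* returns 1 on pass, 0 on fail                    */
} test_case_t;

/* Placeholder implementations standing in for real tests of real functions. */
static int test_crc_detects_corruption(void)      { return 1; }
static int test_reject_unauthenticated_read(void) { return 1; }

static const test_case_t tests[] = {
    { "REQ-SEC-012", "crc_detects_corruption",      test_crc_detects_corruption },
    { "REQ-SEC-031", "reject_unauthenticated_read", test_reject_unauthenticated_read },
};

int main(void)
{
    int failures = 0;

    for (unsigned i = 0; i < sizeof(tests) / sizeof(tests[0]); i++) {
        int ok = tests[i].run();
        /* The printed table doubles as a traceability report for audits. */
        printf("%-12s %-30s %s\n", tests[i].req_id, tests[i].name,
               ok ? "PASS" : "FAIL");
        failures += !ok;
    }
    return failures;
}

An auditor, or a simple script, can then compare this report against the requirements document to confirm that every requirement has a test and every test traces back to a requirement.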

How to extrapolate this to IT is complicated when IT departments are using so much software and hardware that has gone through no process of verification; while that remains the case, there will always be gaping holes. The idea that it is too complex to apply these methods may be true, but what that says is that your system is inherently risky, and you have to recognize that. If you want to mitigate risk, then you have to take the idea of process (including verification) seriously. That means including some form of validation of third-party components (which is almost everything in an IT system) and making sure not only that each component is fit for purpose, but also that the processes used to develop and maintain it are fit for purpose.

For many years, network and security RFCs and their development have belonged to the IT sector. This is problematic for IoT designers, because embedded applications are simpler and do not face the same design challenges as IT applications. On the one hand, there is a lot of software that can easily be adopted to provide basic network and security capability. On the other hand, IT lacks the benefits established by embedded development in some vertical markets. For example, industrial control devices that can affect safety have access to a mature set of functional-safety processes and standards around IEC 61508, and the basis of these functional-safety standards has been adopted in medical, transport, and many other industries where safety is relevant. If IoT devices adopt the same approach as IT-based software, they will suffer the same weaknesses.
