
Arm Cortex-M4 Powers Precise Autonomous System

Thursday, September 27th, 2018

Using an open-source software stack and low-cost hardware to quickly build a reliable, inexpensive, and precise autonomous system.

The Case for More Accurate and Robust Navigation
Suppose you are designing an inspection drone for an offshore oil platform, where human inspection is dangerous and precision flight is key. In your developmental testing, you realize that the flight control is becoming unstable as the drone flies near and under the steel structure. You determine this loss of control is due to GPS outages and magnetic interference from the bridge, both of which negatively impact the drone’s attitude control system.

Drones must navigate harsh environments accurately.

You realize that you need to upgrade the drone's navigation system to be more robust and accurate. After investigating, you find two unattractive alternatives:

Bad Option 1: Invest in a much higher-performance navigation solution, such as a fiber-optic gyro based Inertial Measurement Unit (IMU). This option adds more than $20K per vehicle, and the larger IMU's added weight forces you to move to a much larger drone to carry it.

Bad Option 2: Adjust the navigation solution to reduce trust in the magnetic sensor when near the bridge, and use a constrained dead-reckoning algorithm while directly under it. However, your current navigation solution is a black box, and you can't modify its operation enough to do this. You would need to design a solution from scratch, which means a large hardware and software development effort.

Avoiding a Custom Project and a Costly Buy
Many real-world navigation and control problems start like this. System developers are quickly confronted with a choice between a large custom project and a very high-accuracy "overkill" purchase. However, developers facing navigation challenges like the one described above can now consider a solution that requires neither a from-scratch development effort nor a budget-busting buy. OpenIMU is a low-cost hardware and open-source software stack from ACEINNA for simplifying and modernizing navigation system development (Figure 1).

Figure 1: The OpenIMU Full Stack Solution uses a robust, professional-grade, customizable open-source software stack and easy-to-integrate hardware and includes thorough documentation and simulation.

In the case of the inspection drone discussed earlier, a systems developer using OpenIMU can get started immediately, even without purchasing hardware. First, the developer runs simulations. Then they modify pre-existing, well-tested algorithms in the OpenIMU distribution. In addition to runnable source code and a simulation environment, full documentation, including all of the algorithms' math, is available online.
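As a concrete illustration of the kind of algorithm change described in Bad Option 2, the sketch below shows a simplified heading update that dials back the magnetometer correction whenever the measured field strength looks implausible, as it would near a large steel structure. This is not ACEINNA's code; the function names, thresholds, and gains are hypothetical, and a production OpenIMU application would implement the equivalent logic in its embedded algorithm.

```python
import numpy as np

EXPECTED_FIELD_UT = 50.0   # assumed local magnetic field strength, microtesla
FIELD_TOLERANCE_UT = 10.0  # deviation beyond which the magnetometer is distrusted

def mag_trust(mag_xyz_ut):
    """Return a 0..1 weight for the magnetometer based on field-strength plausibility."""
    deviation = abs(np.linalg.norm(mag_xyz_ut) - EXPECTED_FIELD_UT)
    return float(np.clip(1.0 - deviation / FIELD_TOLERANCE_UT, 0.0, 1.0))

def update_heading(heading_rad, gyro_z_rps, mag_xyz_ut, dt, base_gain=0.02):
    """Propagate heading with the gyro, then blend in the magnetometer heading,
    weighted by how trustworthy the magnetic measurement looks."""
    heading_pred = heading_rad + gyro_z_rps * dt                # gyro dead reckoning
    mag_heading = np.arctan2(-mag_xyz_ut[1], mag_xyz_ut[0])     # simplified, level flight
    gain = base_gain * mag_trust(mag_xyz_ut)                    # reduce trust near steel
    error = np.arctan2(np.sin(mag_heading - heading_pred),
                       np.cos(mag_heading - heading_pred))      # wrapped heading error
    return heading_pred + gain * error

# Example: one 100 Hz update with a distorted field reading (trust drops to zero)
print(update_heading(0.5, 0.01, [70.0, 5.0, 30.0], 0.01))
```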

On the embedded development side, the OpenIMU tool chain is an easily installed extension to the popular open-source code editor Visual Studio Code. Simply install VS Code, then search for and install the ACEINNA extension. Once it is installed, the OpenIMU home page appears and you can start a new navigation project. The easiest way to get started is to import a Custom IMU Example. These examples are ready-to-deploy OpenIMU applications that demonstrate different levels of navigation algorithm complexity.

What About Hardware?
Of course, simulation and coding alone are not going to get the drone flying, so it is time to introduce some hardware. Applications built with the OpenIMU stack run directly on low-cost OpenIMU hardware, the first of which is the OpenIMU300. A proven nine-axis IMU module, the OpenIMU300 is fully calibrated for errors over temperature in ACEINNA's factory. Calibration on a three-axis rate table reduces errors such as non-linearity and misalignment by up to 10x compared with other low-cost IMUs.

The OpenIMU300 also features multiple serial ports for integrating external GPS receivers and other types of sensors, a SPI port, and a powerful 168MHz Arm Cortex-M4 floating-point CPU. The baseline OpenIMU300 typically delivers better than 5 deg/hr of drift, and ACEINNA is working on higher-performance modules. A developer kit comes with a JTAG pod, an evaluation board, and a precision test fixture to help developers go from code to test in minutes.

Now that we have our navigation application ready and running on hardware, we are ready to collect some data and see if it all works. The OpenIMU solution has real-world data collection and logging needs covered as well. A combination of Python scripts and a developers' website, ACEINNA Navigation Studio, makes collecting and analyzing data a breeze (Figure 2). There is no need to write a custom driver for your new algorithm: a configurable JSON file lets the data logging and graphing tools work even when your OpenIMU application has customized the data packets/messages to your own requirements.

Figure 2: Live custom IMU data is captured and logged with ACEINNA Navigation Studio.
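To give a flavor of how a packet description drives the host-side tools, the sketch below decodes a custom binary payload from a JSON field list. The field names and layout here are hypothetical and do not reproduce the actual OpenIMU JSON schema; they only illustrate the pattern of describing a packet once and letting generic logging code handle it.

```python
import json
import struct

# Hypothetical packet description; the real OpenIMU JSON schema differs.
PACKET_DEF = json.loads("""
{
  "name": "a1",
  "payload": [
    {"name": "timeITOW", "type": "uint32"},
    {"name": "roll",     "type": "double"},
    {"name": "pitch",    "type": "double"},
    {"name": "yawRate",  "type": "float"}
  ]
}
""")

TYPE_TO_STRUCT = {"uint32": "I", "float": "f", "double": "d"}

def decode_payload(packet_def, payload_bytes):
    """Unpack a binary payload according to the JSON field list (little-endian)."""
    fmt = "<" + "".join(TYPE_TO_STRUCT[f["type"]] for f in packet_def["payload"])
    values = struct.unpack(fmt, payload_bytes)
    return dict(zip((f["name"] for f in packet_def["payload"]), values))

# Example: build and decode a synthetic 24-byte payload
sample = struct.pack("<Iddf", 123456, 1.5, -0.75, 0.01)
print(decode_payload(PACKET_DEF, sample))
```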

Summary
The OpenIMU open-source development chain is a big step forward, modernizing how navigation for autonomous vehicles is developed. Convenient, free, modern software tools combined with low-cost navigation hardware make a powerful combination for many upcoming applications. ACEINNA is committed to continuing development of this family, with both lower-cost and higher-performance modules on the horizon. More videos, blogs, and a community forum are also forthcoming resources that aim to democratize advanced navigation algorithms.

Resources:

OpenIMU – The Open-Source GPS/INS Platform: https://www.aceinna.com/openimu
ACEINNA Visual Studio Marketplace: https://marketplace.visualstudio.com/items?itemName=platformio.aceinna-ide


Mike Horton is the CTO of ACEINNA where he is responsible for corporate technology strategy and inertial-navigation related technology development. Prior to ACEINNA, Mike Horton founded Crossbow Technology, a leader in MEMS-based inertial navigation systems and wireless sensor networks, with his advisor the late Dr. Richard Newton while at UC Berkeley. Crossbow Technology grew to $23M in revenue prior to being sold in two transactions (Moog, Inc and MEMSIC) totaling $50M. In addition to his role at ACEINNA, Mike is active as an angel investor with two Silicon Valley based angel groups—Band of Angels and Sand Hill Angels. He also actively mentors young entrepreneurs in the UC Berkeley research community. Horton holds over 15 patents and earned a BSEE and MSEE from UC Berkeley.

 

The Urban Sprawl of the IoT

Wednesday, April 4th, 2018

As the Internet of Things proliferates, connected cities become smart cities. The complexity of multiple wireless protocols requires a sound embedded operating system to coordinate hardware and software.

Since the industrial revolution in the mid-18th century, people have moved from rural communities to cities in search of work and as they settled, the cities have sprawled outwards. Over 80% of the world’s population are expected to be living in cities by 2025 (Mordor Intelligence), and these cities are expected to be connected, intelligently managed, efficient environments, with smart utility metering, buildings, healthcare, and transportation systems.

Figure 1: Smart homes are expected to grow 24% CAGR 2018-2023. (Mordor Intelligence)

Part of city life is the ‘busy-ness’ with a transport infrastructure to take people from A to B by road or rail, and the network of entertainment, workplaces, and interests in the urban areas. This connectivity is mirrored by the Internet of Things (IoT), which is expected to be one of the main drivers of the connected, or smart, city, enabling increased levels of connectivity in the workplace, the home, and the wider urban areas.

Considering that many estimates put the number of connected devices at trillions, not billions any more, the need to ensure a secure data path for all of those devices is critical.

Connectivity Changes
At the moment, there are a lot of units working in a closed loop system, explains Simon Ford, Senior Director, Product Marketing, Mbed OS, Arm. “These will now be required to be connected and need flexibility to use the right one for the right job,” he says.

Arm's open-source embedded OS, Mbed OS, is designed for the IoT. Built for Arm Cortex-M microcontrollers, it includes a Real-Time OS (RTOS) and drivers for sensors and I/O devices. For connectivity, there are multiple standards to support, such as LTE Cat-M1 (which uses 1.4MHz of bandwidth) and Narrowband IoT (NB-IoT), a Low Power Wide Area Network (LPWAN) technology, or, for non-cellular or private networks, the Long Range (LoRa) low-power wireless technology or Bluetooth Low Energy. In smart cities, LoRa is likely to be used to monitor long-range infrastructure, such as an industrial estate. Wi-Fi and Bluetooth are likely to be confined to buildings, with low-power Wide Area Network (WAN) standards, such as NB-IoT and LoRaWAN 1.1 (January 2018), covering the wider urban area.

Connectivity has to be ubiquitous. At the same time, connectivity must be secure and able to be managed remotely. And software has to be updated as and when updates are released. “Connectivity unlocks capability,” observes Ford. “We are now moving from a closed loop controller to an interconnected Internet world. This means changes to connectivity and security threat models.”

“Security is not optional,” warns Chris Porthouse, Vice President and General Manager of Device Services at Arm. One focus for Mbed OS is to enable multi-standard connectivity, and to support whatever standard is appropriate. The other, says Porthouse, is to ensure that users are confident the network is secure and know where data is coming from.

"Arm is in a unique position," says Ford, "offering device-level security and able to enhance it at the hardware level with TrustZone [Arm's system-on-chip security technology], for example."

Platform Security Architecture (PSA)
The Platform Security Architecture (PSA) will be a common industry framework for secure, connected devices. The Arm initiative is supported by semiconductor companies, such as Microchip, Nuvoton, NXP, Silicon Labs, STMicroelectronics, and Renesas; software companies, including Arm Keil, IAR Systems, and Green Hills Software; security specialists, for example Data I/O, Symantec, and Trustonic; systems companies, such as Cisco and Sprint; and Cloud companies, including Amazon Web Services (AWS), Microsoft Azure, Google Cloud, and Arm Mbed Cloud. It will define different threat models, explains Porthouse. For example, a kettle has a different threat model to a smart meter, but both have data that needs protecting.

Figure 2: Arm’s PSA initiative is supported by giants from semiconductor vendors to Cloud platform providers. (Image: Arm)

The aim is to define security functions and guarantee a platform that allows developers to deploy the IoT in the target application. PSA will establish security parameters that developers can implement without having to think about them, continues Porthouse. "Security won't be a hard decision, although there may be trade-offs, but not at the expense of timescales."


The Mbed OS is designed to help companies take the IoT to the target application. Its ecosystem of partners' offerings provides hardware that is RF-certified, for example, together with proven security and Cloud services. With half of the Mbed OS contributions made by partners, the open-source project can help customers master the new techniques that the IoT demands, which makes it a huge and complex area of design. As applications and their appropriate wireless technology standards continue to emerge, this complexity is set to escalate.

Partners Build Cities
Arm has been working with its partner Advantech to bring Mbed-based elements to Advantech's traditional base of IoT customers. Advantech's sensor nodes, IoT gateways, and WISE-PaaS/EdgeSense software for smart edge computing are built on Arm's Mbed OS and Mbed Cloud technologies.

Figure 3: Advantech’s starter kit introduces developers to LoRa and Mbed OS.

 

A LoRa module and starter kit includes Advantech's WISE-1510 LPWAN LoRa IoT node, the WISE-DB1500 development board with a built-in temperature and humidity sensor, the WISE-ED20 debug board, Arm Mbed OS support, accessories, and a quick-start guide (Figure 3).

For long-range IoT applications such as smart lighting and smart metering, Advantech offers the WISE-1510 M2.COM sensor node, based on the standard M.2 form factor, which is commonly used in IoT applications to combine wireless connectivity and computing performance. M2.COM was created by Advantech, Arm, Bosch, Texas Instruments, and Sensirion, and is based on Mbed OS. The node has an Arm Cortex-M4 processor and a LoRa transceiver, and it suits smart city deployments thanks to its low power consumption and its interfaces for sensors and Input/Output (I/O) control, including Universal Asynchronous Receiver/Transmitter (UART), Inter-Integrated Circuit (I2C), General Purpose I/O (GPIO), Pulse Width Modulation (PWM), and Analog to Digital Conversion (ADC).


Caroline Hayes has been a journalist covering the electronics sector for more than 20 years. She has worked on several European titles, reporting on a variety of industries, including communications, broadcast and automotive.

 

AI Technology and Employment

Monday, December 18th, 2017

Adding context to current beliefs about the pros and cons of AI that emerged in a survey.

Few topics divide opinion as significantly as artificial intelligence (AI). While organizations such as Amazon, Baidu, Facebook, Google, and many others are investing significant resources in acquiring technologies, as well as developing and enhancing AI solutions, others have a different perspective. In fact, serial entrepreneur Elon Musk described AI technology as ‘summoning the demon,’ believing that the biggest threat the world faces is creating something that will eventually rival human intelligence.

Ever since the changes introduced by the first machines, humans have worried that they will be replaced by automation and their jobs will disappear. For at least a century before John Maynard Keynes coined the term ‘technological unemployment’ in the 1930s, the issue of machines replacing human labor was debated.

Arm wanted to get a sense of prevailing consumer sentiment around the promise of artificial intelligence, so it commissioned a global opinion survey in early 2017. When it comes to employment, there is no question that AI technology will have an impact on jobs and, while history is no guarantee of the future, until now there has been no large-scale reduction in employment due to technological advancement. However, in our survey, 30 percent of respondents cited 'fewer or different jobs for humans' as the biggest drawback of a future in which AI technology impacts human life.

Figure 1: Respondents’ views on the greatest potential drawbacks of AI.

Transition or Reduction?
This may seem to be an unnecessary concern based on history, and there is some recognition that artificial intelligence may drive a shift in employment as opposed to a reduction. This reflects the experience from previous times when a technological leap forward has increased automation, leading to improved efficiency and productivity. The result of this has almost always been a significant reduction in the cost base in employment sectors, bringing lower prices in key markets. This has, in turn, led to greater disposable income within the general population and has ultimately given birth to new market sectors, thereby creating new employment opportunities.

We can find a good example of how a technological leap drove a shift in employment patterns across several sectors by looking to the agricultural revolution. In the early 1800s, agriculture was the major employment sector. However, at this time the introduction of machinery meant that some farm workers lost their jobs. Some of these workers migrated to the newly created agricultural machine industry and began building the very machines that had cost them their jobs.

A secondary effect was greater efficiency, which meant that food became cheaper to produce, therefore prices began to fall significantly. People were spending significantly less of their disposable income on food, freeing up money to spend on other goods and services. The increased demand for these goods and services boosted employment in these sectors as well.

Until now it has been true that many tasks that are difficult for humans are easy for computers, and vice-versa. In general, computers were seen to be good with routine tasks, such as complex calculations, whereas humans retained an edge in recognizing faces or objects. AI technology will inevitably change the balance here and, due to machine learning techniques, is already making significant headway into facial and pattern recognition.

We were interested to learn our respondents’ opinions in this area. In our poll we asked them whether humans or AI machines would be better in certain industries, particularly with regard to safety and efficiency.

Figure 2: The survey indicated that in all cases, respondents believe that humans will do the job better.

The results showed that respondents tend to believe that humans will be better placed to do all the tasks we asked about, safely and efficiently. In general, the more repetitive the work, such as heavy construction or package delivery, the higher the AI machine scored. Obviously, the responses were based on the respondents’ knowledge of AI technology today, so answers may well change in the near-to-mid future.

Clearly, the rise of artificial intelligence will impact some sectors more than others. In time, some jobs may all but disappear, while others will see almost no impact and, as we have already discussed, new employment types will almost certainly be created. In our poll, we asked respondents to indicate the three types of jobs that were, in their opinion, the most likely to be affected.

Most Vulnerable
While there was a broad spread of opinion, the manufacturing and banking sectors were both high in the minds of our respondents as being the most under threat from AI technology. Even with these results, the responses are very much set in the ‘here and now.’ As an example, as vehicles continue their AI-driven march towards full autonomy, we would expect the number of respondents foreseeing the demise of taxi drivers to rise significantly.

However, there are already some positive applications for AI, especially in the medical field. Google’s own AI company, Deepmind, is working with the UK’s National Health Service (NHS) to use machine learning to combat blindness. By training a deep-learning algorithm with a million eye scans, experts predict this initiative could prevent up to 98 percent of the most severe causes of blindness.

IBM is now in the business of supporting the fight against cancer. Its Watson for Oncology platform can scan clinical trial data, medical journals, textbooks, and other sources and present oncology professionals with reports that suggest the most effective treatment options. In early testing, Watson gave the same recommendation as professional oncologists 99 percent of the time.

For individuals, Your.MD offers basic healthcare advice via a mobile app. Built around a chatbot, it lets users describe their symptoms and suggests conditions based on the user's personal profile and current symptoms. Natural language processing ensures a seamless user experience, while various machine-learning algorithms build a sophisticated map of the user's condition.

Figure 3: Currently, manufacturing and banking are the sectors believed to be most under threat from AI.


New Roles
In addition to transforming some areas of employment, AI itself will become a whole new industry, spawning job roles that do not exist today. A recent global study by Accenture PLC of more than 1,000 large companies already using or testing AI identified some new categories of jobs that can only be performed by humans. The first is 'trainers,' people who help translation algorithms and natural language processors reduce errors and understand the subtleties of human communication, such as detecting sarcasm.

Another new job category, according to the report, will be ‘explainers.’ These people will bridge the understanding gap between ‘black box’ AI systems and people. This becomes all the more necessary due to the EU’s General Data Protection Regulation, planned for next year; this creates a ‘right to explanation’ allowing consumers to question any algorithmic decision that impacts them.

The final category identified in the report is ‘sustainers.’ These humans will ensure that AI systems operate as expected and intended. Many companies have low levels of confidence in their AI systems at present and these ‘human sustainers’ will monitor and address any unplanned consequences—effectively providing a quality assurance function for AI.

Clearly, change is inevitable, and as people start to come to terms with the implications of AI on society in general and employment in particular, it is only natural that there will be some concerns raised. Overall, we see that the more people know about AI, the more positive their view on an AI-centric future.

Alongside the concerns about employment, our research also identified concerns about privacy and security. At Arm, security is always a key consideration as we design our AI-enabling technologies.

Despite some regional differences, the report indicated that people across the globe generally agree that artificial intelligence will have a positive impact on healthcare, transport, and the workplace, all of which will lead to a better quality of life. In fact, 36 percent of respondents believe that AI has already impacted their lives, and 61 percent think that society will become better as a result of increased automation and artificial intelligence.

As I mentioned, the global AI survey was conducted in conjunction with Northstar Research Partners. A free copy is available here.


Jem Davies is a vice president, fellow and general manager of Arm’s machine learning group. Previously, he set the technology roadmaps for graphics, video, display and imaging and was responsible for technological investigations of a number of acquisitions, including most recently that of Apical, leading to the formation of Arm’s Imaging and Vision group.

Davies has previously been a member of Arm's CPU Architecture Review Board, and he holds four patents in the fields of CPU and GPU design. He has worked in Cambridge since graduating from the University of Cambridge, ran his own business there for eighteen years, and has been at Arm for the past 13 years.

 

Delivering 1 GIOPS Per Server with ReRAM

Thursday, October 26th, 2017

Tackling server design with low-latency/low-power storage subsystems based on new storage class memory which removes the bottleneck from the compute/storage side.

Hyper-converged infrastructure (HCI) is disrupting the traditional storage and data center markets because it creates a way for organizations to consolidate their systems to cut back on costs and management. According to Gartner, “The market for hyper-converged integrated systems (HCIS) will grow 79 percent to reach almost $2 billion in 2016, propelling it toward mainstream use in the next five years. HCIS will be the fastest-growing segment of the overall market for integrated systems, reaching almost $5 billion, which is 24 percent of the market, by 2019.”

What Is HCI?
Hyper-converged infrastructure (HCI) is a framework that combines storage, computing, and networking into a single system so organizations can reduce the complexity of their data centers through virtualization and software defined storage and networking.

Pain Points of Hyper-Convergence Infrastructure
With the evolution of HCI come new challenges. One challenge with hyper-converged infrastructure is that it changes the scale-out dynamic: the basic elements of compute and storage can no longer scale independently. As a result, scale-out is achieved by adding a new node, which introduces new bottlenecks.

Hyper-converged applications require multi-million IOPS storage performance due to their intensive I/O workloads. And yet current SSD technologies based on NAND Flash memory introduce significant latency, at 100µs to 200µs for a read I/O. To overcome these limitations, IT architects developed techniques such as massive parallelization and distributed workloads, splitting storage accesses across multiple NAND Flash components. As servers move toward hyper-convergence, it will become very challenging to hide the inherent limitations of NAND Flash from the application level.

ReRAM Revolutionizing Data Storage Technology
New technologies such as Resistive RAM (ReRAM) are coming onto the market that will slash latency to less than 10 microseconds, enabling new products such as ultra-fast NVMe SSDs. Latencies will drop even further if designers use the memory bus as the physical interface rather than PCIe. Storage devices that sit on the memory bus will be NV-DIMMs, providing sub-microsecond latencies.

While the substantial performance and power benefits of ReRAM can address the storage part of the equation, this new product category will require a fresh look at the CPU/compute side as well. System resources will continue to be consumed by the storage I/O. A new architecture will be necessary to ensure compute capabilities meet application and network interface needs while keeping power consumption low and bandwidth high.

Figure 1 illustrates an example of a bottleneck on the compute/storage side, where most of the resources are used for storage I/O, leaving too little computing capability for the application and the network interface. About 3.3 cores of a high-end CPU, running full time, are required to manage 1 MIOPS with NVMe devices, which is an expensive power and cost budget. A typical 2U server integrates 24 SSDs, which leads to 18 MIOPS using 750K-IOPS SSDs, assuming the application requires a high queue depth. Therefore, 18 × 3.3 ≈ 60 cores are required just for I/O management, which is 75% of the resources of a high-end 4-CPU architecture, as Figure 1 shows. If the IOPS need to go over the network, the related throughput is in the range of 18M × 4096 × 8 ≈ 600 Gbit/s, which corresponds to fifteen 40GbE ports.

Figure 1: 18 MIOPS Flash-based NVMe storage system

The use of an Arm® RISC CPU provides enough computing capability for I/O management while keeping power consumption low and leaving enough bandwidth for the application and network driver. The combination of the Crossbar ReRAM, accessed through NVMe or NV-DIMM storage devices, and Arm RISC CPUs successfully addresses both IOPS and power consumption. Assuming that an Arm RISC CPU with 64 cores and 4 memory channels will be available in a reasonable power budget, we can estimate that a hyper-converged node can reach 12.5 MIOPS within a 100W power budget (Figure 2). Because accessing a DIMM interface is simpler than accessing a PCIe device, we can expect the storage software driver to execute faster than the NVMe driver, leading to 1 MIOPS per core for I/O management. Thanks to the small form factor, about 20 such nodes could be integrated in a 2U chassis, leading to a 250 MIOPS 2U hyper-converged server with a 2kW power budget.

Figure 2: 12.5MIOPS Hyperconverged Node

In this case, getting the IOPS over the network represents a very high bandwidth, 250M × 4096 × 8 ≈ 8 Tbit/s, even though only 18% of the CPU resources are used for I/O management.

Coming back to the user level, in a virtualization use case such a server can execute 83,000 Virtual Machines (VMs) in parallel at 3,000 IOPS per VM. A current flash-based 2U hyper-converged server, integrating 24 2.5″ SSDs at 750K IOPS per SSD, can execute only 6,000 VMs, so 14 such servers are needed to run the same number of VMs. The Crossbar ReRAM provides about a 15x improvement in I/O performance density (up to 125 MIOPS/U) and performance efficiency (125K IOPS/W) at the server level.

Figure 3: 83000 VMs on hyperconverged servers
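The sizing figures above are easy to reproduce. The short script below simply restates the article's own assumptions (3.3 cores per MIOPS for NVMe, 4KB I/Os, 750K IOPS per SSD, 12.5 MIOPS per ReRAM node, 3,000 IOPS per VM) as arithmetic; it is a back-of-envelope check, not a model of any particular product.

```python
# Back-of-envelope restatement of the article's numbers.
IO_SIZE_BITS = 4096 * 8                 # 4KB I/O expressed in bits

# Flash/NVMe 2U server: 24 SSDs at 750K IOPS each, 3.3 cores per MIOPS
flash_miops = 24 * 0.75                 # 18 MIOPS
flash_cores = flash_miops * 3.3         # ~60 cores just for I/O management
flash_net_gbps = flash_miops * 1e6 * IO_SIZE_BITS / 1e9   # ~590 Gbit/s (~15 x 40GbE)

# ReRAM-based hyper-converged nodes: 12.5 MIOPS per node, 20 nodes per 2U chassis
reram_miops = 12.5 * 20                 # 250 MIOPS per 2U chassis
reram_net_tbps = reram_miops * 1e6 * IO_SIZE_BITS / 1e12  # ~8 Tbit/s

# Virtualization view: 3,000 IOPS per VM
vms_flash = flash_miops * 1e6 / 3000    # ~6,000 VMs
vms_reram = reram_miops * 1e6 / 3000    # ~83,000 VMs

print(f"Flash 2U: {flash_miops:.0f} MIOPS, ~{flash_cores:.0f} cores for I/O, "
      f"~{flash_net_gbps:.0f} Gbit/s, ~{vms_flash:.0f} VMs")
print(f"ReRAM 2U: {reram_miops:.0f} MIOPS, ~{reram_net_tbps:.1f} Tbit/s, ~{vms_reram:.0f} VMs")
```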

For the same VM number, users will benefit from a reduced TCO due to a more integrated solution delivering the same performance with less space, less power, and fewer software licenses.

Table 1

R&D efforts are required on the compute/network side to get network interfaces into the few-Tb/s range, and on the software side to reduce the storage driver execution time. The Crossbar ReRAM also enables smaller I/Os (512B or even lower, see Table 1), which can be used in big data analytics and OLTP database applications, leading to 1 GIOPS per U in the server.


Sylvain Dubois joined Crossbar, Inc. in 2013 as Vice President of Strategic Marketing and Business Development. With over 17 years of semiconductor experience in business development and strategic product marketing, he brings a proven ability to analyze market trends, identify new, profitable business opportunities, and create product positioning in sync with market demands to drive market-share leadership and business results.

Dubois holds a Master of Science in Microelectronics from E.S.I.E.E. (Paris), the University of Southampton (UK), and Universidad Pontificia Comillas (Spain).

Security Principles for IoT Devices Using Arm TrustZone Security Extension

Friday, October 6th, 2017

Examining four “must do’s” for tackling the complex challenges IoT device security poses.

The Internet of Things (IoT) market is expected to enable and deploy around 50 billion connected devices by 2020. These IoT devices will be deployed across the board to cater for multiple use cases, for example home or building automation, automotive, and the diverse embedded segment: gateways, set-top boxes, security cameras, industrial automation, digital signage and healthcare, to name a few.

Figure 1: The four principles for securing IoT devices using Arm® TrustZone® security technology

The predicted scale of the IoT poses a challenge for developers: securing connected endpoint devices from a myriad of physical and remote software attacks. The Mirai botnet, for example, launched DDoS attacks through IoT devices such as digital cameras and DVR players. Among the many requirements for securing an IoT device, key principles include device identity, trusted boot, secure over-the-air updates, and certificate-based authentication.

1. Protect your IoT device identity with hardware isolation
Device identity involves being able to identify the endpoint device as an authentic device, rather than another device masquerading as legitimate by assuming its identity. If and when a malicious device assumes the identity of a legitimate device, it can launch different types of attacks. When a new device is activated and contacts the server for the first time, the server needs to verify that it is indeed one of the target devices and not a hacker's computer. One security measure is to configure a unique device-shared secret before the device is shipped, or to have a certification authority (CA) issue an identity certificate to the device for anonymous attestation.

This type of security function is typically executed in a protected and isolated environment. The Armv7-M and Armv8-M architectures can host such an environment using an MPU or Arm® TrustZone® security technology. Both provide enhanced security hardening to host protected and trusted execution partitions. The TrustZone security extension, however, offers more security robustness than MPU-based protected partitions and is the industry-preferred security technology, with over 15 billion TrustZone-enabled Cortex™-A based devices shipped to date.
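As a host-side illustration of the shared-secret approach mentioned above, the sketch below shows a simple HMAC challenge-response that a server could use to confirm a device's identity at first contact. It is a conceptual example only: on the device, the secret and the HMAC computation would live inside the TrustZone- or MPU-protected partition, and a production design would add provisioning infrastructure and replay protection.

```python
import hmac
import hashlib
import secrets

# Per-device secret, provisioned at the factory and shared with the server out of band.
# Generated here only so the example is self-contained.
device_secret = secrets.token_bytes(32)

def device_respond(challenge: bytes, secret: bytes) -> bytes:
    """On the device: answer the server's challenge with an HMAC keyed by the secret.
    On real hardware this would execute inside the protected partition."""
    return hmac.new(secret, challenge, hashlib.sha256).digest()

def server_verify(challenge: bytes, response: bytes, secret: bytes) -> bool:
    """On the server: recompute the HMAC and compare in constant time."""
    expected = hmac.new(secret, challenge, hashlib.sha256).digest()
    return hmac.compare_digest(expected, response)

# One activation exchange
challenge = secrets.token_bytes(16)            # server sends a fresh random challenge
response = device_respond(challenge, device_secret)
print("device authenticated:", server_verify(challenge, response, device_secret))
```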

2. Assure device integrity with a trusted boot sequence
Trusted boot is another key function. Implemented on the IoT device, it assures that the boot and runtime software is authentic and has not been tampered with since it was installed or provisioned on the device. Implementing trusted boot on a platform requires a hardware-based root of trust, typically a security processor or security enclave (SE) that can host a protected and secured environment. Arm TrustZone is an example of a low-cost virtual SE capability that can host a secure partition.
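The core check of a trusted boot stage is to verify the next image against a signature, using a public key anchored in the root of trust, before handing control to it. The sketch below illustrates only that check, using Ed25519 from the Python cryptography package as a stand-in; it is not boot ROM code, and real implementations add certificate chains, anti-rollback counters, and measured-boot reporting.

```python
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import (
    Ed25519PrivateKey,
    Ed25519PublicKey,
)

def verify_firmware(image: bytes, signature: bytes, root_key: Ed25519PublicKey) -> bool:
    """Return True only if the image was signed by the key anchored in the root of trust."""
    try:
        root_key.verify(signature, image)
        return True
    except InvalidSignature:
        return False

# Demo: sign an image with the vendor key, then verify it as the boot stage would.
vendor_key = Ed25519PrivateKey.generate()      # held by the device vendor, not the device
firmware_image = b"\x00" * 1024                # stand-in for the real image
signature = vendor_key.sign(firmware_image)
root_public_key = vendor_key.public_key()      # provisioned into the device at manufacture

if verify_firmware(firmware_image, signature, root_public_key):
    print("image authentic: jump to firmware")
else:
    print("verification failed: stay in recovery")
```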

3. Securely fix vulnerabilities with an over-the-air software update
The Mirai botnet is a good example of why firmware/software patching of IoT endpoint devices matters: it allows zero-day security vulnerabilities identified in the field to be fixed. It is practically impossible to know and address all vulnerabilities upfront, as hackers are always finding new ways to detect and exploit them. It is an ongoing race between the device manufacturer and hackers. The fundamental requirement to mitigate attacks is to have on-device capabilities that allow software/firmware images to be patched in a secure way to address known vulnerabilities.

Many government agencies across the globe have identified this as an international security issue and are working to put forth regulations and guidelines for device patching. The National Telecommunications and Information Administration (NTIA) is seeking to define firmware patching capabilities for IoT devices, to increase consumer awareness, and to encourage consumers to ask for support of these capabilities on the devices they purchase. The secure software update capability is typically implemented in a protected partition, with either an Arm MPU or TrustZone technology providing security robustness.

4. Ensure trusted communication with certificate-based authentication
With so many connected devices, it is inevitable that these devices will communicate with each other. A very good example is the automotive industry, with vehicle-to-vehicle and vehicle-to-infrastructure (V2X) communication. During secure communication, it is critical that devices send and receive information only with authenticated peers, since acting on information from an unauthenticated source could have serious ramifications.

There are several prevalent authentication schemes, for example user ID/password, one-time password (OTP), server unique ID, and message payload. The message payload authentication token is one way to facilitate the authentication of devices with certificates issued by a CA, by embedding them in the communication packets. This is another trusted function that must be executed in a highly secure environment that cannot be tampered with. Arm TrustZone technology or an MPU-based capability provides this secure environment, and it can be further supported with a security-hardened SE solution.

Robust, Hardware-Enforced Security for IoT—No Matter the Device
In summary, for the IoT to scale, it must do so with security as its foundation. The challenge, however, is that developers must protect IoT devices from a variety of attack types, across a range of devices, from low-cost, low-power, and energy-harvesting designs to high-end devices. Therefore, ensuring strong security principles in these four areas from the start of a project is key:

  1.  Protect your IoT device identity with hardware isolation
  2.  Assure device integrity with a trusted boot sequence
  3.  Securely fix vulnerabilities with an over-the-air software update
  4.  Ensure trusted communication with certificate-based authentication

Figure 2: Arm TrustZone security technology provides seamless, developer-friendly, hardware-enforced security protection

It must be said that these are not the only ways to protect a device; however, these principles set a strong foundation for other measures to be built upon. So, whether it is implemented in low-cost, low-power MCUs or in high-end devices for richer IoT experiences, TrustZone security technology provides seamless, developer-friendly, hardware-enforced protection for isolated and protected environments that exceeds the strength of MPU-only protection.


Suresh Marisetty is a Security Solutions Architect at Arm, covering the emerging market segments of Automotive, IoT, and Embedded. He has over 25 years of industry experience, including over 10 years of security architecture expertise, driving end-to-end SoC security solutions from concept to product delivery. He has delivered about half a dozen silicon solutions for Automotive, Embedded, Mobile, Desktop, and Server platforms at Intel. His prior contributions include the server platform machine check architecture for RAS and its enablement as an industry standard through the UEFI SIG. He holds about 25 patents, has several publications, and is a co-author of the book Beyond BIOS: Developing with UEFI.

 

TrustZone Basics

Thursday, September 28th, 2017

What are the factors that lend an IoT security tool its prowess?

Most would agree that IoT devices need to be secure or they can easily become a Distributed Denial of Service (DDoS) nuisance at best, or at worst, a gateway to hacking on an internal network. Many also know that an IoT product should integrate layers of security, starting with the silicon design at the component level. Other layers in the IoT chain include communication hops (i.e., device-to-edge-to-cloud), various protocol layers, users, and the application. At the end of the chain, on the edge of the network, we find the IoT devices themselves.

In general, IoT devices include a microprocessor (MPU) or microcontroller (MCU) that carries most of the application’s logic, implemented using firmware. Typical IoT devices perform tasks within the firmware. Such tasks include establishing secure communication channels, storing data, and identifying and performing authentication with other devices or a public cloud. IoT devices also protect data (at rest and in transit), authenticate users, and so forth. Technologies such as digital certificates, cryptographic tools, and keys establish the underlying security that’s needed to run services such as digital data signing and verification, Transport Layer Security/Secure Sockets Layer (TLS/SSL) and encryption or decryption.

OS-only Level Security Limitations

At the hardware level, silicon providers might implement security features in an MPU's hardware, such as a crypto accelerator or a silicon-based true random number generator, or offer external components such as a Trusted Platform Module (TPM) or hardware crypto authenticators. OpenSSL tools are also commonly used in conjunction with these technologies. Nevertheless, a large number of developers still rely on the primary operating system (OS), such as Linux, to supply baseline security. However, Linux only allows developers to segregate security assets and operations under root access. This OS-only level of security is becoming less than acceptable due to the inherent risk of exposing keys and certificates, which in turn leads to compromised systems. Using hardware security technologies implemented by silicon vendors, in conjunction with tools such as OpenSSL, is a much better option. However, this approach also has limited benefits when primary security is derived from reliance on Linux user- or kernel-mode isolation.

Figure 1: A proven embedded security technology, Arm® TrustZone® makes possible dual-OS systems and employs hardware as well as software isolation.

It's clear that as the line separating hardware and software blurs, developers need tools that make it easier to store and manipulate cryptographic keys and certificates and to perform cryptographic operations. Arm® TrustZone® is one option that adds value to the embedded security picture. A mature technology, TrustZone has been used to secure mobile phones, set-top boxes, payment terminals, and more. TrustZone has facilitated secure transactions, maintained secure identities, and enabled Digital Rights Management (DRM), among other things. TrustZone enables a system with dual OSes; applications get both software and hardware isolation, anchored in a root of trust and protected by secure boot techniques.

With the flip of a bit, the code can run in secure mode or non-secure mode. Peripherals, memory, memory controllers, buses, and so forth, can be configured to segregate clearly defined access to certain resources only in secure mode. Terms like “normal world” and “secure world” define a paradigm that offers a powerful tool for IoT security from the silicon on up. A Trusted Execution Environment (TEE) within this paradigm refers to the “secure world” OS and the assets that are needed to facilitate “normal world” applications when they need to execute “secure world” functions.

For example, a Linux implementation running with OpenSSL in the normal world can have the OpenSSL engine implemented in the TEE. In this case, only the TEE (i.e., the secure world OS) can load drivers and make them accessible for various hardware security functions. Using this approach, cryptographic keys are stored in a secure world keystore. The TEE can also generate keys and certificates, but most importantly, all cryptographic operations are executed in the secure world. In this example, not even the Linux kernel can access the isolated security features or keys. In fact, end users with kernel access and rights need not even be aware of the TEE, much less user applications in the normal world.
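As a rough analogy for that split (purely illustrative, not OP-TEE or any real TEE API), the sketch below keeps a private key inside a 'secure world' object and exposes only high-level operations to 'normal world' callers; a real TEE enforces the same property with hardware isolation rather than a language boundary.

```python
from cryptography.hazmat.primitives import serialization
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

class SecureWorldKeystore:
    """Stand-in for a TEE-backed keystore: the private key never leaves this object."""

    def __init__(self):
        # In a real TEE the key would be generated and sealed inside the secure world.
        self._key = Ed25519PrivateKey.generate()

    def sign(self, message: bytes) -> bytes:
        # Only the result of the operation crosses the secure/normal world boundary.
        return self._key.sign(message)

    def public_key_bytes(self) -> bytes:
        return self._key.public_key().public_bytes(
            serialization.Encoding.Raw, serialization.PublicFormat.Raw
        )

# "Normal world" code can request signatures but cannot read the private key.
keystore = SecureWorldKeystore()
signature = keystore.sign(b"sensor reading: 21.4 C")
print(len(signature), "byte signature; key material never exposed to the caller")
```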

Able to meet very complex requirements, the TEE can monitor the OS, applications, processes, or memory regions in the normal world and apply remediation when intrusions or malware are detected. By isolating critical processes from the normal world OS and applications, the TEE prevents complete system compromise. When a normal world application is compromised or hacked, a critical process controlling, for example, a connected door lock is unaffected; the hacker cannot interfere with operations whose critical components reside in the secure world. Most use cases that require strong IoT security fall into one of three categories: communication, storage, and authenticity/integrity verification. A TEE can improve security for all of these use cases.

Easing Adoption

In practical terms, the dual OS paradigm requires developers to build applications with both domains in mind, with critical portions relegated to the secure world. The TEE and TrustZone are still something of a new frontier for many developers. Thus, there are ongoing efforts to ease adoption. Mature TEE-related products and tools can help by providing Application Programming Interfaces (APIs) and software libraries that run from the normal world, abstracting completely the existence of the secure domain. In this way, application development tools can greatly reduce the complexity of the dual OS paradigm, allowing developers to do what they do best.


Adrian Buzescu is VP Product Management at Sequitur Labs, bringing to the team more than 17 years of product engineering experience within the telecom sector with companies such as Vodafone and T-Mobile.

Buzescu has a proven record of simplifying technical complexity by initiating and taking to market multiple mobile services and products: Wi-Fi Calling, Family Finder, Visual Voicemail, Enhanced Caller ID, MMS, Network Address Book, Remote Phone Unlock, M2M eSIM, and more. In his role as director of Product Development at T-Mobile, he developed and managed partnerships with infrastructure, handset, SIM card, and silicon OEMs, driving the technical implementation of technologies such as IMS/GBA and TrustZone/TEE across the entire handset portfolio and enabling new lines of business and first-in-the-industry products.

Logic Relaxes: Q&A with Eta Compute

Sunday, September 10th, 2017

Delay insensitivity and other steps to power saving.

Editor’s Note: Teaching logic to be less sensitive has its benefits, as EECatalog learned when we spoke with Paul Washkewicz. In June Eta Compute announced a reference design for its EtaCore Arm® Cortex®-M3, and Washkewicz, the company’s co-founder and vice president, spoke with us about Eta Compute’s delay insensitive asynchronous logic, or DIAL, design IP and related topics. “Process variations, temperature variations, voltage variations—we are delay insensitive to all those perturbations,” Washkewicz says. He adds, “the logic was specifically created to be insensitive to those and the delays that are caused by those uncertainties.” Edited excerpts of the interview follow.

EECatalog: What’s something our readers should know about the EtaCore Arm® Cortex®-M3 reference design?

Paul Washkewicz, Co-Founder and VP, Eta Compute


Paul Washkewicz, Eta Compute: Some of the development kits out there are overdesigned, with three or four different types of energy harvesting and three or four different applications, which makes for a very interesting development environment. But we found if you want to spend time on the application and not on the hardware, if that hardware is large and bulky, you have to “squint” quite a bit more to visualize how that hardware is actually going to get deployed in the field as a sensor, whereas ours is much closer to the form factor that is required.

We took a different approach and developed a very small sensor node, about an inch by two inches. And it includes the energy harvesting. There’s a solar cell, the Arm core, ADC, RTC, DSP and sensors, and a connection to your computer where you can download your programs onto that little board. You can develop an application on it but, not only that—you can also envision it being deployed as it is.

Figure 1: A 12-bit, 200 kilo sample per second A to D converter developed by Eta Compute. It runs at 1 or 2 microwatts.


EECatalog: On the “history of digital logic” timeline, where does Eta Compute come in?

Washkewicz, Eta Compute: What we have done at Eta Compute is create a new logic design methodology. A long time back, when digital logic was just coming out, companies like Synopsys created design compilers that converted RTL into gates optimized for power, performance, and area. We came up with a new paradigm for low-power embedded systems that extends standard tools to generate digital logic that runs at a much lower voltage and therefore much lower power.

And the basics of running at lower voltage included creating a logic library that can operate, in our case, all the way down to .25 volts and which targets power-constrained embedded systems.

When we set out to create this technology, our thinking was, “since we’re creating something revolutionary, we don’t want to be revolutionary in too many areas,” so we worked with Arm to get an Arm license for the Cortex-M3 and subsequently went to an Arm Cortex-M0.

We needed to change part of the logic design process, but at the same time, we wanted to work with a standard platform that everyone knows well. Because once we’re done converting the Arm core into our technology, it runs like an Arm, and it has to use the familiar Arm development environment and the Arm tools. If an engineer has software for the Arm Cortex-M3 already, we want that to run on our technology in exactly the same way.

EECatalog: How does that logic change look to the engineer, for example?

Washkewicz, Eta Compute: Take an engineer working on something for a coin cell battery or very small energy harvesting solar cells using an asynchronous Arm Cortex-M3. With a program that does very simple sensor fusion to calculate location, perhaps, running on that engineer’s standard design at 1.2 volts and burning a certain amount of power, what he could do is, start turning the voltage down, and he would notice power savings running that same sensor fusion algorithm. As long as it’s fast enough for whatever his application is, he can lower the voltage continuously down to .25 volts, and that software program will run the application the same as it always has, it just might run more slowly.
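To put a rough number on why lowering the supply voltage pays off, dynamic CMOS power scales roughly with C·V²·f. The snippet below is an editorial back-of-envelope illustration of that scaling only, not Eta Compute's data; actual savings depend on clock frequency, leakage, and process.

```python
# Rough CMOS dynamic-power scaling: P_dyn is proportional to C * V^2 * f.
V_NOMINAL = 1.2   # volts, the typical logic supply mentioned in the interview
V_SCALED = 0.25   # volts, the lower bound Eta Compute cites

ratio = (V_SCALED / V_NOMINAL) ** 2
print(f"Dynamic power at {V_SCALED} V is about {ratio:.1%} of the {V_NOMINAL} V figure "
      f"(~{1 / ratio:.0f}x lower) at the same clock frequency, before accounting for "
      f"frequency changes or leakage.")
```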

EECatalog: And how does power savings change how a real-world example might look?

Washkewicz, Eta Compute: Embedded systems and especially the IoT are looking for significantly lower power. For example, in the medical world they have real-time location services at hospitals such as Johns Hopkins. At such hospitals 25,000 or 30,000 sensors could be installed to sense where the patients are, where the doctors and nurses are, where the medicines are, where the heart rate monitors are, where all this equipment is. And as people and things move around the hospital, you have a centralized computer program or application that allows somebody in charge to know where these valuable assets are. Logic that conserves energy changes the situation where, say, 30,000 sensors are out there, and the batteries only last a week or two weeks, so that you must hire staff to run around figuring out which batteries are low and changing them. And on top of that you generate a lot of hazardous waste.

EECatalog: I interrupted what you were saying in answer to the digital logic timeline question, so let’s return to that.

Washkewicz, Eta Compute: With the logic change, we got power consumption down into the low single-digit microwatt range, and that is the lowest power Arm core you can find anywhere, I think by a significant margin. But if you want to do an SoC for an embedded system, you find that the analog to digital converter might dominate your power consumption.

That led our analog team off to develop what we believe is the world’s lowest power A to D converter (Figure 1)—it is a 12-bit, 200 kilo sample per second converter that also burns in the low single digits microwatts, [so] now you have an Arm Cortex-M3 running low single-digit microwatts, and you have an A to D converter at 200 kilo samples per second, 12 bits running at 1 or 2 microwatts. Now you are starting to get something that is really, really low power.

And that led us to the green blocks seen on Figure 1. Some folks wanted encryption. You always have a real time clock running. You may want a DSP. Those sorts of requirements caused us to turn to our engineering teams, who then converted what Figure 1 shows in green, using the same logic technology, making it possible to scale these down from typical 1.2 volts down to 0.25 volts, too.

What’s shown at the top of the bus in Figure 1 is all dynamically voltage scalable technology converted from standard RTL to our form of asynchronous logic.

The final step resulted from our asking, “Where do we get very highly efficient power management for .25 volts that scales all the way up to 1.2 volts?” It didn’t exist, so our analog engineers have designed power management that can supply the various rails to the SoC, including the important one that scales down to .25 all the way up to 1.2 volts—we get 85 to 95 percent efficiency to generate that .25 volts.

For everything Figure 1 shows in green, every spare bit of power savings has been squeezed out of all of those blocks—with the result being an SoC that is world class in power consumption.

EECatalog: Are you seeing hesitancy among potential customers to embrace this disruption?

Washkewicz, Eta Compute: When you do something radically different, you get the early adopters, and then possibly the technology leaders and then the followers. We try to go right after the early adopters, and what was helpful to that effort is that it has been formally verified from an Arm synchronous implementation to our asynchronous implementation. In addition, all the test benches exist, and we have silicon evaluation boards on which they can test software and run it through its paces. For an early adopter, it makes quick work of them deciding that, yeah, this is an SoC that operates like an Arm SoC and we can scale the voltage and get the power savings. We can demonstrate that right away, right on the evaluation board.

Ultimately, these are transistors. We are not changing the process at TSMC where Eta Compute has taped out 90nm LP and 55nm ULP. There are no modifications, and the transistors are the same transistors at TSMC. Our logic gates are just configured differently. The early adopters can verify where the code operates the way they want. They can run all their test benches so that they can quickly prove that they can get the benefit of power savings in their familiar development environment.

EECatalog: Could you speak a bit about the collaborative relationship with Arm?

Washkewicz, Eta Compute: We went to Arm with this idea about two years ago. Arm has a lot of licensees, so we are one of many, but we are doing something that is unique. Arm has done some development work along these lines—different method—but along the [same] line of lowering the voltage—internally.

So they were reasonably interested to see our outcome, and supported us, as formal verification was successfully completed. If you want to call it an Arm Cortex-M3, it has to be formally verified to be an Arm product afterwards. So as long as we overcame that hurdle, they were fine with it, and they seem pretty excited about it now.

As Arm leadership has indicated, they are very interested in Arm-based sensors being deployed in billions of units, and this is one way that deployment can happen. If you have things that can [allow] batteries to last 5x or 10x longer or can run off energy harvesting, then that is the way that billions of sensors are going to get deployed. It has been a very good relationship with Arm. The good results we achieved helped, and it supports their company direction for IoT.

EECatalog: Are you agnostic with regard to cores and processors?

Washkewicz, Eta Compute: We definitely have had customers ask us about some of the other cores—from 8051 to MIPS or even RISC-V, for example. But as a small company just coming out with technology it is very beneficial for us to work with Arm, due to its large customer base. It does not matter which core you pick, or what version of the core, you’ve got a lot of engineers that you can work with to get your technology to the marketplace.

EECatalog:  Where do you see energy harvesting running into challenges?

Washkewicz, Eta Compute: One of the interesting things is that transmission of the data has a lot of power usage relative to local computation. You compute things locally and reduce the data that you have to transmit. So, one of the challenges for IoT is the transmission.

LoRaWAN and Narrowband IoT [NB-IoT], which is more based on the cellular companies—what they have done is create a protocol which is much thinner and lighter and with lower data rates [such that] you can have wide area networks for the Internet of Things that are low-powered. The steep challenge is being addressed, but it is not rolled out worldwide yet. The RF protocols and the technologies for the WAN are among the challenges for connecting the IoT.

Many folks are working on lowering power and improving the performance of the devices themselves, including us, but we look for more deployment of the wireless piece of it.

EECatalog: Please put Eta Compute in the context of an overall industry trend or trends.

Washkewicz, Eta Compute: There is a lot of buzz in the industry about machine learning, and this is dominated by big iron, big computers, running deep neural networks. But you don’t want to send all the data back to the main computer and do big data analytics and machine learning back in the big data centers. It’s helpful if you have sensor nodes deployed or wireless networks deployed locally, and it would be nice to do some local learning there.

EECatalog: Why?

Washkewicz, Eta Compute: If it’s an energy-constrained device, like most of these small sensors are, then they don’t have a lot of power, so you get dominated by transmitting data back to the home office or the cloud in order to do machine learning. If you can do computation locally, it can be augmented by big iron, but you don’t have to transmit as much data back.

I think most folks would agree with that, but up until now one of the big issues has been: you burn too much power doing the calculation remotely—you didn’t save anything. Now, however, because we have come up with this technology that is so low power, it does enable more localized computation.

Sensory and ARM Processors Enabling AI at the Edge

Thursday, May 25th, 2017

Sensory Inc., a Silicon Valley company focused on improving the user experience and security of consumer electronics through embedded artificial intelligence (AI) technologies, continues to expand its support for ARM® Cortex® processors as part of its commitment to make embedded AI more prevalent across numerous product categories. With broad ARM platform support, Sensory offers device manufacturers embedded AI-enabling solutions for on-device voice wake up, speech and natural language recognition and processing, chatbots, computer vision and voice biometrics recognition, and more, none of which require internet connectivity to function.

Sensory’s TrulyHandsfree speech recognition and wake word technology with deep neural nets is available on a wide range of ARM Cortex-M and Cortex-A based processors, deeply embedded on-chip or at the OS level. TrulyNatural enables large vocabulary speech recognition with phrase spotting techniques for natural language recognition and chatbots on-device via Cortex-A cores running at the OS level. TrulySecure provides face and voice recognition with user identification and biometric authentication via Cortex-A cores at the OS level.
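
As a rough sketch of how such an always-listening stage is commonly structured on a Cortex-M class device, the loop below runs a cheap per-frame check and only starts a heavier recognizer on a hit; a trivial frame-energy gate stands in for a real deep-neural-net keyword spotter, and none of the function names are Sensory’s SDK.

/* Generic always-listening loop: a cheap per-frame check gates a heavier
 * recognizer. A trivial energy threshold stands in for a real DNN keyword
 * spotter; none of this is Sensory's SDK. */
#include <stdio.h>
#include <stdint.h>
#include <stdbool.h>

#define FRAME_SAMPLES 160            /* 10 ms of audio at 16 kHz */

/* Stand-in for the tiny on-chip detector (a real one would be a small DNN). */
static bool frame_looks_like_wake_word(const int16_t *frame)
{
    int64_t energy = 0;
    for (int i = 0; i < FRAME_SAMPLES; i++)
        energy += (int32_t)frame[i] * frame[i];
    return energy / FRAME_SAMPLES > 4000000;   /* assumed energy threshold */
}

/* Stand-in for waking the larger recognizer (or a Cortex-A class host). */
static void start_full_recognizer(int frame_index)
{
    printf("frame %d: wake word candidate, starting full recognizer\n", frame_index);
}

int main(void)
{
    int16_t quiet[FRAME_SAMPLES] = { 0 };
    int16_t loud[FRAME_SAMPLES];
    for (int i = 0; i < FRAME_SAMPLES; i++)
        loud[i] = (i % 2) ? 3000 : -3000;      /* synthetic loud frame */

    const int16_t *frames[] = { quiet, quiet, loud };
    for (int f = 0; f < 3; f++) {
        if (frame_looks_like_wake_word(frames[f]))
            start_full_recognizer(f);          /* heavy stage runs only on a hit */
    }
    return 0;
}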

“ARM has an increasing focus on bringing intelligence from the cloud to the device,” said Laurence Bryant, vice president of personal mobile compute, ARM. “Sensory has developed some of the best demonstrations of how AI solutions, such as natural language, computer vision, biometric wake words, and more can be implemented on the edge. ARM’s IP engines are well-suited for utilizing these embedded technologies and driving compute innovation forward in this space.”

For many reasons, including growing amounts of data from on-device sensors, device and AI system responsiveness, and user privacy, we are seeing a shift away from full reliance on cloud servers for AI processing in favor of moving varying degrees of AI processing to client devices. Sensory, together with ARM, is making it possible to bring AI to the edge, allowing device manufacturers to reduce their reliance on the cloud through deeply embedded AI solutions that run on the applications processor. Sensory’s technologies can be utilized for either completely cloud-free operation or complementarily with cloud-based systems to provide a quicker, smarter and better overall AI user experience.

Embedded AI applications from Sensory that currently support ARM processor-based platforms include:

· Deep neural networks

· Speech recognition

· Always-listening wake word

· Statistical language models

· Chatbots – natural language processing

· Computer vision

· Voice biometrics

“ARM is uniquely positioned to support all of Sensory’s AI-based technologies on multiple platforms,” said Todd Mozer, CEO at Sensory. “Our common vision of AI distributed to the edge makes for a valuable offering for the ARM developer community and end users.”

Billions of CE devices feature ARM processor-based platforms, ranging from low-power wearables to smartphones and complex IoT Hubs. To date, Sensory’s deeply embedded technologies and OS-level software have shipped in more than two billion devices from some of the world’s largest CE, toy and robotics manufacturers.

Sensory provides ports of its solutions to numerous chip providers in the ARM ecosystem, including NXP, QuickLogic and ST Micro, and offers OS-level support for ARM-based applications processors running Android, Linux and other operating systems.

For more information about this announcement, Sensory or its technologies, please contact sales@sensory.com; Press inquiries: press@sensory.com

About Sensory Inc.

Sensory Inc. creates a safer and superior UX through vision and voice technologies. Sensory’s technologies are widely deployed in consumer electronics applications including mobile phones, automotive, wearables, toys, IoT and various home electronics. With its TrulyHandsfree™ voice control, Sensory has set the standard for mobile handset platforms’ ultra-low power “always listening” touchless control. To date, Sensory’s technologies have shipped in over a billion units of leading consumer products. Visit Sensory at www.sensory.com

Contact Information

Sensory Inc.

4701 Patrick Henry Drive
Bldg 7
Santa Clara, CA, 95054
USA

tel: 408-625-3300
fax: 408-625-3300
http://www.sensoryinc.com/

A Game Console for Dogs and More ARM-Based Solutions at CES

Thursday, February 4th, 2016

Designers are leveraging a wide range of ARM cores to deliver innovative products for consumer applications.

Over the past decade, ARM® Ltd. has garnered an enviable position as the majority provider of processor cores for cell phones, tablets, and many wearable products. However, walking through the aisles at this year’s International CES show demonstrated that ARM processor cores have also become the engines of choice for many additional products such as 3D printers, robots, digital pens, wireless power transceivers, pet toys, a wireless TV viewer with a pico projector, and many unusual products that show how innovative companies can be. Additionally, designers are employing cores ranging from the low-end Cortex®-M0 up to multi-core Cortex-A9 clusters to meet their cost and performance requirements.

Following are some of the novel products that caught my eye, employing various ARM cores to accomplish tasks and deliver new capabilities.

Processors Going to the Dogs

One of the more unusual products at the conference comes from CleverPet. The CleverPet Hub uses smart hardware to teach and mentally stimulate dogs through techniques drawn from advanced cognitive and behavioral science. The Hub entertains and engages dogs through puzzles that combine lights, sounds and touchpads, and rewards the dog with food or treats each time a puzzle is solved (Figure 1). The game controller uses a processor based on an ARM Cortex-M3 core to manage the light sequences, monitor the touchpads, and open and close the treat dispenser. A WiFi connection allows the owner to adapt the game’s complexity and style in real-time to match an animal’s responses and progress, thus providing the dog with new learning experiences.

Figure 1:  A game console for dogs, the CleverPet Hub uses lights, touchpads and rewards to provide mental exercises for a dog. The system employs an ARM Cortex-M3 to manage all the activities. (Photo courtesy CleverPet.)
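
As a rough illustration of the kind of logic such a Cortex-M3 class controller might run, the sketch below implements one round of a light-and-touchpad puzzle as a small state machine: check which pad was pressed against the lit pad and decide whether to dispense a treat. The pad count, retry rule, and code are illustrative assumptions, not CleverPet’s firmware.

/* Illustrative light-and-touchpad puzzle round, written as a pure state
 * machine so it can run anywhere. An assumption-based sketch, not
 * CleverPet's actual firmware. */
#include <stdio.h>
#include <stdbool.h>

#define NUM_PADS 3

typedef enum { ROUND_WAIT, ROUND_REWARD, ROUND_RETRY } round_state_t;

typedef struct {
    round_state_t state;
    int lit_pad;        /* which pad is currently lit */
    int attempts;       /* presses so far this round */
} puzzle_t;

/* Advance the puzzle by one event (a pad press).
 * Returns true if a treat should be dispensed. */
static bool puzzle_step(puzzle_t *p, int pressed_pad)
{
    if (pressed_pad < 0 || pressed_pad >= NUM_PADS)
        return false;

    switch (p->state) {
    case ROUND_WAIT:
        p->attempts++;
        if (pressed_pad == p->lit_pad) {
            p->state = ROUND_REWARD;
            return true;                 /* correct pad: open the treat dispenser */
        }
        p->state = (p->attempts < 3) ? ROUND_WAIT : ROUND_RETRY;
        return false;
    default:
        return false;                    /* ignore presses outside the wait state */
    }
}

int main(void)
{
    puzzle_t p = { .state = ROUND_WAIT, .lit_pad = 1, .attempts = 0 };
    int presses[] = { 0, 2, 1 };         /* dog tries pads 0, 2, then the lit pad */

    for (int i = 0; i < 3; i++) {
        bool treat = puzzle_step(&p, presses[i]);
        printf("press pad %d -> %s\n", presses[i], treat ? "treat!" : "no treat");
    }
    return 0;
}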

Low Cost 3D Printer

A low-cost 3D printer demonstrated by New Matter targets the educational market and home users at a cost of $399 (Figure 2). The MOD-t uses a closed-loop servo control system based on an ARM Cortex-M4 microcontroller from STMicroelectronics’ STM32 family. The microcontroller runs a calibration cycle and then handles the positioning and dispensing of the low-cost polylactic acid (PLA) filament material. The printer is controlled via a browser-based interface that runs in most modern desktop and mobile browsers. Both a WiFi interface and a USB 2.0 port allow print files in .STL or .OBJ format to be transferred to the printer. A Texas Instruments CC3100 WiFi module provides the wireless connectivity; it also has its own embedded ARM processor.

Figure 2: The MOD-t 3D printer from New Matter leverages an STM32-family microcontroller from STMicroelectronics that employs an ARM Cortex-M4 processor core to control the print head and material feed. (Photo courtesy New Matter.)
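
For readers curious what closed-loop servo control looks like in code on a Cortex-M4 class part, here is a minimal proportional-integral-derivative (PID) position loop, with a toy first-order motor model standing in for the real print-head axis. The gains, timestep, and plant are illustrative assumptions, not New Matter’s firmware.

/* Minimal PID position loop with a toy motor model standing in for a
 * printer axis. Illustrative only; gains and plant are assumptions. */
#include <stdio.h>

typedef struct {
    double kp, ki, kd;      /* controller gains */
    double integral;        /* accumulated error */
    double prev_error;      /* error from the previous step */
} pid_ctrl_t;

/* One PID update: returns the actuator command for this timestep. */
static double pid_update(pid_ctrl_t *c, double setpoint, double measured, double dt)
{
    double error = setpoint - measured;
    c->integral += error * dt;
    double derivative = (error - c->prev_error) / dt;
    c->prev_error = error;
    return c->kp * error + c->ki * c->integral + c->kd * derivative;
}

int main(void)
{
    pid_ctrl_t ctrl = { .kp = 4.0, .ki = 1.0, .kd = 0.1 };
    double position = 0.0;      /* toy axis position, mm */
    double velocity = 0.0;      /* toy axis velocity, mm/s */
    const double dt = 0.01;     /* 100 Hz control loop */
    const double target = 10.0; /* move the axis toward 10 mm */

    for (int step = 0; step < 300; step++) {
        double cmd = pid_update(&ctrl, target, position, dt);

        /* Toy first-order plant: command accelerates the axis, friction damps it. */
        velocity += (cmd - 0.5 * velocity) * dt;
        position += velocity * dt;

        if (step % 50 == 0)
            printf("t=%.2fs position=%.2f mm\n", step * dt, position);
    }
    return 0;
}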

In the robotics area, Wowwee showed off its latest creation, the Chip K9 robotic dog. Packing almost 20 sensors, as well as transmitters, receivers, infrared emitters, Bluetooth communications and mechatronics, the robot can respond to multiple commands (Figure 3). It can also interact with a smart ball that incorporates eight infrared LED sensors, a docking station for recharging, and a wireless wristband controller that has several dedicated control functions. A microcontroller based on an ARM Cortex-M4 CPU processes all the sensor data and control inputs to make the robot respond, emulating a real dog’s actions. The company expects to formally deliver the product in July at a pre-order price of $179.

Figure 3: The Chip K9 robotic dog developed by Wowwee contains many sensors, infrared emitters, Bluetooth communications, and a lot of mechatronics, all managed by a microcontroller based on the ARM Cortex-M4 processor. (Photo courtesy Wowwee.)
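
The sketch below shows one common way a Cortex-M4 class controller organizes respond-to-multiple-commands logic: a dispatch table maps command codes (from a wristband, smart ball, or app) to behavior routines. The codes and behaviors are hypothetical, not Wowwee’s protocol.

/* Hypothetical command-dispatch table for a robot toy controller.
 * The codes and behaviors are illustrative, not Wowwee's protocol. */
#include <stdio.h>
#include <stdint.h>
#include <stddef.h>

typedef void (*behavior_fn)(void);

static void behavior_sit(void)    { printf("robot: sit\n"); }
static void behavior_follow(void) { printf("robot: follow the smart ball\n"); }
static void behavior_dock(void)   { printf("robot: return to charging dock\n"); }

typedef struct {
    uint8_t     code;       /* command code received from wristband/ball/app */
    behavior_fn behavior;   /* routine that drives the mechatronics */
} command_entry_t;

static const command_entry_t command_table[] = {
    { 0x01, behavior_sit },
    { 0x02, behavior_follow },
    { 0x03, behavior_dock },
};

static void dispatch_command(uint8_t code)
{
    for (size_t i = 0; i < sizeof command_table / sizeof command_table[0]; i++) {
        if (command_table[i].code == code) {
            command_table[i].behavior();
            return;
        }
    }
    printf("robot: unknown command 0x%02X ignored\n", (unsigned)code);
}

int main(void)
{
    uint8_t received[] = { 0x02, 0x01, 0x7F, 0x03 };   /* simulated command stream */
    for (size_t i = 0; i < sizeof received; i++)
        dispatch_command(received[i]);
    return 0;
}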

Handwriting and Image Printing

Able to capture and digitize your handwriting without any special paper or writing tablets, the Digipen from Stabilo uses two ARM Cortex-M0 cores that decode the motion information collected by the acceleration, rotation rate, magnetic field, and pressure sensors in the pen (Figure 4). The pen measures position, movement, and writing pressure (up to 2048 pressure levels). Both a USB connection for charging and a Bluetooth low-energy (BLE) wireless link are built into the Digipen. The BLE link allows data exchange with any BLE-compatible device such as a tablet or smartphone. The pen translates handwritten text, numbers and figures into documents that can be saved and edited; if desired, it can also interact with the user via an app to offer direct assistance while learning and working.

Figure 4: The Digipen developed by Stabilo packs two ARM Cortex-M0 cores to handle the sensor data capture and analysis to convert the motion into digital images that the software will display on devices such as a tablet or smartphone. (Photo courtesy the author.)
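
To give a sense of the data those Cortex-M0 cores have to move over the BLE link, the sketch below packs one motion sample (acceleration, rotation rate, magnetic field, and pen pressure) into a fixed 20-byte payload, which fits within the default BLE ATT payload of 20 bytes. The field layout is an illustrative assumption, not Stabilo’s protocol.

/* Pack one pen motion sample into a 20-byte little-endian payload, sized to
 * fit a default BLE characteristic. The layout is an illustrative
 * assumption, not Stabilo's actual protocol. */
#include <stdio.h>
#include <stdint.h>

typedef struct {
    int16_t  accel[3];   /* accelerometer, raw counts */
    int16_t  gyro[3];    /* rotation rate, raw counts */
    int16_t  mag[3];     /* magnetic field, raw counts */
    uint16_t pressure;   /* writing pressure, 0..2047 (2048 levels) */
} pen_sample_t;

static void put_le16(uint8_t *dst, uint16_t v)
{
    dst[0] = (uint8_t)(v & 0xFF);
    dst[1] = (uint8_t)(v >> 8);
}

/* 9 x int16 sensor axes + pressure = 20 bytes. */
static void pack_sample(const pen_sample_t *s, uint8_t out[20])
{
    int k = 0;
    for (int i = 0; i < 3; i++) { put_le16(&out[k], (uint16_t)s->accel[i]); k += 2; }
    for (int i = 0; i < 3; i++) { put_le16(&out[k], (uint16_t)s->gyro[i]);  k += 2; }
    for (int i = 0; i < 3; i++) { put_le16(&out[k], (uint16_t)s->mag[i]);   k += 2; }
    put_le16(&out[k], s->pressure);
}

int main(void)
{
    pen_sample_t s = { .accel = {120, -45, 998}, .gyro = {-3, 18, 7},
                       .mag = {230, -510, 60}, .pressure = 1024 };
    uint8_t payload[20];

    pack_sample(&s, payload);
    for (int i = 0; i < 20; i++)
        printf("%02X ", payload[i]);
    printf("\n");
    return 0;
}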

With all the smartphones and tablets in the market, one of the biggest challenges is obtaining a printed copy of pictures taken by the phone or tablet, especially when you are not near a computer. To solve that problem, designers at Polaroid partnered with Zink Holdings to create the Zink hAppy zero ink printer (ZIP), shown in Figure 5, which connects wirelessly to Apple AirPrint-enabled applications, as well as to smartphones and tablets using built-in WiFi communications. Controlled by an ARM 926-based processor, the printer uses an ink-free process that delivers full-color, photo-quality prints on special picture rolls that come in 3/8-, 1/2-, 3/4-, 1-, and 2-in. widths.

Figure 5: A zero ink portable printer developed by Zink Holdings and Polaroid delivers full-color, photo-quality prints under the control of an ARM 926 processor. (Photo courtesy Polaroid.)

Using quad-core processors, EzeeCube and Endless Computers have both developed nicely styled computer systems for entertainment and other applications. The EzeeCube Smart Media Center is based on a quad-core Cortex-A9 processor from Freescale Semiconductor (now part of NXP Semiconductors) serving as the main CPU in the base compute unit. A modular approach to the media system divides the functions into stackable building blocks—the base unit, a 2-terabyte add-on storage module, and a Blu-Ray player module, each 6.3 inches long and about 1.65 inches high (Figure 6, top). Leveraging the resources on the Freescale chip, the base module includes 1-Gbit/s Ethernet, WiFi using 802.11n, Bluetooth 3.0, USB 2.0, HDMI output, and an optical digital audio output.

Figure 6: Two computer systems, one from EzeeCube (top) and the other from Endless Computers (bottom), both employ quad-core processor chips based on the ARM Cortex-A9 and Cortex-A5, respectively. (Photos courtesy EzeeCube and Endless Computers.)

A complete computer/media player in a nicely styled, white egg-like package, the Endless Mini from Endless Computers is based on a multimedia SoC from Amlogic (Figure 6, bottom). The SoC contains a quad-core ARM Cortex-A5 compute cluster, an ARM Mali-450 quad-core GPU, and hardware support for 1080p decoding of multiple video formats, including H.265 and H.264 (30 frames/s). I/O consists of HDMI 1.4b, multiple audio output options, a 10/100/1000 Ethernet MAC, and other system resources. The Mini runs the company’s own OS and is ready to use right out of the box—just connect a keyboard and a monitor or TV and you have a system capable of browsing the Web, creating documents, editing music, and much more, says the company.
