
The Machine Learning Group at Arm

Thursday, November 2nd, 2017

An Interview with Jem Davies, Arm Fellow and Arm’s new VP of Machine Learning.

Arm has established a new Machine Learning (ML) group. To put this in context, machine learning is a subset of AI, and deep learning is a subset of ML. Neural networks are a way of organizing computational capabilities that is particularly effective at delivering the results we now see from machine learning. With machine learning, computers “learn” rather than being explicitly programmed: the machine is fed an extensive data set of known-good examples of the behavior the computer scientist wants it to reproduce.
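
To make that idea concrete, here is a minimal sketch in Python using the Keras API in TensorFlow (one of the frameworks mentioned later in this interview). It trains a tiny network to classify handwritten digits from labeled examples; the dataset and layer sizes are illustrative and not Arm-specific:

    import tensorflow as tf

    # A labeled dataset of "known-good examples": digit images paired
    # with the correct answers.
    (x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
    x_train, x_test = x_train / 255.0, x_test / 255.0  # scale pixels to [0, 1]

    # A small neural network: layers of weights that training adjusts.
    model = tf.keras.Sequential([
        tf.keras.layers.Flatten(input_shape=(28, 28)),
        tf.keras.layers.Dense(128, activation="relu"),
        tf.keras.layers.Dense(10, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])

    # "Learning" = fitting the weights to labeled examples, rather than
    # hand-writing rules for what each digit looks like.
    model.fit(x_train, y_train, epochs=5)
    print(model.evaluate(x_test, y_test))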


Figure 1: Deep learning, using Neural Networks (NN), attempts to model real life with data using multiple processing layers that build on each other. Examples of algorithms integral to ML include Convolutional Neural Networks (CNNs), Recurrent Neural Networks (RNNs), and others. (Credit: Arm, Jem Davies)

Arm has published some of its viewpoints about Artificial Intelligence (AI) online.

According to Jem Davies, Arm Fellow and Arm’s new VP of Machine Learning, Client Line of Business, machine learning is already a large part of video surveillance for crime prevention. Davies’ prior role as general manager and Fellow of Arm’s Media Processing Group was a natural segue into ML, as Graphics Processing Units (GPUs) play a primary role in accelerating the computational algorithms behind ML. ML requires large amounts of good data and compute that is fast at processing repetitive algorithms; accelerators such as GPUs, and now FPGAs, are used to offload CPUs so that the entire ML process is accelerated.

Davies is the kind of good-humored, experienced engineer whom everyone wants to work with; his sense of humor is just one tool in his arsenal for encouraging others with an upbeat attitude. I had an opportunity to meet with Davies at Arm TechCon. Edited excerpts of our interview follow.

Lynnette Reese (LR): Artificial Intelligence (AI) as a science has been around for a very long time. Which improvements in technology do you think contributed most to AI’s recent maturation? Can it be attributed to the low cost of compute power?

Jem Davies, Arm

Jem Davies: Really, it’s the compute power that’s available at the edge. In the server, there’s no real change, but the compute power available at the edge has been transformed over the last five years, and that’s made a huge difference. What’s fired up interest in the neural network world is the availability of good-quality data. Neural networks are a technique that’s more than 50 years old; what we’ve got now is the training data: good-quality, correlated data. The application that drove it initially was image recognition. In order to train a neural network to do image recognition, you have to have vast quantities of labeled images. Where are you going to find those? As it turns out, Google and Facebook have all of your pictures. You’ve tagged all of those pictures, and you clicked on the conditions that said they could do what they wanted with them. The increasing capability of computing, particularly in devices, has led to the explosion in data.

LR: You said that the explosion of data is the impetus for machine learning, and this is clear with image recognition, perhaps, but where else do we see this?

Davies: The computational linguists are having a field day. Nowadays we have access to all sorts of conversations that take place on the internet, free and easy to get at. If you want to work out how people talk to each other, look on the web. It turns out that they do it in all sorts of different languages, and it’s free to take. So, the data is there.

LR: So, applying successful ML to any problem first requires good data?

Davies: If you haven’t got the data, it’s difficult to train a neural network. People are working on that; there is research into training with much smaller amounts of data, which is interesting because it opens up training at the device level. We are doing training on-device now, but in a relatively limited way. Ultimately, you shouldn’t need six trillion pictures of cats to accomplish cat identification.
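
One established way to get useful results from a small data set, and a reasonable illustration of the point Davies is making, is transfer learning: reuse a network pretrained on millions of generic images and fine-tune only a small final layer. The sketch below, in Python with Keras, is a hedged example of the general technique, not Arm’s on-device training pipeline; the folder name and layer choices are hypothetical:

    import tensorflow as tf

    # Reuse features learned from millions of generic images.
    base = tf.keras.applications.MobileNetV2(
        input_shape=(224, 224, 3), include_top=False, weights="imagenet")
    base.trainable = False  # freeze the pretrained features

    model = tf.keras.Sequential([
        tf.keras.layers.Rescaling(1.0 / 127.5, offset=-1),  # MobileNet expects [-1, 1]
        base,
        tf.keras.layers.GlobalAveragePooling2D(),
        tf.keras.layers.Dense(1, activation="sigmoid"),  # cat / not-cat
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy",
                  metrics=["accuracy"])

    # "cat_photos/" is a hypothetical folder holding a few hundred labeled
    # images in two subfolders (cat/, not_cat/) -- far from six trillion.
    train = tf.keras.utils.image_dataset_from_directory(
        "cat_photos", label_mode="binary",
        image_size=(224, 224), batch_size=32)
    model.fit(train, epochs=3)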

LR: In your Arm talk about computer vision last year, you said there were 6 million CCTV cameras in the U.K. What do you imagine AI will be doing with CCTV images 20 years from now? For instance, do you think we will be able to combat terrorism much more efficiently?

Davies: It is being done today. We are analyzing gait and suspicious behavior; there are patterns that give people away. This is something an observational psychologist already knows: people give themselves away by the way they stand and the way they hold themselves.

LR: What about sensing beyond visual recognition? For example, could you use an IR sensor to determine whether a facial prosthesis is in use?

Davies: When engineering moves beyond the limited senses that humans possess, you can throw more information at the problem. Many tasks work much better in IR than in the visible spectrum; IR poses fewer issues with shadows, for instance. One challenge we face with a security camera is that it might have to cover an area with sun streaming down at one end of the frame and deep shadow at the other. If you are tracking someone from one side of the frame to the other, shadows can prevent you from obtaining consistent detail. Move that to the IR domain, and it gets a whole lot easier. But why stop there? You can add all sorts of other things as well. Why not add radar or microwaves? You can do complete contour mapping.

LR: So, you could get very detailed with this? Adding additional sensors can give more data.

Davies: Yes, sensor fusion is the way forward. We [humans] add together the input from all our senses all the time, and our brains sometimes tell us, “That input doesn’t fit; just ignore it.” I can turn my head one way and think I can still see someone in my peripheral vision, but actually I can’t. The spot right in front of you is the only area you can see in any detail; the rest is just your brain filling things in for you.

LR: What’s Arm doing to innovate for AI?

Davies: We are doing everything: hardware, software, and working with the ecosystem. On the hardware side, we are making our existing IP, our current processors, better at executing machine learning workloads; we are doing that for our CPUs and our GPUs. On the interconnect side, things like DynamIQ [technology] enable our partners to connect other devices into SoCs containing our IP. And there is a considerable amount of software involved, because people do not want to get deep into the details.

If you look at the Caltrain example, where an image recognition model for trains was built with data from Google Images and run on a 32-bit Arm-based Raspberry Pi, it’s becoming quite easy to apply ML techniques. The developer just downloaded things off the web; he’s not an image recognition specialist, he doesn’t know anything about neural networks, and why should he? If we [Arm] do our jobs properly and provide the software to people, it just works. It turns out there’s a lot of software involved; probably half my engineers are software engineers. The Arm Compute Library is given away as open source; it has optimized routines for all the things that go into machine learning, and it is what powers the implementations on our devices. Google’s TensorFlow, Facebook’s Caffe2, and others plug into it, so you end up with a force-multiplier effect. We do the work, give it to the ecosystem, and Facebook has now got millions of devices optimized to run on Arm CPUs and Arm Mali GPUs. As you can see, there’s a lot of hardware development, a lot of software development, and a significant amount of working with the ecosystem. Everybody’s getting involved.
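
For a sense of how little a developer needs to know to get started, here is a minimal inference sketch in Python using a freely downloadable pretrained network via Keras. The image filename is hypothetical, and this is a generic example rather than the Caltrain project’s actual code; on a Raspberry Pi, a trimmed runtime such as TensorFlow Lite would be the more common deployment choice:

    import numpy as np
    import tensorflow as tf
    from tensorflow.keras.applications.mobilenet_v2 import (
        MobileNetV2, preprocess_input, decode_predictions)

    # Download a network someone else already trained; no neural-network
    # expertise required, which is exactly Davies' point.
    model = MobileNetV2(weights="imagenet")

    # "frame.jpg" is a hypothetical image grabbed from a camera.
    img = tf.keras.utils.load_img("frame.jpg", target_size=(224, 224))
    x = preprocess_input(np.expand_dims(tf.keras.utils.img_to_array(img), 0))

    # Print the three most likely labels with their confidence scores.
    for _, label, score in decode_predictions(model.predict(x), top=3)[0]:
        print(label, f"{score:.3f}")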

LR: What can you tell me about Arm’s new Machine Learning business? Do you have any industry numbers?

Davies: Industry numbers are hard to get. What I will say is that it’s going to be huge; it’s going to affect everything we do. One of the reasons we formed the machine learning business the way we did is that it cuts across all of our lines of business.

LR: Not that you should take sides, but what would you say about using FPGAs vs. GPUs in AI?

Davies: Arm doesn’t take sides; Arm always plays both sides. FPGAs are flexible; you can reconfigure the hardware to great benefit. But that comes at the cost of much lower density and much higher power. People [physically] get burnt touching an FPGA. For us, it’s a trade-off. If you can implement something in an FPGA that’s absolutely, specifically tailored to the problem, presumably it will be more efficient. But an FPGA is bigger, much more expensive, and uses more power. Which way does the balance come down? It depends on the problem. For pretty much anything battery powered, the answer is that FPGAs are a bust; they don’t fit in small, portable electronic devices. For environments that are much bigger and less power constrained, maybe there’s a place for them. Note, however, that both Altera and Xilinx have products with Arm cores now.

LR: What would you say to young engineers starting out today who want to go into machine learning?

Davies: “Come with us, we are going to change the world,” which is precisely what I said in an all-hands meeting with my group just last week. And I don’t think that’s too grand. Look at what Arm did with graphics: we started a graphics business in 2006 with zero market share, yet our partners shipped over a billion chips containing Arm Mali GPUs last year.

LR: Billions of people are tapping on their devices using Arm’s technology.

Davies: Yes. If I look back on what we have achieved at Arm, the many hundreds of people doing this, you can easily say that Arm has changed the world.

LR: So, Arm is not a stodgy British company? Everyone needs good talent, and Arm is changing the way we live?

Davies: Absolutely, we are a talent business. Don’t tell the accountants, but the assets of the company walk out the door every night. If you treat people well, they come back in the next morning.

LR: It sounds like Arm values employees very much.

Davies: Well, we certainly try to. Like any company, we occasionally get things wrong, but we put a lot of effort into looking after people, because together we make the company. As with any company, our ambition is limited by the number of people we have; effectively, we are no longer limited by money [thanks to SoftBank’s vision in acquiring Arm].

LR: So now you can build cafeterias with free food and buses to carry employees around?

Davies: Right, I am still waiting for my private jet…but seriously, that’s what I was talking about, that we are changing the world. I think [new graduates] would want to be part of that.


Questions to Ask When Specifying a Frame Grabber for Machine Vision Applications

Tuesday, May 30th, 2017
Over the last two decades there has been a general trend toward the standardization of machine vision interfaces. Not long ago, engineers were required to purchase unique and expensive cables for every camera-to-frame-grabber combination, or even every camera-to-PC interface. In some cases, different cameras from the same manufacturer required entirely different sets of cables, resulting in costly upgrades and unsatisfied customers.
Standardization has changed that, making life easier for engineers and manufacturers alike. Camera Link became a universally accepted interface in 2001 and is still going strong. CoaXPress (2011), USB3 Vision (2013), GigE Vision (2006), and Camera Link HS (2012) are also now widely accepted interfaces for machine vision solutions. Others, such as FireWire, have been on the market for decades but are being replaced by newer interfaces. Thunderbolt®, developed by Intel in collaboration with Apple, is still on the periphery and not as widely adopted.
Nowhere has standardization been more keenly observed than in frame grabbers. Along with cameras and cables, frame grabbers are essential components in most high-end machine vision systems, where data rates exceed anything a non-frame-grabber solution can provide. They are also required when complex I/O signals are introduced into the vision system, such as quadrature encoders, strobes, and triggers of various types.
With the introduction of high-speed communication links like Ethernet, FireWire, and USB, pundits forecast the end of the frame grabber. After all, a smart digital video camera could package image data into packets and feed it directly into a PC’s communication ports, so there was no longer a need for a frame grabber, right?
Not so fast. For all their hype, direct-to-PC standards are, at best, adequate for lower-end applications. Cameras are evolving at a rapid pace and now produce megapixel images at extremely fast frame and line rates, far exceeding the 120 MB/s serial-interface limit. Vision engineers have found that frame grabbers offer advantages that continue to make them necessary, perhaps more now than ever before.
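
A quick back-of-the-envelope calculation shows why. The sketch below, in Python with illustrative (not vendor-specific) camera figures, computes the raw data rate of a modest high-speed camera and compares it with that serial-interface ceiling:

    # Rough data-rate check for a hypothetical high-speed camera.
    width, height = 2048, 1088   # pixels per frame (illustrative)
    bytes_per_pixel = 1          # 8-bit monochrome
    fps = 300                    # frames per second

    rate_mb_s = width * height * bytes_per_pixel * fps / 1e6
    print(f"camera: {rate_mb_s:.0f} MB/s vs. serial link: 120 MB/s")
    # -> camera: 668 MB/s, several times what a direct-to-PC link can
    #    carry, so a frame grabber on a fast bus such as PCIe is needed.
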
SPECIFYING FRAME GRABBERS
The component of a machine vision system that most determines the choice of frame grabber is the sensor. To find the right sensor, customers should ask themselves three questions about their machine vision solution:
  • What do I need to image?
  • How do I need to image it?
  • When do I need to image it?
Upon choosing a sensor, the customer then determines which companies offer that sensor in a package; in other words, a camera. Next, they must determine their application’s imaging requirements to choose the performance features needed in a frame grabber, including:
  • Is there a large volume of data that needs to be acquired from the camera?
  • Is high-speed data acquisition involved?
  • What about timing?
  • Are interrupts OK?
  • Can the system deal with dropped or lost frames?
  • Are there other components to consider such as encoders or strobes? Other I/O?
  • Is it a multi-camera system?
  • What is the maximum distance between the camera and the PC?
Other important questions for frame grabbers are: “How do I hook up my encoder?” and “How do I test various I/O options?” For this reason, it is important that the frame grabber be capable of hosting a number of optional components, whether a DIN-mounted I/O test fixture or a cable that brings the I/O outside the PC to connect to external equipment.
As important as these questions are, cost can be as much of a factor as performance. For some buyers, performance is everything, and they will specify the frame grabber that does precisely what is required, no matter the price. For others, price will dictate just how “exact” the system can be, or how many limitations, such as reduced bandwidth or shorter cable distances, they can live with.

SUPPLIER QUALIFICATIONS
Frame grabbers are no longer used exclusively in machine vision; today they are an essential component in dozens of industries. It is therefore important that the frame grabber manufacturer be involved in standards committees and other groups monitoring the evolution of this fast-changing technology.
It is equally critical that the manufacturer works closely with camera manufacturers, cable companies, and image processing software developers to ensure that the customer can integrate their choice of components with a specific frame grabber. BitFlow has been in the frame grabber business since 1993. Over that time, BitFlow has advanced and adopted the various machine vision interfaces to best serve the needs of its customers. The company’s frame grabber interfaces now include Camera Link, CoaXPress, and Differential, coupled with powerful software and APIs compatible with popular image processing software packages.
To learn more, please visit www.bitflow.com.
Affordable yet powerful, the BitFlow Aon-CXP is optimized for use with the newest generation of smaller, cooler-operating, single-link CXP cameras popular in the IIoT.

About BitFlow

BitFlow has been developing reliable, high-performance frame grabbers for use in imaging applications since 1993. BitFlow is the leader in Camera Link frame grabbers, building the fastest frame grabbers in the world, with industry-leading camera/frame-grabber density, triggering performance, and pricing. With thousands of boards installed throughout the world in hundreds of imaging applications, BitFlow is dedicated to using this knowledge and experience to provide customers with the best possible image acquisition and application development solutions. BitFlow, located in Woburn, MA, has distributors and resellers located all over the world, including Asia, the Americas, and Europe. Visit our website at www.bitflow.com.

Contact Information

BitFlow, Inc.

400 West Cummings Park Suite 5050
Woburn, MA, 01801

tel: 781-932-2900
fax: 781-932-2900
http://www.bitflow.com/

Consumer Robot Sales to Surpass 50 Million Units Annually by 2022, According to Tractica

Wednesday, May 24th, 2017

Rising Adoption of Household, Toy, and Educational Robots Will Drive a Five-Fold Increase in Consumer Robot Shipments in 5 Years

The consumer robotics market is undergoing a period of significant evolution. Rising demand for household, toy, and educational robots is being fueled by several key market developments: a proliferation of robotics startups with innovative products, rapidly declining prices due to lower component costs, the growth of connected smart devices as an enabler for consumer robots, and demographic trends around the world.

According to a new report from Tractica, worldwide shipments of consumer robots will increase from 10.0 million in 2016 to 50.7 million units annually by 2022. During that period, the market intelligence firm forecasts that the consumer robotics market will grow from $3.8 billion in 2016 to $13.2 billion by 2022.

“Consumer robotics is shifting from a phase of being largely dominated by cleaning robots, into robotic personal assistants or family companions,” says research analyst Manoj Sahi. “In addition, robotic toys, which, until now, were largely gimmicks, are transforming into interactive connected play devices that have virtually limitless possibilities, as well as useful educational tools as a part of science, technology, engineering, and math (STEM)-based curriculum.”

Tractica’s report, “Consumer Robotics,” examines the global market trends for consumer robots and provides market sizing and forecasts for shipments and revenue during the period from 2016 through 2022. The report focuses on crucial market drivers and challenges, in addition to assessing the most important technology issues that will influence market development. In total, 109 key and emerging industry players are profiled. An Executive Summary of the report is available for free download on the firm’s website.


About Tractica

Tractica is a market intelligence firm that focuses on human interaction with technology. Tractica’s global market research and consulting services combine qualitative and quantitative research methodologies to provide a comprehensive view of the emerging market opportunities surrounding Artificial Intelligence, Robotics, User Interface Technologies, Wearable Devices, and Digital Health. For more information, visit www.tractica.com or call +1-303-248-3000.

Contact Information

Tractica

1111 Pearl Street
Suite 201
Boulder, CO 80302
USA

www.tractica.com/