Machine Learning: Why the Time is Ripe for Rapid Prototyping



Intel® processors are part of two experiments to see if machines can learn to classify data never encountered during the training phase

A research field at the intersection of statistics, artificial intelligence, and cognitive science, machine learning is seeing an increasing number of successful start-ups, applications, and services[i]. From automatic recommendations of movies to watch, what food to order, or which products to buy, to personalized online radio and recognizing your friends in your photos, many modern websites and devices have machine learning algorithms at their core. When we use websites like Facebook, Amazon, or Netflix, it is very likely that part of the site uses machine learning models, extracting knowledge from data to improve the user experience.

Figure 1: An array of inexpensive Arduino sensors detected the gas released by bacteria as bananas went from edible to overripe to rotten in one of the authors’ two smart home applications discussed in this article. In their tests, the authors used a pre-production unit of Seco Labs’ UDOO Intel® x86 single board computer (PCB rev B), Android-x86 7.1.1 (built from git), and the latest version of UAPPI (IDE and Companion revision 2547da77[ii], belzedoo revision 8e22b40e[iii]).

Quickly Developing Two Smart Home Applications
Although humans are an integral part of the learning process, the traditional machine learning systems used in these applications are indifferent to the fact that their inputs come from, and their outputs are destined for, humans. Furthermore, even though applying machine learning algorithms to real-world problems requires embedding the algorithms in software or hardware tools of some sort, and the form and usability of these tools affect machine learning’s feasibility, research at the intersection of Human Computer Interaction (HCI) and machine learning is still rare.

At the same time, interest and activity in this intersection is growing, with the realization of the importance of making interaction with humans a central part of developing machine learning systems. These efforts include applying interaction design principles to machine learning systems, using human-subject testing to evaluate machine learning systems and inspire new methods, and changing the input and output channels of machine learning systems to better leverage human capabilities[iv].

In this article, we introduce a rapid prototyping environment we are developing that integrates machine learning algorithms into our open source extension of App Inventor, UDOO App Inventor (UAPPI)[v]; see the sidebars. Such integration makes it possible to quickly develop prototypes of applications for cyber-physical systems: systems that can interact with the physical world and also identify and classify scenarios according to values measured by sensors in the environment.

Transforming Interactions with the IoT
Machine Learning is transforming the way we interact with products and with the whole ecosystem of technologies that goes under the umbrella of Internet of Things (IoT).

So far, most of these changes occur without the awareness or conscious involvement of end users. Machine learning algorithms are designed to evolve and change based on data that is either fed into a system or collected in real time. The algorithms process the data, draw conclusions, and make inferences to tailor content for users.

However, standard machine learning algorithms are non-interactive: they take training data as input and output a model. Usually, their behavior is controlled by parameters that let the user tweak the algorithm to match the properties of the domain, for example, the amount of noise in the data.

The model construction process typically involves a training loop in which the user repeatedly chooses, is given, or produces example data, and then provides criteria or labels with which to train the system. Because this is an interactive process, such systems are collectively known as interactive machine learning (IML)[vi].
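As an illustration, the following minimal Python sketch captures this training loop, using scikit-learn’s SVC as a stand-in for whatever learner a given IML system actually uses; the feature values and labels are invented placeholders.

    # Minimal sketch of an interactive training loop: the user repeatedly adds a
    # labeled example, the model is retrained, and predictions improve over time.
    # scikit-learn's SVC is a stand-in learner; the numbers are invented.
    from sklearn.svm import SVC

    examples, labels = [], []
    model = None

    def add_labeled_example(features, label):
        """The user supplies one example and its label; retrain on all data so far."""
        global model
        examples.append(features)
        labels.append(label)
        if len(set(labels)) >= 2:          # training needs at least two classes
            model = SVC(kernel="rbf").fit(examples, labels)

    def classify(features):
        return model.predict([features])[0] if model is not None else None

    # Example interaction: two labeled examples, then a query on unseen data.
    add_labeled_example([0.1, 0.2], "off")
    add_labeled_example([0.9, 0.8], "on")
    print(classify([0.85, 0.75]))          # most likely prints "on"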

Possibilities for Non-Machine Learning Experts
Implementing machine learning solutions is hard for novices. It requires a solid math and computer science background, and to get satisfactory results, machine learning algorithms need to be tuned and their optimization parameters adjusted. But since the dawn of the new millennium, there has been evidence showing that users—even users who are not machine learning experts—can often construct good classifiers[vii].

For users who are not experts, it is challenging to get a machine learning algorithm to work properly, even at prototype-level accuracy. Taking on this challenge, we have introduced a new category of blocks in UAPPI called “Machine Learning,” which provides easy-to-use components.

Our two smart home implementations are:

  • A generic data classifier which, once trained, can distinguish rotten from good fruit, using inexpensive Arduino sensors
  • A detector that uses an off-the-shelf USB webcam to recognize faces and identify the state of some facial components, for example, determining if the eyes are open or if the person is smiling

The tests were carried out on a pre-production unit of UDOO X86 (PCB rev B), Android-x86 7.1.1 (built from git), and the latest version of UAPPI.

Data Classification with SVM
The Support Vector Machine (SVM)[x] is the first algorithm we considered for prototyping interactions. SVM is a machine learning technique used in data classification problems; its ease of use, robustness, and ability to generalize have made it one of the most widely used machine learning algorithms.

An SVM is trained by providing it with data samples, each composed of a feature vector and a label (tag). Once some examples have been provided, the machine computes a mapping function able to classify unlabeled feature vectors never seen during the training phase.
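A minimal sketch of this train-then-classify flow, written in Python with scikit-learn’s SVC (which wraps libsvm) rather than UAPPI’s own libsvm integration; the sensor-like feature values below are made-up placeholders.

    # Train an SVM on labeled feature vectors, then classify an unseen vector.
    # scikit-learn's SVC wraps libsvm; the feature values below are placeholders.
    from sklearn.svm import SVC

    # Each sample: a feature vector (e.g., gas sensor readings) plus a label.
    X_train = [[0.12, 0.30, 0.05],   # good fruit
               [0.15, 0.28, 0.07],   # good fruit
               [0.55, 0.61, 0.40],   # rotten fruit
               [0.60, 0.58, 0.45]]   # rotten fruit
    y_train = ["good", "good", "rotten", "rotten"]

    clf = SVC(kernel="rbf")          # the kernel is one of the tunable parameters
    clf.fit(X_train, y_train)        # training phase

    # Classify a feature vector never seen during training.
    print(clf.predict([[0.50, 0.55, 0.42]]))   # expected: ['rotten']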

UAPPI’s UdooSvm component enables both the training phase and the classification, using libsvm[xi] as the underlying implementation. Adhering to the separation-of-concerns principle, the component is not aware of what the feature vectors represent or how they are generated. Feature vectors can be produced by other blocks specific to the required task, for example, by analyzing a video or audio frame or by reading data from Arduino digital and/or analog pins.

Figure 2: Raw values read from MQ sensors (on Y-axis) changed over time (X-axis). The peaks on the graph correspond to working hours, when air circulates more, as individuals move around in the workspace.

In our example, we wanted to determine the ripeness level of some bananas, distinguishing among three categories: good fruit, expiring fruit, and rotten fruit. We expected bacteria to release gases during their activity on the fruit. Which kinds of gases, and at what levels, was unknown, so we connected an array of different Arduino gas sensors[xii] to the analog input pins.

We left two bananas for nine days inside a wooden box (Figure 1) with several holes to let air flow through. On top of the box we taped the gas sensor array: MQ-2, MQ-3, MQ-4, MQ-5, MQ-7, MQ-9, MQ-135. The raw data read from the sensors is shown in Figure 2.

In order to easily reproduce the experiment and to publish the full dataset, we connected the sensors to a UDOO Neo, a single-board computer running Linux, able to sample analog values via an embedded ADC.

Every five minutes the UDOO Neo read the sensor values and inserted a record into a local database. The data was then analyzed offline, using the same library (libsvm) and the same methodology that we integrated into UAPPI. All feature values were scaled to prevent features with larger numeric ranges from dominating those with smaller ones.
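The collection loop itself is simple. The following Python sketch shows the idea, assuming a hypothetical read_adc() helper that returns one raw analog reading (on the real board this would wrap the Neo’s ADC interface); the database file name and schema are also assumptions.

    # Sketch of the five-minute collection loop run on the UDOO Neo.
    # read_adc() is a hypothetical placeholder for the board's ADC interface;
    # the SQLite file name and column names are assumptions.
    import sqlite3
    import time

    SENSORS = ["MQ2", "MQ3", "MQ4", "MQ5", "MQ7", "MQ9", "MQ135"]

    def read_adc(channel):
        """Hypothetical helper: replace with the board's real analog read."""
        return 0.0

    db = sqlite3.connect("banana_gas.db")
    db.execute("CREATE TABLE IF NOT EXISTS samples (ts REAL, "
               + ", ".join(f"{s} REAL" for s in SENSORS) + ")")

    while True:
        row = [time.time()] + [read_adc(ch) for ch in range(len(SENSORS))]
        db.execute("INSERT INTO samples VALUES ("
                   + ", ".join("?" * len(row)) + ")", row)
        db.commit()
        time.sleep(5 * 60)   # one record every five minutes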

Data collected during the first six days were tagged as “good fruit.” Over the next two days, the bananas ripened more and more, and data points were tagged as “going to rot.” Subsequent data were tagged as “rotten fruit,” as the banana peels became totally black. The dataset was shuffled and split: seventy percent of the vectors were used as the training set and the remaining thirty percent as the test set. We generated the SVM models using the same kernels available in UAPPI, obtaining the results shown in Table 1.

Table 1: Creating the Support Vector Machine (SVM) models and evaluating them on the test set revealed that the accuracy of the SVM algorithm varies with the kernel function.
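The offline analysis can be sketched as follows, again using scikit-learn (a libsvm wrapper) in place of UAPPI’s own integration. The CSV layout and the list of kernels assumed to be available in UAPPI are assumptions; the scaling, shuffled 70/30 split, and per-kernel accuracy mirror the steps described above.

    # Offline analysis sketch: scale features, shuffle and split 70/30,
    # then compare SVM kernels by test-set accuracy (as in Table 1).
    # The CSV layout (7 sensor columns + 1 label column) is an assumption.
    import numpy as np
    from sklearn.model_selection import train_test_split
    from sklearn.preprocessing import MinMaxScaler
    from sklearn.svm import SVC

    data = np.loadtxt("banana_gas_labeled.csv", delimiter=",", dtype=str)
    X = data[:, :-1].astype(float)    # raw MQ sensor readings
    y = data[:, -1]                   # "good", "going to rot", "rotten"

    # Scale every feature so larger numeric ranges do not dominate smaller ones.
    X = MinMaxScaler().fit_transform(X)

    # Shuffle and split: 70% training set, 30% test set.
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, train_size=0.7, shuffle=True, random_state=0)

    # Train one model per kernel and report its accuracy on the test set.
    for kernel in ("linear", "poly", "rbf", "sigmoid"):
        clf = SVC(kernel=kernel).fit(X_train, y_train)
        print(kernel, clf.score(X_test, y_test))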

As expected, SVM accuracy varies with the kernel function. In UAPPI, users can easily experiment with different SVM kernels by selecting an item from a drop-down menu.

At this point, adding logic in UAPPI that announces the fruit state using text-to-speech and Twitter is a matter of minutes, as shown in Figure 3.

Figure 3: Every hour, the SVM classifies the actual gas levels, then uses text-to-speech and Twitter to announce that ripe fruit is ready to eat.

 

Prototyping a Smart Lamp in a Few Minutes

Google Mobile Vision      
Google Mobile Vision[xiii] is a framework for finding objects in photos and video using real-time, on-device vision technology. Its face API[xiv] makes it possible to find human faces in photos, track the positions of facial landmarks (eyes, nose, and mouth), and obtain information about the state of facial features, e.g., whether the subject’s eyes are open or whether a smile is present.

Figure 4: A smart lamp that modulates red light intensity in proportion to the smile detected via a USB webcam.

UAPPI integrates this library in a component, UdooVision. Using a USB camera, it can detect the most prominent face in the frame. The component exposes values between 0 and 1 proportional to how open the eyes are and how much the person is smiling (0 means closed eyes / no smile; 1 means fully open eyes / full smile).

Figure 5: UAPPI blocks for a lamp that can be powered up by smiling in front of a camera.

In a few minutes, it is possible to prototype a smart lamp (Figure 4). When the left eye is winked, the lamp enters “programming mode,” in which it powers an RGB LED strip in proportion to the smile. Winking again exits programming mode. The programming blocks for this lamp are shown in Figure 5.
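The following Python sketch is a rough equivalent of the blocks in Figure 5. The helpers get_smile(), get_left_eye_open(), and set_led_intensity() are hypothetical stand-ins for the UdooVision readings and the Arduino-driven LED strip, and the wink threshold is an assumption.

    # Rough code equivalent of the Figure 5 blocks. get_smile() and
    # get_left_eye_open() stand in for UdooVision's 0..1 readings;
    # set_led_intensity() stands in for driving the RGB LED strip.
    import time

    def get_smile():            # 0 = no smile, 1 = full smile
        return 0.0              # hypothetical placeholder

    def get_left_eye_open():    # 0 = closed, 1 = fully open
        return 1.0              # hypothetical placeholder

    def set_led_intensity(value):
        pass                    # hypothetical placeholder: drive the LED strip

    programming_mode = False
    WINK_THRESHOLD = 0.2        # assumed threshold for "left eye winked"

    while True:
        if get_left_eye_open() < WINK_THRESHOLD:
            programming_mode = not programming_mode   # a wink toggles the mode
            time.sleep(1.0)                           # debounce the wink
        if programming_mode:
            set_led_intensity(get_smile())            # brightness follows the smile
        time.sleep(0.1)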

Conclusion
Interactive machine learning (IML) systems enable communication from humans to machines, and the internal states of machines and systems are often updated as a result of the interaction. IML systems include those that use active learning and semi-supervised learning frameworks, as well as systems that allow humans to provide information beyond data labels. IML can be a powerful resource for designing interactions[xv], especially interactions with embedded systems in smart environments.

These interactions can concern either the use of multimodal data as input to produce desired outputs, or the generation of new patterns of interactions that exploit the huge amount of data made available by cyber-physical systems.

However, to incorporate machine learning in a design project, we must define the learning problem as clearly and fully as possible:

  • What are the input parameters we will provide?
  • What kinds of outputs are we looking for?
  • What kind of training data will exemplify the correlations between these inputs and outputs?

If we cannot reason clearly about what a particular component should do and what data it should be trained on, then most likely the machine learning algorithm won’t be able to figure this out either.

The integration of SVM in UAPPI lets novice users experience machine learning firsthand, classifying data never encountered during the training phase. The data, as in our example, can come from simple Arduino sensors. UAPPI users need only apply creativity and imagine an application scenario, because the mathematical transformations of the feature vectors—essential to obtaining satisfactory results—are completely transparent and automated.

This is just a first step in the integration of machine learning algorithms in UAPPI. Unfortunately, integrating external components into App Inventor is challenging. While App Inventor is great for prototyping Android applications, it was developed with older Android technologies that are no longer supported. The integration of Google Mobile Vision, which can be added easily to a modern Android Studio application, also raised many problems.

Making more algorithms available in UAPPI will allow users to experiment further. However, without a background in machine learning, users will proceed by trial and error, a process of validating assumptions that can be highly labor intensive.

That process may require procuring and cleaning a dataset, selecting a different machine learning algorithm, exploring the relevant features, and running a training process before any preliminary results appear. The good news is that rapid prototyping tools can help in each of these phases, empowering us to develop more robust heuristics about the interaction phenomena we are designing for.

UDOO X86

  • UDOO X86 is a prototyping single board computer fully based on Intel processors. Like the other boards in the UDOO family, it embeds on the same PCB a high-performance multicore processor and an Arduino-compatible microcontroller unit.
  • The board was designed to be fully compatible with the x86 architecture and to provide high computational power and high graphics performance.
  • It allows users to install all the most common operating systems in their official, original versions, avoiding being stuck with custom or outdated versions, as often happens on ARM platforms.
  • The UDOO X86 board integrates a full Arduino 101, based on the Intel® Curie chip.
  • The standard Arduino pinout allows programmers to directly use all the official Arduino libraries, plug in shields, and connect sensors.
  • The integration on the same PCB of a fully-fledged computer and one of the most versatile microcontrollers makes rapid prototyping of IoT solutions easy.

The board is available in 4 versions (Basic, Advanced, Advanced Plus, Ultra) with 3 different Intel processors: Atom X5-E8000, Celeron N3160, Pentium N3710 (up to 2.56 GHz), with increasing GPU performance—up to three displays at 4K resolution. The different versions are equipped with different sizes of eMMC storage and embedded RAM modules, from 2GB up to 8GB in dual-channel configuration.

UDOO App Inventor
App Inventor for Android is a visual programming platform for creating mobile apps for Android-based smartphones and tablets. Originally developed by Google, it is now maintained by MIT[1].

Developing apps in App Inventor does not require writing classic source code. The look and behavior of the app are developed visually, using a series of building blocks for each intended graphical or logical component.

App Inventor aims to make programming enjoyable and accessible to novices, makers, and designers.

  • Makers, designers, and students have a tool they can use to develop applications that provides must-have functionality like GUIs, network access, and database storage, and that incorporates the popular Arduino sensors and actuators[2].

 

  • In short, UAPPI integrates two worlds, the computer (Android) and the microcontroller (Arduino), in a seamless environment where it is possible to envision and prototype interactive solutions for the digital world and the IoT[3].
  1. Abelson, H.: App Inventor for Android, https://research.googleblog.com/2009/07/appinventor-for-android.html (2009)
  2. Rizzo, Antonio, et al. “UDOO App Inventor: Introducing Novices to the Internet of Things.” International Journal of People-Oriented Programming (IJPOP) 4.1 (2015): 33-49.
  3. Rizzo, Antonio, et al. “Making IoT with UDOO.” On Making (2016).

Acknowledgments
We thankfully acknowledge the support of the European Union H2020 program through the AXIOM project (grant ICT01-2014 GA 645496).

References

[i] Jordan, M. I., and T. M. Mitchell. “Machine learning: Trends, perspectives, and prospects.” Science 349.6245 (2015): 255-260.

[ii] UAPPI IDE and Companion source code, https://github.com/fmntf/appinventor-sources

[iii] UAPPI Arduino runtime source code, https://github.com/fmntf/belzedoo

[iv] Amershi, Saleema, Maya Cakmak, William Bradley Knox, and Todd Kulesza. “Power to the people: The role of humans in interactive machine learning.” AI Magazine 35, no. 4 (2014): 105-120.

[v] Rizzo, Antonio, et al. “UDOO App Inventor: Introducing Novices to the Internet of Things.” International Journal of People-Oriented Programming (IJPOP) 4.1 (2015): 33-49.

[vi] Fails, Jerry Alan, and Dan R. Olsen Jr. “Interactive machine learning.” In Proceedings of the 8th international conference on Intelligent user interfaces, pp. 39-45. ACM, 2003.

[vii] Ware, Malcolm, Eibe Frank, Geoffrey Holmes, Mark Hall, and Ian H. Witten. “Interactive machine learning: letting users build classifiers.” International Journal of Human-Computer Studies 55, no. 3 (2001): 281-292.


[x] Cristianini, Nello, and John Shawe-Taylor. An Introduction to Support Vector Machines and Other Kernel-Based Learning Methods. Cambridge University Press, Cambridge, 2000.

[xi] Chang, Chih-Chung, and Chih-Jen Lin. LIBSVM: a library for support vector machines. ACM Transactions on Intelligent Systems and Technology, 2:27:1–27:27, 2011. Software available at http://www.csie.ntu.edu.tw/~cjlin/libsvm

[xii] Arduino MQ gas sensors documentation webpage, http://playground.arduino.cc/Main/MQGasSensors

[xiii] Mobile Vision – Google Developers, https://developers.google.com/vision/

[xiv] Google Mobile Vision Face API reference, https://developers.google.com/android/reference/com/google/android/gms/vision/face/package-summary

[xv] Fiebrink, Rebecca, Perry R. Cook, and Dan Trueman. “Human model evaluation in interactive supervised learning.” In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, pp. 147-156. ACM, 2011.

 

 
