AMD’s “Beefy” APUs Bulk Up Thin Clients for HP, Samsung

There are times when a tablet is too light, and a full desktop too much. The answer? A thin client PC powered by an AMD APU.

Note: this blog is sponsored by AMD.

A desire to remotely access my Mac and Windows machines from somewhere else got me thinking about thin client architectures. A thin “client” machine has sufficient processing for local storage and display—plus keyboard, mouse and other I/O—and is remotely connected to a more beefy “host” elsewhere. The host may be in the cloud or merely somewhere else on a LAN, sometimes intentionally inaccessible for security reasons.
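
To make the architecture concrete, here’s a deliberately minimal sketch of a thin client’s main loop. Everything in it is invented for illustration (real deployments use protocols like RDP or VNC rather than this toy framing):

```python
# Toy thin-client loop: ship local input to the beefy host, draw what comes
# back. Host address, framing, and helpers are invented for illustration;
# real thin clients speak RDP, VNC, or a vendor protocol.
import socket

HOST = ("192.168.1.50", 9000)      # the remote "host" machine (placeholder)

def read_local_input() -> bytes:
    return b"KEY:a"                # stand-in for real keyboard/mouse events

def draw_to_display(frame: bytes) -> None:
    print(f"drawing {len(frame)} bytes of screen update")

def run_thin_client() -> None:
    with socket.create_connection(HOST) as conn:
        while True:
            conn.sendall(read_local_input())   # input goes upstream
            frame = conn.recv(65536)           # screen update comes back
            if not frame:
                break                          # host closed the session
            draw_to_display(frame)

# run_thin_client()  # would block until a host is listening
```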

Thin client architectures—or just “thin clients”—find utility in call centers, kiosks, hospitals, “smart” monitors and TVs, military command posts and other multi-user, virtualized installations. At times they’ve been characterized as low performance or limited in functionality, but that’s changing quickly.

They’re getting additional processing and graphics capability thanks to AMD’s G-Series and A-Series Accelerated Processing Units (APUs). According to some analysts, AMD is number one in thin clients, and the company keeps winning designs with its highly integrated x86-plus-Radeon-graphics SoCs, most recently with HP and Samsung.

HP’s t420 and mt245 Thin Clients

HP’s ENERGY STAR certified t420 is a fanless thin client for call centers, Desktop-as-a-service and remote kiosk environments (Figure 1). Intended to mount on the back of a monitor such as the company’s ProDisplays (like you see at the doctor’s office), the unit runs HP’s ThinPro 32 or Smart Zero Core 32 operating system, has either 802.11n Wi-Fi or Gigabit Ethernet, 8 GB of Flash and 2 GB of DDR3L SDRAM.

Figure 1: HP’s t420 thin client is meant for call centers and kiosks, mounted to a smart LCD monitor. (Courtesy: HP.)

USB ports for keyboard and mouse supplement the t420’s dual display capability (DVI-D and VGA)—made possible by AMD’s dual-core GX-209JA running at 1 GHz.

Says AMD’s Scott Aylor, corporate vice president and general manager, AMD Embedded Solutions: “The AMD Embedded G-Series SoC couples high performance compute and graphics capability in a highly integrated low power design. We are excited to see innovative solutions like the HP t420 leverage our unique technologies to serve a broad range of markets which require the security, reliability and low total cost of ownership offered by thin clients.”

The whole HP thin client consumes a mere 45 W and, according to StorageReview.com, will retail for $239.

Along the lines of a lightweight mobile experience, HP has also chosen AMD for its mt245 Mobile Thin Client (Figure 2). This thin client “cloud computer” resembles a 14-inch (1366 x 768 resolution) laptop: with up to 4 GB of SDRAM and a 16 GB SSD, the unit runs Windows Embedded Standard 7P 64 on AMD’s quad-core A6-6310 APU with Radeon R4 GPU. There are three USB ports, one VGA and one HDMI output, plus Ethernet and optional Wi-Fi.

Figure 2: HP’s mt245 is a thin client mobile machine, targeting healthcare, education, and more. (Courtesy: HP.)

Like the t420, the mt245 consumes a mere 45 W; it’s intended for employee mobility but configured for a thin client environment. AMD’s director of thin client product management, Stephen Turnbull, says the mt245 targets “a whole range of markets, including education and healthcare.”

At the core of this machine, pun intended, is the Radeon GPU, which provides heavy-lifting graphics performance. The mt245 not only takes advantage of virtualized cloud computing but also has the local moxie to run graphics-intensive applications like 3D rendering. Healthcare workers might, for example, examine ultrasound images; factory technicians could pull up assembly drawings and rotate them in CAD-like software applications.

Samsung Cloud Displays

An important part of Samsung’s displays business involves “smart” displays, monitors and televisions. Connected to the cloud or operating autonomously as a panel PC, many Samsung displays need local processing such as that provided by AMD’s APUs.

Samsung’s recently announced (June 17, 2015) 21.5-inch TC222W and 23.6-inch TC242W also use AMD G-Series devices in thin client architectures. The dual-core 2.2 GHz GX222 with Radeon HD 6290 graphics drives both displays at 1920 x 1080 (full HD), provides six USB ports and Ethernet, and runs Windows Embedded 7 from 4 GB of RAM and a 32 GB SSD.

Figure 3: Samsung’s Cloud Displays also rely on AMD G-Series APUs.

Said Seog-Gi Kim, senior vice president, Visual Display Business, Samsung Electronics: “Samsung’s powerful Windows Thin Client Cloud displays combine professional, ergonomic design with advanced thin-client technology.” The displays rely on the company’s Virtual Desktop Infrastructure (VDI), delivered through a centrally managed data center that increases data security and control (Figure 3). Applications include education, business, healthcare, hospitality, or any environment that requires virtualized security with excellent local processing and graphics.

Key to the design wins is the performance density of the G-Series APUs, coupled with legacy x86 software interoperability. The APUs, for both HP and Samsung, add more beef to thin clients.

Move Over Arduino, AMD and GizmoSphere Have a “Jump” On You with Graphics

The UK’s National Videogame Arcade relies on CPU, graphics, I/O and openness to power interactive exhibits.

Editor’s note: This blog is sponsored by AMD.

When I was a kid I was constantly fascinated with how things worked. What happens when I stick this screwdriver in the wall socket? (Really.) How come the dinner plate falls down and not up?

We humans have to try things for ourselves in order to fully understand them; this sparks our creativity and, for many of us, becomes a life calling.

Attempting to catalyze visitors’ curiosity, the UK’s National Videogame Arcade (NVA) opened in March 2015 with the sole intention of getting children and adults interested in videogames through interactive exhibits, most of them hands-on. The hope is that young people will first be stimulated by the games, and second that they’ll someday unleash their creativity on the videogame and tech industries.

The UK’s National Videogame Arcade promotes gaming through hands-on exhibits powered by GizmoSphere embedded hardware.

Might As Well “Jump!”

The NVA is located in a corner building with lots of curbside windows—imagine a fancy New York City department store but without the mannequins in the street-side windows. Spread across five floors and a total of 33,000 square feet, the place is a cooperative effort between GameCity (a nice bunch of gamers), the Nottingham City Council, and local Nottingham Trent University.

The goal of pulling in 60,000 visitors a year is partly achieved by the NVA’s signature exhibit “Jump!”, which lets visitors experience gravity (without the plate) and how it affects videogame characters like those in Donkey Kong or Angry Birds. Visitors actually get to jump on the Jump-o-tron, a physics-based sensor platform controlled by GizmoSphere’s Gizmo 2 development board.

The Jump-o-tron uses AMD’s G-Series SoC, combining an x86 CPU and Radeon GPU.

The heart of Gizmo 2 is AMD’s G-Series APU, combining a 64-bit x86 CPU and Radeon graphics processor. Gizmo 2 is the latest creation from the GizmoSphere nonprofit open source community which seeks to “bring the power of a supercomputer and the I/O capabilities of a microcontroller to the x86 open source community,” according to www.gizmosphere.org.

The open source Gizmo 2 runs Windows and Linux, bridging PC games to the embedded world.

“Jump!” allows visitors to experience—and tweak—gravity while examining the effect upon on-screen characters. The combination requires extensive processing—up to 85 GFLOPS’ worth—plus video manipulation and display. What’s amazing is that “Jump!”, along with many other NVA exhibits, isn’t powered by rackmount servers but rather by the tiny 4 x 4 inch Gizmo 2, which supports DirectX 11.1, OpenGL 4.2, and OpenCL 1.2. It also runs Windows and Linux.
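
To make the gravity-tweaking idea concrete, here’s a minimal, hypothetical sketch (not the NVA’s actual exhibit code) of the physics a “Jump!”-style game loop integrates:

```python
# Hypothetical sketch: how a "Jump!"-style exhibit might model tweakable
# gravity. Illustrative only -- not the NVA's actual exhibit code.

def simulate_jump(gravity=9.81, jump_speed=5.0, dt=0.01):
    """Integrate one jump; return (hang_time_s, peak_height_m)."""
    height, velocity, t, peak = 0.0, jump_speed, 0.0, 0.0
    while height >= 0.0:
        velocity -= gravity * dt      # gravity pulls the character down
        height += velocity * dt       # update position
        peak = max(peak, height)
        t += dt
    return t, peak

# A visitor turning the "gravity" knob sees the character float or slam down:
for g in (1.62, 9.81, 24.79):         # Moon, Earth, Jupiter
    hang, peak = simulate_jump(gravity=g)
    print(f"g={g:5.2f} m/s^2  hang time={hang:.2f}s  peak={peak:.2f}m")
```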

AMD’s “G” Powers Gizmo 2

Gizmo 2 is a dense little package, sporting HDMI, Ethernet, PCIe, USB (2.0 and 3.0), plus myriad other A/V and I/O such as A/D/A—all of them essential for NVA exhibits like “Jump!” Says Ian Simons of the NVA, “Gizmo 2 is used in many of our games…and there are plans for even more games embedded into the building,” including furniture and even street-facing window displays.

Gizmo 2’s small size and support for open source software and hardware—plus the ability to develop on the gamers’ favorite Unity engine—make it the preferred choice. Yet the market contains ample platforms from which to choose. Arduino comes to mind.

Gizmo 2’s schematic. The x86 G-Series SoC is loaded with I/O.

Compared to an Arduino, the AMD G-Series SoC (GX-210HA) powering Gizmo 2 is orders of magnitude more powerful; it’s x86-based and runs at 1.0 GHz (the integral GPU runs at 300 MHz). This makes the world’s cache of Intel-oriented, Windows-based software and drivers available to Gizmo 2—including some server-side programs. “NVA can create projects with Gizmo 2, including 3D graphics and full motion video, with plenty of horsepower,” says Simons. He’s referring to some big projects already installed at the NVA, plus others in the planning stages.

“One of things we’d like to do,” Simons says, “is continue to integrate Gizmo 2 into more of the building to create additional interactive exhibits and displays.” The small size of Gizmo 2, plus the wickedly awesome performance/graphics rendering/size/Watt of the AMD G-Series APU, allows Gizmo 2 to be embedded all over the building.

See Me, Feel Me

With a nod to The Who’s rock opera Tommy, the NVA building will soon have more Gizmo 2 modules wired into the infrastructure, mixing images and sound. At least three projects are in the concept stage:

  • DMX addressable lighting and A/V control in the central stairway (see the sketch after this list). With exposed cables and beams, visitors would be able to control the audio, video, and possibly LED lighting of the stairwell area using a series of switches. The author wonders whether voice or other tactile feedback could create all manner of immersive, “psychedelic” A/V in the central stairwell.
  • Controllable audio zones in the rooftop garden. The NVA’s Yamaha-based sound system already includes 40 zones. Adding AMD G-Series horsepower to these zones would allow visitors to create individually customized light/sound shows, possibly around botanical themes. Has there ever been a Little Shop of Horrors videogame where the plants eat the gardener? I wonder.
  • Sidewalk animation that uses all those street-facing windows to animate the building, possibly changing the building’s façade (Star Trek cloak, anyone?) or even individually controlling games inside the building from outside (or presenting inside activities to the outside). Either way, all those windows, future LCDs, and reams of I/O will require lots more Gizmo 2 embedded boards.
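
For flavor, here’s a rough sketch of what driving DMX channels from a Gizmo 2-class board could look like. The adapter, serial port, and channel assignments are assumptions for illustration; real USB-DMX adapters each have their own framing:

```python
# Illustrative sketch: filling a DMX512 universe for stairwell lighting.
# Assumes a hypothetical USB-to-DMX adapter on /dev/ttyUSB0 that accepts a
# raw universe; real adapters (e.g., Enttec-style) have their own framing.
import serial  # requires the pyserial package

UNIVERSE_SIZE = 512
universe = bytearray(UNIVERSE_SIZE)   # one byte (0-255) per DMX channel

def set_rgb(start_channel, r, g, b):
    """Set a 3-channel RGB fixture whose first channel is start_channel (1-based)."""
    universe[start_channel - 1:start_channel + 2] = bytes((r, g, b))

# A visitor flips a switch; the stairwell washes blue.
set_rgb(1, 0, 64, 255)      # fixture at channels 1-3
set_rgb(4, 0, 32, 128)      # fixture at channels 4-6

with serial.Serial("/dev/ttyUSB0", 250000) as port:   # DMX runs at 250 kbaud
    port.write(bytes([0]) + universe)   # start code 0, then 512 channel values
```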

The Gizmo 2 costs $199 and is available from several retailers, such as Element14. With Gerbers, schematics, and all the board-focused software open source, it’s no wonder this x86 embedded board is attractive to gamers. With AMD’s G-Series APU onboard, the all-in-one HDK/SDK is an ideal choice for embedded designs—and for those future gamers playing with the Gizmo 2 at the UK’s NVA.

BTW: The Who hailed from London, not Nottingham.

Coke Machine with a Wii Mote? K-Cup Coffee Machine Marries Oculus Rift?

Talk about a weird Coke-machine-meets-Minority Report mash-up: the day is here when vending machines will do virtual reality tricks, access your Facebook page, and be as much fun as Grand Theft Auto. And they’ll still dispense stuff.

Photo by Jeff Moriarty (Custom Coke machine – operating screen) [CC BY 2.0 (http://creativecommons.org/licenses/by/2.0)], via Wikimedia Commons.

According to embedded tech experts ADLINK Technology and Intel, the intelligent vending machine is here. Multiple iPad-like screens will entertain you, suggest which product to buy, feed you social media, and take your money.

These vending machines will be IoT-enabled, connected to the cloud and multiple online databases, and equipped with multiple cameras. Onboard signal processing will respond to 3D gesture control or immerse you in a virtual reality scenario with the product you’re trying to buy. Facial recognition, voice recognition, and even eye tracking as you roam the screens will be the norm.
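
As a taste of what “the norm” might look like in code, here’s a minimal face-detection loop using OpenCV’s stock Haar cascade. The camera index and wake-up action are generic assumptions, not details from ADLINK’s or Intel’s designs:

```python
# Minimal sketch: spotting a shopper on a vending machine's camera.
# Uses OpenCV's bundled Haar cascade; purely illustrative of the idea.
import cv2

cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
camera = cv2.VideoCapture(0)               # assumed camera index

while camera.isOpened():
    ok, frame = camera.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces):
        # A real machine might wake its screens or start a product pitch here.
        print(f"{len(faces)} shopper(s) in front of the machine")

camera.release()
```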

Facial recognition via Intel’s Perceptual Computing SDK. (Courtesy: Intel Corporation.)

For you, the customer, the vending machine experience will be fun, entertaining—and very much like a video game. Retailers, for their part, hope to make more money off of you while using remote IoT monitoring, predictive diagnostics, and “big data” to lower their costs.

The era of the intelligent vending machine is upon us. The machine’s already connected to the Internet…and is one of the many billions of smart IoT nodes coming to a store near you.

Read more about them here.

Virtual, Immersive, Interactive: Performance Graphics and Processing for IoT Displays

Vending machines outside Walmart: current-gen machines like these will give way to smart, IoT-connected machines with 64-bit graphics and virtual reality-like customer interaction.

Not every IoT node contains a low-performance processor, a sensor, and a slow comms link. Sure, there may be tens of billions of those, but estimates by IHS, Gartner, and Cisco still imply the need for billions of smart IoT nodes with hefty processing requirements. These intelligent IoT platforms are best served by 64-bit processors like AMD’s G- and R-Series Accelerated Processing Units (APUs). AMD’s claim to fame is 64-bit cores combined with on-board Radeon graphics processing units (GPUs) and tons of I/O.

As an example, consider this year’s smart vending machine. It may dispense espresso or electronic toys, or maybe show the customer wearing virtual custom-fit clothing. Suppose the machine showed you, at that very moment, using or drinking the product you were staring at just seconds before.

Far-fetched? Far from it. It’s real.

These machines require a multi-media, sensor fusion experience. Multiple iPad-like touch screens may present high-def product options while cameras track customers’ eye movements, facial expressions, and body language in three-space.

This “visual compute” platform tailors the displayed information to best interact with the customer in an immersive, gesture-driven experience. Fusing all these inputs, processing the data in real time, and driving multiple displays is best handled by 64-bit APUs with closely coupled CPU and GPU execution units, hardware acceleration, and support for standards like DirectX 11, HSA 1.0, OpenGL, and OpenCL.
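
A toy sketch of the fusion idea: fold gaze and gesture observations into one decision about which screen gets the interactive content. All names and thresholds here are invented:

```python
# Toy sensor-fusion sketch: route content to whichever display the customer
# is looking at, and only react to gestures aimed at that display.
# All inputs and thresholds are hypothetical.
from dataclasses import dataclass

@dataclass
class Observation:
    gaze_screen: int      # which screen the eye tracker says is being watched
    gaze_conf: float      # tracker confidence, 0..1
    gesture: str          # "swipe", "point", or "none"

def fuse(obs: Observation, active_screen: int) -> int:
    """Return the screen that should show the interactive content."""
    if obs.gaze_conf > 0.7:
        active_screen = obs.gaze_screen        # trust a confident gaze fix
    if obs.gesture == "swipe":
        print(f"advance product carousel on screen {active_screen}")
    return active_screen

screen = 0
for obs in [Observation(2, 0.9, "none"), Observation(2, 0.8, "swipe")]:
    screen = fuse(obs, screen)
```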

For heavy lifting in visual compute-intensive IoT platforms, keep an eye on AMD’s graphics-ready APUs.

If you are attending Embedded World, February 24-26, be sure to check out the keynote “Heterogeneous Computing for an Internet of Things World” by Scott Aylor, Corporate VP and General Manager, AMD Embedded Solutions, on Wednesday the 25th at 9:30.

This blog was sponsored by AMD.

The Soft(ware) Core of Qualcomm’s Internet of Everything Vision

Qualcomm supplements silicon with multiple software initiatives.

Update 1: Added attribution to figures.
The numbers are huge: 50B connected devices; 7B smartphones to be sold by 2017; 1000x growth in data traffic within a few years. Underlying all of these devices in the Internet of Things…wait, the Internet of Everything…is Qualcomm. Shipping 700 million chipsets per year on top of a wildly successful IP creation business in cellular modem algorithms, plus being arguably #1 in 3G/4G/LTE with Snapdragon SoCs in smartphones, the company is now setting its sights on M2M connectivity. Qualcomm has perhaps more initiatives in IoT/IoE than any other vendor. Increasingly, those initiatives rely on the software necessary for the global M2M-driven IoT/IoE trend to take root.

Telit Wireless Devcon
Speaking at the Telit Wireless Devcon in San Jose on 15 October, Qualcomm VP Nakul Duggal of the Mobile Computing Division painted a picture showing the many pieces of the company’s strategy for the IoT/E. Besides the aforementioned arsenal of Snapdragon SoC and Gobi modem components, the company is bringing to bear Wi-Fi, Bluetooth, local radio (like NFC), GPS, communications stacks, and a vision for heterogeneous M2M device communication it calls “dynamic proximal networking”. Qualcomm supplies myriad chipsets to Telit Wireless, and Telit rolls them into higher-order modules upon which Telit’s customers add end-system value.

Over eight Telit Wireless modules are based upon Qualcomm modems, as presented at the Telit Wireless Devcon 2013.

But it all needs software in order to work. Here are a few of Qualcomm’s software initiatives.

Modem’s ARM and API Open to All
Many M2M nodes (think of a vending machine, or the much-maligned connected coffee maker) don’t need a lot of intelligence to function. They collect data, perform limited functions, and send analytics and diagnostics to their remote M2M masters. Qualcomm’s Duggal says that the ARM processors in Qualcomm modems are powerful enough to carry that computational load. There’s no need for an additional CPU, so the company is making Java (including Java ME), Linux, and ThreadX available to run on its 3rd-generation Gobi LTE modems.
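
A sketch of the kind of workload Duggal means (a node that samples, summarizes, and phones home) might look like this; the endpoint, payload fields, and reporting interval are all invented:

```python
# Hypothetical M2M telemetry loop for a vending machine: light enough to run
# on a modem's integral ARM core under Linux. Endpoint and payload invented.
import json, time, urllib.request

REPORT_URL = "https://m2m.example.com/telemetry"   # placeholder endpoint

def read_sensors():
    # Stand-ins for real transducers on the machine.
    return {"cooler_temp_c": 4.2, "cans_remaining": 37, "door_open": False}

while True:
    payload = {"machine_id": "vend-0042", "ts": time.time(), **read_sensors()}
    req = urllib.request.Request(
        REPORT_URL, data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"})
    try:
        urllib.request.urlopen(req, timeout=10)
    except OSError:
        pass             # keep vending even if the backhaul is down
    time.sleep(15 * 60)  # report every 15 minutes
```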

Qualcomm is already on its 3rd generation of Gobi LTE modems.

Qualcomm has also opened up the modem APIs and made its IoT Connection Manager software available, making it easier to write closer-to-the-metal code for the modem. Duggal revealed that Qualcomm has partnered with Digi International in this effort as it applies to telematics market segments.

Leverage Smartphone Graphics
And some of those M2M devices on the IoE may have displays: simple UIs at first (like a vending machine’s), but increasingly complex as the device interacts with the consumer. A restaurant’s digital menu sign, for example, need not run a full-blown PC and Windows Embedded operating system when a version of a Snapdragon SoC will do. After all, the 1080p HDMI graphics needs of an HTC One with the S600 far outweigh those of a digital sign. Qualcomm’s graphics accelerators and signal-processing algorithms can easily apply to display-enabled M2M devices. This applies doubly as more intelligence is pushed to the M2M node, alleviating the need to send reams of data up to the cloud for processing.

Digital 6th Sense: Context
Another area Duggal described, the “Digital 6th Sense,” might be thought of as contextual computing. Smartphones or wearable fitness devices like Nike’s new FuelBand SE might react differently when they’re outside, at work, or in the home. More than just counting steps and communicating with an app: if the device knows where it is…including precisely where it is inside a building…it can perform different functions. Qualcomm’s portfolio now includes the full Atheros RF spectrum of products: Bluetooth, Bluetooth LE, NFC, Wi-Fi and more. Software stacks for all of these enable connectivity, but it’s the code that meshes (no pun intended) Wi-Fi with GPS data that provides outdoor and indoor position information. Here, Qualcomm’s software melds myriad infrastructure technologies to provide indoor positioning. A partnership with Cisco will bring the technology to consumer locations like shopping malls, coexisting with Cisco’s Mobility Services Engine for location-based apps.
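
The gist, in a toy sketch: fall back from GPS to Wi-Fi-derived position when the satellite fix degrades indoors. The thresholds, fields, and fingerprint lookup are made up for illustration:

```python
# Toy contextual-positioning sketch: prefer GPS outdoors, Wi-Fi indoors.
# Thresholds, fields, and the fingerprint table are invented for illustration.

WIFI_FINGERPRINTS = {                      # AP MAC -> known indoor spot
    "aa:bb:cc:dd:ee:01": "food court, level 2",
    "aa:bb:cc:dd:ee:02": "north entrance",
}

def locate(gps_fix, visible_aps):
    """gps_fix: (lat, lon, hdop) or None; visible_aps: [(mac, rssi_dbm)]."""
    if gps_fix and gps_fix[2] < 5.0:       # good dilution of precision: outdoors
        return {"source": "gps", "where": gps_fix[:2]}
    # Indoors: match the strongest access point we have a fingerprint for.
    for mac, rssi in sorted(visible_aps, key=lambda ap: -ap[1]):
        if mac in WIFI_FINGERPRINTS:
            return {"source": "wifi", "where": WIFI_FINGERPRINTS[mac]}
    return {"source": "none", "where": None}

print(locate(None, [("aa:bb:cc:dd:ee:01", -48), ("ff:ff:ff:ff:ff:ff", -70)]))
```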

Smart Start at Home
Finally, the smart home is another area ripe for innovation. Connected devices in the home range from the existing set-top box for entertainment to the connected coffee pot, smart meter, Wi-Fi-enabled Nest thermostat and smoke/CO detector, home health devices, and more. These disparate ecosystems, says Duggal, are similar only in their “heterogeneousness” in the home; that is, they were never designed to be interconnected. Qualcomm is taking its relationships with every smart meter manufacturer, its home gateway/backhaul designs, and its smartphone expertise, and rolling them into the new AllJoyn software effort.

The open source AllJoyn initiative, spearheaded by Qualcomm, seeks to connect heterogeneous M2M nodes. Think: STB talks to thermostat, or refrigerator talks to garage door opener. (Courtesy: Qualcomm and AllJoyn.org.)

AllJoyn is an open source project that seeks to set a “common language for the Internet of Everything”. According to AllJoyn.org, the “dynamic proximal network” is created using a universal software framework that’s extremely lightweight. Qualcomm’s Duggal described the ability for a device to enumerate that it has a sensor, audio, display, or other I/O. Most importantly, AllJoyn is “bearer agnostic”, working across all leading OSes and connectivity mechanisms.
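
To illustrate the enumeration idea only (this is not the AllJoyn API), here’s a toy broadcast of a device announcing its capabilities to whoever is listening on the local network:

```python
# Toy illustration of capability enumeration on a proximal network.
# This is NOT the AllJoyn API -- just the flavor of a device announcing
# "I have a sensor and a display" to listeners on the LAN.
import json, socket

announcement = {
    "device": "thermostat-living-room",
    "capabilities": ["temperature-sensor", "display"],
    "services": {"get-temp": "returns degrees C"},
}

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
sock.sendto(json.dumps(announcement).encode(), ("255.255.255.255", 50000))
sock.close()
```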

AllJoyn connectivity diagram. (Courtesy: www.alljoyn.org.)

If Qualcomm is to realize its vision of selling more modems and Snapdragon-like SoCs, making them play well together and exchange information is critical. AllJoyn is pretty new; a new Standard Client (3.4.0) was released on 9 October. It’s unclear to me right now how AllJoyn compares with Wind River’s MQTT-based M2M Intelligent Device Platform, Digi’s iDigi Cloud, or Eurotech’s Everyware Device Framework.

Qualcomm’s on a Roll
With its leadership in RF modems and smartphone processors, Qualcomm is laser-focused on the next big opportunity: the IoT/E. Making all of those M2M nodes actually do something useful will require software throughout the connected network. With so many software initiatives underway, Qualcomm is betting on its next big thing: the Internet of Everything. Software will be the company’s next major “killer app”.

Intel’s Atom Roadmap Makes Smartphone Headway

After being blasted by users and pundits over the lack of “low power” in the Atom product line, new architecture and design wins show Intel’s making progress.

Intel EVP Dadi Perlmutter revealing an early convertible tablet computer at IDF 2012.

A 10-second Google search on “Intel AND smartphone” reveals endless pundit comments on how Intel hasn’t been winning enough in the low-power smartphone and tablet markets. Business publications wax endlessly on the need for Intel’s new CEO Brian Krzanich to make major changes in company strategy, direction, and executive management in order to decisively win in the portable market. Indications are that Krzanich is shaking things up, and pronto.

Forecasts by IDC (June 2013), reported by CNET.com (http://news.cnet.com/8301-1035_3-57588471-94/shipments-of-smartphones-tablets-and-oh-yes-pcs-to-top-1.7b/), peg the PC+smartphone+tablet TAM at 1.7B units by 2014, of which 82 percent (1.4B units, roughly $500B USD) are low-power tablets and smartphones. And until recently, I’ve counted only six or so public wins for Intel devices in this market (all based upon the Atom Medfield SoC with the Saltwell microarchitecture I wrote about at IDF 2012). Not nearly enough for the company to remain the market leader while capitalizing on its world-leading tri-gate 3D fab technology.

Behold the Atom, Again

Fortunately, things are starting to change quickly. In June, Samsung announced that the Galaxy Tab 3 10.1-inch SKU would be powered by Intel’s Z2560 “Clover Trail+” Atom SoC running at 1.6 GHz. According to PC Magazine, “it’ll be the first Intel Android device released in the U.S.” (http://www.pcmag.com/article2/0,2817,2420726,00.asp), and it complements other Galaxy Tab 3 offerings with competing processors. The 7-inch SKU uses a dual-core Marvell chip running Android 4.1, while the 8-inch SKU uses Samsung’s own Exynos dual-core Cortex-A9 ARM chip running Android 4.2. The Atom Z2560 also runs Android 4.2 on the 10.1-incher. Too bad Intel couldn’t have won all three sockets, especially since Intel’s previous lack of LTE cellular support has been solved by the company’s new XMM 7160 4G LTE chip, supplemented by new GPS/GNSS silicon and IP from Intel’s ST-Ericsson navigation chip acquisition.

The Z2560 Samsung chose is one of three “Clover Trail+” platform SKUs (Z2580, Z2560, Z2520), formerly known merely as “Cloverview” when the dual-core, Saltwell-based, 32-nm Atom SoCs were leaked in fall 2012. The Intel alphabet soup starts getting confusing because the Atom roadmap looks like rush-hour traffic feeding out of Boston’s Sumner Tunnel. It’s being pushed into netbooks (for maybe another quarter or two); into value laptops and convertible tablets as standalone CPUs; into smartphones and tablets as SoCs; and soon into the data center to compete against ARM’s onslaught there, too.

Clover Trail+ replaces Intel’s Medfield smartphone offering and was announced at February’s MWC 2013. According to AnandTech (thank you, guys!), Intel’s aforementioned design wins with Atom used the 32nm Medfield SoC for smartphones. Clover Trail is still at 32nm using the Saltwell microarchitecture but targets Windows 8 tablets, while Clover Trail+ targets only smartphones and non-Windows tablets. That explains the Samsung Galaxy Tab 3 10.1-inch design win. The datasheet for Clover Trail+ is here; it shows a dual-core SoC with multiple video CODECs, integrated 2D/3D graphics, on-board crypto, and multiple multimedia engines such as Intel Smart Sound, and it’s optimized for Android and, presumably, Intel/Samsung’s very own HTML5-based Tizen OS (Figure 1).

Figure 1: Intel Clover Trail+ block diagram used in the Atom Z2580, Z2560, and Z2520 smartphone SoCs. This is 32nm geometry based upon the Saltwell microarchitecture and replaces the previous Medfield single core SoC. (Courtesy: Intel.)

I was unable to find meaningful power consumption numbers for Clover Trail+, but its 32nm geometry compares favorably to the 28nm geometry of ARM Cortex-A15 parts, so Intel should be in the ballpark. Still, the market wonders if Intel finally has the chops to compete. At least it’s getting much, much closer—especially once the on-board graphics performance gets factored into the picture, compared to ARM’s lack thereof (for now).

Silvermont and Bay Trail and…Many More Too Hard to Remember

But Intel knows it has more work to do to compete against Qualcomm’s home-grown, ARM-compatible Krait microarchitecture, some nVidia offerings, and Samsung’s own in-house designs. Atom will soon be moving to 22nm, and the next microarchitecture is called Silvermont. Intel is finally putting power curves up on the screen, and at product launch I’m hopeful there will be actual Watt numbers shown, too.

For example, Intel is showing off Silvermont’s “industry-leading performance-per-Watt efficiency” (Figure 2). Press data from Intel says the architecture will offer 3x peak performance or 5x lower power compared to the Clover Trail+ Saltwell microarchitecture. More code names to track: the quad-core Bay Trail SoC for 2013 holiday tablets; Merrifield, with increased performance and battery life; and finally Avoton, which provides 64-bit energy efficiency for micro servers and boasts ECC, Intel VT, and possibly vPro and other security features. Avoton will go head-to-head with ARM in the data center, where Intel can’t afford to lose any ground.

Figure 2: The 22nm Atom microarchitecture called Silvermont will appear in Bay Trail, Avoton and other future Atom SoCs from “Device to Data Center”, says Intel. (Courtesy: Intel.)

Oh Yeah? Who’s Faster Now?

As Intel steps up its game because it has to win or else, the competition is not sitting still. ARM licensees have begun shipping big.LITTLE SoCs, and the company has announced new graphics, DSP, and mid-range cores. (Read Jeff Bier and BDTI’s excellent recent ARM roadmap overview here.)

A recent report by ABI Research (June 2013) tantalized (or, more appropriately, galvanized) the embedded and smartphone markets with the headline “Intel Apps Processor Outperforms NVIDIA, Qualcomm, Samsung”. In comparison tests, ABI Research VP of engineering Jim Mielke noted that the Intel Atom Z2580 “not only outperformed the competition in performance but it did so with up to half the current drain.”

The embedded market didn’t necessarily agree with the results, and UBM Tech/EETimes published extensive readers’ comments with colorful opinions. On a more objective note, Qualcomm launched its own salvo as we went to press: “You’ll see a whole bunch of tablets based upon the Snapdragon 800 in the market this year,” predicted Raj Talluri, SVP at Qualcomm, as reported by Bloomberg Businessweek.

Qualcomm has made its Snapdragon product line more user-friendly and appears to be readying the line for general embedded-market sales in Snapdragon 200, 400, 600, and “premium” 800 SKU versions. The company has made development tools available (mydragonboard.org/dev-tools) and is selling COM-like DragonBoard modules through partners such as Intrinsyc.

Intel Still Inside

It’s looking like a sure thing that Intel will finally have competitive silicon to challenge ARM-based SoCs in the market that really matters: mobile, portable, and handheld. 22nm Atom offerings are getting power-competitive, and the game will change to an overall system integration and software efficiency exercise.

Intel has for the past five years been emphasizing a holistic, all-system view of power and performance. Its work with Microsoft has wrung inefficiencies out of Windows and capitalizes on microarchitecture advantages in desktop Ivy Bridge and Haswell CPUs. Security is becoming important in all markets, and Intel is already there with built-in hardware, firmware, and software advantages (through McAfee and Wind River). So too has the company radically improved graphics performance in Haswell and Clover Trail+ Atom SoCs…maybe not to the level of AMD’s APUs, but absolutely competitive with most ARM-based competitors.

And finally, Intel has hedged its bets with Android and HTML5. Intel is on record as writing more Android code (for and with Google) than any other company, and it has moved past the MeeGo failures to the might-be-successful HTML5-based Tizen OS, which Samsung is using in select handsets.

As I’ve said many times, Intel may be slow to get it…but it’s never good to bet against them in the long run. We’ll have to see how this plays out.

“Mirror, Mira” on the Car’s IVI Screen: Two Different Standards?

You might be hearing about a new technology called MirrorLink that mimics your smartphone’s screen on the larger nav screen in your “connected car”. Or, you might be following the news on Miracast, a more open standard now baked into Android that offers Apple AirPlay-like features to stream smartphone content to devices like connected TVs.

You’d be forgiven if you think the two similarly-named standards are trying to accomplish the same thing. I didn’t understand it either, so I did some digging. Here’s what I found out.

The Smart, Connected Car
When I attended the Paris Auto Show last Fall specifically to investigate in-vehicle infotainment (IVI) trends for the Barr Group under contract to Intel, I got spun up “right quick” on all manner of IVI. From BMW’s iDrive to Chevrolet’s MyLink, the connected car is here. In fact, it’s one of the biggest trends spotted at last week’s 2013 CES in Las Vegas. MirrorLink is being designed into lots of new cars.

BMW’s iDrive IVI uses a native system and doesn’t rely on smartphone mirroring. (Courtesy of BMW.)

The biggest question faced by every auto manufacturer is this: build an in-car native system, or rely on the apps in one’s smartphone? Ford’s industry-breakthrough MyFord Touch with SYNC by Microsoft is native, based upon the Microsoft Auto platform (now called Windows Embedded Automotive 7). Elsewhere, premium brands like BMW, Lexus and Cadillac have designed self-contained systems from the ground up. Some, like BMW, include in-car cellular modems. Others rely on the smartphone only for music and Internet access, but that’s it.

2013 Chevrolet MyLink IVI uses MirrorLink with smartphone apps. (Courtesy of Chevrolet.)

Still others, like Toyota and Chevrolet, use a technology called MirrorLink to “mirror” the smartphone’s screen onto the car’s larger IVI display. For all apps that make sense to be viewed on the IVI, the system will display them — usually identically to what the user sees on the smartphone (subject to safety and distraction caveats).

MirrorLink is now a trademarked standard owned by the Car Connectivity Consortium that’s designed specifically for cars and smartphones. That means the standard worries about driver distractions, apps that make sense for drivers (such as Google Maps) and those that don’t (such as a panoramic camera stitching application). Apps have to be qualified for use with MirrorLink.

As well, MirrorLink replaces the phone’s touch I/O with in-car I/O such as steering wheel controls, console joysticks, or the IVI head unit’s touchscreen or bezel buttons. Equally important, audio input from microphones is routed from the car to the phone, while output uses the car’s speakers. The car’s antennas for radio and GPS are given preference over the phone’s, improving signal reception. The protocols between smartphone and car also take input from the vehicle’s CAN bus, including speed. This means that you can check your email when parked, but not while driving. A great resource for how it works and what the future holds is here.
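
A toy sketch of that speed-gating logic follows; the threshold and app lists are invented, and real MirrorLink certification rules are more involved:

```python
# Toy sketch of MirrorLink-style driver-distraction gating: which mirrored
# apps may be shown at the current CAN-reported speed. Rules invented here.
PARKED_ONLY = {"email", "web-browser", "video"}
ALWAYS_OK   = {"navigation", "music", "hands-free-phone"}

def allowed_apps(speed_kph: float) -> set[str]:
    """Apps the head unit may display, given vehicle speed from the CAN bus."""
    if speed_kph < 1.0:                 # effectively parked
        return ALWAYS_OK | PARKED_ONLY
    return ALWAYS_OK                    # moving: driver-safe apps only

print(allowed_apps(0.0))    # parked: email is fine
print(allowed_apps(50.0))   # driving: navigation and music only
```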

MirrorLink started as a Nokia idea intended for smartphone-to-car connectivity. Now at version 1.1, it’s a client-server architecture in which the IVI head unit is the USB host. It uses industry-standard protocols such as the Internet Protocol (IP), USB, Wi-Fi, Bluetooth (BT HFP for telephony, BT A2DP for media), RTP, and UPnP. Recent additions borrow the Trusted Computing Group’s concepts of device attestation, using protocols with SKSD/PKSD keys for authentication. The actual screen sharing uses the VNC protocol.
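
For the curious, VNC’s underlying RFB protocol opens with a plain-text version handshake. This minimal client sketch (host and port are placeholders) shows just that first exchange:

```python
# Minimal peek at VNC's RFB protocol, which MirrorLink uses for screen
# sharing: the server leads with its version string and the client answers.
# Host/port are placeholders; a real MirrorLink session runs over USB/Wi-Fi.
import socket

with socket.create_connection(("192.168.0.10", 5900), timeout=5) as s:
    server_version = s.recv(12)         # e.g. b"RFB 003.008\n"
    print("server speaks:", server_version.decode().strip())
    s.sendall(b"RFB 003.008\n")         # agree to RFB 3.8
    # ...security negotiation and framebuffer updates would follow here.
```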

MirrorLink and Trusted Computing Group authentication process for trusted content. (Courtesy of Car Connectivity Consortium.)

What MirrorLink doesn’t yet support is video streaming, since drivers watching video is a no-no in cars (tell that to the Japanese drivers I’ve seen with TVs mounted in their cars!).

Android and Miracast
Miracast, on the other hand, is all about streaming. It’s a Wi-Fi Alliance spec, recently demoed at CES 2013, that’s designed to stream video and photos from smartphones, tablets, and future embedded devices. Like Apple’s AirPlay, it moves content from a small screen onto a big TV screen. It’s based upon Wi-Fi’s not-new-but-rarely-used Wi-Fi Direct standard, which establishes peer-to-peer connectivity without a router (Intel’s WiDi 3.5 is Miracast-compatible).

The Wi-Fi Alliance Miracast standard streams video from small to large screens, as shown in this excerpt from a YouTube video. (Courtesy of YouTube and Wi-Fi Alliance.)

Miracast supports 1080p HD video and 5.1 surround sound, and nVidia, TI, Qualcomm, Marvell and others have announced plans to support it in their chips. Built into the spec is the ability to stream DRM- and HDCP-protected content using already-established HDMI- and DisplayPort-style copy-protection schemes. I guess they figure if you’ve got the rights to play it on your phone, you might as well play it on your TV too.

Last Fall, Google updated Android Jelly Bean to 4.2 and included Miracast as part of the update, and I’m thrilled that my Nexus 7 tablet can now, in theory, stream content to my Samsung Smart TV. As Android proliferates throughout the embedded market, I can envision commercial applications where a user might do more than stream a video to another embedded device. Sharing the entire smartphone’s screen can be useful for PowerPoint presentations or demoing just about any Android app in existence. If it’s on the phone’s screen, it can get mirrored via Wi-Fi to another screen.

Will MirrorLink and Miracast Converge?
I doubt the two standards will merge. MirrorLink is exclusively aimed at IVI systems in cars, and the closely curated standard is intended to vet applications to assure safe operation in a vehicle. Miracast is similar in that it mirrors a smartphone’s screen, but it places no limitations on moving between screens, so Miracast is clearly the superset standard, aimed at a broader market.

Ironically, as the Car Connectivity Consortium looks to release MirrorLink Version 2.0, they’re examining Miracast as a way to provide an “alternative video link” for streaming H.264 1080p@30 FPS into the car cabin.

Why? For passenger entertainment. Think about minivans (shudder) and Suburbans loaded with kids.

Tizen OS for Smartphones – Intel’s Biggest Bet Yet

Figure 1: Intel and the Linux Foundation collaborated on Tizen, an open source HTML5-based platform for smartphones, IVI, and other embedded devices.

[Update on 27 February 2013: At the recent 2013 Mobile World Congress in Barcelona, Samsung demoed a development handset running Tizen. CNET editor Luke Westaway posted a video review of the device, which showed snappy performance and Android-like features, though he felt the early version was "a bit rough around the edges". Still, seeing Tizen running on actual consumer hardware gives it cred. A longer review by CNET's Roger Cheng can be found here: http://cnet.co/15R8xs3 ]

[8 Jan 2013 Update: Added "Disclosure" below and fixed some typos.]

Disclosure: As of 8 Jan 2013, I became a paid blogger for Intel’s ‘Roving Reporter’ embedded Intelligent Systems Alliance (edc.intel.com). But my opinion here is my own, and I call it like I see it.

Samsung hedges Apple, Google bets with Intel’s HTML5-based Tizen

Just when you thought the smartphone OS market was down to a choice between iOS and Android, Intel-backed Tizen jumps into the fray (Figure 1). Tizen is Intel’s next kick at the can for mobile, and it joins several OS wannabes: Microsoft’s Windows Phone 8, RIM BlackBerry’s whatever-they’re-going-to-announce on 31 January 2013, and eventually the Ubuntu phone platform.

Figure 2: On 3 January 2013 Ubuntu announced a plan to offer a smartphone OS. Key feature: use the phone as a computing platform and even drive a desktop monitor.

Samsung Prepares to “Date” Other Partners

Samsung Electronics announced on 3 January that it will start selling smartphones sometime this year using Tizen as the OS platform. Samsung’s spokesperson didn’t elaborate on timing or models, but said in an emailed statement: “We plan to release new, competitive Tizen devices…and keep expanding the lineup.”

Tizen is the third incarnation of Intel’s attempts at building an embedded ecosystem, following Moblin and MeeGo. Announced in mid-2011 in collaboration with The Linux Foundation, it has been quietly gestating in the background and is now at Release 2.0. One of the largest supporters of Tizen is Samsung, so the recent announcement is no surprise.

Samsung no doubt seeks a back-up plan now that Google’s Android OS has flown past Apple’s iOS as the predominant operating system for mobile devices and tablets (75 percent share; Figure 3).

Figure 3: Android is now the predominant smartphone OS in 2012, according to IDC. (Source: IDC; http://www.idc.com/getdoc.jsp?containerId=prUS23818212 ).

As Samsung is now the world’s largest smartphone supplier (Figure 4), the company might be following a play from Apple in seeking to control more of its own destiny through Tizen.

Figure 4: IC Insights – and most other analyst firms – rank Samsung as the world’s largest smartphone supplier. This data is from 28 November 2012. (Source: IC Insights; http://www.icinsights.com/news/bulletins/Samsung-And-Apple-Set-To-Dominate-2012-Smartphone-Market/)

And with the Samsung-Apple patent dispute nastiness, along with rumblings over whether Samsung may or may not continue to supply processors for iPhones, Tizen represents one more way for Samsung to control its own destiny separate from Google and Apple.

Intel’s Mobile Imperative Needs HTML5

Intel, on the other hand, desperately needs more wins in the mobile space. Last year I blogged about how the company gained some traction by announcing several Atom (Medfield) SoC-based handset wins, but the company has gone on record stating that its real goal is to be inside mobile devices from Apple, Samsung or both. In fact, it’s a bet-the-farm play for Intel, and it most likely pushed Intel CEO Paul Otellini into his future retirement plans.

The general embedded market is closely following what happens in mobile: adopting low-power ARM SoCs and Atom CPUs, using Wi-Fi and NFC radios for M2M nodes, and deploying Android for both headed and headless systems such as POS terminals and digital signage. If Tizen moves the needle in smartphones for Samsung, chances are it’ll be used by other players. With HTML5, it will be straightforward to port applications and data across hardware platforms — a goal that Intel EVP Renee James touted at 2012’s Intel Developer Forum (Figure 5).

Figure 5: Intel’s Renee James is betting on HTML5 in Tizen to kickstart transparent computing. (Image taken by author at IDF 2012.)

Tizen is based upon HTML5, with plans to achieve the old Java “write once, run anywhere” promise. For Intel, the Tizen SDK and API mean that applications written for the most popular mobile processors — such as Qualcomm’s Snapdragon or nVidia’s Tegra 3 — could easily run on Intel processors. In fact, at IDF Intel presented a demo of a user’s application running first on a home PC, then a smartphone, then a connected in-vehicle infotainment (IVI) system, and finally on an office platform. Intel’s Renee James explained that it matters not what underlying hardware runs the application: HTML5 allows seamless migration across any and all devices.

Tizen Stakes for Intel and Samsung

This pretty much sums up the Tizen vision, both for Intel and for Samsung. Tizen means freedom, as it abstracts the hardware from any application.

If successful, Tizen opens up processor sockets to Intel as mobile vendors swap CPUs. Tizen also allows Samsung to choose any processor, while relying on open source and open standards-based code supported by The Linux Foundation.