The Secret World of USB Charging

There’s a whole set of USB charging specs you’ve probably never heard of, driven by big-battery smartphones, tablets and 2-in-1s that demand shorter charge times.

Editor’s note: this particular blog posting is sponsored by Pericom Semiconductor.  

Now that you can buy $5 USB chargers everywhere (mains- and cigarette-lighter-powered), it’s tempting to think of them like LED flashlights: cheap commodity throw-aways. And you would’ve been right…until now.

My recent purchase of an Asus T100 Transformer Windows 8.1/Intel Atom 2-in-1 tablet hybrid forced me to dig into USB charging (Figure).

My own Asus T100 Transformer Book has a “unique” USB charging profile. (Courtesy: Asus.)

This device is fabulous, with a convenient micro USB charging port that supports OTG. No bulky wall wart to lug around. But it refuses to charge normally from any charger-plus-cable combination except the (too short) one that came with it.

My plethora of USB chargers, adapters, powered hubs and more will only trickle charge the T100 and take tens of hours. And it’s not just the device’s 2.0A current requirement, either. There’s something more going on.

Just Say “Charge it!”

The USB Implementers Forum (USB-IF) has a whole power delivery strategy with goals as shown below. Simply stated, USB is now flexible enough to provide the right amount of power to either end of the USB cable.

The USB Power Delivery goals solidify USB as the charger of choice for digital devices. (Courtesy: www.usb.org)

There’s even a USB Battery Charging (UBC) compliance specification called “BC1.2” to make sure devices follow the rules. Some of the new power profiles are shown below:

Table 1: USB Implementers Forum (USB-IF) Battery Charging specifications (from their BC1.2 compliance plan document, October 2011).

The reason for UBC is that newer devices like Apple’s iPad, Samsung’s Galaxy S5 and Galaxy Tab devices–and quite possibly my Asus T100 2-in-1–consume more current and can sometimes even source power back to the host. UBC flexibly delivers the right amount of power and can avoid charger waste.
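
How does a device tell what kind of port it’s plugged into? Electrically, over the D+/D- data lines. Below is a minimal, simulated sketch of BC1.2-style primary/secondary detection as a portable device might run it; the analog helpers and threshold constant are illustrative stand-ins, not any vendor’s actual API.

```c
/* bc12_detect.c — a minimal, simulated sketch of BC1.2 primary/secondary
 * detection as performed by a portable device. The "analog front end"
 * below is a hypothetical simulation, not real charger-detect hardware. */
#include <stdbool.h>
#include <stdio.h>

typedef enum { PORT_SDP, PORT_CDP, PORT_DCP } port_type;

enum { VDAT_REF_MV = 325 };        /* nominal BC1.2 comparator threshold */

/* --- simulated analog front end (assumption: a DCP is attached) --- */
static bool vdp_src_on, vdm_src_on;
static const port_type attached = PORT_DCP;   /* try PORT_SDP/CDP too */

static int dminus_mv(void) {       /* DCP/CDP pull D- up when D+ driven */
    return (vdp_src_on && attached != PORT_SDP) ? 600 : 0;
}
static int dplus_mv(void) {        /* only a DCP (D+/D- shorted) reflects */
    return (vdm_src_on && attached == PORT_DCP) ? 600 : 0;
}

/* --- BC1.2-style detection --- */
static port_type detect_port(void)
{
    vdp_src_on = true;                        /* primary: 0.6 V on D+ */
    bool charging = dminus_mv() > VDAT_REF_MV;
    vdp_src_on = false;
    if (!charging) return PORT_SDP;           /* plain host: 500 mA max */

    vdm_src_on = true;                        /* secondary: 0.6 V on D- */
    bool shorted = dplus_mv() > VDAT_REF_MV;
    vdm_src_on = false;
    return shorted ? PORT_DCP : PORT_CDP;
}

int main(void) {
    static const char *names[] = { "SDP", "CDP", "DCP" };
    printf("detected: %s\n", names[detect_port()]);
    return 0;
}
```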

Communication protocols between the battery’s MCU and the charger’s MCU determine how to properly charge a 3,000mAh to 10,000mAh battery. Battery chemistry matters, too, as does watching out for heat and thermal runaway; some USB charger ICs take these factors into account.
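
For the dominant Li-ion chemistry, “properly” usually means a constant-current/constant-voltage (CC/CV) profile with a thermal guard. Here’s a conceptual sketch using typical single-cell numbers; it isn’t modeled on any particular charger IC.

```c
/* ccv_charge.c — a conceptual constant-current/constant-voltage (CC/CV)
 * Li-ion charge loop with a thermal-runaway guard. Values are typical
 * single-cell numbers, not taken from any specific charger datasheet. */
#include <stdio.h>

enum { CELL_MAH = 3000 };                      /* e.g., a 3,000 mAh phone */
static const double V_CV    = 4.20;            /* CV setpoint, volts */
static const double I_CC    = 0.5  * CELL_MAH; /* 0.5C fast-charge, mA */
static const double I_TERM  = 0.05 * CELL_MAH; /* end-of-charge taper, mA */
static const double T_MAX_C = 45.0;            /* thermal cutoff, Celsius */

/* one control step: returns the current to command, in mA */
static double charge_step(double v_cell, double i_now, double temp_c)
{
    if (temp_c >= T_MAX_C) return 0.0;         /* guard against runaway */
    if (v_cell < V_CV)     return I_CC;        /* CC phase: hold current */
    if (i_now > I_TERM)    return i_now * 0.9; /* CV phase: let it taper */
    return 0.0;                                /* done: terminate charge */
}

int main(void)
{
    /* crude walk through the phases */
    printf("CC:   %.0f mA\n", charge_step(3.70, I_CC,   30.0));
    printf("CV:   %.0f mA\n", charge_step(4.20, 800.0,  35.0));
    printf("hot:  %.0f mA\n", charge_step(4.00, I_CC,   50.0));
    printf("done: %.0f mA\n", charge_step(4.20, I_TERM, 30.0));
    return 0;
}
```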

Apple, ever the trend-setter (and master of bespoke specifications), created its own proprietary fast charging profiles called Apple 1A, 2A and now 2.4A. The Chinese telecom industry has created its own, YD/T 1591-2009. Other suppliers of high-volume devices have created, or are working on, bespoke charging profiles.
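
Apple doesn’t publish its scheme, but it’s widely reported to work by presenting fixed voltages on D+ and D- through resistor dividers in the charger. The sketch below classifies those commonly reported signatures; treat the exact voltages and current ratings as community lore, not official spec.

```c
/* apple_profile.c — sketch of how a device might recognize Apple's
 * proprietary charger signatures. Apple doesn't publish these; the
 * divider voltages below are the commonly reported community values. */
#include <math.h>
#include <stdio.h>

static int near(double v, double target) { return fabs(v - target) < 0.2; }

/* returns advertised current in mA, or 0 if not an Apple signature */
static int apple_profile_ma(double dplus_v, double dminus_v)
{
    if (near(dplus_v, 2.0) && near(dminus_v, 2.0)) return  500;
    if (near(dplus_v, 2.0) && near(dminus_v, 2.7)) return 1000; /* "1A"   */
    if (near(dplus_v, 2.7) && near(dminus_v, 2.0)) return 2100; /* "2A"   */
    if (near(dplus_v, 2.7) && near(dminus_v, 2.7)) return 2400; /* "2.4A" */
    return 0;
}

int main(void)
{
    printf("iPad-class charger: %d mA\n", apple_profile_ma(2.7, 2.0));
    printf("unknown charger:    %d mA\n", apple_profile_ma(0.6, 0.0));
    return 0;
}
```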

Fast, proper-rate charging from Apple, Samsung and others is essential as harried consumers increasingly rely on mobile devices more than laptops. Refer to my complaint above RE: my Asus T100.

Who has time to wait overnight?!

USB Devices Available

Pericom Semiconductor, who is sponsoring this particular blog posting, has been an innovator in USB charging devices since 2007. With a growing compatibility-assurance list of charge-supported consumer products, the company has a broad portfolio of USB ICs.

Take the automotive-grade PI5USB8000Q, for instance. Designed for the digital car, this fast charger supports all of the USB-IF BC modes per BC1.2, Apple 1A and 2A, and the Chinese telecom standard. The IC powers down when there’s no load to save the car’s own battery, and can automatically detect the communication language to enable the proper charging profile (Figure). Pretty cool, eh?

The USB-IF’s CDP and SDP charging profiles require communication between the USB charger and the portable device (PD) being charged. Refer to Table 1 for details. (Courtesy: Pericom Semiconductor.)
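
Conceptually, that auto-detection boils down to walking through the known handshakes until the attached device accepts one and starts drawing current. Below is a purely illustrative sketch of such a loop; the scheme list and try_*() helpers are hypothetical, not Pericom’s implementation.

```c
/* profile_walk.c — conceptual sketch of a multi-standard charger port
 * controller walking through charging schemes until the attached device
 * responds. The try_*() handshakes are simulated placeholders. */
#include <stdbool.h>
#include <stdio.h>

typedef bool (*handshake_fn)(void);

/* hypothetical per-scheme handshakes: each returns true if the attached
 * device accepted that signature and began drawing charge current */
static bool try_bc12_dcp(void) { return false; }
static bool try_apple_2a(void) { return true;  }   /* simulate an iPad */
static bool try_yd_t1591(void) { return false; }

int main(void)
{
    struct { const char *name; handshake_fn fn; } schemes[] = {
        { "BC1.2 DCP", try_bc12_dcp },
        { "Apple 2A",  try_apple_2a },
        { "YD/T 1591", try_yd_t1591 },
    };
    for (size_t i = 0; i < sizeof schemes / sizeof *schemes; i++) {
        if (schemes[i].fn()) {
            printf("charging with %s profile\n", schemes[i].name);
            return 0;
        }
    }
    puts("no match: fall back to 500 mA SDP");
    return 0;
}
```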

As For My Asus 2-in-1?

Sadly, I can’t figure out how the T100 “talks” with its charger, or if there’s something special about its micro USB cable. So I’m stuck.

But if you’re designing a USB charger, a USB device, or just powering one, Pericom’s got you covered. That’s a secret to get all charged up about.

Part 1: Three Embedded Companies On The Move

Here are three companies in the embedded market that are changing or enhancing their strategies. What you need to know…and why they’re doing it.

Change is never easy but in tech, it’s essential. It’s interesting that within the last week, three recognizable companies have announced significant strategy changes or enhancements.  Here’s a quickie snapshot with links to their PRs.

COMPANY 1: Curtiss-Wright Defense Systems Re-Orgs


[Note 1: at time of post, there was no PR posted on the company’s websites. We’ll provide the link when available. Note 2: the PR is here. Note 3: updated to read “Part 1”.]

One of the “big three” rugged board and system suppliers (GE Intelligent Platforms and Mercury Computer are the others), Curtiss-Wright Defense Systems has been beefing up its systems expertise and capabilities for years. Defense Systems–only recently renamed “Defense Solutions“–was created out of a score of mergers including Dy4 Systems, VISTA Controls, Lau Defense Systems, and many others. The company most recently acquired rugged systems supplier Parvus from Eurotech, solidifying CW as a growing powerhouse.

The mothership defense company Curtiss-Wright Corporation originally assembled Curtiss-Wright Defense Solutions as a way to provide rugged systems to its own growing businesses, although Defense Solutions has long successfully sold VME, VPX, and CompactPCI boards and systems into the broader COTS defense market.

As of 21 May, Defense Solutions “has been expanded to include the Company’s Avionics & Electronics Group and Peerless Instrument and INDAL Technologies businesses.” According to the press release, this makes Curtiss-Wright a market-aligned organization that includes Commercial/Industrial, Defense and Energy segments–not unlike the way parts of General Electric (GE) are organized.


Lynn Bamford, herself part of one of CW’s earlier acquisitions, heads the new organization as Sr. VP and GM. [Update: Lynn came along with the Ixthos acquisition.] By adding three additional businesses, Defense Solutions now includes shipboard pumping systems (Peerless Instrument), airborne flight surface actuators and landing structures (INDAL Technologies), plus sensor consolidation systems (Avionics & Electronics). These are all in addition to the rugged boards, systems and software already provided to sea-, air- and land-based platforms.

My take on this remains the same as I’ve written for many years: Curtiss-Wright Corporation–already a very successful Tier 3 defense supplier–is positioning to grow into a Tier 2 supplier. There, it will find company with the likes of L3, BAE, CSC, and many others. The only questions are: 1) when; and 2) when will Curtiss-Wright’s customers become concerned that CW might become an actual competitor? In the rugged embedded industry, “do not compete with thy customer” has been a mantra since COTS became S.O.P.

Disclosure: I previously worked at Dy4 Systems and VISTA Controls, and much later advised Curtiss-Wright on the acquisition of those companies.


Can You See the Future? The Embedded Vision Summit Helps Designers—and their systems—See it Clearly

The one-day Embedded Vision Summit shows developers how to make their systems smarter with cameras, DSP and other sensors.



Jeff Bier: head of the Embedded Vision Alliance’s Embedded Vision Summit.

Update 5/9/14: typo, caption and URL corrections.

BDTI’s Jeff Bier is known in the industry as a rock-solid guy, an expert on all things DSP, and the man behind the company that publishes processor benchmarks and analyses that are on par with IEEE peer-reviewed content. And Jeff doesn’t jump up and down with excitement much. At least, I’ve never seen it. Look at his photo and you’ll see what I mean.

But he’s virtually hopping from foot to foot with excitement about the 29 May 2014 one-day Embedded Vision Summit to be held at the Santa Clara Convention Center. This fourth annual conference is Jeff’s brainchild because he sees “embedded vision as the next most important use for DSP [devices], algorithms, and their associated sensors.”

“More significant than software defined radio?” I asked. “Yes,” he said.

“Than cellular baseband processing?” “Yep.”

“Than the image processing done on the world’s billions of smartphones?”
“That,” he said, “is a perfect example of embedded vision.”

Definition: Helping machines see

Most people yawn or politely make excuses to water the cactus at the mention of “computer vision”. To me, it’s a camera-based system doing high-speed QA on an assembly line. Snore.

But embedded vision, says Jeff, is the practical use of computer vision in applications ranging from smartphone photography and augmented reality to Microsoft Kinect-/Minority Report-like 3-space gestures, facial detection, video games, and so on.

Embedded vision is not your father’s computer vision; rather, it’s a deployable software-defined sensor system that is:

  • inexpensive,
  • ubiquitous,
  • practical, and
  • able to extract new meaning from (primarily image) sensors (see the sketch just below).
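
To make that last point concrete, here’s a tiny, self-contained taste of the kind of primitive embedded vision builds on: a Sobel edge detector run over a synthetic 8x8 “camera frame.” Real pipelines (lane departure, gestures, face detection) layer far more sophisticated algorithms on top of basics like this.

```c
/* sobel.c — a tiny taste of embedded vision: a Sobel edge detector,
 * one of the low-level primitives that lane-departure, gesture and
 * face detection pipelines build on. The input is a synthetic 8x8
 * grayscale frame standing in for a camera sensor. */
#include <stdio.h>
#include <stdlib.h>

#define W 8
#define H 8

static const int gx[3][3] = { {-1,0,1}, {-2,0,2}, {-1,0,1} };
static const int gy[3][3] = { {-1,-2,-1}, {0,0,0}, {1,2,1} };

int main(void)
{
    unsigned char img[H][W], edge[H][W] = {{0}};

    /* synthetic frame: dark left half, bright right half */
    for (int y = 0; y < H; y++)
        for (int x = 0; x < W; x++)
            img[y][x] = (x < W / 2) ? 20 : 220;

    /* convolve interior pixels with the two Sobel kernels */
    for (int y = 1; y < H - 1; y++) {
        for (int x = 1; x < W - 1; x++) {
            int sx = 0, sy = 0;
            for (int j = -1; j <= 1; j++)
                for (int i = -1; i <= 1; i++) {
                    sx += gx[j + 1][i + 1] * img[y + j][x + i];
                    sy += gy[j + 1][i + 1] * img[y + j][x + i];
                }
            int mag = abs(sx) + abs(sy);            /* cheap L1 magnitude */
            edge[y][x] = mag > 255 ? 255 : (unsigned char)mag;
        }
    }

    /* the vertical boundary pops out as a column of strong edges */
    for (int y = 0; y < H; y++) {
        for (int x = 0; x < W; x++)
            putchar(edge[y][x] > 128 ? '#' : '.');
        putchar('\n');
    }
    return 0;
}
```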

With low-cost embedded vision on board, machines become dramatically smarter about the world around them. The up-and-coming Embedded Vision Summit won’t have presentations by assembly line companies like Campbell’s Soup or Procter and Gamble.

But there might be a presentation from a factory company like Ford Motor, because automobiles are one of the “killer apps” for embedded vision. Instead of Ford, Google will be there discussing their self-driving car.

Google’s self-driving Lexus. I guess the low-end Google Prius takes the Street View images while the luxurious Lexus gets the swanky job of shuttling around wide-eyed passengers. (Courtesy: Google.)

Interested yet in embedded vision?

Automotive embedded vision

Google’s self-driving car is a perfect example of embedded vision, combining cameras, radar and ultrasonic sensors with DSP algorithms and processors. The Embedded Vision Summit will include Google’s Nathaniel Fairfield speaking about “Self-Driving Cars”. (See full agenda snapshot down at the bottom of this post.)

Many auto manufacturers are already fusing cameras and other sensors into Advanced Driver Assistance Systems (ADAS) for lane departure warning, anti-collision emergency braking, blind spot detection, and the most basic of all: the “steerable” back-up camera with overlay.

ADAS systems surround next-gen cars. Embedded vision may use cameras along with, or in lieu of, these systems for lower cost implementations. (Courtesy: Analog Devices. As reported in “Automobile sensors may usher in self-driving cars,” EDN: http://edn.com/design/automotive/4368069/Automobile-sensors-may-usher-in-self-driving-cars)

Subaru’s EyeSight system uses cameras mounted alongside the rearview mirror. While Mercedes uses a combo camera/radar in its ADAS systems, cameras are by far the cheaper alternative—embedded vision provides added capability with these low cost sensors. Analyst firm Strategy Analytics estimates “100 million cameras will be fitted to light vehicles in 2020” (Roger C. Lanctot; GTC: Merging ADAS and Infotainment for Cloud Enhanced Safety).

Subaru’s EyeSight system uses twin forward-facing cameras for lane departure and other adaptive safety features. (Courtesy: Subaru of America.)

Embedded vision…coming to your next design project

Merging the vision sensor (typically one or more cameras) with DSP algorithms and processors creates a software-defined sensor that makes the end system dramatically smarter. Beyond automobiles, embedded vision is already installed in smartphones.

HDR (high dynamic range), panoramic stitching, facial recognition (in photos taken, or in Android to unlock your device), red-eye removal, and background/foreground blurring are some of the myriad examples of in-production embedded vision. And they’re right in your pocket or purse.

According to the Embedded Vision Alliance (a key sponsor of the Summit), citing the market research firm Markets&Markets, the augmented reality market could top $1B by 2018. Augmented reality applications provide overlay information on top of a live or stored image. At the grocery store or in your kitchen pantry, Amazon’s Flow app (available in the iTunes store) lets a user aim their smartphone camera at a product and order it through Amazon. Ikea has a related augmented reality application that relies on embedded vision to superimpose Ikea furniture and products in your home environment. Now you can decide if blonde is really your color or not.

Amazon’s Flow app lets a user aim their smartphone at a product and order directly from Amazon. (Courtesy: Amazon.com.)

Get healthy; live better

Point-of-sale terminals or vending machines might use facial recognition to authenticate a user or go beyond a bar or QR code when searching for information about a product held in front of a sensor.  Even better, gesture recognition might provide for a better UI input—or perhaps a more sanitary one at markets that seem so obsessed with germicide wipes for shopping carts.

In medical situations, embedded vision could be of great benefit. Most designers could envision (no pun intended) a doctor pulling up a patient’s “chart” in their Google Glass display (geek factor notwithstanding). Google Glass, says Embedded Vision Summit’s Jeff Bier, is merely a platform and not, at present, a complete embedded vision system.

But the company OrCam is going several steps further than Google Glass by offering a device that helps vision-impaired people “read”. A tiny eyeglasses-mounted camera performs text and object recognition and provides audio information to the wearer. Product labels can be “read”, along with newspaper text, bus numbers, and the state of street crossing signals. And there’s more capability on the way as algorithms improve and GPGPU processing power grows from companies like Nvidia.

OrCam’s augmented reality device helps vision-impaired people to “see” and “read” with audio cues. (Courtesy: OrCam.com.)

Embedded Vision Summit Agenda

The one-day Summit agenda focuses on evangelizing and educating hardware, software and system designers. At its core are compelling briefings that mix “how to” with “how your future system could do this!”

By the way, I neglected to mention that there’s the obligatory exhibit hall showcase experience, too. Unlike other user events, this one promises to be pretty cool, since the exhibitors are a “who’s who” of signal processing and high-performance hardware/software companies.

Attendees should leave with an understanding of embedded vision…plus ideas for adding vision sensors to their own embedded designs to help their machines see the future.

The full agenda schedule grid is shown below.

2014 Embedded Vision Summit agenda, held 29 May at the Santa Clara Convention Center.


Baby, You Can Drive (the PCIe clock in) My Car

Among all the fancy ARM-based SoCs and peripheral ICs in an IVI system, the humble PCI Express clock generator is what really drives the connected car.

This particular blog posting is sponsored by Pericom Semiconductor.

With apologies to the Beatles, car tech is all the rage lately now that Apple’s CarPlay legitimizes running Apps in new cars. Although premium marques like BMW, Jaguar, Mercedes and others have been rolling out App-based in-vehicle infotainment (IVI) systems for a few years, Apple’s recent announcement has awakened the masses.

Everyone wants their smartphone’s UI on the car’s center console, whether they’re an Android or iOS (Apple) fanboy or girl. (Sorry RIM and Windows Phone; your market share is just too small.)

Figure 1: Each PCI Express peripheral in the digital car’s in-vehicle infotainment (IVI) system needs a PCIe clock generator. (Courtesy: Pericom Semiconductor.)

But among the ARM-based media processors, Ethernet AVB networks, and Xilinx Zynq-based ADAS (advanced driver assistance system) safety features, it’s the humble PCI Express clock generator that really drives the car’s IVI systems (Figure 1).

All those LCD in-car displays—two or three in the dashboard, one in the rear view mirror, and several for rear seat passengers (think: minivan)—need high-speed I/O and multiple processor peripherals. PCI Express (PCIe) provides the needed 5 Gbps bandwidth to connect processors to peripherals, and each PCIe node needs its own (typically) 100 MHz clock source.  Without this reference clock, no digital bit twiddling takes place, and the car’s IVI system remains eerily silent.
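
For a sense of scale, here’s the back-of-the-envelope math (my arithmetic, not Pericom’s): Gen 2’s 5 GT/s line rate loses 20% to 8b/10b encoding, and a single uncompressed 720p60 display stream eats about a third of what’s left. The display figure ignores blanking and protocol overhead.

```c
/* pcie_budget.c — back-of-the-envelope math behind the "5 Gbps" figure:
 * PCIe Gen 2 signals at 5 GT/s per lane, but 8b/10b line coding spends
 * 2 of every 10 bits on encoding, leaving 4 Gbps of payload bandwidth. */
#include <stdio.h>

int main(void)
{
    const double gen2_gtps    = 5.0;                   /* raw line rate   */
    const double payload_gbps = gen2_gtps * 8.0 / 10.0;/* after 8b/10b    */

    /* rough IVI sanity check: a 1280x720, 24-bit, 60 Hz display stream */
    const double display_gbps = 1280.0 * 720.0 * 24.0 * 60.0 / 1e9;

    printf("Gen 2 payload per lane: %.1f Gbps\n", payload_gbps);
    printf("720p60 display stream:  %.2f Gbps\n", display_gbps);
    printf("streams per lane (raw): %.1f\n", payload_gbps / display_gbps);
    return 0;
}
```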

Although considered “old hat”, “boring”, or maybe even “not elegant”, the basic PCIe clock generator has sophistication that matters to auto OEMs when they’re building millions of units. I’d argue these parts are elegant, with thoughtful touches like a single low-cost crystal input, up to four PCIe (HCSL) clock outputs to drive multiple ICs, and hassle-free long-lifecycle availability.

Plus, in the harsh and noisy—electrically speaking—environment that is the modern automobile, AEC-Q100 automotive qualification and low clock jitter both matter greatly to IVI designers deploying PCI Express.

Figure 2: PCI Express clock generators, like the ones shown here from Pericom, provide low jitter PCIe clocks to on-board processors and peripherals.

For example, one of Pericom’s latest automotive grade PCIe clock generators, the PI6C557-05QLE, requires only an inexpensive 25 MHz crystal for a reference clock to create four PCIe Gen 2 or Gen 3 clocks. This device meets the PCI-SIG’s jitter spec of 3.1ps (<1ps RMS) and betters it considerably with a 2.2ps (typ.) phase jitter. As shown above, this clock generator will drive the main processor, a GPS or wireless RF peripheral, an on-board SSD, and a USB 3.0 chipset. Pretty much all the things a designer expects in tomorrow’s embedded IVI.
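
A quick sanity check on those numbers (again, my own arithmetic, not from the datasheet): the typical jitter leaves roughly 30% margin to the quoted Gen 2 limit, and amounts to about 1% of a 5 GT/s unit interval.

```c
/* jitter_margin.c — quick arithmetic on the clock-generator numbers
 * quoted above: typical phase jitter vs. the PCI-SIG Gen 2 limit, and
 * how small that is against a 5 GT/s unit interval (UI = 200 ps). */
#include <stdio.h>

int main(void)
{
    const double limit_ps = 3.1;          /* PCI-SIG Gen 2 refclk limit */
    const double typ_ps   = 2.2;          /* PI6C557-05QLE typical      */
    const double ui_ps    = 1e12 / 5e9;   /* Gen 2 unit interval, ps    */

    printf("margin to spec: %.0f%%\n", 100.0 * (1.0 - typ_ps / limit_ps));
    printf("jitter as %% of one UI: %.1f%%\n", 100.0 * typ_ps / ui_ps);
    return 0;
}
```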

So what’s driving the digital car? I argue it’s a PCIe clock generator.