Quiz question: I’m an embedded system, but I’m not a smartphone. What am I?

In the embedded market, there are smartphones, automotive, consumer…and everything else. I’ve figured out why AMD’s G-Series SoCs fit perfectly into the “everything else”.

Since late 2013 AMD has been talking about their G-Series of Accelerated Processing Unit (APU) x86 devices that mix an Intel-compatible CPU with a discrete-class GPU and a whole pile of peripherals like USB, serial, VGA/DVI/HDMI and even ECC memory. The devices sounded pretty nifty—in either SoC flavor (“Steppe Eagle”) or without the GPU (“Crowned Eagle”)—but it was a head-scratcher where they would fit. After all, we’ve been conditioned by the smartphone market to think that any processor “SoC” that didn’t contain an ARM core wasn’t an SoC.

AMD’s Stephen Turnbull, Director of Marketing, Thin Client markets.

Yes, ARM dominates the smartphone market; no surprise there.

But there are plenty of other professional embedded markets that need CPU/GPU/peripherals where the value proposition is “Performance per dollar per Watt,” says AMD’s Stephen Turnbull, Director of Marketing, Thin Clients. In fact, AMD isn’t even targeting the smartphone market, according to General Manager Scott Aylor in his many presentations to analysts and the financial community.

AMD instead targets systems that need “visual compute”: any business-class embedded system that mixes computation with single- or multi-display capability at a “value price”. What this really means is x86-class processing—and all the goodness associated with the Intel ecosystem—plus one or more LCDs. Even better if those LCDs are high-def and need 3D graphics or other fancy rendering, and if the system runs industry-standard software such as OpenCL, OpenGL, or DirectX. AMD G-Series SoCs run from 6W up to 25W; the low end of this range is considered very power thrifty.

What AMD’s G-Series does best is cram an entire desktop motherboard and its peripheral I/O, plus a graphics card, onto a single 28nm SoC. Who needs this? Digital signs—where up to four LCDs make up the whole image—thin clients, casino gaming, avionics displays, point-of-sale terminals, network-attached storage, security appliances, and oh so much more.

G-Series SoC on the top with peripheral IC for I/O on the bottom.

According to AMD’s Turnbull, the market for thin client computers is growing at 6 to 8 percent CAGR (per IDC), and “AMD commands over 50 percent share of market in thin clients.” Recent design wins with Samsung, HP and Fujitsu validate that using a G-Series SoC in the local box provides more-than-ample horsepower for data movement, encryption/decryption of central server data, and even local on-the-fly video encode/decode for Skype or multimedia streaming.

Typical use cases include government offices where all data is server-based, bank branch offices, and “even classroom learning environments, where learning labs standardize content, monitor students and centralize control of the STEM experience,” says AMD’s Turnbull.

Samsung LFDs (large format displays) use AMD R-Series APUs for flexible display features, like sending content to multiple displays via a network. (Courtesy: Samsung.)

But what about other x86 processors in these spaces? I’m thinking of various SKUs from Intel, such as the recent Celeron and Pentium offerings (legacy brand names now based on modern Ivy Bridge and Haswell architectures) and various Atom flavors in both dual- and quad-core colors. According to AMD’s published literature, G-Series SoCs outperform dual-core Atoms by 2x (multi-display) or 3x (overall performance) running industry-standard benchmarks for standard and graphics computation.

And then there’s that on-board GPU. If AMD’s Jaguar-based CPU core isn’t enough muscle, the system can load-balance (in performance and power) to move algorithm-heavy loads to the GPU for General Purpose GPU (GPGPU) number crunching. This is the basis for AMD’s efforts to bring the Heterogeneous System Architecture (HSA) spec to the world. Even companies like TI and ARM have jumped onto this one for their own heterogeneous processors.
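For the curious, here’s roughly what that CPU-to-GPU offload looks like from software. This is a minimal OpenCL sketch in C, assuming an OpenCL driver is installed; the kernel, buffer, and scaling factor are all illustrative, not AMD’s code, and a full HSA design would go further (shared virtual memory, no explicit copies). Error checking is omitted for brevity.

```c
/* Minimal OpenCL sketch: run a trivial data-parallel kernel on the GPU.
 * Everything here (kernel name, data, scale factor) is illustrative. */
#include <stdio.h>
#include <CL/cl.h>

static const char *src =
    "__kernel void scale(__global float *buf, float k) {\n"
    "    size_t i = get_global_id(0);\n"
    "    buf[i] = buf[i] * k;\n"
    "}\n";

int main(void)
{
    enum { N = 4096 };
    float data[N];
    for (int i = 0; i < N; i++) data[i] = (float)i;

    cl_platform_id plat;
    cl_device_id dev;
    clGetPlatformIDs(1, &plat, NULL);
    /* Ask for a GPU device; fall back to the CPU if none is exposed. */
    if (clGetDeviceIDs(plat, CL_DEVICE_TYPE_GPU, 1, &dev, NULL) != CL_SUCCESS)
        clGetDeviceIDs(plat, CL_DEVICE_TYPE_CPU, 1, &dev, NULL);

    cl_context ctx = clCreateContext(NULL, 1, &dev, NULL, NULL, NULL);
    cl_command_queue q = clCreateCommandQueue(ctx, dev, 0, NULL);

    cl_program prog = clCreateProgramWithSource(ctx, 1, &src, NULL, NULL);
    clBuildProgram(prog, 1, &dev, NULL, NULL, NULL);
    cl_kernel k = clCreateKernel(prog, "scale", NULL);

    /* Copy the host array to a device buffer and launch one work-item
     * per element: the "algorithm-heavy load" moved to the GPU. */
    cl_mem buf = clCreateBuffer(ctx, CL_MEM_READ_WRITE | CL_MEM_COPY_HOST_PTR,
                                sizeof data, data, NULL);
    float factor = 2.0f;
    clSetKernelArg(k, 0, sizeof buf, &buf);
    clSetKernelArg(k, 1, sizeof factor, &factor);

    size_t global = N;
    clEnqueueNDRangeKernel(q, k, 1, NULL, &global, NULL, 0, NULL, NULL);
    clEnqueueReadBuffer(q, buf, CL_TRUE, 0, sizeof data, data, 0, NULL, NULL);

    printf("data[10] = %f\n", data[10]); /* expect 20.0 */

    clReleaseMemObject(buf);
    clReleaseKernel(k);
    clReleaseProgram(prog);
    clReleaseCommandQueue(q);
    clReleaseContext(ctx);
    return 0;
}
```

The explicit buffer copy above is exactly the overhead HSA aims to remove: with unified memory, CPU and GPU would simply share pointers.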

G-Series: more software than hardware.

In a nutshell, after two years of reading about (and writing about) AMD’s G-Series SoCs, I’m beginning to “get religion” that the market isn’t all about smartphone processors. Countless business-class embedded systems need Intel-compatible processing, multiple high-res displays, lots of I/O, myriad industry-standard software specs…and all for a price/Watt that doesn’t break the bank.

So the answer to the question posed in the title above is simply this: I’m a visually-oriented embedded system. And I’m everywhere.

This blog was sponsored by AMD.

PCI Express Switch: the “Power Strip” of IC Design

Need more PCIe channels in your next board design? Add a PCIe switch for more fanout.

Editor’s notes:

1. Although Pericom Semiconductor sponsors this particular blog post, your author learned that he actually knows very little about the complexities of PCIe.

2. Blog updated 3-27-14 to correct the link to Pericom P/N PI7C9X2G303EL.

Perhaps you’re like me: power cords everywhere. Anyone who has more than one mobile doodad—from smartphone to iPad to Kindle and beyond—is familiar with the ever-present power strip.

An actual power strip from under my desk. Scary…

The power strip is a modern version of the age-old extension cord: it expands one wall socket into three, five or more. Assuming there’s enough juice (AC amperage) to power it all, the power strip meets our growing hunger for more consumer devices (or rather: their chargers).

And so it is with IC design. PCI Express Gen 2 has become the most common interoperable, on-board way to add peripherals such as SATA ports, CODECs, GPUs, WiFi chipsets, USB hubs and even legacy peripherals like UARTs. The wall socket analogy applies here too: most new CPUs, SoCs, MCUs or system controllers lack sufficient PCI Express (PCIe) ports for all the peripheral devices designers need. Plus, as IC geometries shrink, system controllers also have lower drive capability per PCIe port and signals degrade rather quickly.

The solution to these host controller problems is a PCIe switch to increase fanout by adding two, three, or even eight additional PCIe ports with ample per-lane current sourcing capability.

Any Port in a Storm?

While our computers and laptops strangle everything in sight with USB cables, inside those same embedded boxes it’s PCIe as the routing mechanism of choice. Just about any standalone peripheral a system designer could want is available with a PCIe interface. Even esoteric peripherals—such as 4K complex FFT, range-finding, or OFDM algorithm IP blocks—usually come with a PCIe 2.0 interface.

Too bad then that modern device/host controllers are painfully short on PCIe ports. I did a little Googling and found that if you choose an Intel or AMD CPU, you’re in good shape. A 4th Gen Intel Core i7 with the Intel 8 Series chipset has six PCIe 2.0 ports spread across 12 lanes. Wow. Similarly, an AMD A10 APU has four PCIe lanes (configurable as one x4 port or four x1 ports). But these are desktop/laptop processors, and they’re not so common in embedded designs.

AMD’s new G-Series SoC for embedded is an APU with a boatload of peripherals, and it’s got only one PCIe Gen 2 port (x4). As for Intel’s new Bay Trail-based Atom processors running the latest red-hot laptop/tablet 2-in-1s: I couldn’t find an external PCIe port on the block diagram.

Similarly…Qualcomm Snapdragon 800? Nvidia Tegra 4, or even the new K1? Datasheets on these devices are closely held for customers only, but I found developer references that point to at best one PCIe port. ARM-based Freescale processors such as the i.MX6, popular in set-top boxes from Comcast and others, have one lone PCIe 2.0 port (Figure 1).

What to do if a designer wants to add more PCIe-based stuff?

Figure 1: Freescale i.MX ARM-based CPU is loaded with peripheral I/O, yet has only one PCIe 2.0 port. (Courtesy: Freescale Semiconductor.)

‘Mo Fanout

A PCIe switch solves the one-to-many dilemma. Add in a redriver at the Tx and Rx end, and signal integrity problems over long traces and connectors all but disappear. Switches from companies like Pericom come in many flavors, from simple lane switches that are essentially PCIe muxes, to packet switches with intelligent routing functions.

One simple example of a Pericom PCIe switch is the PI7C9X2G303EL. This PCIe 2.0 three-port/three-lane switch has one x1 upstream port and two x1 downstream ports, so it would add two ports to the i.MX6 shown in Figure 1. This particular device, aimed at those low-power consumer doodads I mentioned earlier, boasts some advanced power-saving modes and consumes under 0.7W.
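To see the result of all that fanout after the host enumerates the bus, you can walk the PCIe tree from software. Below is a minimal sketch for Linux that lists every PCI function’s vendor and device IDs via sysfs; endpoints sitting behind a switch simply show up as additional entries. The sysfs paths are standard Linux; the program itself is just an illustration.

```c
/* Sketch: enumerate PCI functions via Linux sysfs and print their IDs.
 * Devices behind a PCIe switch appear here as extra bus/device/function
 * entries once the kernel has enumerated the fabric. */
#include <stdio.h>
#include <string.h>
#include <dirent.h>

static void read_id(const char *slot, const char *file, char *out, size_t n)
{
    char path[256];
    snprintf(path, sizeof path, "/sys/bus/pci/devices/%s/%s", slot, file);
    FILE *f = fopen(path, "r");
    out[0] = '\0';
    if (f) {
        if (fgets(out, (int)n, f))
            out[strcspn(out, "\n")] = '\0'; /* trim trailing newline */
        fclose(f);
    }
}

int main(void)
{
    DIR *d = opendir("/sys/bus/pci/devices");
    if (!d) {
        perror("opendir");
        return 1;
    }
    struct dirent *e;
    while ((e = readdir(d)) != NULL) {
        if (e->d_name[0] == '.')
            continue;
        char vendor[16], device[16];
        read_id(e->d_name, "vendor", vendor, sizeof vendor);
        read_id(e->d_name, "device", device, sizeof device);
        printf("%s  vendor=%s device=%s\n", e->d_name, vendor, device);
    }
    closedir(d);
    return 0;
}
```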

Hook Me Up

Upon researching this for Pericom, I was surprised to learn of all the nuances and variables to consider with PCIe switches. I won’t cover them here, other than mentioning some of the designer’s challenges: PCIe Gen 1 vs Gen 2, data packet routing, latency, CRC verification (for QoS), TLP layer inspection, auto re-send, and so on.

PCIe switches come in all flavors, from the simplest “power strip” to what is essentially an intelligent router-on-a-chip. And for maximum interoperability, all of them need to be compliant with the PCI-SIG specs, as verified at a plugfest.
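One of those nuances, the CRC verification mentioned above, is easy to illustrate. PCIe’s data link layer protects each TLP with a 32-bit LCRC built on the same 0x04C11DB7 polynomial Ethernet uses. The toy C version below shows only the arithmetic; it deliberately ignores PCIe’s bit-ordering and output details, so treat it as an illustration, not a conformance reference.

```c
/* Toy bitwise CRC-32 (polynomial 0x04C11DB7), the basis of PCIe's LCRC.
 * Real silicon computes this in parallel per clock; this sketch ignores
 * PCIe's bit-reflection and final-XOR conventions for clarity. */
#include <stdio.h>
#include <stdint.h>
#include <stddef.h>

static uint32_t crc32_bitwise(const uint8_t *data, size_t len)
{
    uint32_t crc = 0xFFFFFFFFu; /* seeded with all ones */
    for (size_t i = 0; i < len; i++) {
        crc ^= (uint32_t)data[i] << 24;      /* fold next byte into the top */
        for (int b = 0; b < 8; b++)          /* shift out one bit at a time */
            crc = (crc & 0x80000000u) ? (crc << 1) ^ 0x04C11DB7u : crc << 1;
    }
    return crc;
}

int main(void)
{
    const uint8_t tlp[] = { 0x00, 0x00, 0x00, 0x01 }; /* dummy packet bytes */
    printf("CRC-32 = 0x%08X\n", crc32_bitwise(tlp, sizeof tlp));
    return 0;
}
```

A receiver recomputes this over the incoming TLP and, on mismatch, triggers the auto re-send mechanism noted above.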

So if you’re an embedded designer, the solution to your PCIe fanout problem is adding a PCI Express switch. 

Intel’s Atom Roadmap Makes Smartphone Headway

After Intel was blasted by users and pundits over the lack of “low power” in the Atom product line, new architectures and design wins show the company is making progress.

Intel EVP Dadi Perlmutter revealing an early convertible tablet computer at IDF 2012.

A 10-second Google search on “Intel AND smartphone” reveals endless pundit comments on how Intel hasn’t been winning enough in the low-power smartphone and tablet markets. Business publications opine at length on the need for Intel’s new CEO Brian Krzanich to make major changes in company strategy, direction, and executive management in order to decisively win in the portable market. Indications are that Krzanich is shaking things up, and pronto.

Forecasts by IDC (June 2013), reported by CNET.com (http://news.cnet.com/8301-1035_3-57588471-94/shipments-of-smartphones-tablets-and-oh-yes-pcs-to-top-1.7b/), peg the PC+smartphone+tablet TAM at 1.7B units by 2014, of which 82 percent (1.4B units) are low-power tablets and smartphones. And until recently, I’ve counted only six or so public wins for Intel devices in this market (all based upon the Atom Medfield SoC with the Saltwell ISA I wrote about at IDF 2012). Not nearly enough for the company to remain the market leader while capitalizing on its world-leading tri-gate 3D fab technology.

Behold the Atom, Again

Fortunately, things are starting to change quickly. In June, Samsung announced that the Galaxy Tab 3 10.1-inch SKU would be powered by Intel’s Z2560 “Clover Trail+” Atom SoC running at 1.2GHz. According to PC Magazine, “it’ll be the first Intel Android device released in the U.S.” (http://www.pcmag.com/article2/0,2817,2420726,00.asp), and it complements other Galaxy Tab 3 offerings with competing processors. The 7-inch SKU uses a dual-core Marvell chip running Android 4.1, while the 8-inch SKU uses Samsung’s own Exynos dual-core Cortex-A9 ARM chip running Android 4.2. The Atom Z2560 also runs Android 4.2 on the 10.1-incher. Too bad Intel couldn’t have won all three sockets, especially since Intel’s previous lack of LTE cellular support has been solved by the company’s new XMM 7160 4G LTE chip, supplemented by new GPS/GNSS silicon and IP from Intel’s ST-Ericsson navigation chip acquisition.

The Z2560 Samsung chose is one of three “Clover Trail+” platform SKUs (Z2580, Z2560, Z2520) formerly known merely as “Cloverview” when the dual-core, Saltwell-based, 32nm Atom SoCs were leaked in Fall 2012. The Intel alphabet soup starts getting confusing because the Atom roadmap looks like rush-hour traffic feeding out of Boston’s Sumner Tunnel. It’s being pushed into netbooks (for maybe another quarter or two); into value laptops and convertible tablets as standalone CPUs; into smartphones and tablets as SoCs; and soon into the data center to compete against ARM’s onslaught there, too.

Clover Trail+ replaces Intel’s Medfield smartphone offering and was announced at February’s MWC 2013. According to Anandtech.com (thank you, guys!), Intel’s aforementioned design wins with Atom used the 32nm Medfield SoC for smartphones. Clover Trail is still at 32nm using the Saltwell microarchitecture but targets Windows 8 tablets, while Clover Trail+ targets only smartphones and non-Windows tablets. That explains the Samsung Galaxy Tab 3 10.1-inch design win. The datasheet for Clover Trail+ is here; it shows a dual-core SoC with multiple video CODECs, integrated 2D/3D graphics, on-board crypto, and multiple multimedia engines such as Intel Smart Sound, optimized for Android and, presumably, Intel/Samsung’s very own HTML5-based Tizen OS (Figure 1).

Figure 1: Intel Clover Trail+ block diagram used in the Atom Z2580, Z2560, and Z2520 smartphone SoCs. This is 32nm geometry based upon the Saltwell microarchitecture and replaces the previous Medfield single core SoC. (Courtesy: Intel.)

I was unable to find meaningful power consumption numbers for Clover Trail+, but its 32nm geometry compares favorably to the 28nm geometry of ARM’s Cortex-A15, so Intel should be in the ballpark (vs. Medfield’s 45nm). Still, the market wonders if Intel finally has the chops to compete. At least it’s getting much, much closer, especially once the on-board graphics performance gets factored into the picture compared to ARM’s lack thereof (for now).

Silvermont and Bay Trail and…Many More Too Hard to Remember

But Intel knows they’ve got more work to do to compete against Qualcomm’s home-grown, ARM-based Krait microarchitecture, some Nvidia offerings, and Samsung’s own in-house designs. Atom will soon be moving to 22nm, and the next microarchitecture is called Silvermont. Intel is finally putting power curves up on the screen, and at product launch I’m hopeful there will be actual Watt numbers shown, too.

For example, Intel is showing off Silvermont’s “industry-leading performance-per-Watt efficiency” (Figure 2). Press data from Intel says the architecture will offer 3x peak performance, or 5x lower power, compared to the Clover Trail+ Saltwell microarchitecture. More code names to track: the quad-core Bay Trail SoC for 2013 holiday tablets; Merrifield, with increased performance and battery life; and finally Avoton, which provides 64-bit energy efficiency for microservers and boasts ECC, Intel VT, and possibly vPro and other security features. Avoton will go head-to-head with ARM in the data center, where Intel can’t afford to lose any ground.

Figure 2: The 22nm Atom microarchitecture called Silvermont will appear in Bay Trail, Avoton and other future Atom SoCs from “Device to Data Center”, says Intel. (Courtesy: Intel.)

Oh Yeah? Who’s Faster Now?

As Intel steps up its game because it has to win or else, the competition is not sitting still. ARM licensees have begun shipping big.LITTLE SoCs, and the company has announced new graphics, DSP, and mid-range cores. (Read Jeff Bier and BDTI’s excellent recent ARM roadmap overview here.)

A recent report by ABI Research (June 2013) tantalized (or more appropriately, galvanized) the embedded and smartphone markets with the headline “Intel Apps Processor Outperforms NVIDIA, Qualcomm, Samsung”. In comparison tests, ABI Research VP of engineering Jim Mielke noted that the Intel Atom Z2580 “not only outperformed the competition in performance but it did so with up to half the current drain.”

The embedded market didn’t necessarily agree with the results, and UBM Tech/EETimes published extensive readers’ comments with colorful opinions. On a more objective note, Qualcomm launched its own salvo as we went to press: “you’ll see a whole bunch of tablets based upon the Snapdragon 800 in the market this year,” said Raj Talluri, SVP at Qualcomm, as reported by Bloomberg Businessweek.

Qualcomm has made its Snapdragon product line more user-friendly and appears to be readying it for general embedded-market sales in Snapdragon 200, 400, 600, and “premium” 800 SKUs. The company has made development tools available (mydragonboard.org/dev-tools) and is selling COM-like DragonBoard modules through partners such as Intrinsyc.

Intel Still Inside

It’s looking like a sure thing that Intel will finally have competitive silicon to challenge ARM-based SoCs in the market that really matters: mobile, portable, and handheld. 22nm Atom offerings are getting power-competitive, and the game will change to an overall system integration and software efficiency exercise.

Intel has for the past five years been emphasizing a holistic, all-system view of power and performance. Their work with Microsoft has wrung out inefficiencies in Windows and capitalizes on microarchitecture advantages in desktop Ivy Bridge and Haswell CPUs. Security is becoming important in all markets, and Intel is already there with built-in hardware, firmware, and software advantages (through McAfee and Wind River). So too has the company radically improved graphics performance in Haswell and Clover Trail+ Atom SoCs…maybe not to the level of AMD’s APUs, but absolutely competitive with most ARM-based competitors.

And finally, Intel has hedged its bets with Android and HTML5. They are on record as writing more Android code (for and with Google) than any other company, and they’ve moved past the MeeGo failures to the might-be-successful HTML5-based Tizen OS, which Samsung is using in select handsets.

As I’ve said many times, Intel may be slow to get it…but it’s never good to bet against them in the long run. We’ll have to see how this plays out.

Does Altera Have “Big Data” Communications on the Brain?

In wireless, wireline, and financial “big data” applications, moving all those packets needs prodigious FPGA resources, not all of which Altera had before its recent series of acquisitions, partnerships, and other wheeling-and-dealing.

Chris Balough of Altera (left) interviewed by Andy Frame from ARM. (Courtesy: YouTube.)

I caught up with an old friend at April’s DESIGN West 2013 conference in San Jose: Chris Balough, Sr. Director of Product Marketing for SoC products. I knew Chris from his days at Triscend (purchased by Xilinx). Chris is now in charge of Altera’s SoC products (Arria V, Stratix V, and Cyclone FPGAs with on-chip ARM cores), which compete with Xilinx’s Zynq devices. Chris shed some light on some of these announcements, but remained mum on what they all might mean taken collectively. I think they add up to something big in “Big Data”.

(Fun facts: Altera’s first “SoC” was Excalibur, no longer recommended for new designs. Altera’s most popular SoC processor is the soft Nios II, sold in roughly 30 percent of production SoCs, says Balough.)

X before A? We’ll See

Subconsciously I think of Xilinx first when the word “FPGA” is flashed in front of me, but Altera’s the company pushing more boundaries of late. Their rat-a-tat machine gun announcements this year got my attention.

In the summer of 2012, I did an interview with Altera’s Sr. VP of R&D Brad Howe, and he spread out as much of the roadmap on the table as he could. Things like HSA, OpenCL, and better gigabit transceivers were all on the horizon. Shortly thereafter, Altera extended its relationship with TSMC to 20nm for Arria and Cyclone FPGAs. Then in early 2013, the company rocked the industry by locking up an exclusive FPGA relationship with Intel for the industry’s only production 14nm tri-gate FinFETs.

Spring Cleaning; Altera’s Getting Ready For…?

Now in Spring 2013, Altera is making headlines like these:

- FPGA Design in the Cloud–Try It, You’ll Like It, Says Plunify. At DAC, Altera and Plunify are pushing cloud-based FPGA design tools. (See our February 2013 article with Plunify here.)

- Altera and AppliedMicro will Cooperate on Joint Solutions for High Growth Data Center Market.  Combines Stratix FPGAs and AppliedMicro’s Server on a Chip devices targeting data centers and optical transport networks (OTN).

- Altera Expands OTN Solution Capabilities with Acquisition of TPACK. Altera buys TPACK from AMCC to provide IP for FPGAs used in OTN for tasks like the cross-bar switches used in 10/40/100Gbps PHYs.

- Altera Stratix V GX FPGAs Achieve PCIe Gen3 Compliance and Listing on PCI-SIG Integrators List. Right now, Gen2 and Gen3 PCIe are critical to data centers, cellular base stations, and all manner of high-speed long-haul/back-haul telco gear. Within 12 months, PCIe Gen2/3 will be “table stakes” in high-performance embedded systems like ATCA- or VME/VPX-based DSP systems for radar, sonar, SIGINT (signals intelligence) or data mining.

- Altera to Deliver Breakthrough Power Solutions for FPGAs with Acquisition of Power Technology Innovator Enpirion. Enpirion’s DC-DC converter PowerSoCs with integrated inductors may someday end up inside a Stratix package (perhaps like Xilinx’s stacked-chip interposer technology), but for now the two-chip solution reduces board space by 1/7 and simplifies system design considerably. The programmable DC-DC converters provide the multiple power rails, and power-up sequences, needed for big FPGAs (see the sketch below).
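To make “power-up sequences” concrete, here’s a hedged C sketch of the bring-up loop a board controller might run: enable the core rail, wait for power-good, then move to the next rail. All rail names, GPIO numbers, and helper functions are hypothetical placeholders, not Enpirion or Altera APIs.

```c
/* Sketch of FPGA rail sequencing: core first, then aux, then I/O.
 * gpio_set/gpio_get/delay_ms are hypothetical board-support stubs. */
#include <stdbool.h>
#include <stdio.h>

typedef struct {
    const char *name;
    int enable_gpio;  /* hypothetical GPIO that turns the rail on */
    int pgood_gpio;   /* hypothetical power-good input for the rail */
} rail_t;

static void gpio_set(int gpio, bool on) { printf("gpio %d -> %d\n", gpio, on); }
static bool gpio_get(int gpio) { (void)gpio; return true; }
static void delay_ms(int ms) { (void)ms; }

static bool power_up(const rail_t *rails, int count)
{
    for (int i = 0; i < count; i++) {
        gpio_set(rails[i].enable_gpio, true);
        /* Wait (bounded) for the regulator to report power-good
         * before enabling the next rail in the sequence. */
        int tries = 100;
        while (!gpio_get(rails[i].pgood_gpio) && --tries)
            delay_ms(1);
        if (!tries) {
            printf("rail %s failed power-good\n", rails[i].name);
            return false;
        }
    }
    return true;
}

int main(void)
{
    /* Typical FPGA ordering: core, then auxiliary, then I/O banks. */
    const rail_t rails[] = {
        { "VCCINT", 10, 20 },
        { "VCCAUX", 11, 21 },
        { "VCCIO",  12, 22 },
    };
    return power_up(rails, 3) ? 0 : 1;
}
```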

The blue regions show places where FPGAs are used in wireless LTE basestations. (Courtesy: Altera.)

My Take: Altera’s Move in Big Data

Analysts estimate that nearly 50 percent of FPGA revenue comes from high-end, high-density, costly FPGAs like the Xilinx Virtex 7 and Altera Stratix V. Segments like wireless and wireline packet processing, plus financial or image-processing algorithm processors, increasingly rely on these kinds of FPGAs in lieu of ASICs, GPGPUs, or proprietary network processors. So every advantage in IP, process technology, or partnership gets Altera one step closer to more design wins.

We’ll see what Altera does with all of these recent announcements. I’d expect to see something shake loose before the traditional “summer doldrums” set in when the semiconductor industry goes on its annual vacation next month in July.

Technology Observations from an American in Paris

I was looking for the hottest tech trends in cars and trucks on a recent week-long trip to Paris attending the international auto show. But I found something else, too: everyday European technology mimics that found in American society (with a few differences).

Before leaving I called AT&T (the cellular carrier for my iPhone 4S) and activated the various way-too-expensive international calling, SMS/MMS, and data plans. I wasn’t at all sure I’d find Wi-Fi, or at what price, and I wanted to stay minimally connected to the grid. It turns out I rarely used the text or direct-dial phone features, thanks to heavy reliance on prevalent Wi-Fi and the combination of my Google Voice account and a recently established Toktumi/Line2 VoIP app for the iPhone. While I added ten bucks to my GV account just in case, I needn’t have bothered: three hours of calling home and the office didn’t make much of a dent at pennies per minute.

Line2 by Toktumi provides a nifty VoIP app for the iPhone using free Wi-Fi.

Wi-Fi Everywhere

But this connectivity was possible because of the excellent Wi-Fi found at my hotel on the outskirts of Paris. A Mac-based establishment, the hotel was in the process of adding Apple repeaters to all floors of its stone buildings, and I had excellent reception and reasonable bandwidth at all times. Due to the language barrier (French/English/Geek) I wasn’t able to ask the hotel what technology it used, but it appeared to be Livebox hardware running on an Orange network. I’m not sure whether the uplink was LTE cellular or terrestrial, but I managed to sort of stream an Amazon video through a US-based IP address. I say “sort of” because it dropped off after 20 minutes; I guess the IP police found me.

I also had Wi-Fi at the auto show venue, the mini-Disneyland-esque Porte de Versailles, which was like any tech convention you’ve been to, times 10. Huge, on the order of CES in Las Vegas. But there was Wi-Fi at all the cafes and restaurants, too. In fact, I’d bet there is better Wi-Fi coverage in Paris than cellular. The trick to using it is a bit like using a public washroom: you’ve got to buy something first and ask for access. A €2 cup of café au lait was enough to garner a password, and because of the hospitable French culture, I was able to nurse one or two coffees for an hour while surfing away, catching up on email. I never tried Skype outside of the hotel, but my iPhone-based VoIP apps usually worked well from a Parisian cafe. I fit right in with all the locals chatting away on their often not-so-smart phones.

The Paris Porte de Versailles was the scene for the Mondial de l’Automobile – the 2012 Paris Auto Show. (Courtesy: Google Maps.)

Feature Phones Still Common

The phenomenon of pulling out one’s phone and staring at it or texting on it is just as common in Paris as it is in the U.S. That is: none of us can sit still or be bored for more than 30 seconds without looking for instant gratification (or validation of our place in the cosmos) from our phones. But while I saw plenty of iPhones (no iPhone 5 handsets as this was soon after announcement), I also saw plenty of candy bar feature phones, too. Even saw my share of not-yet-extinct clamshell phones. I asked someone about this and was told: “We French would rather spend our money on a good meal with friends than on a cell phone.” And that apparently extends to their calling plans, too, as European carriers like Orange struggle to raise the average revenue per user (ARPU) with add-ons like location-based services and digital money transactions. The idea of an American spending $100 per month on a smart phone plan is insanity to a French citizen, I was told.

Location Based Services

Speaking of location-based services, Google Maps was a frequent life-saver for me. My “favorite” Metro mistake was to emerge from the wrong Sortie (exit) and find myself on a totally unfamiliar side street. That’s where the AT&T roaming data plan came in handy, madly consuming megabytes loading up street data while I nervously followed the moving blue dot to a familiar location. My phone roamed onto multiple carriers, but the little screen badge always showed “3G”, and I found the data rate usually acceptable and not all that much slower than AT&T’s “4G” badge in the Portland, OR area. I know that AT&T doesn’t really have LTE in Portland, but the “4G” makes me feel special while at home and takes the edge off of wanting a Samsung Galaxy S III with real 4G LTE.

I’m lucky I didn’t “upgrade” to iOS 6.0 before my trip, else I’d have exchanged the native Google Maps for Apple’s Maps and I might never have found my way. Ironically, the whole “Map-Gate” fiasco, complete with Tim Cook’s apology, boiled over while I was in Paris. I met with Nokia, Garmin and TomTom at the auto show, each of which provided some colorful commentary on Apple’s situation that I can’t print here. Since it’s inevitable that Apple will slowly break things in iOS 5.x until I “upgrade” to 6.x, I took the time to add Nokia Maps and the HTML-based Google Maps to my 4S. Just in case. Not that I have no faith in Apple, you see.

Other Tech Miscellany

I saw one guy wandering through the tradeshow taking pictures on his iPad, and that was the only time I saw a tablet anywhere in Paris. Now, I’m sure they exist, but my leisure time for people-watching was confined to the show, the Metro, and the cafes. The only other interesting tech device I saw was the Kindle; I saw a lot of those on the subway. Then again, I saw a lot of newspapers, too. I like that.

Lastly, since I was there to cover cars (look for a separate blog from me), I paid close attention to what I was seeing on the street. As expected, everything was small: scooters and motorcycles, including motorbikes that had two wheels but full roofs and roll cages. Interesting. And there were lots of diesel automobiles, including billboards for the New! Diesel! Mini Cooper! What I didn’t see anywhere was an electric car, though there were lots of announcements for them. We’ll see if that happens, since Paris might be a great place to have them and build up an electric charging infrastructure. Ironically, most of my taxi rides were in a Toyota Prius, the same as I find in San Francisco.

All in all, the tech in Paris looks like the tech everywhere else in the Western world, which gives credence to what we tech journalists write about all the time. It’s a connected world, and technology is a great enabler everywhere.