Connect the Dots with Three Simple Questions on Intel’s 5-Point Strategy
Since Intel’s April restructuring, the semiconductor giant’s message could use some clarification. Here are three questions I’d like to hear answered to help connect the dots in the company’s strategy.
In April Intel announced some 12,000 layoffs, a corporate restructuring, and a 5-point focus I described here. Gone is the focus on smartphones (along with the billions of dollars spent trying), and there’s now a public admission that the PC market is shrinking despite Intel’s impressive processor introductions and Microsoft’s latest Windows 10. Instead, said Intel CEO Brian Krzanich (“BK”), the focus is on: 1) cloud, 2) “things” (including PCs), 3) memory and FPGAs, 4) 5G cellular, and 5) Moore’s Law.
Given the lack of detail in Intel’s messaging so far, I can’t connect the dots from Intel’s current line-up and technology to the company’s eventual success in these areas. If the company could answer these three questions, I’d be up to speed on its strategy.
Question #1: How to Stay Ahead of Competitors in the Data Center?
Analysts report that Intel commands 95+ percent market share in the data center with its high-price, high-performance Xeon E5s and 10 Gigabit Ethernet pipes. No doubt about it: Intel’s technology for barn-burning performance is exemplary. While Intel’s high-end E5 roll-outs seem slower than the market wants, the Fall 2015 Broadwell-DE Xeon D may have opened up a new slice of the market. Maybe Intel is onto something; maybe Xeon D will be enough.
Xeon D slots between Intel’s lower-end Xeon E3 and the high-end E5, yet offers up to 16 cores (and virtual machines) at a mere ~45W. Xeon D is much cheaper than the E5 and intentionally lacks some of the highest-end server/data center features. Still, Xeon D is so compelling that the industry group PICMG is defining a new Type 7 pinout for COM Express to allow Xeon D’s 10 GigE ports to route off small COM boards. Rumor has it that server companies like Dell and HP might put 6, 10, or 12 COMe modules on a server shelf, offering more performance and ports than a comparably sized E5-equipped server.
But despite everyone agreeing on the merits of cloud-based computing (which requires more servers in data centers, Intel’s bread and butter), this market is ripe for competition. ARM and AMD have both set their sights on it and are scoring wins in the lower- to medium-performance tiers. It’s a sure bet that SoftBank, fresh off its $32B acquisition of ARM Holdings, sees the data center as Target #1, #2, or #3 (mobile, IoT, and data center, in some order). But will the “cloud” always be in the cloud? Maybe not.
As the IoT moves processing closer to the edge, sometimes embedding it in smaller, lower-power nodes, is Intel’s component mix correct? Xeon D addresses some of this when “reasonable” power is available, but Intel just hasn’t got the goods for embedded almost-cloud processing once “low power” enters the conversation. And low power dominates a lot of IoT discussions. This is why Intel left mobile: the company’s offerings never passed muster on low power. So if the cloud and the data center are key to BK’s strategy, can we expect new product offerings?
Question #2: Flash and FPGAs Can’t Just Be for the Data Center, Can They?
I correctly predicted (as did a hundred other analysts) that the Altera acquisition offered Intel a co-processing strategy that dramatically increases Xeon data center performance. FPGAs are the de facto algorithmic accelerators, taking advantage of Intel processors’ QPI links and coprocessor instruction set. Intel has announced products marrying Xeons with Altera FPGAs for High Performance Computing (HPC) and started shipping development modules in April. Yawn: this was predictable, albeit essential for Intel.
But Altera did fine and dandy as a standalone company for quite a while, shipping FPGAs into machine vision, automotive ADAS (safety) systems, military radars, and SoCs paired with ARM processors. Surely Intel sees the value of monolithically combining x86 32- and 64-bit “microcontroller” SoCs with FPGA gates and gigabit LVDS channels.
In fact, it was Intel’s original 8031 (ROM-less), 8051 (masked ROM), and 8751 (EPROM) that awakened designers to the possibility of ultra-compact single-chip embedded systems. If Intel is serious about the IoT, combining a Quark or Curie (Intel’s best, lowest-power compute unit to date) with FPGA gates could be a real winner. Especially since Xilinx and Altera already did/do this with ARM and their own soft CPUs (MicroBlaze and Nios, respectively).
The same question of branching beyond the data center applies to BK’s solid-state storage pillar. Today, Intel’s SSD roadmap favors cloud and data center computing. The company’s announced roadmap shows good-to-great performance in both SATA and PCI Express (NVMe) drives, two form factors important for Intel to marry up to high-performance Xeons.
But Intel offers nada that’s interesting in SSD, M.2, or other small-form-factor NV storage for IoT and embedded doodads (not even flash memories). So while Intel is all about the “Makers” (IDF 2015 and the soon-to-occur IDF 2016), flash storage is a de facto requirement there, and Intel ain’t got it. Yet.
But maybe Intel is banking on something new for the IoT. The Intel/Micron OPTANE 3D XPoint memory, a cross between fast DRAM and non-volatile storage, might be a real game-changer for the entire tech market. Little concrete information is available, but the specs are impressive and those in the know are foaming with excitement. So far Intel is only talking publicly about 3D XPoint in SSDs, but a recently “leaked” Intel OPTANE roadmap shows M.2 SSDs for embedded in Q1’17.
Question #3: What Else Ya Got?
I close with this final question: what else does Intel have in the closet besides Core processors, Xeons, SSDs and/or OPTANE, FPGAs and 5G modems? Sure, Intel has some pretty killer 10GigE controllers and Wi-Fi IP, but I’m talking chips, integrated circuits, and other controllers and peripherals. Every modern Intel Platform Controller Hub (PCH) that works with a CPU is chock full of I/O. Why not break out of the box?
Intel’s got PCIe Gen 3 and HDMI 1.4; there’s DVI and WiDi, multi-bank memory controllers, Thunderbolt 3, and SATA 3. The company either invented, standardized, or catalyzed many of the fundamental technologies used by the whole industry (e.g., PCI, PCIe, Wi-Fi, USB). Yet Intel remains publicly absent from key emerging standards like HSA, which is designed to ease the task of writing code for heterogeneous processors. ARM and AMD are all over that one because both processor companies recognize that intelligent peripherals contain processors that must play nice with somebody’s main CPU.
I’ve got to believe (and I was just arguing this with a high-ranking friend at Intel) that the company’s Core processor roadmap looks so weak going into 2018 because Intel plans on rolling out new SoC-like processors, ones it has not yet revealed outside the Cone of Silence. Maybe the company will disaggregate the PCH into mundane things like intelligent USB 3.1 battery chargers and high-speed side-channel controllers that pipe DVI/HDMI/PCIe 3.0. Or give us Thunderbolt 3 at greater than 40 Gbit/s. Or…or…or.
Perhaps the Intel RealSense™ camera IC will morph into a multimedia controller capable of Microsoft Cortana-like audio/video I/O while performing on-the-fly video transcoding for IoT machine vision sensors. Bolting a stripped-down version of the recently leaked “Coffee Lake” Core i7 (14nm, CY18) to a large OPTANE array would make a perfect IoT data aggregator, replacing a COM Express board plus SSD module with one chip. Oh, the places you will go!
All of these things and more are possible if Intel chooses to broaden (not abandon) its love affair with only a few product types. Come on, Intel, show us what you’ve got. Help us connect the dots between today and your future.