How 25 Gigabit Ethernet Makes Networks Faster and Smarter
Until recently, 10 Gigabit Ethernet (10GbE) was the standard server and Top-of-Rack (ToR) switch speed in high-performance data centers. In the last few years, however, several factors have driven data center bandwidth requirements upward, including the explosion of data from Internet of Things (IoT) devices, the surge in online video streaming and the increased throughput that servers and storage solutions can support. As data centers absorb this massive increase in network traffic, it is becoming clear that even 10GbE is not enough, so many companies are evaluating higher Ethernet speeds to serve their bandwidth needs today and in the future. The question they face is whether to adopt 25GbE, 40GbE or even 100GbE. This article explores why 25GbE is an optimal Ethernet speed for companies looking to balance the cost and performance tradeoffs that accompany the transition to higher speeds.
Before IEEE-standardized 25GbE solutions hit the market, data centers typically upgraded to higher link speeds by aggregating multiple single-lane 10GbE network physical layers: four 10GbE lanes yielded 40GbE, and ten 10GbE lanes yielded 100GbE. The next speeds standardized by IEEE were accordingly 40GbE and 100GbE, giving companies alternative paths to higher speeds. Meanwhile, high-speed signaling on a single pair of conductors evolved from 10Gbps to 25Gbps, which allowed a 100Gbps link to be implemented by bundling just four 25Gbps lanes. The industry then looked into unbundling 100GbE technology into four independent 25GbE channels, paving the road for IEEE to approve a 25GbE standard in June 2016.
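The lane arithmetic above can be sketched in a few lines. This is an illustrative aid, not anything from the standard itself; it simply shows how identical parallel lanes combine into the aggregate Ethernet speeds the article names.

```python
def aggregate_speed(lane_rate_gbps: float, lanes: int) -> float:
    """Aggregate link speed from identical parallel physical lanes."""
    return lane_rate_gbps * lanes

# 10Gbps-per-lane era: four lanes -> 40GbE, ten lanes -> 100GbE
assert aggregate_speed(10, 4) == 40
assert aggregate_speed(10, 10) == 100

# 25Gbps-per-lane era: one lane -> 25GbE, four lanes -> 100GbE
assert aggregate_speed(25, 1) == 25
assert aggregate_speed(25, 4) == 100
```

The same per-lane rate underlies both 25GbE and 100GbE, which is why a single 100GbE port can later be broken out into four independent 25GbE channels.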
The introduction of 25GbE provided a solution with the benefits of enhanced compute and storage efficiency, delivering 2.5 times the data of 10GbE at a similar long-term cost structure. While 40GbE and 100GbE provide further increases in bandwidth, the tradeoff is that these solutions are more costly and require more power than 25GbE. Accordingly, for companies seeking to transition from 10GbE to higher Ethernet speeds, a 25GbE solution provides a faster connection with more bandwidth while balancing the capital and operational expenditures associated with moving to next-generation networks. Because 25GbE can leverage the same copper cabling and optical fibers, data centers obtain even more performance for their expenditure, making 25GbE an ideal choice for organizations seeking a faster, smarter and more economical connection.
25GbE is Faster
It’s clear that as data center bandwidth requirements increase, 1GbE and 10GbE solutions are quickly becoming outdated. If a data center’s servers are unable to communicate with each other and their end users at high capacity, they cannot deliver the utilization their customers require. Migrating to 25GbE not only provides companies with a huge jump in capacity, but also makes it easy to quickly and cost-effectively upgrade to even greater speeds as needed. With 25GbE, companies can run two 25GbE channels to achieve 50GbE or four channels to attain 100GbE, making the migration to 25GbE future-proof.
Additionally, 25GbE takes advantage of changing traffic patterns at the switch level. Data no longer flows only from the server to the core; it also flows between servers, creating East-West centric traffic. Leveraging Clos networks, in which the access switches tied to the servers form a fully connected mesh with the spine switches of the network, 25GbE is optimized to support this East-West centric traffic. By taking individual 25GbE lanes and spreading them out across spine switches, networks gain more connectivity and are no longer limited by a small number of 100GbE ports.
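The fan-out gain described above can be made concrete with a small sketch. The port counts here are illustrative assumptions, not vendor specifications; the point is simply that breaking each 100GbE uplink into four independent 25GbE lanes multiplies the number of spine switches a leaf (ToR) switch can reach.

```python
def spine_connections(uplink_ports_100g: int, lanes_per_port: int = 4) -> int:
    """Independent 25GbE spine links available when each 100GbE
    uplink port is broken out into four 25GbE lanes."""
    return uplink_ports_100g * lanes_per_port

# A hypothetical leaf switch with 8x100GbE uplinks can reach
# 32 spine switches at 25GbE each, instead of only 8 at 100GbE.
print(spine_connections(8))  # -> 32
```

Wider fan-out means more paths through the fabric, which is exactly what East-West traffic between servers benefits from.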
25GbE is Smarter
With so much traffic travelling through networks, it’s crucial that they are monitored to ensure bandwidth is allocated efficiently. As next-generation switches were developed to take advantage of higher throughput, they were also enhanced to provide more visibility into the traffic flowing through them. Data center operators can fine-tune how traffic moves across the network by using analytics systems to process and visualize switching fabric intelligence. Analytics give operators the visibility to see how their networks are performing in real time and to make on-the-fly adjustments if needed. Yet despite deploying advanced networks, many companies do not properly utilize analytics or rely on inaccurate measurements to assess information; used well, analytics enable data centers to get the most performance out of their networks and maximize their investments.
25GbE is More Economical
How is 25GbE a more cost-effective solution than other options like 40GbE or 100GbE? The answer lies in the simplified transition from existing networking systems. A 25GbE cable offers an easier migration path from 10GbE, whereas 40GbE and 100GbE can be limiting if companies later want to move from one standard to another. Compared to 40GbE solutions, 25GbE provides a lower cost per unit of bandwidth with greater port density. Another factor to consider is that optics make up a very large part of a data center network’s cost; reducing this cost allows 25GbE to become much more pervasive. Additionally, what used to require multiple devices can now be done with a single switch device, driving economics at the switch level.
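The cost-per-unit-of-bandwidth comparison above is easy to reproduce for a specific deployment. The port prices below are made-up placeholders, not real market figures; substitute quoted prices to run the comparison for your own network.

```python
def cost_per_gbps(port_price_usd: float, speed_gbps: float) -> float:
    """Price of a switch port divided by its bandwidth."""
    return port_price_usd / speed_gbps

# Placeholder prices for illustration only (NOT real market data).
options = {
    "10GbE": cost_per_gbps(100, 10),
    "25GbE": cost_per_gbps(150, 25),
    "40GbE": cost_per_gbps(400, 40),
}
for name, cost in sorted(options.items(), key=lambda kv: kv[1]):
    print(f"{name}: ${cost:.2f} per Gbps")
```

With these hypothetical inputs, 25GbE comes out cheapest per Gbps even though its port price is higher than 10GbE, which is the shape of the argument the article makes.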
To drive the transition to 25GbE, data center operators are looking to semiconductor companies to create solutions that are optimized for 25GbE. One such example is Marvell’s recently announced new switches and Ethernet transceivers which provide a 25GbE optimized and cost-efficient end-to-end solution. With technological support for 25GbE, data centers can start benefitting from the 25GbE standard and meet the growing needs of the hyper-connected industry.
Nicholas (Nick) Ilyadis is Vice President of Portfolio and Technology Strategy at Marvell. He is responsible for setting strategy for Ethernet networking products including automotive connectivity, network switching, high speed controllers and physical layer transceivers. He also covers high performance processors, enterprise WLAN, security and storage. Prior to Marvell, Ilyadis was VP and Chief Technical Officer of the Infrastructure and Networking Group at Broadcom. He has held executive and engineering roles at Nortel Networks, Bay Networks, Digital Equipment Corp. and Itek Optical Systems. Nick holds master’s and bachelor’s degrees in electrical engineering. Ilyadis is a Senior Member of the IEEE and participates in both the Computer and Communications Societies. He currently has 52 issued patents.