Increased demand for home offices, streaming services for gaming, music, and movies, and the rise of data-intensive applications such as machine learning and artificial intelligence (AI) are just a few of the factors driving the demand for bandwidth. These developments present challenges for hyperscalers as well as enterprise and colocation data centers: in addition to meeting rising capacity requirements, they must also ensure low latency while meeting climate goals.

One way to achieve this is to utilize existing switch architectures more efficiently (high-radix ASICs). For example, a 32-port switch provides up to 12,800 Gb/s of aggregate bandwidth (32 x 400G), and an 800G version provides up to 25,600 Gb/s. These high-speed ports can easily be divided into smaller bandwidths, which allows for more energy-efficient operation while increasing port density (32 x 400G = 128 x 100G).

The need to support low-latency, high-availability, ultra-high-bandwidth applications will continue to grow. The question is not whether data center operators need to upgrade to meet growing bandwidth needs, but when and how. Operators should therefore prepare and adjust their network designs now. After all, with a flexible infrastructure, it is possible to upgrade from 100G to 400G to 800G with very little change.

Network design is becoming increasingly complex

However, higher data rates also increase solution and product complexity. As mentioned earlier, it is not necessary to fully utilize 800G on every port; what matters is supporting the bandwidth requirements of the end devices. For example, a spine-leaf connection can run as 4 x 200G, or a leaf-server connection with 400G ports can operate as 8 x 50G ports, making the network more energy-efficient. To support this, there are multiple solutions as well as new transceiver interfaces. LC duplex and MPO/MTP connectors (12/24 fibers) are well-known interfaces for 10, 40, and 100G transmission speeds.
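The port arithmetic above can be sketched in a few lines. The figures (32 ports, 400G/800G rates, 4x breakout) come from the text; the helper names are illustrative, not a vendor API.

```python
# Sketch: aggregate bandwidth and breakout options for a fixed-radix switch.

def aggregate_bandwidth_gbps(ports: int, rate_gbps: int) -> int:
    """Total switch bandwidth in Gb/s (all ports at full rate)."""
    return ports * rate_gbps

def breakout(ports: int, rate_gbps: int, lanes: int) -> tuple[int, int]:
    """Split each high-speed port into `lanes` lower-rate ports.

    Returns (total lower-rate ports, rate per port in Gb/s).
    """
    return ports * lanes, rate_gbps // lanes

print(aggregate_bandwidth_gbps(32, 400))  # 12800 Gb/s (32 x 400G)
print(aggregate_bandwidth_gbps(32, 800))  # 25600 Gb/s (32 x 800G)
print(breakout(32, 400, 4))               # (128, 100): 128 x 100G ports
```

The same aggregate bandwidth is preserved in breakout mode; only the port granularity changes.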
For higher data rates, such as 400G, 800G, and beyond, other connector types have been introduced, such as MDC, SN, and CS (subminiature connectors), as well as MTP/MPO connectors with 16 fibers in a single row. It is often a challenge for network operators to track and select the right technologies and network components for their needs. The requirement to increase bandwidth during network expansion often conflicts with the lack of space for additional racks and frames, or with the resulting costs. Network equipment vendors are therefore constantly developing new solutions that achieve higher density in the same space while keeping the network design scalable and as simple as possible.

Breakout applications for ports improve sustainability

In addition to better utilization of high-speed ports and the associated port density, port breakout applications can also reduce the power consumption of network components and transceivers. A 100G duplex transceiver in QSFP-DD form factor consumes approximately 4.5 W, while a 400G parallel-optic transceiver operating as four 100G ports in breakout mode consumes only about 3 W per port. This equates to per-port savings of around 30%, on top of additional savings in air conditioning/cooling and switch chassis power consumption, as well as the contribution to space savings.

Impact on network infrastructure

When the lowest common multiple of fiber counts is used as the basis, the usable life of backbone or trunk cabling can be extended. For duplex applications, this usually corresponds to a "factor of 4", that is, base-8 cabling, onto which -R4 or -R8 transceiver models can be mapped. This type of cabling therefore supports both current technology and future developments. In addition to choosing a fine-grained, scalable backbone, it is also important to plan sufficient fiber reserves so that future upgrades and expansions can be implemented with as little rework as possible.
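The power comparison above is simple arithmetic; a minimal sketch, using only the two per-port figures quoted in the text (~4.5 W for a dedicated 100G duplex transceiver, ~3 W per port for a 400G transceiver in 4 x 100G breakout mode):

```python
# Sketch: per-port power savings from running 100G as a breakout of 400G.
per_port_duplex_w = 4.5    # dedicated 100G duplex transceiver (QSFP-DD)
per_port_breakout_w = 3.0  # 400G parallel optic split into 4 x 100G ports

savings = 1 - per_port_breakout_w / per_port_duplex_w
print(f"Per-port savings: {savings:.0%}")  # Per-port savings: 33%
```

The raw ratio works out to roughly a third, consistent with the "up to 30%" figure in the text; the exact value in practice depends on the specific transceiver models deployed.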
With sufficient fiber reserves in place, network upgrades can be achieved by replacing only a few components. For example, an upgrade from 10G to 40/100G or 400/800G can be accomplished by replacing LC modules and LC duplex patch cords with MTP adapter panels and MTP patch cords, without any changes to the backbone cabling (fiber plant). Modular fiber housings also allow different technologies to be mixed and new connector interfaces (subminiature connectors) to be integrated in a few simple steps. Termination options are already available as 8-, 12-, 24-, and 36-fiber modules. The use of bend-insensitive fiber also helps make the cabling infrastructure durable, reliable, and fail-safe.

Preparation pays off

Data rates of 400G or 800G are still a long way off for most enterprise data center operators, but bandwidth requirements are growing rapidly. Sales of 400G and 800G transceivers are already on the rise, and it pays to be prepared rather than having to upgrade later under time pressure. Data center operators can best prepare for the future by making their facilities 400G- and 800G-ready with just a few changes now. This, of course, also applies to Fibre Channel applications.
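The base-8 planning idea from the previous section can be sketched as a simple divisibility check. The function name and the per-interface fiber counts (2 for LC duplex, 8 for a -R4 parallel optic, 16 for an -R8 parallel optic) are illustrative assumptions for this sketch, not a standardized calculation.

```python
# Sketch: does an interface's fiber count map cleanly onto base-8 trunk groups?
TRUNK_BASE = 8  # fibers per base-8 trunk group

def fits_base8(interface_fibers: int) -> bool:
    """True if the interface packs evenly into (or tiles evenly across)
    8-fiber trunk groups, leaving no stranded fibers."""
    return TRUNK_BASE % interface_fibers == 0 or interface_fibers % TRUNK_BASE == 0

print(fits_base8(2))   # LC duplex, 2 fibers        -> True (4 per trunk group)
print(fits_base8(8))   # -R4 parallel optic, 8 fibers  -> True (1 per group)
print(fits_base8(16))  # -R8 parallel optic, 16 fibers -> True (2 groups)
print(fits_base8(12))  # legacy 12-fiber MPO        -> False (stranded fibers)
```

This is the "factor of 4" in practice: four duplex links fill one 8-fiber group exactly, so the same trunk serves duplex, -R4, and -R8 generations without rework.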