Duplex connectivity emerges on the path to 400G

The QSFP-DD multi-source agreement recognizes three duplex optical connectors: the CS, SN, and MDC.


US Conec’s MDC connector increases density by a factor of three over LC connectors. The two-fiber MDC is manufactured with 1.25-mm ferrule technology.

By Patrick McLaughlin

Nearly four years ago, a group of 13 vendors formed the QSFP-DD (Quad Small Form-factor Pluggable Double Density) multi-source agreement (MSA) Group, with the goal of creating a double-density QSFP optical transceiver. In the years since its founding, the MSA group has created specifications for QSFPs to support 200- and 400-Gbit/sec Ethernet applications.

The previous-generation QSFP28 modules support 40- and 100-Gbit Ethernet applications; they feature four electrical lanes that can operate at 10 or 25 Gbits/sec. The QSFP-DD group has established specifications for eight lanes that operate at 25 or 50 Gbits/sec, supporting aggregate rates of 200 and 400 Gbits/sec, respectively.
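
As a rough illustration of the lane arithmetic behind those figures (not part of the MSA specifications), the following Python sketch simply multiplies lane count by per-lane signaling rate, using the numbers quoted in this article:

```python
# Minimal sketch: aggregate module throughput = electrical lanes x per-lane rate.
# Lane counts and rates are the figures cited in the article.

MODULES = [
    ("QSFP28, 4 lanes at 10 Gbits/sec", 4, 10),   # 40-Gbit Ethernet
    ("QSFP28, 4 lanes at 25 Gbits/sec", 4, 25),   # 100-Gbit Ethernet
    ("QSFP-DD, 8 lanes at 25 Gbits/sec", 8, 25),  # 200-Gbit Ethernet
    ("QSFP-DD, 8 lanes at 50 Gbits/sec", 8, 50),  # 400-Gbit Ethernet
]

for name, lanes, rate in MODULES:
    print(f"{name}: {lanes * rate} Gbits/sec aggregate")
```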

In July 2019 the QSFP-DD MSA group released version 4.0 of its Common Management Interface Specification (CMIS). The group also released version 5.0 of its hardware specification. The group explained at that time, “As the adoption of 400-Gbit Ethernet grows, CMIS was designed to cover a wide range of module form factors, functionalities and applications, ranging from passive copper cable assemblies to coherent DWDM [dense wavelength-division multiplexing] modules. CMIS 4.0 can be used as a common interface by other 2-, 4-, 8-, and 16-lane form factors, in addition to QSFP-DD.”

Additionally, the group noted that version 5.0 of its hardware specification “includes new optical connectors, SN and MDC. QSFP-DD is the premier 8-lane data center module form factor. Systems designed for QSFP-DD modules can be backwards-compatible with existing QSFP form factors and provide maximum flexibility for end users, network platform designers and integrators.”

Scott Sommers, a founding member and co-chair of the QSFP-DD MSA, commented, “Through strategic collaborations with our MSA companies, we continue to test the interoperability of multiple vendors’ modules, connectors, cages and DAC cables to assure a robust ecosystem. We remain committed to developing and providing next-generation designs that evolve with the changing technology landscape.”

The SN and MDC connectors joined the CS connector as optical interfaces recognized by the MSA group. All three are duplex connectors characterized as very small form factor (VSFF).

MDC connector

US Conec offers the EliMent brand MDC connector. The company describes EliMent as being “designed for termination of multimode and singlemode fiber cables up to 2.0 mm in diameter. The MDC connector is manufactured with proven 1.25-mm ferrule technology used in industry-standard LC optical connectors, meeting IEC 61753-1 Grade B insertion loss requirements.”

US Conec further explains, “Multiple emerging MSAs have defined port-breakout architectures that require a duplex optical connector with a smaller footprint than the LC connector. The reduced size of the MDC connector will allow a single-array transceiver to accept multiple MDC patch cables, which are individually accessible directly at the transceiver interface.

“The new format will support four individual MDC cables in a QSFP footprint and two individual MDC cables in an SFP footprint. The increased connector density at the module/panel minimizes hardware size, which leads to reduced capital and operational expense. A 1-rack-unit housing can accommodate 144 fibers with LC duplex connectors and adapters. Using the smaller MDC connector increases the fiber count to 432 in the same 1 RU space.”
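
As an illustration of that density claim, the sketch below works through the arithmetic. The 1-RU opening count is an assumption for illustration; only the 144- and 432-fiber totals and the three-port MDC adapter format come from US Conec.

```python
# Sketch of the panel-density arithmetic quoted above. Assumption: a 1-RU
# panel with 72 LC-duplex adapter openings, each of which can instead accept
# a three-port (three duplex) MDC adapter.

OPENINGS_PER_RU = 72          # assumed LC-duplex positions in 1 RU (illustrative)
FIBERS_PER_LC_DUPLEX = 2      # one duplex LC adapter = 2 fibers
MDC_PORTS_PER_OPENING = 3     # three-port MDC adapter in the same opening
FIBERS_PER_MDC = 2            # MDC is a two-fiber (duplex) connector

lc_fibers = OPENINGS_PER_RU * FIBERS_PER_LC_DUPLEX                      # 144
mdc_fibers = OPENINGS_PER_RU * MDC_PORTS_PER_OPENING * FIBERS_PER_MDC   # 432

print(f"LC duplex: {lc_fibers} fibers per RU")
print(f"MDC:       {mdc_fibers} fibers per RU ({mdc_fibers // lc_fibers}x)")
```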

The company touts the MDC connector’s rugged housing, high-precision molding, and engagement length—saying these characteristics allow the MDC to exceed the same Telcordia GR-326 requirements as the LC connector. The MDC includes a push-pull boot that allows installers to insert and extract the connector in tighter, more-confined spaces without affecting neighboring connectors.

The MDC also enables simple polarity reversal, without exposing or twisting fibers. “To change polarity,” US Conec explains, “pull the boot from the connector housing, rotate the boot 180 degrees, and reassemble the boot assembly back onto the connector housing. Polarity marks on the top and side of the connector provide notification of reversed connector polarity.”

When US Conec introduced the MDC connector in February 2019, the company said, “This state-of-the-art connector design ushers in a new era in two-fiber connectivity by bringing unmatched density, simple insertion/extraction, field configurability and optimal carrier-grade performance to the EliMent brand single-fiber connector portfolio.

“Three-port MDC adapters fit directly into standard panel openings for duplex LC adapters, increasing fiber density by a factor of three,” US Conec continued.

CS and SN

The CS and SN connectors are products of Senko Advanced Components. In the CS connector, the ferrules sit side-by-side, similar in layout to the LC connector but smaller in size. In the SN connector, the ferrules are stacked top-and-bottom.

Senko introduced the CS connector in 2017. In a white paper co-authored with eOptolink, Senko explains, “Although LC duplex connectors can be used in QSFP-DD transceiver modules, the transmission bandwidth is limited to a single WDM engine design, using either a 1:4 mux/demux to reach a 200-GbE transmission or a 1:8 mux/demux for 400 GbE. This increases the transceiver cost and cooling requirement.

“The smaller connector footprint of CS connectors allows two of them to be fitted within a QSFP-DD module, which LC duplex connectors cannot accomplish. This allows for a dual WDM engine design using a 1:4 mux/demux to reach a 2×100-GbE transmission, or 2×200-GbE transmission on a single QSFP-DD transceiver. In addition to QSFP-DD transceivers, the CS connector is also compatible with OSFP [octal small form-factor pluggable] and COBO [Consortium for On Board Optics] modules.”
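
As a rough sketch of that mux/demux arithmetic, the code below tabulates the configurations described in the white paper. The per-wavelength rates are assumptions for illustration; the quoted material gives only the mux ratios and the aggregate Ethernet speeds.

```python
# Illustrative sketch of the WDM-engine configurations summarized above.
# Each WDM engine feeds one duplex connector (CS or SN); per-wavelength
# rates of 25G and 50G are assumptions, not figures from the white paper.

def module_throughput(engines, mux_ratio, gbps_per_lambda):
    """Return (engines, GbE per engine, aggregate Gbits/sec)."""
    per_engine = mux_ratio * gbps_per_lambda
    return engines, per_engine, engines * per_engine

CONFIGS = [
    (1, 4, 50),   # single engine, 1:4 mux  -> 200 GbE
    (1, 8, 50),   # single engine, 1:8 mux  -> 400 GbE
    (2, 4, 25),   # dual engine,   1:4 mux  -> 2 x 100 GbE
    (2, 4, 50),   # dual engine,   1:4 mux  -> 2 x 200 GbE
]

for engines, ratio, rate in CONFIGS:
    n, per, total = module_throughput(engines, ratio, rate)
    print(f"{n} engine(s), 1:{ratio} mux at {rate}G/wavelength -> "
          f"{n} x {per}-GbE = {total} Gbits/sec, {n} duplex connector(s)")
```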

Dave Aspray, Senko Advanced Components’ European sales manager, recently spoke about the use of the CS and SN connectors to reach speeds as high as 400 Gbits/sec. “We are helping to shrink the footprint of high-density data centers by shrinking the fiber connectors,” he said. “Current data centers predominantly use a combination of LC and MPO connectors as a high-density solution. This saves a lot of space compared to conventional SC and FC connectors.

“Although MPO connectors can increase capacity without increasing the footprint, they are laborious to manufacture and challenging to clean. We now offer a range of ultra-compact connectors that are more durable in the field as they are designed using proven technology, are easier to handle and clean, and offer considerable space-saving benefits. This is without a doubt the way forward.”

Senko describes the SN connector as an ultra-high-density duplex solution with a 3.1-mm pitch. It enables the connection of 8 fibers in a QSFP-DD transceiver.

“Today’s MPO-based transceivers are the backbone of data center topology, but data center design is transitioning from a hierarchical model to a leaf-and-spine model,” Aspray continued. “In a leaf-and-spine model, it is necessary to break out the individual channels in order to interconnect the spine switches to any of the leaf switches. Using MPO connectors, this would require a separate patch panel with either breakout cassettes or breakout cables. Because SN-based transceivers are already broken out, with four individual SN connectors at the transceiver interface, they can be patched directly.

“The changes that operators make to their data centers now can futureproof them against inevitable increases in demand, which is why it is a good idea for operators to consider deploying higher-density solutions like the CS and SN connectors—even if it is not imperative to their current data center design.”
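
As a simple illustration (not vendor code) of the breakout Aspray describes, the sketch below models a 400G SN-based transceiver as four individually patchable 100-GbE duplex channels, each landing on a different leaf switch without an intermediate MPO breakout cassette. The channel speed and leaf naming are assumptions for illustration.

```python
# Illustrative model of direct patching from an SN-based spine transceiver.

from dataclasses import dataclass

@dataclass
class DuplexPort:
    connector: str   # e.g. "SN", "CS", "MDC"
    gbe: int         # channel speed carried on this fiber pair

# One QSFP-DD spine port broken out as 4 x 100 GbE over SN duplex connectors.
spine_port = [DuplexPort("SN", 100) for _ in range(4)]

# Direct patching: channel i of the spine port lands on leaf switch i.
for i, port in enumerate(spine_port, start=1):
    print(f"Channel {i}: {port.gbe} GbE over {port.connector} duplex -> leaf-{i}")
```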

Patrick McLaughlin is our chief editor.

