The Latest Requirements and Key Technologies for Data Center Optical Transceivers

With the commercial adoption of cloud computing, big data, and other new technologies, data center traffic and bandwidth are growing exponentially, creating a huge opportunity for optical transceiver vendors.

At the same time, the requirements placed on optical transceivers by data centers differ in several ways from those of telecom networks.

First, let's discuss the requirements for data center optical transceivers in detail.

The Requirements for Optical Transceivers

Low Cost

Low cost is the precondition for data centers to deploy optical transceivers in large volumes and a key force behind data center growth.

Low Power Consumption

Low power consumption aligns with green development goals, allowing the industry to grow while limiting its environmental impact.

High Speed

High speed meets the bandwidth demands of data-communication workloads such as cloud computing and big data.

High Density

High density increases the number of optical channels per unit of space, improving overall data transmission capacity.

Short Life Cycle

A short product cycle reflects the rapid pace of development in data communications; the typical life cycle is only 3 to 5 years.

Narrow Temperature Range

Data center optical transceivers operate indoors under controlled temperature and humidity, so it is reasonable for users to specify a narrow operating temperature range, for example 15°C to 55°C.

At the macro level, the data center market defines the expected lifetime and working conditions of optical transceivers according to actual requirements, which allows vendors to fully optimize for cost performance. Because these networks tend to be open, the market is also receptive to new technologies and willing to explore new standards and application conditions. All of this provides excellent conditions for the development of data center optical transceiver technology.

The Key Technologies for Optical Transceivers

Non-Hermetic Package

Since optical subassemblies (OSAs) account for over 60% of the cost of an optical transceiver, and the room for further cost reduction in optical chips keeps shrinking, packaging is the most promising place to cut costs. While maintaining performance and reliability, the industry therefore needs to move from expensive hermetic packages to low-cost non-hermetic packages. The key points of non-hermetic packaging include making the optical devices themselves tolerant of non-hermetic conditions, optimizing the design of the optical components, and improving packaging materials and processes. Among these, the optical devices, and especially the lasers, are the most challenging: only if the laser itself can survive a non-airtight environment can the expensive hermetic package be eliminated. Fortunately, in recent years several laser manufacturers have declared that their lasers are suitable for non-hermetic applications. Judging from the large volumes of data center optical transceivers now shipping, most of which use non-hermetic packaging, the technology has been well received by the data center transceiver industry and its customers.

Hybrid and Integrated Technology

Driven by the demand for more channels, higher speed, and lower power consumption, transceivers of the same volume must carry more data, and photonic integration is gradually becoming a practical reality. Photonic integration has a broad meaning: it covers silicon-based integration (planar lightwave circuit hybrid integration, silicon photonics, etc.) as well as indium phosphide-based integration. Hybrid integration usually refers to combining devices made from different materials; schemes that combine partially free-space optics with partially integrated optics are also called hybrid integration. In a typical hybrid integrated assembly, active optical devices (lasers, detectors, etc.) are mounted on a substrate, such as a planar lightwave circuit or silicon photonic chip, that provides the passive optical paths or other functions such as splitting and multiplexing. Hybrid integration makes the optical assembly very compact, which suits the trend toward transceiver miniaturization, and it can leverage mature, automated IC packaging processes. This makes it well suited to mass production and an effective technical approach for current data center optical transceivers.

Flip Chip Technology

Flip-chip bonding is a high-density chip interconnection technology borrowed from the IC packaging industry. With optical transceivers developing rapidly, shortening the interconnections between chips is an effective option. Bonding the optical chip directly onto the substrate by gold-gold or eutectic welding gives much better high-frequency performance than gold wire bonding, thanks to the shorter connection length and lower resistance. For lasers in particular, the active region sits close to the solder, so the heat the laser generates is easily conducted through the solder into the substrate, which helps maintain laser efficiency at high temperature. Because flip-chip bonding is a mature IC packaging technology, many kinds of commercial automatic flip-chip bonders are already in use in that industry. Optical components require optical path coupling, so the placement accuracy requirements are high; in recent years high-precision flip-chip bonders for optical components have attracted much attention and in many cases have enabled passive alignment, greatly improving productivity. With its high precision, high efficiency, and high quality, flip-chip technology has become an important technology in the data center optical module industry.

Chip On Board Technology

COB (chip on board) technology also comes from the IC packaging industry. Chips or optical components are first fixed on the PCB by epoxy die bonding, electrical connections are then made by gold wire bonding, and finally the assembly is sealed with a glob-top encapsulant. This is obviously a non-hermetic package. The main advantage of the process is that it can be automated. For example, an optical subassembly that has already been integrated by flip-chip bonding can be treated as a single "chip" and then attached to the PCB using COB. COB technology is now widely adopted, especially for VCSEL arrays in short-reach data communication, and integrated silicon photonic chips can also be packaged with it.

Silicon Photonics Technology

Silicon photonics is the science and technology of integrating optoelectronic devices and silicon-based integrated circuits on the same silicon substrate. It will ultimately lead to opto-electronic integrated circuits (OEICs), replacing today's separate photoelectric conversion in discrete optical transceivers with conversion performed locally within an integrated chip and pushing integration further up into the system. Silicon photonics can certainly do many things, but for now its most practical building block is the silicon modulator. From an industry perspective, a new technology can only enter the market when its performance and cost are competitive, and the large up-front investment that silicon photonics requires is a real challenge. The data center optical transceiver market, however, concentrates a large share of its demand within 2 km and has strong requirements for low cost, high speed, and high density, which makes it well suited to large-scale adoption of silicon photonics.

Conclusion

Traditional 100G optical transceivers have been very successful, leaving little room for silicon photonics at that rate. At 200G and 400G, however, directly modulated lasers are approaching their bandwidth limits and EMLs remain relatively expensive, which creates a good opportunity for silicon photonics. Large-scale adoption also depends on how open and accepting the industry is: if standards and agreements take the characteristics of silicon photonics into account, or relax certain parameters (wavelength, extinction ratio, etc.) while still meeting the transmission requirements, they will greatly promote the development and application of silicon photonics.

On-Board Optics

If OEIC is the ultimate opto-electronic integration scheme, on-board optics is a technology that sits between OEICs and pluggable optical transceivers. On-board optics moves the photoelectric conversion function from the faceplate onto the motherboard, next to the processor or its associated electrical chip. This saves space and increases density, and it also shortens the high-frequency electrical traces, which reduces power consumption. On-board optics has so far focused mainly on short-reach multimode fiber driven by VCSEL arrays, but schemes using silicon photonics over single-mode fiber have recently appeared. Besides implementations that provide simple photoelectric conversion, there are also co-packaged forms that combine the photoelectric conversion (I/O) with the associated electrical chip (processing) in one package. Although on-board optics offers high density, its manufacturing, installation, and maintenance costs are relatively high, so it is currently used mainly in supercomputing. As the technology matures and market needs grow, on-board optics is expected to gradually enter data center optical interconnects.

AOC Use in Modern Data Centers

An active optical cable (AOC) integrates optoelectronic devices into a cable assembly that provides high-speed, high-reliability interconnects between data center equipment, high-performance computers, and large-capacity storage devices. It presents an industry-standard electrical interface to the host and exploits the advantages of optical fiber by performing the electrical-to-optical conversion inside the cable ends.

While AOC reach can extend to the limits of the optical technology used (100-200 m), routing a long 100 m cable with an expensive transceiver permanently attached at each end is difficult in crowded data center racks, so typical deployed lengths are 3-30 m. There is also little tolerance for mistakes: a damaged AOC cannot be repaired in the field and must be replaced. For this reason, AOCs are usually deployed in open, accessible areas such as within racks or in open cable trays.

Gigalight 25G SFP28 Active Optical Cables (AOCs) are direct-attach fiber assemblies with SFP28 connectors, compliant with the IEEE 802.3by 25GBASE-SR 25G Ethernet standard. They suit short distances and offer a cost-effective way to connect within racks and across adjacent racks. Reach is up to 70 meters over OM3 MMF and 100 meters over OM4 MMF.

The Advantages of Gigalight 25G SFP28 AOC

Low power consumption <1W

Guaranteed pre-FEC bit error ratio (BER) better than 1E-8 at 25.78125 Gb/s (PRBS31, 55°C), well below the IEEE pre-FEC BER limit of 5E-5 (see the worked comparison after this list)

Mature COB technology

Low Cost

High capacity, timely delivery

CE, UL, RoHS, GR-468 test reports
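To put those BER figures in perspective, the short sketch below (an illustrative calculation, not vendor data) converts the guaranteed pre-FEC BER and the IEEE limit into expected bit errors per second at the 25.78125 Gb/s lane rate.

```python
# Illustrative calculation: expected bit errors per second for a given BER.
# The line rate and BER figures restate the numbers quoted in this article.

LINE_RATE_BPS = 25.78125e9  # 25G Ethernet lane rate in bits per second

def errors_per_second(ber: float, line_rate_bps: float = LINE_RATE_BPS) -> float:
    """Expected number of bit errors per second at a given bit error ratio."""
    return ber * line_rate_bps

guaranteed_ber = 1e-8   # pre-FEC BER guaranteed by the AOC
ieee_limit_ber = 5e-5   # IEEE pre-FEC BER threshold that RS-FEC is designed to correct

print(f"Guaranteed BER: {errors_per_second(guaranteed_ber):,.0f} errors/s")   # ~258
print(f"IEEE FEC limit: {errors_per_second(ieee_limit_ber):,.0f} errors/s")   # ~1,289,063
```

In other words, the guaranteed error rate is roughly 5,000 times lower than the level FEC is specified to handle, leaving a large operating margin.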

Conclusion

The power and cost savings of AOCs caught the eye of hyperscale and enterprise data center builders, and AOCs have since become a popular way to link Top-of-Rack switches upward to aggregation-layer switches such as End-of-Row and leaf switches. Several hyperscale companies have publicly stated their preference for AOCs when linking Top-of-Rack switches. Single-channel (SFP) AOCs have also become very popular in high-speed NVMe storage subsystems, and some hyperscale builders run 10G or 25G AOCs from a Top-of-Rack switch to subsystems at reaches beyond the 3-7 m limit of DAC cables.

What Is SFP?

SFP stands for Small Form-Factor Pluggable. It is a compact, hot-pluggable transceiver used for both telecom and datacom applications.

An SFP module has two optical ports: one contains a laser and forms the transmitter side, and the other contains a photodetector and forms the receiver side. An SFP is therefore a transceiver module, combining transmitter and receiver in a single unit.

Which Components Make Up the SFP Optical Module?

An SFP optical module consists of the laser, the circuit board with its ICs (the PCBA), and the external mechanical parts: the housing, unlocking mechanism, latch, base, bail (pull handle), and dust plug. The color of the bail helps identify the module type and its key parameters. There are many SFP variants, such as BiDi SFP, electrical (copper) SFP, CWDM SFP, DWDM SFP, SFP+, and so on. Compared with XFP, X2, and XENPAK transceivers of the same data rate, SFP transceivers can interoperate with them directly and also cost less.

How Are SFP Modules Used on the PCB?

The following picture shows a perspective view of the SFP module, so you can clearly see its mechanical outlines.


Gigalight 10G SFP+

The SFP module's mechanical and electrical interfaces are specified by a multi-source agreement, or MSA.

The MSA is maintained by an industry group of network component vendors, including Finisar, Fujikura, Lucent, Molex, Tyco, and others.

Engineers from these major vendors came together and made a design that everybody agreed upon. So based on this MSA specification agreement, these companies can make products that can work together in a system without compatibility issues. It is almost like an industry standard.

SFP was designed based on the larger GBIC interface but with a much smaller footprint to increase port density, which is why it is also called mini-GBIC.

SFP modules are classified by working wavelength and reach. Let's look at the most common types.

For multimode fibers, the SFP module is called SX. SX modules use 850nm wavelength. The distance that the SX module supports depends on the network speed. For 1.25 Gbps, the reach is 550 meters. For 4.25 Gbps, SX modules support 150 meters.

For single mode fibers, there are a lot of choices. I am listing the most common types here.

LX modules use a 1310 nm laser and support reaches up to 10 km. ZX modules use a 1550 nm laser and reach up to 80 km, while EZX modules also use a 1550 nm laser but reach up to 120 km.

There are also CWDM and DWDM SFP modules; each transmits on a specific CWDM or DWDM wavelength so that many channels can share one fiber, supporting greater aggregate bandwidth and longer distances.

And don't forget, the MSA also defines a copper SFP module for UTP twisted-pair cabling, though it currently supports only Gigabit Ethernet.
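To keep the naming straight, here is a small illustrative Python lookup that simply restates the nominal figures quoted above (actual reach depends on data rate, fiber grade, and vendor specifications):

```python
# Illustrative lookup of common fiber SFP variants, using the nominal
# figures quoted in this article. Real modules vary by vendor and data rate.

SFP_TYPES = {
    "SX":  {"fiber": "multimode",   "wavelength_nm": 850,  "reach_km": 0.55},
    "LX":  {"fiber": "single-mode", "wavelength_nm": 1310, "reach_km": 10},
    "ZX":  {"fiber": "single-mode", "wavelength_nm": 1550, "reach_km": 80},
    "EZX": {"fiber": "single-mode", "wavelength_nm": 1550, "reach_km": 120},
}

def pick_module(required_reach_km: float) -> str:
    """Return the shortest-reach type that still covers the required distance."""
    candidates = [(name, spec["reach_km"]) for name, spec in SFP_TYPES.items()
                  if spec["reach_km"] >= required_reach_km]
    if not candidates:
        raise ValueError("No listed SFP type covers this distance")
    return min(candidates, key=lambda item: item[1])[0]

print(pick_module(7))    # -> "LX"  (a 10 km module covers a 7 km span)
print(pick_module(90))   # -> "EZX"
```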

Traditional SFP modules support speeds up to 4.25 Gbps. The enhanced version, called SFP+, supports up to 10 Gbps and is increasingly popular for 10 Gigabit Ethernet and 8G Fibre Channel.

SFP transceivers are used in all kinds of network applications, including telecommunications, data communications, and Storage Area Networks.

On the protocol side, there are SFP modules that support SONET/SDH, Gigabit Ethernet, Fibre Channel, Optical Supervisory Channel, and more.

Conclusion

Gigalight is committed to providing cost-effective products to its customers. It offers 10G optical modules such as 10G SFP+, 10G CWDM SFP+, and 10G DWDM SFP+, and has been investing in the development of colored (CWDM/DWDM) transceivers, which are now sold widely around the world. You can find more information on Gigalight's official website.

The Path to Upgrading the Data Center

With increasing demand for high bandwidth from private clouds, public cloud data centers, and service providers, 25G and 100G are now widely deployed, and 200G and 400G optical devices began shipping in 2019. Most server vendors now offer 25G fiber-optic network interface cards as an I/O (input/output) option, and Ethernet signaling rates have risen from the earlier 10G to 25G, 100G, and beyond. While 1G, 10G, and 40G still dominate the Ethernet port market, demand for 25G and 100G is stronger than ever, as the need for bandwidth drives data centers toward greater scalability and flexibility.

Why Is 25G Coming to Data Centers?

Data centers are expanding at an unprecedented rate, driving the need for higher bandwidth connections between servers and switches. To accommodate this trend, the access network has been upgraded from 10G to 25G, providing high-density, low-cost, and low-power solutions for the connection between servers and ToR switches.

The Development History of 25G

Since the effort began in 2014, Google, Microsoft, Arista Networks, Broadcom, and Mellanox have driven the development of the 25G Ethernet standard, which was intended to enable 25G server connections at the top of the rack. With 25G's growing popularity and rapid spread in the market, it will provide a comprehensive solution for server-to-switch connectivity in the future.

The Advantages of 25G

Before the 25G Ethernet standard was released, enterprise, operator, and other data centers generally upgraded their networks from 10G to 40G. With the official release of the 25G Ethernet standard, the 25G-to-100G upgrade path has gained wider adoption thanks to its low cost, low power consumption, and high density, promoting the rapid development of 100G Ethernet. Let's look at the differences between 10G, 25G, and 40G, and which upgrade path is better.

25G Can Provide Higher Performance Bandwidth Than 10G

In today's data centers, the network connection between servers and switches is generally 10G or 25G. 25G builds on 10G packaging and chip technology to provide higher bandwidth and performance. It lets a data center keep its existing network architecture and cabling while supporting rates above 10G, meeting future bandwidth demands and making upgrades more convenient. Because the cabling infrastructure required for 25G is essentially the same as for 10G, the upgrade avoids the expense and complexity of rewiring.

In addition, 25G, like 10G, uses a single SerDes lane per link, which keeps it backward compatible and significantly reduces power consumption and cost, helping data center operators save on both capital and operating expenses.

If we insert a 25G SFP28 optical transceiver into a 10G SFP+ port, what speed will we get?

In theory, a 25G SFP28 optical transceiver is backward compatible with a 10G SFP+ port and will run at 10 Gbps. However, this mode of operation is not supported by all brands of switches and transceivers, and given the limitations of network cards and switch ports, it is generally not recommended.

25G Is More Suitable for High-density Requirements Than 40G

For large, high-end enterprises, server port density largely determines the cost of cabling and switch infrastructure across the whole system. Compared with a 40G-based design, upgrading along the 25G-to-100G path is relatively inexpensive, because it makes full use of every switch port and so effectively lowers the cost per unit of bandwidth (see the illustrative comparison below).
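As a rough illustration (assuming the common 4-way breakout of a 100G QSFP28 switch port into 4 x 25G server links, and of a 40G QSFP+ port into 4 x 10G), the sketch below compares the server-facing bandwidth of the same hypothetical 32-port leaf switch under the two designs:

```python
# Illustrative port-density comparison for a hypothetical 32-port leaf switch.
# Assumes the common 4-way breakout: QSFP28 100G -> 4 x 25G, QSFP+ 40G -> 4 x 10G.

SWITCH_PORTS = 32
BREAKOUT = 4  # server links per switch port

def fabric(server_link_gbps: int) -> dict:
    server_links = SWITCH_PORTS * BREAKOUT
    return {
        "server_links": server_links,
        "per_link_gbps": server_link_gbps,
        "total_server_gbps": server_links * server_link_gbps,
    }

print(fabric(10))  # 40G ports broken out to 10G servers: 128 links, 1,280 Gbps total
print(fabric(25))  # 100G ports broken out to 25G servers: 128 links, 3,200 Gbps total
```

The same number of switch ports and fibers carries 2.5 times the server bandwidth, which is where the cost-per-bit advantage comes from.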

Having already upgraded to a 40G network, is it necessary to deploy a 25G network?

25G devices are currently more expensive than 40G devices, so is it worth deploying 25G after having already upgraded to 40G? Because the per-lane economics of 25G are sound, it is an important stepping stone from 10G toward 100G and beyond. If you need to increase the signaling rate or plan to upgrade to higher speeds (100G/200G/400G), then deploying 25G is the way to go; if there is no such requirement, it is not necessary.

The Prospect of 25G

Today, 25G servers and 100G switches can be seen everywhere in very large data centers, gradually replacing the earlier 10G servers and 40G switches. This upgrade multiplies system throughput by 2.5 (25G versus 10G at the server, 100G versus 40G at the switch) while keeping the incremental equipment cost low. As the Ethernet industry continues to innovate and lay the groundwork for higher rates, the 25G-to-100G upgrade model has become an important path for data centers.

25G Offers More Possibilities for 50G

25G provides 2.5 times the bandwidth of 10G, and 50G will provide 1.25 times the bandwidth of 40G. 50G has been proposed as the basis for 100G, 200G, and 400G network upgrades, but implementing the 50G Ethernet standard will still take some time.

25G opens up more possibilities for 50G: 50G Ethernet can be built from two 25G lanes, offering an alternative to the current practice of aggregating four 10G lanes into 40G, and reducing the cost of network equipment by cutting the lane count and the associated channel skew. The upgrade path may therefore evolve from the traditional 10G-40G-100G to 10G-25G-50G-100G. In any case, upgrading a data center built on 25G lanes to a 50G or 100G network will be simpler and more economical.

25G Lays the Foundation for 200G and 400G Network Upgrade

The 25G/50G/100G network architecture offers greater flexibility and is often used in large data centers, paving the way for later 200G/400G upgrades. High-end enterprises and large data centers are already shifting in this direction, effectively promoting the construction of large data centers and the interconnection between them. More and more suppliers are developing 200G and 400G optical devices, some of which are already in use. Just as 100G Ethernet was built on 25G/50G technology, the future 200G and 400G upgrades will be built on 100G. The following table lists the upgrade paths from 25G/50G/100G to 200G/400G.

Conclusion

The need for higher speed and performance in data centers will never cease. Looking back at the evolution of 25G over the past few years, its emergence marks a milestone in the expansion of next-generation data center bandwidth and channel capacity. The 25G-to-100G upgrade overturns the traditional 10G-40G model and improves data center efficiency by providing higher bandwidth and port density while reducing power consumption and cost, laying a solid foundation for the 200G/400G upgrade. Let's wait and see how the continuing development of Ethernet drives the next round of changes in the data center.

Gigalight supplies one-stop data center optical transceivers for 40G/50G/100G/200G/400G Ethernet interconnections. Keeping pace with the industry's mainstream technology and focusing on high-quality development, Gigalight helps users build high-capacity, high-reliability cloud data center networks, and it will continue to bring more innovative products to customers in the future.

 

Three Hot Selling 200G Optical Transceivers for Data Center

 


Demand for 100G CWDM grew dramatically over the past year. While 100G continues to ramp, the promise of high-volume 400G remains omnipresent, albeit as a 2019-and-beyond phenomenon, so customers need existing technologies that ship in production volumes to fill the gap. 100G CWDM is a mature, well-understood technology and will continue to ramp in the coming year, but many of the big cloud data center OEMs are turning their sights to 200G to meet the pressure for faster connections at scale.

200G optical transceivers, with advantages such as significantly lower latency, power consumption, and cost, are coming to market now and are seen by many as a viable, volume-scalable stepping stone to 400G. This article introduces three hot-selling 200G optical transceivers for the data center.

No. 1

200G QSFP-DD SR8 NRZ

QSFP-DD ports are backward compatible with QSFP28, which is very important for providing a smooth upgrade path and links to older systems. This backward compatibility allows easy adoption of the new module type and accelerates overall network migration.

Application

It is a high-performance module for short-range multi-lane data communication and interconnects.

The Gigalight 200G QSFP-DD SR8 NRZ optical transceiver is designed for 2×100-Gigabit Ethernet 100GBASE-SR4 applications. It operates over multimode fiber systems at a nominal wavelength of 850 nm and incorporates Gigalight's proven circuit and VCSEL technology to provide reliable long life, high performance, and consistent service. Historically, VCSEL-MMF links have been regarded as the lowest-cost short-reach interconnect.

No. 2

200G QSFP-DD PSM8 NRZ

The form factor of the 200G QSFP-DD PSM8 NRZ optical transceiver is similar to that of the 200G QSFP-DD SR8 NRZ.

Application

It is a high-performance module for data communication and interconnects.

The Gigalight 200G QSFP-DD PSM8 NRZ optical transceiver is designed for 2×100-Gigabit Ethernet PSM4 and InfiniBand DDR/EDR applications. The 200G QSFP-DD PSM8 (dual PSM4) module integrates eight data lanes in each direction, and each lane operates at 25.78 Gbps over G.652 SMF, with 2 km and 10 km reach options. It is designed for single-mode fiber systems at a nominal wavelength of 1310 nm. The electrical interface uses a 76-contact edge connector and the optical interface a 24-fiber MTP/MPO connector. The module incorporates Gigalight's proven circuit and optical technology to provide reliable long life, high performance, and consistent service.
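As a quick, illustrative sanity check on those figures, the 25.78 Gbps lane rate is simply the 25 Gbps Ethernet payload rate plus 64B/66B line-coding overhead, and eight such lanes carry two independent 100GbE links:

```python
# Illustrative check of the NRZ lane-rate figures quoted above.
# Each 25G Ethernet lane carries 25 Gbps of payload plus 64B/66B coding overhead.

PAYLOAD_GBPS = 25.0
CODING_OVERHEAD = 66 / 64        # 64B/66B line coding
LANES = 8                        # QSFP-DD PSM8: eight lanes per direction

lane_rate = PAYLOAD_GBPS * CODING_OVERHEAD
print(f"Per-lane line rate:  {lane_rate} Gbps")          # 25.78125 Gbps
print(f"Aggregate line rate: {lane_rate * LANES} Gbps")  # 206.25 Gbps
print(f"Ethernet payload:    {PAYLOAD_GBPS * LANES} Gbps (2 x 100GbE)")
```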

 No. 3

200G QSFP56 SR4 PAM4

PCB layout and heat dissipation are the crucial design challenges for the 200G QSFP56 SR4 PAM4: the module packs many components into the smaller QSFP56 package while dissipating more thermal power.

The 200G QSFP56 optical transceiver is an evolution of the highly popular four-lane QSFP+ form factor and is ideally suited to hyperscale data centers and high-performance computing (HPC) environments.

Application

It is compliant with the QSFP MSA and IEEE 802.3cd 200GBASE-SR4 specification.

The Gigalight 200G QSFP56 SR4 PAM4 optical transceiver is designed for 200-Gigabit Ethernet links over multimode fiber. It is a high-performance module for short-range, multi-lane data communication and interconnects, integrating four data lanes in each direction for 212.5 Gbps of aggregate bandwidth. Each lane operates at 53.125 Gbps PAM4 (26.5625 GBd) up to 70 m over OM3 fiber or 100 m over OM4/OM5 fiber. The module operates over multimode fiber systems at a nominal wavelength of 850 nm. The electrical interface uses a 38-contact edge connector and the optical interface a 12-fiber MTP/MPO connector. It incorporates Gigalight's proven circuit and VCSEL technology to provide reliable long life, high performance, and consistent service (the arithmetic behind these lane figures is sketched below).
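For readers wondering how the 53.125 Gbps, 26.5625 GBd, and 212.5 Gbps figures fit together, the illustrative calculation below derives them from the 200 Gbps payload, the 256B/257B transcoding and RS(544,514) FEC overheads defined for 200GBASE-R, and PAM4's two bits per symbol:

```python
# Illustrative derivation of the 200G QSFP56 PAM4 lane figures.
# Overheads are those defined for 200GBASE-R signaling: 256B/257B transcoding
# plus RS(544,514) Reed-Solomon FEC. Payload and lane count come from the article.

PAYLOAD_GBPS = 200.0
TRANSCODING = 257 / 256          # 256B/257B transcoding overhead
FEC = 544 / 514                  # RS(544,514) FEC overhead
LANES = 4
PAM4_BITS_PER_SYMBOL = 2

aggregate = PAYLOAD_GBPS * TRANSCODING * FEC     # total line rate across 4 lanes
lane_rate = aggregate / LANES                    # per-lane bit rate
baud_rate = lane_rate / PAM4_BITS_PER_SYMBOL     # per-lane symbol rate

print(f"Aggregate line rate: {aggregate:.3f} Gbps")   # 212.500 Gbps
print(f"Per-lane bit rate:   {lane_rate:.4f} Gbps")   # 53.1250 Gbps
print(f"Per-lane baud rate:  {baud_rate:.4f} GBd")    # 26.5625 GBd
```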

Conclusion

200G and even 400G transceivers will begin commercial adoption in 2019 and will start taking market share from 100G transceivers. 2019 will be a pivotal year for seeing how 200G takes hold in the cloud data center, and intensifying industry collaboration on 200G standards and interoperability could position the technology for sustained mainstream adoption while 400G continues to mature.

200G Optical Modules for the Next-generation Data Center Deployments

With 100G in wide-scale deployment today and the promise of mainstream 400G deployment seemingly ubiquitous, Cloud Data Centers are eager to take advantage of any and every opportunity to bridge the throughput gap and keep pace with the data deluge. 200G (4 x 50G) optical modules answer this immediate need head-on.

At the broader market level, while 100G technology is already mature and component integration is well established, 200G end-to-end interoperable chipsets have just recently hit the market. Looking to the past as our guide, in the short term, 200G modules are expected to emulate cost structures akin to 100G modules when they entered the market a few years ago and follow a similar downward cost curve as component integration is further standardized and volume shipments accelerate. In due course, 200G modules are expected to achieve a cost structure that’s comparable to today’s 100G modules.

While 100G CWDM is a mature and well-understood technology and will continue to ramp in the coming year, many of the big Cloud Data Center OEMs are turning their sights to 200G, to meet the pressures of enabling faster connections at scale volumes. 

Google

Google started deploying 2x200GbE transceivers in 2018, and demand for these products is expected to peak in 2022 as Google begins transitioning to 2x400GbE modules.

Amazon

The 400GbE forecast includes the 4x100GbE DR4 modules selected by Amazon. These DR4 modules will be deployed in a breakout configuration, with DR1 modules on the opposite side of the link, so each fiber effectively carries 100GbE traffic aggregated into a DR4 module at one end. Deployments of true 400GbE transceivers will be limited in 2019-2021 to the upper levels of switching in mega data centers and to core routers. Implementing high-radix leaf-and-spine networks with 400GbE connectivity will be challenging until switching ASICs reach 51.2 Tbps capacity, probably by 2022-2023.

Facebook

Facebook is staying with 100GbE for now and plans to use 200GbE next.

More than 2.6 billion people now use its services.

Facebook has publicly stated its intent to stay with 100GbE optics for now and to move to 200GbE or 400GbE transceivers in the next upgrade cycle in 2021-2022. Facebook's new F16 data center network architecture will require 3-4 times more optical connections than the previous design (F4). The first implementation of the F16 topology will rely on 100GbE CWDM4 transceivers, boosting demand for these modules in 2020-2022.

Facebook is already the largest consumer of 100GbE CWDM4 modules. They use a sub-spec version of CWDM4 transceivers with 500-meter reach instead of 2km, also known as CWDM4-OCP (for Open Compute Project). The latest forecast database includes sub-spec CWDM4 modules as a separate category. Segmenting the sub-spec products also helped us to refine the market data collected for 2018, resulting in higher than previously reported sales.

Once these issues are resolved, the demand for CWDM4 is expected to skyrocket in the second half of 2019 and make a real difference to the market in 2020-2022. Sales of sub-spec CWDM4 modules are projected to peak in 2022, as Facebook starts the transition to 200GbE connectivity.

Conclusion

Though 200G sits between 100G and 400G, customer demand is shaping it into a sizable market, with deployments expected by the second half of next year. The good news for module vendors is that multiple component vendors, such as MACOM, already have 200G-capable components on the market.

Comparing the cost of 100G and 200G means looking specifically at the cost of the components themselves. While 100G component integration is already mature, 200G end-to-end interoperable chipsets have only just hit the market. 200G is therefore expected to enter at a price point similar to where 100G started a few years ago and to follow a similar downward cost curve as integration advances.

Gigalight is committed to leading the evolution of data center interconnects from 100G to 200G and 400G. Its 200G products include the 200G QSFP-DD SR8 NRZ 100m, 200G QSFP-DD PSM8 NRZ 2km, 200G QSFP-DD PSM8 NRZ 10km, 200G QSFP56 SR4 PAM4 100m, 200G QSFP56 FR4 PAM4 2km, 200G QSFP56 LR4 PAM4 10km, and more. Among them, the Gigalight 200G QSFP-DD PSM8 NRZ 10km optical transceiver (GDM-SPO201-LR8C) is an eight-channel, hot-pluggable, parallel fiber-optic QSFP Double Density module designed for 2×100-Gigabit Ethernet PSM4 and InfiniBand DDR/EDR applications. It is a high-performance module for data communication and interconnects.

The 200G QSFP-DD PSM8 (dual PSM4) module integrates eight data lanes in each direction, each operating at 25.78 Gbps up to 10 km over G.652 SMF at a nominal wavelength of 1310 nm. The electrical interface uses a 76-contact edge connector and the optical interface a 24-fiber MTP/MPO connector, and the module incorporates Gigalight's proven circuit and optical technology to provide reliable long life, high performance, and consistent service.

 

Source: LightCounting