In the era of explosive AI compute growth, data center networks are evolving at an unprecedented pace to match the demands of massive GPU clusters and trillion-parameter models. The leap from 400G to 800G Ethernet is far more than just doubling bandwidth—it is a necessary step to eliminate bottlenecks, improve efficiency, and support the next wave of AI scale-up.
However, as AI workloads intensify (especially large-scale LLM training involving thousands to tens of thousands of GPUs), the limitations of 400G have become increasingly evident: insufficient per-port bandwidth, higher power consumption per bit, and greater cabling overhead.
By the end of 2025 and into 2026, 800G has emerged as the new “standard configuration” for AI data centers, particularly in hyperscale greenfield builds. Industry data shows 800G optical module shipments growing 60–100% year-over-year in 2025, with revenue from high-speed switches growing sharply as well.
An 800G DR8 optical transceiver is a high-speed optical module designed for ultra-fast data center interconnects, delivering 800 Gbps of data transmission over short-reach links. It is widely used in AI data centers and hyperscale cloud networks.
Based on an 8×100G PAM4 parallel-optics architecture, the 800G DR8 transceiver transmits data simultaneously over eight electrical lanes and eight optical lanes, typically operating at the 1310 nm wavelength. It is available in QSFP-DD or OSFP form factors and uses an MPO-16 single-mode fiber interface, with a typical transmission reach of up to 500 meters, making it ideal for high-speed switch-to-switch and switch-to-GPU/server interconnects within data centers.
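The lane arithmetic behind the 8×100G PAM4 architecture can be sketched with a quick calculation. This is a back-of-the-envelope illustration assuming the common 100G-per-lane convention (53.125 GBd PAM4, 2 bits per symbol); exact overheads vary with the FEC and encoding scheme in use:

```python
# Back-of-the-envelope lane math for an 800G DR8 module.
# Assumption: the common 100G-per-lane PAM4 convention, i.e.
#   53.125 GBd symbol rate x 2 bits/symbol = 106.25 Gb/s line rate per lane,
# where the ~6% above 100 Gb/s carries FEC and encoding overhead.

LANES = 8
SYMBOL_RATE_GBD = 53.125      # PAM4 symbol (baud) rate per lane
BITS_PER_SYMBOL = 2           # PAM4 encodes 2 bits per symbol

line_rate_per_lane = SYMBOL_RATE_GBD * BITS_PER_SYMBOL   # Gb/s on the fiber
aggregate_line_rate = LANES * line_rate_per_lane         # Gb/s across 8 lanes
payload_per_lane = 100.0                                 # usable Gb/s after overhead
aggregate_payload = LANES * payload_per_lane

print(f"per-lane line rate:  {line_rate_per_lane} Gb/s")   # 106.25 Gb/s
print(f"aggregate line rate: {aggregate_line_rate} Gb/s")  # 850.0 Gb/s
print(f"aggregate payload:   {aggregate_payload} Gb/s")    # 800.0 Gb/s
```

The eight optical lanes map one-to-one onto the eight fibers of the MPO-16 interface (one fiber per direction per lane), which is why DR8 scales density simply by adding parallel fibers rather than more wavelengths.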

Key Features:
1. Total bandwidth of up to 800 Gbps
2. DR8 parallel-optics architecture with 8×100G PAM4
3. Low latency and high bandwidth density, ideal for AI training clusters
4. Compared to 400G, significantly increases per-port bandwidth while reducing power consumption per bit
5. Compliant with the IEEE 802.3df 800GBASE-DR8 standard
The 800G DR8 optical transceiver is a short-reach, high-density optical interconnect solution purpose-built for AI-scale computing and next-generation ultra-fast data centers.
Simply put, DR8 (Data Center Reach, 8 lanes) is the most widely adopted single-mode parallel-optics architecture for 800G (and future 1.6T) optical transceivers, based on 8×100G/106G PAM4 lanes. It is purpose-built for high-density, short-reach interconnects within AI and HPC data centers. Over the coming years, DR8 is expected to remain the preferred interconnect technology for hyperscale AI clusters, supporting training environments with thousands to tens of thousands of GPUs.

Ultra-High Bandwidth Optimized for AI Traffic
With 800 Gbps per port and a clear evolution path toward 1.6T DR8, this architecture is well aligned with the bursty east–west traffic patterns of large-scale LLM training, including all-reduce operations, parameter synchronization, and checkpointing. Compared with 400G DR4, DR8 doubles per-port bandwidth, reducing network wait time and improving GPU utilization; vendor and operator reports cite training-cycle reductions on the order of 10–30%.
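The effect of doubling link bandwidth on collective operations can be sketched with the standard ring all-reduce cost model. This is an idealized estimate under assumed parameters (payload size and GPU count are hypothetical); real fabrics add latency, congestion, and protocol overhead:

```python
# Ideal ring all-reduce transfer time: each of N workers sends and receives
# 2*(N-1)/N of the payload S over a link of bandwidth B (bandwidth-bound model).
def ring_allreduce_seconds(size_bytes, n_workers, link_gbps):
    bytes_per_sec = link_gbps * 1e9 / 8
    return 2 * (n_workers - 1) / n_workers * size_bytes / bytes_per_sec

# Hypothetical example: synchronizing 10 GB of gradients across 1024 GPUs.
S = 10e9
for gbps in (400, 800):
    t = ring_allreduce_seconds(S, 1024, gbps)
    print(f"{gbps}G link: {t * 1000:.1f} ms per all-reduce")
```

In this bandwidth-bound regime the transfer time scales inversely with link speed, so the 800G link halves the per-iteration communication time relative to 400G, which is where the GPU-utilization gains come from.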
Practical Reach for Intra–Data Center Interconnects
DR8 supports up to 500 m over single-mode fiber (with extended versions reaching 2 km), effectively covering rack-to-rack, row-to-row, and adjacent pod interconnects within modern data centers.
High Density and Low Power Consumption
Using MPO-16 or dual MPO-12 connectors, DR8 enables extremely high front-panel density. With the emergence of LPO (Linear Pluggable Optics), power consumption can be reduced to 8–9 W, nearly 50% lower than traditional DSP-based designs, making DR8 well suited for liquid-cooled or immersion-cooled AI racks while significantly lowering total cost of ownership (TCO).
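The power-per-bit claim above can be checked with simple arithmetic. The 16 W DSP-based figure below is an illustrative assumption chosen to be consistent with the roughly 50% reduction cited; the 8.5 W LPO figure sits within the 8–9 W range mentioned in the text:

```python
# Energy per transmitted bit (picojoules per bit) = power / data rate.
def pj_per_bit(power_w, rate_gbps):
    return power_w / (rate_gbps * 1e9) * 1e12

dsp_pj = pj_per_bit(16.0, 800)   # assumed DSP-based 800G module (~16 W)
lpo_pj = pj_per_bit(8.5, 800)    # LPO module in the 8-9 W range from the text

print(f"DSP: {dsp_pj:.1f} pJ/bit, LPO: {lpo_pj:.1f} pJ/bit")
print(f"reduction: {(1 - lpo_pj / dsp_pj) * 100:.0f}%")
```

At 800 Gbps, every watt saved per module is worth 1.25 pJ/bit, and across thousands of transceivers in an AI cluster that difference compounds into a meaningful share of the facility's power and cooling budget.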
Smooth Upgrade Path and Strong Compatibility
DR8 supports breakout to 2×400G DR4, enabling backward compatibility with existing 400G infrastructure. Available in both QSFP-DD and OSFP form factors, it allows a smooth transition from 400G to 800G and onward to 1.6T, making DR8 an ideal choice for high-radix, lossless RoCEv2 networks in spine–leaf and AI super-pod architectures.
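One practical consequence of the 2×400G DR4 breakout is how it shifts the port mix during a migration. A hypothetical sketch (the 64-port switch size is an assumption for illustration, not a specific product):

```python
# Port mix for a hypothetical 64-port 800G switch when some ports are
# broken out to 2x400G DR4 to attach legacy 400G equipment.
PORTS_800G = 64

def effective_ports(breakout_ports):
    """breakout_ports run as 2x400G each; the rest stay native 800G."""
    native_800g = PORTS_800G - breakout_ports
    ports_400g = breakout_ports * 2
    total_bw_tbps = (native_800g * 800 + ports_400g * 400) / 1000
    return native_800g, ports_400g, total_bw_tbps

print(effective_ports(0))    # all ports native 800G
print(effective_ports(16))   # 16 ports broken out into 32 x 400G
```

Note that total switch bandwidth (51.2 Tbps in this sketch) is unchanged by breakout; what changes is the ratio of 800G to 400G attachment points, which is what makes a gradual 400G-to-800G migration possible without forklift upgrades.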
DR8 optical transceivers are particularly well suited for next-generation AI data centers due to their ability to deliver ultra-high bandwidth, low latency, and excellent scalability within short-reach environments. Built on an 8×100G PAM4 parallel-optics architecture, 800G DR8 enables massive east–west traffic between switches and GPU clusters, which is a core requirement for large-scale AI training workloads.
By doubling per-port bandwidth compared to 400G solutions, DR8 significantly increases switch port density while reducing network complexity, power consumption per bit, and overall cabling overhead. In addition, DR8’s parallel single-mode fiber design aligns well with modern leaf–spine architectures, making it a cost-effective and future-ready foundation for AI-driven data center networks.

In AI data centers, the leaf–spine architecture is the foundation for building scalable, non-blocking networks. 800G DR8 optical transceivers enable ultra-high-speed interconnects between spine and leaf switches, where bandwidth demand is the highest. With 800 Gbps per port, DR8 significantly increases uplink capacity, reduces oversubscription, and ensures consistent low-latency performance across the fabric. This is especially critical for large-scale AI workloads that rely on all-to-all communication patterns, where frequent data exchanges between thousands of nodes can quickly overwhelm lower-speed interconnects.
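The oversubscription point can be made concrete with a small calculation. The port counts below are illustrative assumptions, not a reference design:

```python
# Oversubscription ratio of a leaf switch = downlink bandwidth / uplink bandwidth.
def oversubscription(down_ports, down_gbps, up_ports, up_gbps):
    return (down_ports * down_gbps) / (up_ports * up_gbps)

# Illustrative leaf with 32 x 400G server-facing ports.
# 16 x 400G uplinks and 8 x 800G uplinks carry the same total bandwidth,
# but 800G halves the uplink port (and fiber) count.
print(oversubscription(32, 400, 16, 400))  # -> 2.0
print(oversubscription(32, 400, 8, 800))   # -> 2.0
print(oversubscription(32, 400, 16, 800))  # -> 1.0 (non-blocking)
```

For a fixed number of uplink ports, moving them from 400G to 800G halves the oversubscription ratio; at 1:1 the fabric is non-blocking, which is the usual design target for all-to-all AI traffic.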
Within AI training pods and large-scale GPU clusters, 800G DR8 plays a key role in delivering high-throughput, low-latency connectivity between switches and accelerators. As AI models grow in size and complexity, efficient GPU-to-GPU communication becomes essential for maintaining training efficiency. DR8-based 800G links provide the bandwidth required to support intensive collective operations such as gradient synchronization. At the same time, 800G DR8 is well aligned with both InfiniBand and Ethernet-based AI networking architectures, giving operators the flexibility to deploy the optimal fabric while building a scalable and future-ready optical infrastructure.
800G DR8 optical transceivers deliver a compelling combination of ultra-high bandwidth, improved power efficiency, and scalable architecture, making them ideally suited for next-generation AI data centers. By doubling per-port capacity compared to 400G solutions, DR8 increases network throughput while reducing complexity, cabling requirements, and power consumption per bit. Its parallel-optics design integrates seamlessly with modern leaf–spine topologies and high-density GPU clusters.
More importantly, 800G DR8 plays a strategic role in enabling AI-scale networking. It supports the massive east–west traffic generated by distributed training workloads and provides a clear evolution path toward 1.6T and beyond. As data centers continue to push the limits of performance and scalability, DR8 stands out as a foundational technology that bridges today’s AI infrastructure with the ultra-fast networks of the future.