Compute Express Link

From Wikipedia, the free encyclopedia
Compute Express Link
Year created: 2019
Speed: Full duplex; 1.x, 2.x (32 GT/s):
  • 3.938 GB/s (×1)
  • 63.015 GB/s (×16)
Website: www.computeexpresslink.org

Compute Express Link (CXL) is an open standard for high-speed central processing unit (CPU)-to-device and CPU-to-memory connections, designed for high performance data center computers.[1][2][3][4] CXL is built on the PCI Express (PCIe) physical and electrical interface and includes PCIe-based block input/output protocol (CXL.io) and new cache-coherent protocols for accessing system memory (CXL.cache) and device memory (CXL.mem).

History

The standard was primarily developed by Intel. The CXL Consortium was formed in March 2019 by founding members Alibaba Group, Cisco, Dell EMC, Facebook, Google, Hewlett Packard Enterprise (HPE), Huawei, Intel and Microsoft,[5][6] and officially incorporated in September 2019.[7] As of January 2022, AMD, Nvidia, Samsung and Xilinx had joined the founders on the board of directors, while ARM, Broadcom, Ericsson, IBM, Keysight, Kioxia, Marvell, Mellanox, Microchip, Micron, Oracle, Qualcomm, Rambus, Renesas, Seagate, SK Hynix, Synopsys, and Western Digital, among others, had joined as contributing members.[8][9] Industry partners include the PCI-SIG,[10] Gen-Z,[11] SNIA,[12] and DMTF.[13]

On April 2, 2020, the Compute Express Link and Gen-Z consortia announced plans to implement interoperability between the two technologies,[14][15] with initial results presented in January 2021.[16] On November 10, 2021, the Gen-Z specifications and assets were transferred to CXL in order to focus development on a single industry standard going forward.[17] At the time of this announcement, 70% of Gen-Z members had already joined the CXL Consortium, which now includes the companies behind the memory-coherent interconnect open standards OpenCAPI (IBM), CCIX (Xilinx), and Gen-Z (HPE), as well as the proprietary InfiniBand / RoCE (Mellanox), Infinity Fabric (AMD), Omni-Path and QuickPath/Ultra Path (Intel), and NVLink/NVSwitch (Nvidia) protocols.[18][19]

Specifications

On March 11, 2019, the CXL Specification 1.0, based on PCIe 5.0, was released.[6] It allows the host CPU to access shared memory on accelerator devices with a cache-coherent protocol. The CXL Specification 1.1 was released in June 2019.

On November 10, 2020, the CXL Specification 2.0 was released. The new version adds support for CXL switching, allowing multiple CXL 1.x and 2.0 devices to be connected to a CXL 2.0 host processor and/or each device to be pooled across multiple host processors, in distributed shared memory and disaggregated storage configurations; it also adds device integrity and data encryption.[20] There is no bandwidth increase over CXL 1.x, because CXL 2.0 still uses the PCIe 5.0 PHY.
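
The per-lane and ×16 figures given for CXL 1.x/2.0 follow from the 32 GT/s PCIe 5.0 line rate and its 128b/130b encoding. A minimal back-of-the-envelope sketch in Python; ignoring all higher-layer protocol overhead is an assumption of the sketch, not a figure from the specification:

    # Back-of-the-envelope CXL 1.x/2.0 bandwidth over a PCIe 5.0 PHY (per direction).
    # Only the 128b/130b line encoding is accounted for; higher-layer overheads are ignored.
    line_rate_gt_per_s = 32          # PCIe 5.0 raw transfer rate per lane, GT/s
    encoding_efficiency = 128 / 130  # 128b/130b encoding used by the PCIe 5.0 PHY

    per_lane_gb_per_s = line_rate_gt_per_s * encoding_efficiency / 8  # bits -> bytes
    print(f"x1:  {per_lane_gb_per_s:.3f} GB/s")       # ~3.938 GB/s
    print(f"x16: {per_lane_gb_per_s * 16:.3f} GB/s")  # ~63.015 GB/s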

The next version of the CXL specification is expected in the first half of 2022 and will be based on the PCIe 6.0 PHY.[19][21]

Implementations

On April 2, 2019, Intel announced its Agilex family of FPGAs featuring CXL.[22]

On May 11, 2021, Samsung announced a DDR5-based memory expansion module that allows terabyte-level memory expansion along with high performance, for use in data centers and potentially next-generation PCs.[23]

In 2021, CXL 1.1 support was announced for Intel Sapphire Rapids processors[24] and AMD Zen 4 EPYC "Genoa" and "Bergamo" processors.[25]

CXL devices were shown at the SC21 conference by Intel,[26] Astera, Rambus, Synopsys, Samsung, and Teledyne LeCroy, among others.[27][28][29]

Protocols

The CXL standard defines three separate protocols:[30][20]

  • CXL.io - based on PCIe 5.0 with a few enhancements, it provides configuration, link initialization and management, device discovery and enumeration, interrupts, DMA, and register I/O access using non-coherent loads/stores.
  • CXL.cache - allows peripheral devices to coherently access and cache host CPU memory with a low-latency request/response interface.
  • CXL.mem - allows the host CPU to coherently access cached device memory with load/store commands, for both volatile (RAM) and persistent non-volatile (flash memory) storage.

The CXL.cache and CXL.mem protocols operate with a common link/transaction layer, which is separate from the CXL.io protocol's link and transaction layer. These protocols/layers are multiplexed together by an Arbitration and Multiplexing (ARB/MUX) block before being transported over the standard PCIe 5.0 PHY using fixed-width 528-bit (66-byte) Flow Control Unit (FLIT) blocks, each consisting of four 16-byte data 'slots' and a two-byte cyclic redundancy check (CRC) value.[30] CXL FLITs encapsulate PCIe-standard Transaction Layer Packet (TLP) and Data Link Layer Packet (DLLP) data in a variable frame size format.[31][32]
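
The FLIT sizing above can be checked with simple arithmetic; the following Python sketch merely restates the slot-and-CRC layout described in this section:

    # CXL 1.x/2.0 FLIT layout as described above: four 16-byte slots plus a 2-byte CRC.
    SLOTS = 4
    SLOT_BYTES = 16
    CRC_BYTES = 2

    flit_bytes = SLOTS * SLOT_BYTES + CRC_BYTES
    print(flit_bytes, "bytes =", flit_bytes * 8, "bits")  # 66 bytes = 528 bits

    # Share of each FLIT occupied by slot data (headers carried inside slots are not broken out):
    print(f"{SLOTS * SLOT_BYTES / flit_bytes:.1%}")       # ~97.0%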

Device types

CXL is designed to support three primary device types, summarised in the sketch after this list:[20]

  • Type 1 (CXL.io and CXL.cache) – specialised accelerators (such as smart NICs) with no local memory. Devices rely on coherent access to host CPU memory.
  • Type 2 (CXL.io, CXL.cache and CXL.mem) – general-purpose accelerators (GPU, ASIC or FPGA) with high-performance GDDR or HBM local memory. Devices can coherently access host CPU's memory and/or provide coherent or non-coherent access to device local memory from the host CPU.
  • Type 3 (CXL.io and CXL.mem) – memory expansion boards and storage-class memory. Devices provide host CPU with low-latency access to local DRAM or non-volatile storage.
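
The protocol combinations in the list above can be restated as a simple mapping; this Python sketch only summarises the list and is not drawn from the specification text:

    # Protocol combinations per CXL device type, as listed above.
    DEVICE_TYPES = {
        "Type 1": {"CXL.io", "CXL.cache"},             # e.g. smart NICs without local memory
        "Type 2": {"CXL.io", "CXL.cache", "CXL.mem"},  # accelerators with local GDDR/HBM
        "Type 3": {"CXL.io", "CXL.mem"},               # memory expansion and storage-class memory
    }

    def protocols_for(device_type: str) -> set[str]:
        """Return the set of CXL protocols used by the given device type."""
        return DEVICE_TYPES[device_type]

    print(sorted(protocols_for("Type 2")))  # ['CXL.cache', 'CXL.io', 'CXL.mem']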

Type 2 devices implement two memory coherence modes, managed by the device driver. In device bias mode, the device accesses its local memory directly and no caching is performed by the CPU; in host bias mode, the host CPU's cache controller handles all access to device memory. The coherence mode can be set individually for each 4 KB page and is stored in a translation table in the local memory of the Type 2 device. Unlike other CPU-to-CPU memory coherency protocols, this arrangement only requires the host CPU memory controller to implement the cache agent; this asymmetric approach reduces implementation complexity and latency.[30]
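
A minimal conceptual model of the per-page bias bookkeeping described above, written in Python; the class and method names are hypothetical illustrations that mirror the prose, not any real driver or hardware interface:

    from enum import Enum

    PAGE_SIZE = 4 * 1024  # coherence mode is tracked per 4 KB page

    class Bias(Enum):
        DEVICE = "device bias"  # device accesses its local memory directly; the CPU does not cache it
        HOST = "host bias"      # the host CPU's cache controller mediates access to device memory

    class BiasTable:
        """Hypothetical model of a Type 2 device's per-page bias table, kept in device-local memory."""

        def __init__(self) -> None:
            self._bias: dict[int, Bias] = {}  # page number -> bias mode

        def bias_of(self, addr: int) -> Bias:
            # Pages default to device bias in this sketch (an assumption, not stated in the text).
            return self._bias.get(addr // PAGE_SIZE, Bias.DEVICE)

        def set_bias(self, addr: int, bias: Bias) -> None:
            # A real bias flip also involves cache flushes/invalidations; this model only records state.
            self._bias[addr // PAGE_SIZE] = bias

    table = BiasTable()
    table.set_bias(0x1000, Bias.HOST)
    print(table.bias_of(0x1000).value, "/", table.bias_of(0x3000).value)  # host bias / device bias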

References

  1. ^ "ABOUT CXL". Compute Express Link. Retrieved 2019-08-09.
  2. ^ "Synopsys Delivers Industry's First Compute Express Link (CXL) IP Solution for Breakthrough Performance in Data-Intensive SoCs". finance.yahoo.com. Yahoo! Finance. Retrieved 2019-11-09.
  3. ^ "A Milestone in Moving Data". Intel Newsroom. Intel. Retrieved 2019-11-09.
  4. ^ "Compute Express Link Consortium (CXL) Officially Incorporates; Announces Expanded Board of Directors". www.businesswire.com. Business Wire. 2019-09-17. Retrieved 2019-11-09.
  5. ^ Calvert, Will. "Intel, Google and others join forces for CXL interconnect". www.datacenterdynamics.com.
  6. ^ a b Cutress, Ian. "CXL Specification 1.0 Released: New Industry High-Speed Interconnect From Intel". Anandtech. Retrieved 2019-08-09.
  7. ^ "Compute Express Link Consortium (CXL) Officially Incorporates; Announces Expanded Board of Directors". www.businesswire.com. September 17, 2019.
  8. ^ "Compute Express Link: Our Members". CXL Consortium. 2020. Retrieved 2020-09-25.
  9. ^ Papermaster, Mark (July 18, 2019). "AMD Joins Consortia to Advance CXL, a New High-Speed Interconnect for Breakthrough Performance". Community.AMD. Retrieved 2020-09-25.
  10. ^ https://www.computeexpresslink.org/post/cxl-consortium-and-pci-sig-announce-marketing-mou-agreement
  11. ^ https://www.computeexpresslink.org/industry-liaisons
  12. ^ https://www.computeexpresslink.org/post/snia-and-cxl-consortium-form-strategic-alliance
  13. ^ https://www.computeexpresslink.org/post/dmtf-and-cxl-consortium-establish-work-register
  14. ^ "CXL Consortium and Gen-Z Consortium Announce MOU Agreement" (PDF). Beaverton, Oregon. April 2, 2020. Retrieved September 25, 2020.
  15. ^ "CXL Consortium and Gen-Z Consortium Announce MOU Agreement". April 2, 2020. Retrieved April 11, 2020.
  16. ^ https://www.computeexpresslink.org/post/cxl-consortium-and-gen-z-consortium-mou-update-a-path-to-protocol
  17. ^ CXL Consortium (November 10, 2021). "Exploring the Future". Compute Express Link.
  18. ^ Morgan, Timothy Prickett (November 23, 2021). "Finally, A Coherent Interconnect Strategy: CXL Absorbs Gen-Z". The Next Platform.
  19. ^ a b https://www.eetimes.com/cxl-will-absorb-gen-z/
  20. ^ a b c "Compute Express Link (CXL): All you need to know". Rambus.
  21. ^ https://www.eenewseurope.com/news/rambus-two-deals-datacentre-interface
  22. ^ "How do the new Intel Agilex FPGA family and the CXL coherent interconnect fabric intersect?". PSG@Intel. 2019-05-03. Retrieved 2019-08-09.
  23. ^ "Samsung Unveils Industry-First Memory Module Incorporating New CXL Interconnect Standard". Samsung. 2021-05-11. Retrieved 2021-05-11.
  24. ^ "Intel Architecture Day 2021". Intel.
  25. ^ Paul Alcorn (November 8, 2021). "AMD Unveils Zen 4 CPU Roadmap: 96-Core 5nm Genoa in 2022, 128-Core Bergamo in 2023". Tom's Hardware.
  26. ^ "Intel Sapphire Rapids CXL with Emmitsburg PCH Shown at SC21". December 7, 2021.
  27. ^ https://www.eetimes.com/cxl-put-through-its-paces/
  28. ^ "CXL Consortium Showcases First Public Demonstrations of Compute Express Link Technology at SC21". HPCwire.
  29. ^ CXL Consortium (December 16, 2021). "CXL Consortium Makes a Splash at Supercomputing 2021 (SC21)". Compute Express Link.
  30. ^ a b c "Compute Express Link Standard | DesignWare IP | Synopsys". www.synopsys.com.
  31. ^ CXL Consortium (September 23, 2019). "Introduction to Compute Express Link (CXL): The CPU-To-Device Interconnect Breakthrough". Compute Express Link.
  32. ^ https://www.flashmemorysummit.com/Proceedings2019/08-07-Wednesday/20190807_CTRL-202A-1_Lender.pdf
