Kyocera Pegatron and SMART Show CXL Over Optics at Computex 2025


One thing we did not get to cover at Computex 2025 was a neat demo: CXL memory connected to a server over an optical link. This was a demo, not a commercial product, but we thought it was worth sharing.


On the table, we saw one of SMART Modular Technologies’ CXL memory cards. These are fairly recognizable, as they have the DIMMs mounted in opposing directions to fit into a single slot. The card was mounted on a test fixture alongside the Kyocera PCIe-to-optical conversion card.

Kyocera Pegatron Optical CXL Concept Computex 2025 3

Those went to four MTP/MPO cables, which were then routed to the rear of the Pegatron server.

Kyocera Pegatron Optical CXL Concept Computex 2025 4

A second optical module sits on another card inside the server, connected to the server’s PCIe lanes.

Kyocera Pegatron Optical CXL Concept Computex 2025 2

We found a quick near-packaged optics concept diagram that shows a bit more about what is going on. Effectively, this runs the CXL/PCIe link over multimode fiber.

Kyocera Pegatron Optical CXL Concept Diagram Computex 2025 1

If this looks familiar, it is in some ways similar to the technology used in the Kioxia Optical Interface SSD Demoed at FMS 2024, which Kyocera was also a part of. The Computex 2025 version used more optical engines and supported a higher-speed x16 link rather than an x4 link.
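For a sense of scale on the x4-to-x16 jump, here is a rough per-direction bandwidth calculation for PCIe Gen5 (the generation current CXL devices ride on), accounting only for the line rate and 128b/130b encoding, not protocol overhead:

```python
# PCIe Gen5 raw signaling rate per lane, in gigatransfers per second
GEN5_GT_PER_S = 32
# 128b/130b line encoding: 128 payload bits per 130 transmitted bits
ENCODING = 128 / 130

def link_bandwidth_gbytes(lanes: int, gt_per_s: float = GEN5_GT_PER_S) -> float:
    """Approximate one-direction link bandwidth in GB/s, before protocol overhead."""
    return lanes * gt_per_s * ENCODING / 8

print(f"x4:  {link_bandwidth_gbytes(4):.1f} GB/s")   # link width of the FMS 2024 SSD demo
print(f"x16: {link_bandwidth_gbytes(16):.1f} GB/s")  # link width of the Computex 2025 demo
```

That works out to roughly 15.8 GB/s for an x4 link versus about 63 GB/s for x16, which is why the wider link needs more optical engines.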

Kioxia Optical SSD Demo FMS 2024 1

In both cases, the idea is that one can move these components out of high-density compute racks. Instead of connecting a server directly to a remote memory card, imagine a CXL 3 switch where pooled memory could be dynamically allocated to hosts. Another important use case is moving devices out of high-density AI compute clusters into lower-density racks.
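To make the dynamic allocation idea concrete, here is a toy sketch of a switch-attached memory pool granting and reclaiming capacity per host. The class and method names are hypothetical; real fabric management goes through the CXL Fabric Manager interfaces, not anything like this:

```python
class ToyCxlMemoryPool:
    """Toy model of a CXL 3.x switch-attached memory pool.

    Illustrative only: real pooling is managed by a CXL Fabric Manager,
    not a Python object like this one.
    """

    def __init__(self, total_gib: int):
        self.total_gib = total_gib
        self.grants: dict[str, int] = {}  # host name -> GiB currently granted

    @property
    def free_gib(self) -> int:
        return self.total_gib - sum(self.grants.values())

    def allocate(self, host: str, gib: int) -> bool:
        """Grant `gib` of pooled memory to `host` if the pool has room."""
        if gib > self.free_gib:
            return False
        self.grants[host] = self.grants.get(host, 0) + gib
        return True

    def release(self, host: str) -> int:
        """Return all of a host's granted memory to the pool."""
        return self.grants.pop(host, 0)


# Hosts borrow capacity only while they need it, instead of each
# server being overprovisioned with its own DIMMs.
pool = ToyCxlMemoryPool(total_gib=1024)
pool.allocate("server-a", 256)
pool.allocate("server-b", 512)
print(pool.free_gib)  # 256
pool.release("server-a")
print(pool.free_gib)  # 512
```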

Final Words

This was clearly a demo to show what is possible. At the same time, we are getting close to seeing far more co-packaged and near-packaged optical connections deployed. CXL should also become a bigger trend in 2026 as PCIe Gen6 generations of servers hit the market. For now, it is a cool demo with a lot of neat future implications.

3 COMMENTS

  1. Is it known how ‘protocol aware’ this setup is? Is it basically an invisible (aside from whatever little config/status interface is presumably exposed to the host) copper-to-fiber converter that would take your choice of PCIe6/CXL devices out of the chassis; or is it a much more tightly coupled arrangement where the in-host card shows up as a CXL device and, opaque to the host, shoves its internal functions over fiber to the remote card?

  2. @fuzzy Stuffing as much of this as possible into a large chassis and connecting it with Ultra Ethernet switches would allow dynamic sharing amongst a whole rack; or one rack could be shared with a row of racks.

    No overprovisioning, each CPU gets the amount of memory it needs for as long as it needs it.

  3. Now think about NVMe-oF behaving the same way over CXL and picture storage arrays, then picture the triumphant return of nonvolatile RAM.
