One thing we did not get to cover from Computex 2025 was a neat demo: CXL memory connected to a server over an optical link. This was a demo, not a commercial product, but it was one we thought we would share.
Kyocera, Pegatron, and SMART Show CXL Over Optics at Computex 2025
On the table, we saw one of SMART Modular Technologies’ CXL memory cards. These are fairly recognizable as they have the DIMMs mounted in opposing directions to fit into a single slot. That card was mounted on a test fixture along with the Kyocera PCIe-to-optical conversion card.

Those went to four MTP/MPO cables, which were then routed to the rear of the Pegatron server.

Another of these optical modules sits on a card inside the server that is connected to the server’s PCIe lanes.

We found a quick near-packaged optics concept diagram here that shows a bit more about what is going on. Effectively, this is running the CXL/PCIe link over multimode fiber.

If this looks familiar, it is in some ways similar to the technology used in the Kioxia Optical Interface SSD Demoed at FMS 2024, which Kyocera was also a part of. The Computex 2025 version was using more optical engines and was for a higher-speed x16 link rather than an x4 link.

In both cases, the idea is that one can move these components out of high-density compute racks. Instead of connecting a server and a remote memory card, imagine a CXL 3 switch where memory could be dynamically allocated to servers. Another important use case is moving devices out of high-density AI compute racks and into lower-density racks alongside other devices.
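To make the dynamic-allocation idea a bit more concrete, here is a minimal host-side sketch, assuming the attached or pooled CXL memory surfaces to Linux as a CPU-less NUMA node (which is how CXL-attached DRAM commonly appears today). The node number and size below are placeholders for illustration, not details from this demo.

```c
// Minimal sketch: allocate a buffer on a CXL-backed NUMA node.
// Assumes the pooled/remote CXL memory has already been onlined as a
// CPU-less NUMA node (node 2 here is a placeholder, not from the demo).
// Build with: gcc cxl_alloc.c -lnuma
#include <numa.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main(void) {
    if (numa_available() < 0) {
        fprintf(stderr, "NUMA is not available on this system\n");
        return EXIT_FAILURE;
    }

    int cxl_node = 2;          /* placeholder node ID for the CXL memory */
    size_t size = 1UL << 30;   /* 1 GiB */

    /* Bind the allocation to the CXL-backed node. */
    void *buf = numa_alloc_onnode(size, cxl_node);
    if (buf == NULL) {
        perror("numa_alloc_onnode");
        return EXIT_FAILURE;
    }

    /* Touch the memory so pages are actually faulted in on that node. */
    memset(buf, 0, size);
    printf("Allocated %zu bytes on NUMA node %d\n", size, cxl_node);

    numa_free(buf, size);
    return EXIT_SUCCESS;
}
```

The point of the sketch is that, whether the memory lives in the same chassis or at the end of a fiber run, the host-side programming model looks the same; what changes is latency and where the capacity physically sits.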
Final Words
This was clearly a demo to show what is possible. At the same time, we are getting close to seeing far more co-packaged and near-packaged optical connections get deployed. The CXL space will also continue to become a bigger trend in 2026 as we start to see the PCIe Gen6 generation of servers hit the market. For now, it is a cool demo with a lot of neat future implications.
Is it known how ‘protocol aware’ this setup is? Is it basically an invisible (aside from whatever little config/status interface is presumably exposed to the host) copper-to-fiber converter that would take your choice of PCIe 6/CXL things out of chassis; or is it a much more tightly coupled arrangement where the in-host card shows up as a CXL device and, opaque to the host, shoves its internal functions over fiber to the remote card?
@fuzzy Stuffing as much of this as possible into a large chassis and connecting it with Ultra Ethernet switches would allow dynamic sharing amongst a whole rack; or one rack could be shared with a row of racks.
No overprovisioning: each CPU gets the amount of memory it needs for as long as it needs it.
Now think about NVMe-oF behaving the same way over CXL and picture storage arrays, then picture the triumphant return of nonvolatile RAM.