LLM inference infra (https://lnkd.in/gNDaQrzg) is evolving fast. What used to be “just scale GPUs” has become a game of routing, caching, and memory transfers at cluster scale.

1/ Routing
A Radix Tree (compact prefix tree) is a Trie variant that merges single-child paths, reducing memory usage significantly compared to standard Tries. (Good explainer here: https://lnkd.in/ggxeGhBA). Radix Trees are now used to compute prefix-based KV cache hit rates. This enables KV-aware routing: sending a request to the worker most likely to already hold the right cache blocks, while still balancing load to avoid hotspots. Not every query needs a dedicated prefill step: if the decoder has a high prefix hit probability, the prompt is short, or the prefill workers are saturated, it can be more efficient to prefill locally inside the decode worker. (A routing sketch follows this post.)

2/ Independent Scaling
Scaling is no longer just about QPS. A surge in long input sequences stresses prefill more than decode, so the system must scale each tier independently. That only works if both prefill and decode workers are stateless with respect to KV ownership. Each decoder, on startup, allocates its KV blocks (a reserved chunk of GPU/CPU memory that the worker sets aside exclusively for storing those KV tensors) and publishes their descriptors into a distributed store (e.g., etcd). Prefill workers lazily fetch these descriptors when they first interact with a decoder, cache them locally, and thereafter exchange only compact block IDs during handoff. This design makes both tiers elastic: you can add or remove prefill/decode workers at runtime without disrupting active sessions. (A registry sketch follows below.)

3/ Caching
Inference systems now look like CPUs in miniature, juggling a memory hierarchy: GPU caches (fast but small), CPU memory (larger but slower), and cross-device links (PCIe, NVLink, RDMA). Think of it as L1/L2/L3 caching logic, but applied to distributed GPU memory pools.

4/ Pub/Sub and Direct Reads
Optimizations focus on reducing redundant transfers and speeding up prefill→decode KV cache handoffs. Coordination between prefill and decode workers increasingly relies on in-memory pub/sub, while the heavy data movement is handled via RDMA (Remote Direct Memory Access). With RDMA, a prefill worker can read or write KV blocks directly in a decode worker’s GPU memory (and vice versa) without involving the remote CPU. That turns what would have been an RPC + copy into a zero-copy memory operation over the network fabric, cutting latency and freeing CPUs to do… nothing at all.

The next generation of infra is as much about clever systems design as it is about FLOPs!!!
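To make the radix-tree idea in section 1 concrete, here is a minimal sketch of compressed prefix matching plus KV-aware worker selection. It illustrates the technique only; it is not the implementation of any particular router, and the class names, the scoring weights, and the load penalty are all invented for the example.

```python
class RadixNode:
    __slots__ = ("edges", "workers")
    def __init__(self):
        self.edges = {}        # first token of edge label -> (label tuple, child node)
        self.workers = set()   # workers known to cache at least this prefix

class RadixTree:
    """Compressed prefix tree over token IDs (single-child paths merged)."""
    def __init__(self):
        self.root = RadixNode()

    def insert(self, tokens, worker):
        """Record that `worker` now caches the KV blocks covering `tokens`."""
        node, i = self.root, 0
        node.workers.add(worker)
        while i < len(tokens):
            first = tokens[i]
            if first not in node.edges:
                child = RadixNode()
                child.workers.add(worker)
                node.edges[first] = (tuple(tokens[i:]), child)
                return
            label, child = node.edges[first]
            j = 0                                    # common run of label vs. remaining tokens
            while j < len(label) and i + j < len(tokens) and label[j] == tokens[i + j]:
                j += 1
            if j < len(label):                       # split the edge where they diverge
                mid = RadixNode()
                mid.workers |= child.workers
                mid.edges[label[j]] = (label[j:], child)
                node.edges[first] = (label[:j], mid)
                child = mid
            child.workers.add(worker)
            node, i = child, i + j

    def match(self, tokens):
        """Return (matched prefix length, workers holding that prefix)."""
        node, i = self.root, 0
        best = (0, node.workers)
        while i < len(tokens) and tokens[i] in node.edges:
            label, child = node.edges[tokens[i]]
            j = 0
            while j < len(label) and i + j < len(tokens) and label[j] == tokens[i + j]:
                j += 1
            i += j
            best = (i, child.workers)
            if j < len(label):                       # diverged inside an edge, stop here
                break
            node = child
        return best

def route(tree, tokens, queue_depth, alpha=0.7):
    """KV-aware routing: balance prefix-cache hit rate against per-worker load.
    queue_depth maps every candidate worker to its current queue length."""
    matched, holders = tree.match(tokens)
    worst = max(queue_depth.values()) or 1
    def score(worker):
        hit_rate = matched / max(len(tokens), 1) if worker in holders else 0.0
        return alpha * hit_rate - (1 - alpha) * queue_depth[worker] / worst
    return max(queue_depth, key=score)
```

Decode workers would call insert() for the prefixes they admit, and the router would call route() with each new request's token IDs, e.g. route(tree, tokens, {"decode-0": 3, "decode-1": 1}).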
How LLM inference infra is evolving beyond GPU scaling
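Section 2's "publish descriptors, lazy fetch" flow could look roughly like this. A minimal sketch, assuming the python `etcd3` client; the key layout, descriptor format, and class name are invented for illustration, and a real system would also need leases and watches to handle decoders leaving or restarting.

```python
import json
import etcd3  # assumes the python-etcd3 client package; any distributed KV store would do

class KVBlockRegistry:
    """Decoders publish KV block descriptors once; prefill workers fetch lazily,
    cache locally, and afterwards exchange only compact block IDs."""
    def __init__(self, host="127.0.0.1", port=2379):
        self.client = etcd3.client(host=host, port=port)
        self._local = {}                                   # decoder_id -> cached descriptors

    def publish(self, decoder_id, descriptors):
        # called by a decode worker at startup, after it allocates its KV blocks
        self.client.put(f"/kv_blocks/{decoder_id}", json.dumps(descriptors))

    def descriptors_for(self, decoder_id):
        # called by a prefill worker the first time it hands off to this decoder
        if decoder_id not in self._local:
            value, _meta = self.client.get(f"/kv_blocks/{decoder_id}")
            self._local[decoder_id] = json.loads(value)
        return self._local[decoder_id]

    def forget(self, decoder_id):
        # drop the local copy if the decoder is scaled down or restarted
        self._local.pop(decoder_id, None)
```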
More Relevant Posts
🚀 Meet oLLM: Enabling 100K-Token Contexts on Just an 8 GB GPU

Amid all the buzz about quantization, pruning, and multi-GPU setups, here’s a refreshing design pivot: oLLM, a lightweight Python library, pushes large-context LLM inference into the “affordable GPU” realm using SSD offload, no quantization required.

🎯 Why oLLM is a Game Changer
Ultra-long context w/o VRAM blowup: it streams layer weights and attention key/value caches from SSD to GPU, keeping VRAM within 8–10 GB even for 100K-token contexts.
No quantization compromises: oLLM retains FP16/BF16 precision, trading speed for accessibility rather than accuracy.
Supports big models on modest hardware: examples include Llama-3 (1B / 3B / 8B), GPT-OSS 20B, Qwen3-Next-80B (sparse MoE).
Transparent trade-offs = control: you’ll pay in SSD I/O and latency. Throughput is modest (e.g. ~0.5 tokens/sec on Qwen-80B with 50K context), but for offline, large-document tasks (summarization, log analysis, compliance) it’s extremely compelling.

🛠️ How It Works (in a Nutshell)
Memory offload to SSD: weights and KV cache bypass host RAM and are stored on fast NVMe SSD, streamed or paged as needed.
FlashAttention-2 & chunked MLPs: these reduce peak memory demands so the full attention matrix never fully materializes.
DiskCache with efficient I/O: uses GPUDirect and fast I/O frameworks (e.g. KvikIO / cuFile) to reduce I/O latency overhead.

✅ Use Cases Where oLLM Shines
📄 Large-document summarization & compliance review: process books, legal documents, and logs with the whole context, not split windows.
🔎 Knowledge retrieval / RAG over huge corpora: full passage context rather than batched windows.
📊 Offline analytics / batch inference: where throughput is less critical than context fidelity.
💡 ML research & experimentation: for anyone wanting to push context limits without server-grade hardware.

🔍 Key Limitations & Trade-Offs (Be Realistic)
Latency & throughput are not production-grade: heavy I/O overhead means it's not ideal for real-time services.
Storage load is heavy: long contexts demand massive SSD space (e.g., 100K tokens → tens to hundreds of GB of KV).
Hardware dependencies: you’ll need fast NVMe SSDs and GPUs with good I/O paths.
Not a drop-in replacement: for ultra-high-throughput or low-latency production use, conventional multi-GPU pipelines still dominate.

📌 TL;DR (for the scroll-haters)
🚀 oLLM lets you run massive-context LLMs offline on a single 8 GB GPU by offloading memory to SSD, no quantization needed. Great for deep dives, long-content tasks, and pushing what’s possible on modest hardware. If you’re tinkering with long-context applications and don’t have access to server farms, this is one of the most interesting libraries to explore right now.
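For intuition, here is a toy sketch of the layer-streaming pattern described above: load one layer's weights from disk, run it, free it, so VRAM holds roughly one layer at a time. This is not oLLM's actual API; the real library also streams the KV cache and uses GPUDirect-style I/O (KvikIO / cuFile) rather than plain torch.load, and `make_layer` / `layer_paths` are hypothetical names.

```python
import torch

def streamed_forward(hidden: torch.Tensor, layer_paths: list[str], make_layer) -> torch.Tensor:
    """Toy SSD-offload pattern: stream each transformer layer's weights from NVMe,
    apply the layer, then release its VRAM before touching the next layer."""
    for path in layer_paths:
        layer = make_layer().to("cuda")                               # empty layer skeleton on GPU
        layer.load_state_dict(torch.load(path, map_location="cuda"))  # stream weights disk -> GPU
        with torch.no_grad():
            hidden = layer(hidden)
        del layer
        torch.cuda.empty_cache()                                      # keep peak VRAM ~ one layer
    return hidden
```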
You've got thousands of users, each wanting their own specialized, fine-tuned LLM (think LoRA adapters). How do you serve all of them on the same set of GPUs without the system grinding to a halt? This is a massive challenge in multi-tenant LLM inference.

You face two major "traffic jams":
🔹The Data Jam: constantly loading different adapters from CPU to GPU memory (over the PCIe link) for every new request.
🔹The Request Jam: "quick" user queries get stuck behind "heavy" ones (head-of-line blocking), leading to terrible tail latency.

A new paper from MICRO '25, "Chameleon," offers a brilliant two-part solution. It's an adaptive serving system that acts like a smart traffic controller for both data and requests.

🔹Key Idea 1: Adaptive Adapter Caching
Chameleon intelligently uses idle GPU memory (which fluctuates wildly during high load) to cache popular adapters. This avoids the slow CPU->GPU trip. Its cost-aware eviction policy considers frequency of use, recency, and adapter size; it knows which adapters are most "expensive" to reload.

🔹Key Idea 2: Adapter-Aware Scheduling
It creates "express lanes" (a non-preemptive multi-queue scheduler) for requests. It's not just about input/output length; it classifies jobs using a "Weighted Request Size" that includes the adapter's rank (size). This prevents small jobs from getting blocked and larger jobs from starving.

The results are staggering: Chameleon achieves an 80.7% reduction in P99 tail latency (Time-To-First-Token) and a 1.5x throughput boost over the state-of-the-art S-LoRA baseline under high load.

This is a game-changer for building practical, large-scale, and responsive multi-tenant AI platforms. As models become more personalized (and adapter counts explode into the thousands), will these sophisticated caching and scheduling systems become the most critical bottleneck, or the biggest enabler, in the AI stack?

Check out the full paper here: https://lnkd.in/g_cWMkvy

#AI #LLM #LLMInference #MachineLearning #GPU #LoRA #LLMServing #Caching #Scheduling #Performance #AIInfrastructure #MICRO25
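A rough sketch of what a cost-aware eviction policy like this could look like. The scoring below is my own toy combination of the three signals the post names (frequency, recency, adapter size), not the formula from the Chameleon paper, and the names are invented.

```python
import time
from dataclasses import dataclass, field

@dataclass
class AdapterStats:
    rank: int                 # LoRA rank, a proxy for adapter size / reload cost
    hits: int = 0             # how often this adapter has been requested
    last_used: float = field(default_factory=time.monotonic)

def retention_value(a: AdapterStats, now: float) -> float:
    """Toy cost-aware score: frequent, recently used, expensive-to-reload adapters
    are worth keeping in GPU memory. Lowest score = best eviction victim."""
    recency = 1.0 / (1.0 + now - a.last_used)
    return a.hits * recency * a.rank

def pick_victim(gpu_cache: dict) -> str:
    """Called when idle GPU memory shrinks and a cached adapter must be dropped."""
    now = time.monotonic()
    return min(gpu_cache, key=lambda name: retention_value(gpu_cache[name], now))

# example: a hot, high-rank adapter survives; a cold, small one gets evicted
cache = {
    "tenant-a-r64": AdapterStats(rank=64, hits=120),
    "tenant-b-r8":  AdapterStats(rank=8,  hits=3, last_used=time.monotonic() - 300),
}
print(pick_victim(cache))   # -> "tenant-b-r8"
```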
MIG, MPS, and the Real Story of GPU Isolation

✳️ MIG (Multi-Instance GPU): hard partitioning of one physical NVIDIA GPU into multiple independent mini-GPUs. Each MIG instance gets its own SMs (whole units), L2 slices, HBM address range, and copy engines, exposed to software as a separate CUDA device.
✳️ MPS (Multi-Process Service): a scheduler that lets multiple processes share one GPU context concurrently. Great for utilization, but it does not physically partition cache or DRAM bandwidth.

Resource partitioning: what’s really isolated?

🔹 SMs / Front-end
🔺 MIG: carves the GPU at GPC granularity (a GPC = a cluster of SMs + front-end). Each instance owns whole SMs (not fractions) with independent warp schedulers, register files, and instruction caches inside that slice. One tenant can't starve the SMs assigned to another.
🔺 MPS: kernels from different processes time-share SMs. You can cap residency (e.g., via active-thread percentage), but it's cooperative scheduling, not hard fencing. A heavy, latency-sensitive kernel can still get jitter from others.

🔹 L2 cache
🔺 MIG: L2 is physically sliced; an instance maps to a disjoint set of L2 slices and tags. Result: tenants don't evict each other's cache lines and don't contend on the same slice pipelines.
🔺 MPS: everyone shares the full L2. Classic cache thrash and back-pressure show up under mixed workloads (e.g., small-working-set inference next to bandwidth-streaming training).

🔹 HBM capacity & channels
🔺 MIG: each instance gets its own HBM address range and a bound subset of memory controller channels (a fixed fraction of peak). Capacity and a corresponding chunk of sustained bandwidth are reserved.
🔺 MPS: all processes see the whole memory and all controllers; the heaviest streamer wins more bandwidth unless you engineer app-level throttles.

🔹 Copy / DMA engines & accelerators
🔺 MIG: instances are assigned dedicated copy engines (and often per-slice JPEG/Video/NVDEC/NVENC where present). Data movement for tenant A won't queue behind tenant B on the same engine.
🔺 MPS: engines are shared; large async memcpy/IPC bursts from one client can elongate latency for others.

Still shared under MIG (important!)
🔹 Power/thermal budget & global clocks: a hot neighbour can trigger frequency drops visible to all slices.
🔹 External I/O: PCIe/NVLink/NVSwitch links are shared pipes. MIG doesn't guarantee per-slice link bandwidth.

Choosing the right tool
✅ Strict SLOs / multi-tenant clouds / mixed inference → MIG. You want predictable p99s, cache isolation, fixed memory capacity, and blast-radius containment.
✅ Throughput maximization within one team / cooperative workloads → MPS. Great for packing small jobs, fast context switches, and keeping SMs busy, if you can tolerate latency variance.
✅ Hybrid: use MIG to define 2–4 strong fences, then run MPS inside each slice to keep utilization high while bounding cross-team interference.

#NVIDIA #GPU #MIG #AI #ML
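As a toy illustration of the "choosing the right tool" rules above, here is that decision logic written out as code. It simply encodes the post's own guidance; the function and flag names are made up, and this is not an NVIDIA tool or API.

```python
def gpu_sharing_strategy(strict_slos: bool,
                         multi_tenant: bool,
                         maximize_throughput: bool) -> str:
    """Encode the MIG / MPS / hybrid guidance from the post above."""
    if strict_slos and maximize_throughput:
        # strong fences between tenants, high utilization inside each fence
        return "hybrid: 2-4 MIG slices as hard fences, MPS inside each slice"
    if strict_slos or multi_tenant:
        # predictable p99s, cache + HBM isolation, blast-radius containment
        return "MIG: hard partitioning into independent mini-GPUs"
    if maximize_throughput:
        # cooperative workloads from one team, tolerate latency variance
        return "MPS: time-share SMs, pack small jobs, fast context switches"
    return "whole GPU per job: no sharing needed"

print(gpu_sharing_strategy(strict_slos=True, multi_tenant=True, maximize_throughput=False))
# -> "MIG: hard partitioning into independent mini-GPUs"
```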
Moore Threads Unveils the MCCX D800 AI Server

Moore Threads, a Chinese GPU maker founded in 2020 by Zhang Jianzhong, formerly of Nvidia China, is getting serious about AI hardware with its new MCCX D800 AI server.

What is the MCCX D800?
The MCCX D800 is a server system designed by Moore Threads to handle both training and inference workloads for large AI models. It integrates the company’s latest-generation GPU acceleration hardware. Some of its key features:
It houses eight of Moore Threads’ MTT S4000 GPUs.
Those S4000s are built on Moore Threads’ third-generation MUSA architecture.
Each S4000 card has 48 GB VRAM and offers high memory bandwidth, with features meant to support large language model (LLM) training and other demanding AI workloads.
The server uses Moore Threads’ MTLink interconnect technology (roughly ~240 GB/s bandwidth between GPUs) to ensure fast GPU-to-GPU communication, which is critical for scaling up model training efficiently.

This unveiling is significant because:
It underlines China’s push for domestic AI hardware independence, especially under restrictions on GPU exports. Companies like Moore Threads are seen as key players for enabling local AI development with less reliance on foreign GPUs.
It demonstrates scaling ambitions: Moore Threads claims its clusters built with MCCX D800 servers (each with 8 S4000 cards) can scale to thousands of GPUs. Specifically, clusters of 1,000+ GPUs are being constructed using many D800 units.

Strengths & Challenges
Strengths:
The hardware is balanced for both training and inference: high VRAM, high interconnect speed, GPU scaling.
Umbrella “full-stack” approach: Moore Threads is not just selling chips or servers, but software, cluster management (WanKa / KUAE), and support for open-source/model framework compatibility. Tools like Musify help translate or adapt software developed for CUDA environments.

Challenges:
Performance still lags behind some of the top Western offerings from Nvidia in raw floating-point compute per watt or per dollar for certain precisions.
Ecosystem maturity (software stack, developer tooling, reliability under long runs, cooling, power efficiency) will be under scrutiny as deployments scale.
Running clusters of thousands of GPUs brings engineering, reliability, and maintenance challenges.

Implications
The MCCX D800 isn't just another product launch; it matters for China's AI ecosystem and its push to build on domestically made hardware. With demand for compute surging for large models and generative AI, locally built servers and clusters that handle both training and inference give China more control over its own stack. If Moore Threads can make the MCCX D800 reliable, affordable, and able to scale as needed, it could become a key building block of China's AI capacity, mainly for cloud providers, enterprises, and researchers who want control over their own hardware.
⚡ Surviving Beyond GPU Memory: Why Pooled NVMe Beats the VRAM Dream (for Now)

Act 1: The GPU Fleet Problem
GPUs are compute monsters. FLOPs everywhere. But in enterprise data processing, the real problem isn’t FLOPs. It’s that we can’t manage a fleet of GPUs as smartly as we do CPUs.

With CPUs:
OS schedulers and caches balance compute and memory.
Distributed frameworks (such as Spark, Databricks, and MPI) spill data gracefully and pool resources.
The system remains stable even when workloads exceed RAM capacity.

With GPUs:
Each GPU is a silo: 80–120 GB of HBM, insanely fast but brutally finite.
FLOPs are tied to memory locality. If it doesn’t fit, performance collapses.
Multi-GPU adds FLOPs, but doesn’t pool memory unless you’re in exotic NVSwitch systems.

The result: the HBM cliff. Once you hit it, your workload doesn’t slow down, it crashes. For BFSI workloads like tensor decompositions of customer × product × agent × outcome datasets, this isn’t rare. It’s the norm.

Act 2: The VRAM Dream vs. the NVMe Reality
There is a dream solution: pooled VRAM. In DGX-class systems with NVSwitch, multiple GPUs can share a single, flat address space for memory. In theory, that solves the HBM silo problem. In practice? It’s rare, expensive, and still finite. Once you blow past pooled VRAM (say, 640 GB across 8 GPUs), you’re still dead. And outside those premium boxes, you don’t get this at all.

So what’s the practical bridge? Pooled NVMe over RDMA: NVMe disaggregated into a fabric pool, accessible directly by GPUs via GPUDirect Storage.

HBM: ~3 TB/s, ~100 ns latency.
NVMe over RDMA: 10–40 GB/s, ~5–20 µs latency.
CPU spill: 1–5 GB/s, ms latency.

On paper, NVMe is “100× slower than HBM.” But the proper comparison isn’t against HBM; it’s against CPU bounce buffers or Databricks-style disk spill. There, RDMA-backed NVMe is an order of magnitude faster. And, critically, it keeps the job alive.

👉 HBM = lungs.
👉 NVMe via RDMA = oxygen tank.
👉 Together = finish the race instead of collapsing at the memory cliff.

Act 3: Why This Matters for BFSI
Financial services workloads don’t politely fit into 80 GB chunks:
Retention analytics across millions of policies.
Cross-product risk exposure models.
Real-time drift detection in high-dimensional tensors.

These workloads aren’t nice-to-have. They are the frontier of competitive analytics. And they only deliver if the system can survive its own data.

#tPower #ml #ai #GenAI4FS #inc81starch
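A quick back-of-envelope using the bandwidth figures quoted above, showing why "NVMe is 100× slower than HBM" is the wrong comparison: against CPU-side spill, the RDMA-backed NVMe tier is roughly an order of magnitude faster. The 500 GB working set is an arbitrary example, and these are nominal bandwidths from the post, not benchmarks.

```python
# Rough transfer-time estimate for a working set that overflows HBM,
# using the nominal bandwidths quoted in the post (not measured numbers).
tiers_gb_per_s = {
    "HBM (data stays on-GPU)": 3000,   # ~3 TB/s
    "Pooled NVMe over RDMA":     25,   # 10-40 GB/s, take a midpoint
    "CPU bounce-buffer spill":    3,   # 1-5 GB/s, take a midpoint
}

working_set_gb = 500   # arbitrary example: a tensor workload well past the HBM cliff

for tier, bw in tiers_gb_per_s.items():
    print(f"{tier:26s} ~{working_set_gb / bw:8.1f} s per full pass")

# Approximate output:
#   HBM (data stays on-GPU)    ~     0.2 s per full pass
#   Pooled NVMe over RDMA      ~    20.0 s per full pass
#   CPU bounce-buffer spill    ~   166.7 s per full pass
```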
⚙️ From capacity → capability. A multi-year, multi-generation 6GW compute partnership between AMD and OpenAI isn’t just “more GPUs” — it’s a shift in system architecture and co-design. Scaling frontier AI now depends on aligning compute, interconnect, compiler, and scheduling strategies across hardware generations. The initial 1GW deployment in 2H 2026 will be a key test of efficiency, cost, and developer stability under heterogeneous hardware. #AI #LLM #AIInfrastructure #HPC #SystemsResearch #HeterogeneousCompute #DataCenters
For engineers who want to optimize LLMs for CPU or Edge, learn about GGUF.

On HuggingFace, optimized LLMs stored as GGUF checkpoints boomed in popularity.

Initially, we had .bin or .pt sharded checkpoints (4-5 years ago) for LLM models:
`model-0001-of-0005.bin`
`model-0002-of-0005.bin`

Then, HF Safetensors fixed a critical flaw of PyTorch-serialized models: executing arbitrary code during unpickling. Safetensors are the standard today, but models are in BF16 precision, which still requires a lot of compute.

✅ Recently, GGUF models, which are quantized models, started to become popular.

> GGUF is a binary format that is optimized for quick loading and saving of models, making it highly efficient for inference purposes, compatible with GGML.

When you see a GGUF model like Q4, Q5_K, or IQ2_K_S, that suffix tells you how it was quantized. We can generally split the quants into 3 distinct groups:
𝟭/ 𝗟𝗲𝗴𝗮𝗰𝘆 𝗤𝘂𝗮𝗻𝘁𝘀
𝟮/ 𝗞-𝗤𝘂𝗮𝗻𝘁𝘀
𝟯/ 𝗜-𝗤𝘂𝗮𝗻𝘁𝘀

1. Legacy Quants (Q4_0, Q4_1, Q8_0)
> Block-based quantization: weights split into fixed-size blocks
> Uses 1 or 2 extra constants per block to scale weights back
> Best: fast and simple, but accuracy can't be tweaked much.

2. K-Quants (Q3_K_S, Q5_K_M)
> Block-wise quantization with per-block scaling
> Mixed quantization: for example, some weights in 4 bits, others kept at higher precision
> Popular with large models (8B+)
> Best: some layers can be compressed more to give more bits to critical layers.

3. I-Quants (IQ2_XXS, IQ3_S)
> Builds on K-Quants
> Introduces an importance matrix to identify critical weights
> Best: the most customizable quant type.

✅ Most GGUF models today use K-Quants or I-Quants, especially for larger LLMs. Understanding which quantization type to use improves inference speed while cutting down on memory usage. That's key for resource-constrained environments!

---

➕ Follow for more expert AI/ML insights!
➕ Join 6500+ engineers, learning production-ready AI. https://lnkd.in/ed6FRFCH

Cheers!
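To make "block-based quantization with a per-block scale" concrete, here is a toy Q8_0-style round trip in NumPy. It only illustrates the idea (one scale per 32-weight block, int8 values); the real GGML/GGUF kernels use packed binary layouts and, for K- and I-quants, super-blocks and importance weighting.

```python
import numpy as np

def quantize_q8_0(weights: np.ndarray, block_size: int = 32):
    """Toy Q8_0-style quantization: per-block fp scale + int8 values."""
    blocks = weights.reshape(-1, block_size)
    scales = np.abs(blocks).max(axis=1, keepdims=True) / 127.0   # one scale per block
    scales[scales == 0] = 1.0                                    # avoid divide-by-zero
    q = np.clip(np.round(blocks / scales), -127, 127).astype(np.int8)
    return q, scales.astype(np.float32)

def dequantize_q8_0(q: np.ndarray, scales: np.ndarray) -> np.ndarray:
    return (q.astype(np.float32) * scales).reshape(-1)

w = np.random.randn(1024).astype(np.float32)
q, s = quantize_q8_0(w)
print("max abs reconstruction error:", np.abs(dequantize_q8_0(q, s) - w).max())
```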
Want to learn the AI stack for LLMs on Edge? Check this Live Coding Session unpacking:
> llama.cpp - architecture, workflow, components
> GGML - the ML tensor library built in C++
> GGUF - binary model format for storing highly quantized LLMs

Find it here: https://lnkd.in/dzAg8RA3
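If you just want to try a GGUF model from Python before digging into the C++ internals, the llama-cpp-python bindings wrap llama.cpp with a small API. A minimal sketch; the model path is a placeholder and the parameters are illustrative:

```python
from llama_cpp import Llama   # pip install llama-cpp-python

# The path is a placeholder -- point it at any GGUF file you have downloaded.
llm = Llama(
    model_path="models/llama-3-8b-instruct.Q4_K_M.gguf",
    n_ctx=4096,       # context window
    n_threads=8,      # CPU threads used for inference
)

out = llm("Explain in one sentence what the GGUF format is.", max_tokens=64)
print(out["choices"][0]["text"])
```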
I've been running several GGUF models on my laptop. As someone who uses GGUF models, I can confirm it's a game-changing technology for local AI. This post provides a fantastic, clear insight into how it actually works. If you want to briefly understand the tech that lets you run LLMs on your own machine, this is it. #AI #EdgeAI #LLM #Developer #GGUF
AI Infrastructure Myths Busted: What You Actually Need ⚙

“You need a $10M data center for AI.”
“Requires 50 GPUs minimum.”
“Must rebuild everything.”
All myths. All unnecessary costs.

💡 90% of businesses already have the infrastructure they need to deploy AI.
💸 $2.4M: the average overspend caused by outdated infrastructure assumptions.
⚠ Vendors push complexity because it sells, not because it’s required.

That’s why CrewAgent built the LIGHT Infrastructure Framework: to help you launch AI with what you already have.

💻 1. Right-Sized Compute: 4–8 CPU cores, 16–32 GB RAM, standard cloud instance. No supercomputers needed.
🚫 2. No GPUs Required: GPUs are for training, not running. 95% of AI agents run flawlessly on CPUs.
☁ 3. Cloud Advantage: start in hours, scale in minutes, pay only for what you use.
⚙ 4. Hybrid Smartness: keep sensitive data on-prem, process AI securely in the cloud.
📉 5. True Cost Reality: $200–$1K/month vs $2M+ legacy spend, a 90% cost reduction.

Because modern AI doesn’t demand megawatts; it demands smart architecture and practical scaling.

👇 Swipe through to see how CrewAgent helps businesses deploy powerful AI without a single server upgrade.
___________
#CrewAgent #AIInfrastructure #AIProjects #AIConsulting #AIDeployment #AIEngineering #CloudAI #HybridAI #AICostOptimization #AIExecution #DigitalTransformation #AITechStack #AIArchitecture #AIEfficiency #TechSimplified