fix: macOS build with `-DGGML_BACKEND_DL=ON`

44e2c2d
UPSTREAM PR #17581: cmake: fix macOS build with -DGGML_BACKEND_DL=ON #354
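The flag in the PR title, `GGML_BACKEND_DL=ON`, builds ggml's backends as dynamically loadable libraries rather than linking them statically. A minimal sketch of the macOS configuration this PR repairs, assuming a standard llama.cpp source checkout (the build directory name and Release config are assumptions, not taken from the PR):

```shell
# Hedged sketch: reproduce the configuration this PR fixes on macOS.
# Assumes the current directory is a llama.cpp checkout.
cmake -B build -DGGML_BACKEND_DL=ON   # build ggml backends as loadable modules
cmake --build build --config Release
```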

LOCI Agentic AI / Performance Review #354 succeeded Nov 28, 2025 in 30m 47s

Performance unchanged

0 binaries improved · 16 binaries unchanged · 0 binaries stable (within threshold) · 0 binaries degraded (beyond threshold)

Binary                               Δ% Response   Δ% Throughput   Performance (based on response time)
build.bin.libggml-base.so            0             0               unchanged
build.bin.libggml-cpu.so             0             0               unchanged
build.bin.libggml.so                 0             0               unchanged
build.bin.libllama.so                0             0               unchanged
build.bin.libmtmd.so                 0             0               unchanged
build.bin.llama-bench                0             0               unchanged
build.bin.llama-cvector-generator    0             0               unchanged
build.bin.llama-gemma3-cli           0             0               unchanged
build.bin.llama-gguf-split           0             0               unchanged
build.bin.llama-llava-cli            0             0               unchanged
build.bin.llama-minicpmv-cli         0             0               unchanged
build.bin.llama-quantize             0             0               unchanged
build.bin.llama-qwen2vl-cli          0             0               unchanged
build.bin.llama-run                  0             0               unchanged
build.bin.llama-tokenize             0             0               unchanged
build.bin.llama-tts                  0             0               unchanged

Performance threshold: 30%
Default configuration used.
Note: Performance status is evaluated only from Δ% Response. Throughput is displayed for reference.
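The notes above imply a simple classification rule: each binary's status is derived from its Δ% response time alone, against the 30% threshold. A hedged sketch of that rule (the function name and sign convention are assumptions; the report does not publish its exact logic):

```python
# Hypothetical sketch of the status rule described in the notes above.
THRESHOLD = 30.0  # percent, from "Performance threshold: 30%"

def status(delta_response_pct: float) -> str:
    """Classify a binary from its Δ% response time only.

    Assumed sign convention: negative Δ% response = faster.
    Throughput is ignored, per the note that it is display-only.
    """
    if delta_response_pct == 0:
        return "unchanged"
    if abs(delta_response_pct) <= THRESHOLD:
        return "stable"  # moved, but within threshold
    return "improved" if delta_response_pct < 0 else "degraded"
```

Under this sketch, all 16 binaries report Δ% response of 0 and therefore land in "unchanged", matching the summary line.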

Explore the complete analysis inside the Version Insights.