Open
Labels
bug (Something isn't working), need more info (The OP should provide more details about the issue)
Description
Name and Version
Core tl;dr of the issue:
- With a single GPU, the 20B 4-bit OSS model loads and runs with no issues
- With 2/3/4 GPUs, suddenly no model can be loaded at all, and every attempt, regardless of llama-server params, ends in a Segmentation fault (core dumped) error
Side issues/questions:
- Do all GPUs explicitly need to be installed at build time? I noticed that with a single working GPU (the one I originally built llama.cpp on), installing 3 more causes llama-server to crash immediately with Killed console output, due to ROCm failing with:
ERROR Received I2C_NAK_7B0ADDR_NOACK
ERROR WriteI2CData() - I2C error occured
Failed to read EEPROM table header
Honestly, I don't know what I'm doing wrong, and I'm 3 days deep into docs and tutorials trying to understand if I'm missing something in the build or llama-server params 🙈
What I tried:
- played around with llama-server params, including on/off for --no-mmap and -ngl
- played around with build commands, including the solution from "Eval bug: usr/lib/gcc/x86_64-pc-linux-gnu/14/include/g++-v14/bits/random.tcc:2668: void std::discrete_distribution<_IntType>::param_type::_M_initialize() [with _IntType = int]: Assertion '__sum > 0' failed." #15551
- updated motherboard BIOS from P1.10 to P1.50
- switched BIOSes Radeon VII <> Radeon Pro VII
- reinstalled Ubuntu 4 times and repeated the setup
- tried rocBLAS tensor files from 6.4.4 and 7.1.0-2 (both while running ROCm 7.1.1, since a dedicated ROCm 5.6 apparently cannot be installed on Ubuntu 24)
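One more build-side check that may be worth retrying: pinning the gfx906 offload target explicitly at configure time, so the compiled kernels do not depend on which GPUs happened to be visible during the build. A sketch, using llama.cpp's documented HIP build flags (build directory and job count are assumptions):

```shell
# Reconfigure llama.cpp with the HIP backend and the gfx906 target pinned
# explicitly, so kernels are built for Radeon Pro VII regardless of which
# GPUs were installed at build time. Run from the llama.cpp source root.
if [ -f CMakeLists.txt ]; then
    cmake -B build \
        -DGGML_HIP=ON \
        -DAMDGPU_TARGETS=gfx906 \
        -DCMAKE_BUILD_TYPE=Release
    cmake --build build --config Release -j"$(nproc)"
else
    echo "run this from the llama.cpp source root"
fi
```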
Operating systems
Linux
GGML backends
BLAS
Hardware
- Ubuntu 24.04.3 LTS (fresh and clean install, only for llama.cpp purposes)
- AsRock H510 PRO BTC+ (@ P1.50 BIOS)
- Celeron G5905
- 4GB RAM
- 4x Radeon Pro VII (gfx906)
Models
No response
Problem description & steps to reproduce
- Install more than 1 AMD GPU
- Run any model, even one that fits within a single GPU's memory
- llama-server ends with
Segmentation fault (core dumped)
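To narrow down whether a specific card or slot is at fault, each GPU can be exposed to llama-server in isolation via ROCR_VISIBLE_DEVICES. A dry-run sketch that only prints the invocations (remove the echo to actually run them; the model path matches the log above, the binary location is an assumption):

```shell
# Print one llama-server invocation per GPU so each device can be tested
# alone; if a single-device run also segfaults, the problem is that card
# (or its PCIe slot/riser) rather than multi-GPU splitting.
MODEL=/home/llm/llama.cpp/models/gpt-oss-20b-Q8_0.gguf
for dev in 0 1 2 3; do
    # ROCR_VISIBLE_DEVICES restricts which GPUs the ROCm runtime exposes
    echo "ROCR_VISIBLE_DEVICES=$dev ./build/bin/llama-server -m $MODEL --host 0.0.0.0 --port 8080"
done
```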
First Bad Commit
No response
Relevant log output
llama-server -m /home/llm/llama.cpp/models/gpt-oss-20b-Q8_0.gguf --host 0.0.0.0 --port 8080
ggml_cuda_init: GGML_CUDA_FORCE_MMQ: no
ggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no
ggml_cuda_init: found 4 ROCm devices:
Device 0: AMD Radeon (TM) Pro VII, gfx906:sramecc+:xnack- (0x906), VMM: no, Wave Size: 64
Device 1: AMD Radeon (TM) Pro VII, gfx906:sramecc+:xnack- (0x906), VMM: no, Wave Size: 64
Device 2: AMD Radeon (TM) Pro VII, gfx906:sramecc+:xnack- (0x906), VMM: no, Wave Size: 64
Device 3: AMD Radeon (TM) Pro VII, gfx906:sramecc+:xnack- (0x906), VMM: no, Wave Size: 64
main: setting n_parallel = 4 and kv_unified = true (add -kvu to disable this)
build: 7179 (4abef75f2) with cc (Ubuntu 13.3.0-6ubuntu2~24.04) 13.3.0 for x86_64-linux-gnu
system info: n_threads = 2, n_threads_batch = 2, total_threads = 2
system_info: n_threads = 2 (n_threads_batch = 2) / 2 | ROCm : NO_VMM = 1 | PEER_MAX_BATCH_SIZE = 128 | CPU : SSE3 = 1 | SSSE3 = 1 | LLAMAFILE = 1 | OPENMP = 1 | REPACK = 1 |
init: using 6 threads for HTTP server
start: binding port with default address family
main: loading model
srv load_model: loading model '/home/llm/llama.cpp/models/gpt-oss-20b-Q8_0.gguf'
llama_model_load_from_file_impl: using device ROCm0 (AMD Radeon (TM) Pro VII) (0000:03:00.0) - 16348 MiB free
llama_model_load_from_file_impl: using device ROCm1 (AMD Radeon (TM) Pro VII) (0000:06:00.0) - 16348 MiB free
llama_model_load_from_file_impl: using device ROCm2 (AMD Radeon (TM) Pro VII) (0000:09:00.0) - 16348 MiB free
llama_model_load_from_file_impl: using device ROCm3 (AMD Radeon (TM) Pro VII) (0000:0c:00.0) - 16348 MiB free
llama_model_loader: loaded meta data with 37 key-value pairs and 459 tensors from /home/llm/llama.cpp/models/gpt-oss-20b-Q8_0.gguf (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv 0: general.architecture str = gpt-oss
llama_model_loader: - kv 1: general.type str = model
llama_model_loader: - kv 2: general.name str = Gpt-Oss-20B
llama_model_loader: - kv 3: general.basename str = Gpt-Oss-20B
llama_model_loader: - kv 4: general.quantized_by str = Unsloth
llama_model_loader: - kv 5: general.size_label str = 20B
llama_model_loader: - kv 6: general.license str = apache-2.0
llama_model_loader: - kv 7: general.repo_url str = https://huggingface.co/unsloth
llama_model_loader: - kv 8: general.tags arr[str,2] = ["vllm", "text-generation"]
llama_model_loader: - kv 9: gpt-oss.block_count u32 = 24
llama_model_loader: - kv 10: gpt-oss.context_length u32 = 131072
llama_model_loader: - kv 11: gpt-oss.embedding_length u32 = 2880
llama_model_loader: - kv 12: gpt-oss.feed_forward_length u32 = 2880
llama_model_loader: - kv 13: gpt-oss.attention.head_count u32 = 64
llama_model_loader: - kv 14: gpt-oss.attention.head_count_kv u32 = 8
llama_model_loader: - kv 15: gpt-oss.rope.freq_base f32 = 150000.000000
llama_model_loader: - kv 16: gpt-oss.attention.layer_norm_rms_epsilon f32 = 0.000010
llama_model_loader: - kv 17: gpt-oss.expert_count u32 = 32
llama_model_loader: - kv 18: gpt-oss.expert_used_count u32 = 4
llama_model_loader: - kv 19: gpt-oss.attention.key_length u32 = 64
llama_model_loader: - kv 20: gpt-oss.attention.value_length u32 = 64
llama_model_loader: - kv 21: gpt-oss.attention.sliding_window u32 = 128
llama_model_loader: - kv 22: gpt-oss.expert_feed_forward_length u32 = 2880
llama_model_loader: - kv 23: gpt-oss.rope.scaling.type str = yarn
llama_model_loader: - kv 24: gpt-oss.rope.scaling.factor f32 = 32.000000
llama_model_loader: - kv 25: gpt-oss.rope.scaling.original_context_length u32 = 4096
llama_model_loader: - kv 26: tokenizer.ggml.model str = gpt2
llama_model_loader: - kv 27: tokenizer.ggml.pre str = gpt-4o
llama_model_loader: - kv 28: tokenizer.ggml.tokens arr[str,201088] = ["!", "\"", "#", "$", "%", "&", "'", ...
llama_model_loader: - kv 29: tokenizer.ggml.token_type arr[i32,201088] = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
llama_model_loader: - kv 30: tokenizer.ggml.merges arr[str,446189] = ["Ġ Ġ", "Ġ ĠĠĠ", "ĠĠ ĠĠ", "...
llama_model_loader: - kv 31: tokenizer.ggml.bos_token_id u32 = 199998
llama_model_loader: - kv 32: tokenizer.ggml.eos_token_id u32 = 200002
llama_model_loader: - kv 33: tokenizer.ggml.padding_token_id u32 = 200017
llama_model_loader: - kv 34: tokenizer.chat_template str = {# Chat template fixes by Unsloth #}\n...
llama_model_loader: - kv 35: general.quantization_version u32 = 2
llama_model_loader: - kv 36: general.file_type u32 = 7
llama_model_loader: - type f32: 289 tensors
llama_model_loader: - type q8_0: 98 tensors
llama_model_loader: - type mxfp4: 72 tensors
print_info: file format = GGUF V3 (latest)
print_info: file type = Q8_0
print_info: file size = 11.27 GiB (4.63 BPW)
load: printing all EOG tokens:
load: - 199999 ('<|endoftext|>')
load: - 200002 ('<|return|>')
load: - 200007 ('<|end|>')
load: - 200012 ('<|call|>')
load: special_eog_ids contains both '<|return|>' and '<|call|>' tokens, removing '<|end|>' token from EOG list
load: special tokens cache size = 21
load: token to piece cache size = 1.3332 MB
print_info: arch = gpt-oss
print_info: vocab_only = 0
print_info: n_ctx_train = 131072
print_info: n_embd = 2880
print_info: n_embd_inp = 2880
print_info: n_layer = 24
print_info: n_head = 64
print_info: n_head_kv = 8
print_info: n_rot = 64
print_info: n_swa = 128
print_info: is_swa_any = 1
print_info: n_embd_head_k = 64
print_info: n_embd_head_v = 64
print_info: n_gqa = 8
print_info: n_embd_k_gqa = 512
print_info: n_embd_v_gqa = 512
print_info: f_norm_eps = 0.0e+00
print_info: f_norm_rms_eps = 1.0e-05
print_info: f_clamp_kqv = 0.0e+00
print_info: f_max_alibi_bias = 0.0e+00
print_info: f_logit_scale = 0.0e+00
print_info: f_attn_scale = 0.0e+00
print_info: n_ff = 2880
print_info: n_expert = 32
print_info: n_expert_used = 4
print_info: n_expert_groups = 0
print_info: n_group_used = 0
print_info: causal attn = 1
print_info: pooling type = 0
print_info: rope type = 2
print_info: rope scaling = yarn
print_info: freq_base_train = 150000.0
print_info: freq_scale_train = 0.03125
print_info: n_ctx_orig_yarn = 4096
print_info: rope_finetuned = unknown
print_info: model type = 20B
print_info: model params = 20.91 B
print_info: general.name = Gpt-Oss-20B
print_info: n_ff_exp = 2880
print_info: vocab type = BPE
print_info: n_vocab = 201088
print_info: n_merges = 446189
print_info: BOS token = 199998 '<|startoftext|>'
print_info: EOS token = 200002 '<|return|>'
print_info: EOT token = 199999 '<|endoftext|>'
print_info: PAD token = 200017 '<|reserved_200017|>'
print_info: LF token = 198 'Ċ'
print_info: EOG token = 199999 '<|endoftext|>'
print_info: EOG token = 200002 '<|return|>'
print_info: EOG token = 200012 '<|call|>'
print_info: max token length = 256
load_tensors: loading model tensors, this can take a while... (mmap = true)
load_tensors: offloading 24 repeating layers to GPU
load_tensors: offloading output layer to GPU
load_tensors: offloaded 25/25 layers to GPU
load_tensors: CPU_Mapped model buffer size = 586.82 MiB
load_tensors: ROCm0 model buffer size = 3022.41 MiB
load_tensors: ROCm1 model buffer size = 2590.64 MiB
load_tensors: ROCm2 model buffer size = 2590.64 MiB
load_tensors: ROCm3 model buffer size = 2745.70 MiB
........................Segmentation fault (core dumped)
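Since the crash happens right after the tensor buffers are allocated, a backtrace would likely pin down the faulting code path. A sketch of rerunning under gdb (binary and model paths are assumptions; a RelWithDebInfo build gives readable frames):

```shell
# Rerun the failing command under gdb and dump a backtrace at the segfault;
# the `bt` output is what makes the crash actionable for maintainers.
BIN=./build/bin/llama-server
MODEL=/home/llm/llama.cpp/models/gpt-oss-20b-Q8_0.gguf
if [ -x "$BIN" ]; then
    gdb -batch -ex run -ex bt --args "$BIN" -m "$MODEL" --host 0.0.0.0 --port 8080
else
    echo "llama-server not found at $BIN (build first, ideally with -DCMAKE_BUILD_TYPE=RelWithDebInfo)"
fi
```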