Description
Name and Version
The model loads fine and warms up, and the prompt is processed, but after a few tokens of the response there is a segmentation fault and the core is dumped.
Works fine on CPU; I tried different batch sizes, but no change.
$ llama-cli --version
ggml_cuda_init: GGML_CUDA_FORCE_MMQ: no
ggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no
ggml_cuda_init: found 2 ROCm devices:
Device 0: AMD Radeon Graphics, gfx906:sramecc+:xnack- (0x906), VMM: no, Wave Size: 64
Device 1: AMD Radeon Graphics, gfx906:sramecc+:xnack- (0x906), VMM: no, Wave Size: 64
version: 7193 (d82b7a7)
built with cc (Ubuntu 13.3.0-6ubuntu2~24.04) 13.3.0 for x86_64-linux-gnu
HIP version: 7.1.25424-4179531dcd
Operating systems
Linux
GGML backends
HIP
Hardware
Xeon 4216R + 2x AMD Instinct MI50 32 GB
Models
Qwen3 Next 80B A3B Instruct (Unsloth GGUF, Q4_0)
Problem description & steps to reproduce
Run the script below, ask the model anything; it dies after a few tokens of output.
$ cat chatqn.sh
HIP_VISIBLE_DEVICES=0,1 llama-cli -hf unsloth/Qwen3-Next-80B-A3B-Instruct-GGUF:Q4_0 \
    --gpu-layers 999 \
    -t 8 \
    -b 256 \
    -ub 128 \
    --ctx-size 32000 \
    --flash-attn on \
    --jinja \
    --no-mmap \
    --cache-type-k q8_0 \
    --cache-type-v q8_0
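
To narrow down which feature triggers the fault, a hedged bisection sketch (all flags are taken from the command above; dropping --cache-type-k/-v falls back to the default f16 KV cache, and the lower --gpu-layers value in the single-GPU run is only an illustrative guess):

# 1) default f16 KV cache instead of q8_0
HIP_VISIBLE_DEVICES=0,1 llama-cli -hf unsloth/Qwen3-Next-80B-A3B-Instruct-GGUF:Q4_0 \
    --gpu-layers 999 -t 8 -b 256 -ub 128 --ctx-size 32000 \
    --flash-attn on --jinja --no-mmap

# 2) flash attention off
HIP_VISIBLE_DEVICES=0,1 llama-cli -hf unsloth/Qwen3-Next-80B-A3B-Instruct-GGUF:Q4_0 \
    --gpu-layers 999 -t 8 -b 256 -ub 128 --ctx-size 32000 \
    --flash-attn off --jinja --no-mmap \
    --cache-type-k q8_0 --cache-type-v q8_0

# 3) single GPU, to rule out the multi-device split (partial offload,
#    since the 42 GiB model does not fit on one 32 GB card)
HIP_VISIBLE_DEVICES=0 llama-cli -hf unsloth/Qwen3-Next-80B-A3B-Instruct-GGUF:Q4_0 \
    --gpu-layers 24 -t 8 -b 256 -ub 128 --ctx-size 32000 \
    --flash-attn on --jinja --no-mmap \
    --cache-type-k q8_0 --cache-type-v q8_0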
First Bad Commit
No response
Relevant log output
x@noisia:~$ ./chatqn.sh
ggml_cuda_init: GGML_CUDA_FORCE_MMQ: no
ggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no
ggml_cuda_init: found 2 ROCm devices:
Device 0: AMD Radeon Graphics, gfx906:sramecc+:xnack- (0x906), VMM: no, Wave Size: 64
Device 1: AMD Radeon Graphics, gfx906:sramecc+:xnack- (0x906), VMM: no, Wave Size: 64
* Host huggingface.co:443 was resolved.
* IPv6: 2600:9000:28fd:8a00:17:b174:6d00:93a1, 2600:9000:28fd:de00:17:b174:6d00:93a1, 2600:9000:28fd:d800:17:b174:6d00:93a1, 2600:9000:28fd:1400:17:b174:6d00:93a1, 2600:9000:28fd:4200:17:b174:6d00:93a1, 2600:9000:28fd:8400:17:b174:6d00:93a1, 2600:9000:28fd:fa00:17:b174:6d00:93a1, 2600:9000:28fd:dc00:17:b174:6d00:93a1
* IPv4: 3.174.141.117, 3.174.141.51, 3.174.141.63, 3.174.141.96
* Trying [2600:9000:28fd:8a00:17:b174:6d00:93a1]:443...
* Connected to huggingface.co (2600:9000:28fd:8a00:17:b174:6d00:93a1) port 443
* ALPN: curl offers h2,http/1.1
* CAfile: /etc/ssl/certs/ca-certificates.crt
* CApath: /etc/ssl/certs
* SSL connection using TLSv1.3 / TLS_AES_128_GCM_SHA256 / X25519 / RSASSA-PSS
* ALPN: server accepted h2
* Server certificate:
* subject: CN=huggingface.co
* start date: Apr 13 00:00:00 2025 GMT
* expire date: May 12 23:59:59 2026 GMT
* subjectAltName: host "huggingface.co" matched cert's "huggingface.co"
* issuer: C=US; O=Amazon; CN=Amazon RSA 2048 M02
* SSL certificate verify ok.
* Certificate level 0: Public key type RSA (2048/112 Bits/secBits), signed using sha256WithRSAEncryption
* Certificate level 1: Public key type RSA (2048/112 Bits/secBits), signed using sha256WithRSAEncryption
* Certificate level 2: Public key type RSA (2048/112 Bits/secBits), signed using sha256WithRSAEncryption
* using HTTP/2
* [HTTP/2] [1] OPENED stream for https://huggingface.co/v2/unsloth/Qwen3-Next-80B-A3B-Instruct-GGUF/manifests/Q4_0
* [HTTP/2] [1] [:method: GET]
* [HTTP/2] [1] [:scheme: https]
* [HTTP/2] [1] [:authority: huggingface.co]
* [HTTP/2] [1] [:path: /v2/unsloth/Qwen3-Next-80B-A3B-Instruct-GGUF/manifests/Q4_0]
* [HTTP/2] [1] [user-agent: llama-cpp]
* [HTTP/2] [1] [accept: application/json]
> GET /v2/unsloth/Qwen3-Next-80B-A3B-Instruct-GGUF/manifests/Q4_0 HTTP/2
Host: huggingface.co
User-Agent: llama-cpp
Accept: application/json
< HTTP/2 200
< content-type: application/json; charset=utf-8
< content-length: 998
< date: Sat, 29 Nov 2025 00:41:14 GMT
< etag: W/"3e6-Dpob+BD5jYUUsRGniI5cv5ntgDA"
< x-powered-by: huggingface-moon
< x-request-id: Root=1-692a4129-389342695341d06100692441
< ratelimit: "pages";r=98;t=175
< ratelimit-policy: "fixed window";"pages";q=100;w=300
< cross-origin-opener-policy: same-origin
< referrer-policy: strict-origin-when-cross-origin
< access-control-max-age: 86400
< access-control-allow-origin: https://huggingface.co
< vary: Origin
< access-control-expose-headers: X-Repo-Commit,X-Request-Id,X-Error-Code,X-Error-Message,X-Total-Count,ETag,Link,Accept-Ranges,Content-Range,X-Linked-Size,X-Linked-ETag,X-Xet-Hash
< x-cache: Miss from cloudfront
< via: 1.1 d34a2a1d2fc10c8d442572df271a2cc2.cloudfront.net (CloudFront)
< x-amz-cf-pop: LHR3-P3
< x-amz-cf-id: 6Skwbe6-Rf9qTBSufUEtJjxsJMpjIfNiRz4dtRCGuK66CdrDa_Kktg==
<
* Connection #0 to host huggingface.co left intact
common_download_file_single_online: using cached file: /home/[---]/.cache/llama.cpp/unsloth_Qwen3-Next-80B-A3B-Instruct-GGUF_Qwen3-Next-80B-A3B-Instruct-Q4_0.gguf
build: 7193 (d82b7a7c1) with cc (Ubuntu 13.3.0-6ubuntu2~24.04) 13.3.0 for x86_64-linux-gnu
main: llama backend init
main: load the model and apply lora adapter, if any
llama_model_load_from_file_impl: using device ROCm0 (AMD Radeon Graphics) (0000:67:00.0) - 32730 MiB free
llama_model_load_from_file_impl: using device ROCm1 (AMD Radeon Graphics) (0000:b5:00.0) - 32730 MiB free
llama_model_loader: loaded meta data with 49 key-value pairs and 807 tensors from /home/[---]/.cache/llama.cpp/unsloth_Qwen3-Next-80B-A3B-Instruct-GGUF_Qwen3-Next-80B-A3B-Instruct-Q4_0.gguf (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv 0: general.architecture str = qwen3next
llama_model_loader: - kv 1: general.type str = model
llama_model_loader: - kv 2: general.sampling.top_k i32 = 20
llama_model_loader: - kv 3: general.sampling.top_p f32 = 0.800000
llama_model_loader: - kv 4: general.sampling.temp f32 = 0.700000
llama_model_loader: - kv 5: general.name str = Qwen3-Next-80B-A3B-Instruct
llama_model_loader: - kv 6: general.finetune str = Instruct
llama_model_loader: - kv 7: general.basename str = Qwen3-Next-80B-A3B-Instruct
llama_model_loader: - kv 8: general.quantized_by str = Unsloth
llama_model_loader: - kv 9: general.size_label str = 80B-A3B
llama_model_loader: - kv 10: general.license str = apache-2.0
llama_model_loader: - kv 11: general.license.link str = https://huggingface.co/Qwen/Qwen3-Nex...
llama_model_loader: - kv 12: general.repo_url str = https://huggingface.co/unsloth
llama_model_loader: - kv 13: general.base_model.count u32 = 1
llama_model_loader: - kv 14: general.base_model.0.name str = Qwen3 Next 80B A3B Instruct
llama_model_loader: - kv 15: general.base_model.0.organization str = Qwen
llama_model_loader: - kv 16: general.base_model.0.repo_url str = https://huggingface.co/Qwen/Qwen3-Nex...
llama_model_loader: - kv 17: general.tags arr[str,2] = ["unsloth", "text-generation"]
llama_model_loader: - kv 18: qwen3next.block_count u32 = 48
llama_model_loader: - kv 19: qwen3next.context_length u32 = 262144
llama_model_loader: - kv 20: qwen3next.embedding_length u32 = 2048
llama_model_loader: - kv 21: qwen3next.feed_forward_length u32 = 5120
llama_model_loader: - kv 22: qwen3next.attention.head_count u32 = 16
llama_model_loader: - kv 23: qwen3next.attention.head_count_kv u32 = 2
llama_model_loader: - kv 24: qwen3next.rope.freq_base f32 = 10000000.000000
llama_model_loader: - kv 25: qwen3next.attention.layer_norm_rms_epsilon f32 = 0.000001
llama_model_loader: - kv 26: qwen3next.expert_used_count u32 = 10
llama_model_loader: - kv 27: qwen3next.attention.key_length u32 = 256
llama_model_loader: - kv 28: qwen3next.attention.value_length u32 = 256
llama_model_loader: - kv 29: qwen3next.expert_count u32 = 512
llama_model_loader: - kv 30: qwen3next.expert_feed_forward_length u32 = 512
llama_model_loader: - kv 31: qwen3next.expert_shared_feed_forward_length u32 = 512
llama_model_loader: - kv 32: qwen3next.ssm.conv_kernel u32 = 4
llama_model_loader: - kv 33: qwen3next.ssm.state_size u32 = 128
llama_model_loader: - kv 34: qwen3next.ssm.group_count u32 = 16
llama_model_loader: - kv 35: qwen3next.ssm.time_step_rank u32 = 32
llama_model_loader: - kv 36: qwen3next.ssm.inner_size u32 = 4096
llama_model_loader: - kv 37: qwen3next.rope.dimension_count u32 = 64
llama_model_loader: - kv 38: tokenizer.ggml.model str = gpt2
llama_model_loader: - kv 39: tokenizer.ggml.pre str = qwen2
llama_model_loader: - kv 40: tokenizer.ggml.tokens arr[str,151936] = ["!", "\"", "#", "$", "%", "&", "'", ...
llama_model_loader: - kv 41: tokenizer.ggml.token_type arr[i32,151936] = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
llama_model_loader: - kv 42: tokenizer.ggml.merges arr[str,151387] = ["Ġ Ġ", "ĠĠ ĠĠ", "i n", "Ġ t",...
llama_model_loader: - kv 43: tokenizer.ggml.eos_token_id u32 = 151645
llama_model_loader: - kv 44: tokenizer.ggml.padding_token_id u32 = 151654
llama_model_loader: - kv 45: tokenizer.ggml.add_bos_token bool = false
llama_model_loader: - kv 46: tokenizer.chat_template str = {%- if tools %}\n {{- '<|im_start|>...
llama_model_loader: - kv 47: general.quantization_version u32 = 2
llama_model_loader: - kv 48: general.file_type u32 = 2
llama_model_loader: - type f32: 313 tensors
llama_model_loader: - type q4_0: 301 tensors
llama_model_loader: - type q5_K: 96 tensors
llama_model_loader: - type q6_K: 49 tensors
llama_model_loader: - type bf16: 48 tensors
print_info: file format = GGUF V3 (latest)
print_info: file type = Q4_0
print_info: file size = 42.00 GiB (4.53 BPW)
load: printing all EOG tokens:
load: - 151643 ('<|endoftext|>')
load: - 151645 ('<|im_end|>')
load: - 151662 ('<|fim_pad|>')
load: - 151663 ('<|repo_name|>')
load: - 151664 ('<|file_sep|>')
load: special tokens cache size = 26
load: token to piece cache size = 0.9311 MB
print_info: arch = qwen3next
print_info: vocab_only = 0
print_info: n_ctx_train = 262144
print_info: n_embd = 2048
print_info: n_embd_inp = 2048
print_info: n_layer = 48
print_info: n_head = 16
print_info: n_head_kv = 2
print_info: n_rot = 64
print_info: n_swa = 0
print_info: is_swa_any = 0
print_info: n_embd_head_k = 256
print_info: n_embd_head_v = 256
print_info: n_gqa = 8
print_info: n_embd_k_gqa = 512
print_info: n_embd_v_gqa = 512
print_info: f_norm_eps = 0.0e+00
print_info: f_norm_rms_eps = 1.0e-06
print_info: f_clamp_kqv = 0.0e+00
print_info: f_max_alibi_bias = 0.0e+00
print_info: f_logit_scale = 0.0e+00
print_info: f_attn_scale = 0.0e+00
print_info: n_ff = 5120
print_info: n_expert = 512
print_info: n_expert_used = 10
print_info: n_expert_groups = 0
print_info: n_group_used = 0
print_info: causal attn = 1
print_info: pooling type = 0
print_info: rope type = 2
print_info: rope scaling = linear
print_info: freq_base_train = 10000000.0
print_info: freq_scale_train = 1
print_info: n_ctx_orig_yarn = 262144
print_info: rope_finetuned = unknown
print_info: ssm_d_conv = 4
print_info: ssm_d_inner = 4096
print_info: ssm_d_state = 128
print_info: ssm_dt_rank = 32
print_info: ssm_n_group = 16
print_info: ssm_dt_b_c_rms = 0
print_info: model type = ?B
print_info: model params = 79.67 B
print_info: general.name = Qwen3-Next-80B-A3B-Instruct
print_info: vocab type = BPE
print_info: n_vocab = 151936
print_info: n_merges = 151387
print_info: BOS token = 11 ','
print_info: EOS token = 151645 '<|im_end|>'
print_info: EOT token = 151645 '<|im_end|>'
print_info: PAD token = 151654 '<|vision_pad|>'
print_info: LF token = 198 'Ċ'
print_info: FIM PRE token = 151659 '<|fim_prefix|>'
print_info: FIM SUF token = 151661 '<|fim_suffix|>'
print_info: FIM MID token = 151660 '<|fim_middle|>'
print_info: FIM PAD token = 151662 '<|fim_pad|>'
print_info: FIM REP token = 151663 '<|repo_name|>'
print_info: FIM SEP token = 151664 '<|file_sep|>'
print_info: EOG token = 151643 '<|endoftext|>'
print_info: EOG token = 151645 '<|im_end|>'
print_info: EOG token = 151662 '<|fim_pad|>'
print_info: EOG token = 151663 '<|repo_name|>'
print_info: EOG token = 151664 '<|file_sep|>'
print_info: max token length = 256
load_tensors: loading model tensors, this can take a while... (mmap = false)
load_tensors: offloading 48 repeating layers to GPU
load_tensors: offloading output layer to GPU
load_tensors: offloaded 49/49 layers to GPU
load_tensors: CPU model buffer size = 0.00 MiB
load_tensors: ROCm0 model buffer size = 22188.86 MiB
load_tensors: ROCm1 model buffer size = 20655.47 MiB
load_tensors: ROCm_Host model buffer size = 166.92 MiB
....................................................................................................
llama_context: constructing llama_context
llama_context: n_seq_max = 1
llama_context: n_ctx = 32000
llama_context: n_ctx_seq = 32000
llama_context: n_batch = 256
llama_context: n_ubatch = 128
llama_context: causal_attn = 1
llama_context: flash_attn = enabled
llama_context: kv_unified = false
llama_context: freq_base = 10000000.0
llama_context: freq_scale = 1
llama_context: n_ctx_seq (32000) < n_ctx_train (262144) -- the full capacity of the model will not be utilized
llama_context: ROCm_Host output buffer size = 0.58 MiB
llama_kv_cache: ROCm0 KV buffer size = 199.22 MiB
llama_kv_cache: ROCm1 KV buffer size = 199.22 MiB
llama_kv_cache: size = 398.44 MiB ( 32000 cells, 12 layers, 1/1 seqs), K (q8_0): 199.22 MiB, V (q8_0): 199.22 MiB
llama_memory_recurrent: ROCm0 RS buffer size = 39.78 MiB
llama_memory_recurrent: ROCm1 RS buffer size = 35.59 MiB
llama_memory_recurrent: size = 75.38 MiB ( 1 cells, 48 layers, 1 seqs), R (f32): 3.38 MiB, S (f32): 72.00 MiB
llama_context: pipeline parallelism enabled (n_copies=4)
llama_context: ROCm0 compute buffer size = 1288.61 MiB
llama_context: ROCm1 compute buffer size = 782.39 MiB
llama_context: ROCm_Host compute buffer size = 353.90 MiB
llama_context: graph nodes = 11401 (with bs=128), 8449 (with bs=1)
llama_context: graph splits = 204 (with bs=128), 151 (with bs=1)
common_init_from_params: added <|endoftext|> logit bias = -inf
common_init_from_params: added <|im_end|> logit bias = -inf
common_init_from_params: added <|fim_pad|> logit bias = -inf
common_init_from_params: added <|repo_name|> logit bias = -inf
common_init_from_params: added <|file_sep|> logit bias = -inf
common_init_from_params: setting dry_penalty_last_n to ctx_size = 32000
common_init_from_params: warming up the model with an empty run - please wait ... (--no-warmup to disable)
main: llama threadpool init, n_threads = 8
main: chat template is available, enabling conversation mode (disable it with -no-cnv)
main: chat template example:
<|im_start|>system
You are a helpful assistant<|im_end|>
<|im_start|>user
Hello<|im_end|>
<|im_start|>assistant
Hi there<|im_end|>
<|im_start|>user
How are you?<|im_end|>
<|im_start|>assistant
system_info: n_threads = 8 (n_threads_batch = 8) / 16 | ROCm : NO_VMM = 1 | PEER_MAX_BATCH_SIZE = 128 | CPU : SSE3 = 1 | SSSE3 = 1 | AVX = 1 | AVX2 = 1 | F16C = 1 | FMA = 1 | BMI2 = 1 | AVX512 = 1 | AVX512_VNNI = 1 | LLAMAFILE = 1 | OPENMP = 1 | REPACK = 1 |
main: interactive mode on.
sampler seed: 419957296
sampler params:
repeat_last_n = 64, repeat_penalty = 1.000, frequency_penalty = 0.000, presence_penalty = 0.000
dry_multiplier = 0.000, dry_base = 1.750, dry_allowed_length = 2, dry_penalty_last_n = 32000
top_k = 20, top_p = 0.800, min_p = 0.050, xtc_probability = 0.000, xtc_threshold = 0.100, typical_p = 1.000, top_n_sigma = -1.000, temp = 0.700
mirostat = 0, mirostat_lr = 0.100, mirostat_ent = 5.000
sampler chain: logits -> logit-bias -> penalties -> dry -> top-n-sigma -> top-k -> typical -> top-p -> min-p -> xtc -> temp-ext -> dist
generate: n_ctx = 32000, n_batch = 256, n_predict = -1, n_keep = 0
== Running in interactive mode. ==
- Press Ctrl+C to interject at any time.
- Press Return to return control to the AI.
- To return control without starting a new line, end your input with '/'.
- If you want to submit another line, end your input with '\'.
- Not using system message. To change it, set a different value via -sys PROMPT
> Who was copernicus?
./chatqn.sh: line 11: 7672 Segmentation fault (core dumped) HIP_VISIBLE_DEVICES=0,1 llama-cli -hf unsloth/Qwen3-Next-80B-A3B-Instruct-GGUF:Q4_0 --gpu-layers 999 -t 8 -b 256 -ub 128 --ctx-size 32000 --flash-attn on --jinja --no-mmap --cache-type-k q8_0 --cache-type-v q8_0
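
Since a core is dumped, a backtrace would show the faulting code path. A minimal sketch, assuming systemd-coredump is handling cores on this Ubuntu 24.04 box (if apport is in use instead, skip to the live-gdb variant):

# post-mortem: open the most recent llama-cli core in gdb and dump the backtrace
coredumpctl list llama-cli
coredumpctl gdb llama-cli
(gdb) bt full

# or reproduce live under gdb with the same flags as chatqn.sh
HIP_VISIBLE_DEVICES=0,1 gdb --args llama-cli -hf unsloth/Qwen3-Next-80B-A3B-Instruct-GGUF:Q4_0 \
    --gpu-layers 999 -t 8 -b 256 -ub 128 --ctx-size 32000 \
    --flash-attn on --jinja --no-mmap --cache-type-k q8_0 --cache-type-v q8_0
(gdb) run
(gdb) bt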