
GGML_ASSERT: llama.cpp/ggml-cuda/argsort.cu:48: (ncols & (ncols - 1)) == 0 #6527

Closed

schmorp opened this issue Apr 7, 2024 · 3 comments


schmorp commented Apr 7, 2024

I get this error when trying to calculate an imatrix for

https://huggingface.co/nbeerbower/HeroBophades-3x7B

The gguf file is created by running

convert.py --skip-unknown --vocab-type spm,hfft,bpe --pad-vocab

The switches are my default switches, so they are likely irrelevant. The resulting gguf file seems to work fine when used with main, but crashes when used with imatrix:

$ imatrix -ofreq 10 -t 1 -ngl 0 -mg 0 -m HeroBophades-3x7B.gguf -o "HeroBophades-3x7B.imatrix~" -f imatrix-training.txt
main: build = 2569 (5106ef4)
main: built with gcc-12 (Debian 12.2.0-14) 12.2.0 for x86_64-linux-gnu
main: seed = 1712520807
llama_model_loader: loaded meta data with 24 key-value pairs and 515 tensors from HeroBophades-3x7B.gguf (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv 0: general.architecture str = llama
llama_model_loader: - kv 1: general.name str = .
llama_model_loader: - kv 2: llama.context_length u32 = 32768
llama_model_loader: - kv 3: llama.embedding_length u32 = 4096
llama_model_loader: - kv 4: llama.block_count u32 = 32
llama_model_loader: - kv 5: llama.feed_forward_length u32 = 14336
llama_model_loader: - kv 6: llama.rope.dimension_count u32 = 128
llama_model_loader: - kv 7: llama.attention.head_count u32 = 32
llama_model_loader: - kv 8: llama.attention.head_count_kv u32 = 8
llama_model_loader: - kv 9: llama.expert_count u32 = 3
llama_model_loader: - kv 10: llama.expert_used_count u32 = 2
llama_model_loader: - kv 11: llama.attention.layer_norm_rms_epsilon f32 = 0.000010
llama_model_loader: - kv 12: llama.rope.freq_base f32 = 10000.000000
llama_model_loader: - kv 13: general.file_type u32 = 1
llama_model_loader: - kv 14: tokenizer.ggml.model str = llama
llama_model_loader: - kv 15: tokenizer.ggml.tokens arr[str,32000] = ["<unk>", "<s>", "</s>", "<0x00>", "<...
llama_model_loader: - kv 16: tokenizer.ggml.scores arr[f32,32000] = [0.000000, 0.000000, 0.000000, 0.0000...
llama_model_loader: - kv 17: tokenizer.ggml.token_type arr[i32,32000] = [2, 3, 3, 6, 6, 6, 6, 6, 6, 6, 6, 6, ...
llama_model_loader: - kv 18: tokenizer.ggml.bos_token_id u32 = 1
llama_model_loader: - kv 19: tokenizer.ggml.eos_token_id u32 = 2
llama_model_loader: - kv 20: tokenizer.ggml.unknown_token_id u32 = 0
llama_model_loader: - kv 21: tokenizer.ggml.padding_token_id u32 = 1
llama_model_loader: - kv 22: tokenizer.ggml.add_bos_token bool = true
llama_model_loader: - kv 23: tokenizer.ggml.add_eos_token bool = false
llama_model_loader: - type f32: 65 tensors
llama_model_loader: - type f16: 450 tensors
llm_load_vocab: special tokens definition check successful ( 259/32000 ).
llm_load_print_meta: format = GGUF V3 (latest)
llm_load_print_meta: arch = llama
llm_load_print_meta: vocab type = SPM
llm_load_print_meta: n_vocab = 32000
llm_load_print_meta: n_merges = 0
llm_load_print_meta: n_ctx_train = 32768
llm_load_print_meta: n_embd = 4096
llm_load_print_meta: n_head = 32
llm_load_print_meta: n_head_kv = 8
llm_load_print_meta: n_layer = 32
llm_load_print_meta: n_rot = 128
llm_load_print_meta: n_embd_head_k = 128
llm_load_print_meta: n_embd_head_v = 128
llm_load_print_meta: n_gqa = 4
llm_load_print_meta: n_embd_k_gqa = 1024
llm_load_print_meta: n_embd_v_gqa = 1024
llm_load_print_meta: f_norm_eps = 0.0e+00
llm_load_print_meta: f_norm_rms_eps = 1.0e-05
llm_load_print_meta: f_clamp_kqv = 0.0e+00
llm_load_print_meta: f_max_alibi_bias = 0.0e+00
llm_load_print_meta: f_logit_scale = 0.0e+00
llm_load_print_meta: n_ff = 14336
llm_load_print_meta: n_expert = 3
llm_load_print_meta: n_expert_used = 2
llm_load_print_meta: causal attn = 1
llm_load_print_meta: pooling type = 0
llm_load_print_meta: rope type = 0
llm_load_print_meta: rope scaling = linear
llm_load_print_meta: freq_base_train = 10000.0
llm_load_print_meta: freq_scale_train = 1
llm_load_print_meta: n_yarn_orig_ctx = 32768
llm_load_print_meta: rope_finetuned = unknown
llm_load_print_meta: ssm_d_conv = 0
llm_load_print_meta: ssm_d_inner = 0
llm_load_print_meta: ssm_d_state = 0
llm_load_print_meta: ssm_dt_rank = 0
llm_load_print_meta: model type = 7B
llm_load_print_meta: model ftype = F16
llm_load_print_meta: model params = 18.52 B
llm_load_print_meta: model size = 34.49 GiB (16.00 BPW)
llm_load_print_meta: general.name = .
llm_load_print_meta: BOS token = 1 '<s>'
llm_load_print_meta: EOS token = 2 '</s>'
llm_load_print_meta: UNK token = 0 '<unk>'
llm_load_print_meta: PAD token = 1 '<s>'
llm_load_print_meta: LF token = 13 '<0x0A>'
ggml_cuda_init: GGML_CUDA_FORCE_MMQ: yes
ggml_cuda_init: CUDA_USE_TENSOR_CORES: no
ggml_cuda_init: found 1 CUDA devices:
Device 0: NVIDIA GeForce RTX 4070 Ti, compute capability 8.9, VMM: yes
llm_load_tensors: ggml ctx size = 0.20 MiB
llm_load_tensors: offloading 0 repeating layers to GPU
llm_load_tensors: offloaded 0/33 layers to GPU
llm_load_tensors: CPU buffer size = 35317.77 MiB
....................................................................................................
llama_new_context_with_model: n_ctx = 512
llama_new_context_with_model: n_batch = 512
llama_new_context_with_model: n_ubatch = 512
llama_new_context_with_model: freq_base = 10000.0
llama_new_context_with_model: freq_scale = 1
llama_kv_cache_init: CUDA_Host KV buffer size = 64.00 MiB
llama_new_context_with_model: KV self size = 64.00 MiB, K (f16): 32.00 MiB, V (f16): 32.00 MiB
llama_new_context_with_model: CUDA_Host output buffer size = 0.12 MiB
llama_new_context_with_model: CUDA0 compute buffer size = 1064.00 MiB
llama_new_context_with_model: CUDA_Host compute buffer size = 9.01 MiB
llama_new_context_with_model: graph nodes = 1670
llama_new_context_with_model: graph splits = 388

system_info: n_threads = 4 / 28 | AVX = 1 | AVX_VNNI = 1 | AVX2 = 1 | AVX512 = 0 | AVX512_VBMI = 0 | AVX512_VNNI = 0 | FMA = 1 | NEON = 0 | ARM_FMA = 0 | F16C = 1 | FP16_VA = 0 | WASM_SIMD = 0 | BLAS = 1 | SSE3 = 1 | SSSE3 = 1 | VSX = 0 | MATMUL_INT8 = 0 |
compute_imatrix: tokenizing the input ..
compute_imatrix: tokenization took 171.649 ms
compute_imatrix: computing over 307 chunks with batch_size 512
GGML_ASSERT: llama.cpp/ggml-cuda/argsort.cu:48: (ncols & (ncols - 1)) == 0
[New LWP 46883]
[New LWP 46887]
[New LWP 46888]
[Thread debugging using libthread_db enabled]
Using host libthread_db library "/lib/x86_64-linux-gnu/libthread_db.so.1".
0x00007fffeeef2b57 in __GI___wait4 (pid=46892, stat_loc=0x0, options=0, usage=0x0) at ../sysdeps/unix/sysv/linux/wait4.c:30
Download failed: Invalid argument. Continuing without source file ./posix/../sysdeps/unix/sysv/linux/wait4.c.
30 ../sysdeps/unix/sysv/linux/wait4.c: Inappropriate ioctl for device.
#0 0x00007fffeeef2b57 in __GI___wait4 (pid=46892, stat_loc=0x0, options=0, usage=0x0) at ../sysdeps/unix/sysv/linux/wait4.c:30
30 in ../sysdeps/unix/sysv/linux/wait4.c
#1 0x00005555555e68eb in ggml_print_backtrace () at /llama.cpp/ggml.c:145
warning: Source file is more recent than executable.
145 waitpid(pid, NULL, 0);
#2 0x00005555556588a3 in argsort_f32_i32_cuda(float const*, int*, int, int, ggml_sort_order, CUstream_st*) ()
#3 0x0000555555658b60 in ggml_cuda_op_argsort(ggml_backend_cuda_context&, ggml_tensor*) ()
#4 0x000055555564f778 in ggml_cuda_compute_forward(ggml_backend_cuda_context&, ggml_tensor*) ()
#5 0x0000555555650279 in ggml_backend_cuda_graph_compute(ggml_backend*, ggml_cgraph*) ()
#6 0x00005555555dce8a in ggml_backend_graph_compute_async (cgraph=0x7fffffffa510, backend=0x555557092fd0) at llama.cpp/ggml-backend.c:282
282 return backend->iface.graph_compute(backend, cgraph);
#7 ggml_backend_sched_compute_splits (sched=0x7fffc017e010) at llama.cpp/ggml-backend.c:1685
1685 enum ggml_status ec = ggml_backend_graph_compute_async(split_backend, &gv);
#8 ggml_backend_sched_graph_compute_async (graph=<optimized out>, sched=0x7fffc017e010) at llama.cpp/ggml-backend.c:1839
1839 return ggml_backend_sched_compute_splits(sched);
#9 llama_graph_compute (lctx=..., gf=<optimized out>, n_threads=<optimized out>) at llama.cpp/llama.cpp:9756
warning: Source file is more recent than executable.
9756 //
#10 0x0000555555642711 in llama_decode_internal(llama_context&, llama_batch) [clone .isra.0] (lctx=..., batch_all=...) at llama.cpp/llama.cpp:10001
10001
#11 0x000055555557e07b in llama_decode (batch=..., ctx=0x5555563bec20) at llama.cpp/llama.cpp:15135
15135 const auto & cell = kv_self.cells[i];
#12 compute_imatrix (ctx=ctx@entry=0x5555563bec20, params=..., compute_ppl=compute_ppl@entry=true, from_chunk=from_chunk@entry=0) at llama.cpp/examples/imatrix/imatrix.cpp:428
warning: Source file is more recent than executable.
428 }
#13 0x0000555555574734 in main (argc=<optimized out>, argv=<optimized out>) at llama.cpp/examples/imatrix/imatrix.cpp:632
632 fprintf(stderr, "%s\n", get_system_info(params).c_str());
[Inferior 1 (process 46881) detached]
/tmp/ai/imatrix-training: line 33: 46881 Aborted
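
For context, the check that fires here is a power-of-two test on the number of columns being sorted; with llama.expert_count = 3, the expert-selection argsort presumably runs over 3 columns, which is not a power of two. A minimal sketch of the same bit trick (illustrative only, not the kernel code), assuming the sort width equals the expert count:

// Power-of-two check equivalent to the assert above (sketch, not kernel code).
// Assumption: the CUDA argsort only handles column counts that are powers of two.
#include <stdbool.h>
#include <stdio.h>

static bool argsort_ncols_ok(int ncols) {
    // a power of two has exactly one bit set, so clearing the lowest set bit
    // with (ncols & (ncols - 1)) must leave zero
    return ncols > 0 && (ncols & (ncols - 1)) == 0;
}

int main(void) {
    printf("ncols = 3 -> %s\n", argsort_ncols_ok(3) ? "ok" : "assert fires"); // 3 experts
    printf("ncols = 4 -> %s\n", argsort_ncols_ok(4) ? "ok" : "assert fires"); // power of two
    return 0;
}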

Collaborator

slaren commented Apr 7, 2024

This assert no longer exists in master.

slaren closed this as completed Apr 7, 2024
Author

schmorp commented Apr 8, 2024

It still crashes all the same, though. Should I open a new issue for this?

GGML_ASSERT: llama.cpp/examples/imatrix/imatrix.cpp:112: n_as*ggml_nrows(ids)*sizeof(int) == GGML_PAD(ggml_nbytes(ids), n_as*sizeof(int))
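
For reference, GGML_PAD rounds its first argument up to a multiple of its second, so this assert checks that the ids tensor holds exactly n_as int32 entries per row once padding is accounted for. A rough sketch of the comparison with a stand-in round-up helper (the names and numbers below are illustrative, not taken from imatrix.cpp):

// Illustrative stand-in for GGML_PAD: round x up to the next multiple of n.
#include <stddef.h>
#include <stdio.h>

static size_t pad_to_multiple(size_t x, size_t n) {
    return ((x + n - 1) / n) * n;
}

int main(void) {
    // hypothetical values: n_as = 3 experts, 512 rows of expert ids
    size_t n_as = 3, nrows = 512, nbytes = n_as * nrows * sizeof(int);
    size_t expected = n_as * nrows * sizeof(int);
    size_t padded   = pad_to_multiple(nbytes, n_as * sizeof(int));
    // the assert requires expected == padded; a mismatch means the ids
    // buffer is not laid out as n_as contiguous ints per row
    printf("expected = %zu, padded = %zu\n", expected, padded);
    return 0;
}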

Author

schmorp commented Apr 8, 2024

To make it easier, I'll open a new issue.
