
support SYCL backend windows build #5208

Merged: 27 commits into ggerganov:master on Jan 31, 2024

Conversation

@NeoZhangJianyu (Collaborator)

Support SYCL backend Windows build.
Update the guide for Windows build & usage.
Add CI for the Windows SYCL build.
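For orientation, here is a rough sketch of the kind of Windows build flow the updated guide covers. The oneAPI install path, the CMake generator, and every flag except `-DLLAMA_SYCL=ON` are assumptions here; README-sycl.md in this PR is the authoritative reference.

```bat
:: Initialize the oneAPI environment (install path is an assumption)
@call "C:\Program Files (x86)\Intel\oneAPI\setvars.bat" intel64 --force

:: Configure with the SYCL backend and the Intel icx compiler (generator is an assumption)
mkdir build
cd build
cmake .. -G "Ninja" -DLLAMA_SYCL=ON -DCMAKE_C_COMPILER=icx -DCMAKE_CXX_COMPILER=icx -DCMAKE_BUILD_TYPE=Release

:: Build all targets in parallel
cmake --build . --config Release -j
```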

@sorasoras

I tried to compile this with win-build-sycl, but it's still not working. See the attached compilelog.txt.

@characharm commented Jan 30, 2024

Using device 0 (Intel(R) Arc(TM) A770 Graphics) as main device

| model | size | params | backend | ngl | test | t/s |
| --- | --- | --- | --- | --- | --- | --- |
| llama 7B Q6_K | 5.53 GiB | 7.24 B | GPU BLAS | 99 | pp 512 | 734.90 ± 149.16 |
| llama 7B Q6_K | 5.53 GiB | 7.24 B | GPU BLAS | 99 | tg 128 | 22.00 ± 0.17 |

ggml_vulkan: Using Intel(R) Arc(TM) A770 Graphics | fp16: 1 | warp size: 32

| model | size | params | backend | ngl | test | t/s |
| --- | --- | --- | --- | --- | --- | --- |
| llama 7B Q6_K | 5.53 GiB | 7.24 B | Vulkan | 99 | pp 512 | 73.15 ± 2.76 |
| llama 7B Q6_K | 5.53 GiB | 7.24 B | Vulkan | 99 | tg 128 | 21.69 ± 0.12 |

Using device 0 (Intel(R) Arc(TM) A770 Graphics) as main device

| model | size | params | backend | ngl | test | t/s |
| --- | --- | --- | --- | --- | --- | --- |
| llama 13B Q5_K - Medium | 8.60 GiB | 13.02 B | GPU BLAS | 99 | pp 512 | 430.52 ± 38.83 |
| llama 13B Q5_K - Medium | 8.60 GiB | 13.02 B | GPU BLAS | 99 | tg 128 | 16.14 ± 0.13 |

ggml_vulkan: Using Intel(R) Arc(TM) A770 Graphics | fp16: 1 | warp size: 32

| model | size | params | backend | ngl | test | t/s |
| --- | --- | --- | --- | --- | --- | --- |
| llama 13B Q5_K - Medium | 8.60 GiB | 13.02 B | Vulkan | 99 | pp 512 | 41.42 ± 2.06 |
| llama 13B Q5_K - Medium | 8.60 GiB | 13.02 B | Vulkan | 99 | tg 128 | 9.83 ± 0.08 |

SYCL: FP32
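These tables are in llama-bench format; a comparable invocation on the SYCL build (binary location and model path are assumptions, device selection via `GGML_SYCL_DEVICE` as used elsewhere in this thread) would be roughly:

```bat
:: pp 512 = prompt processing of 512 tokens, tg 128 = generation of 128 tokens, -ngl 99 = offload all layers
set GGML_SYCL_DEVICE=0
.\build\bin\llama-bench.exe -m models\llama-2-7b.Q6_K.gguf -ngl 99 -p 512 -n 128
```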

I want to note that during the benchmark, while running the SYCL pass, the computer remained very responsive, unlike Vulkan, which resulted in increased fan noise and mouse cursor freezes. It seems the load was higher in the latter case.

Update: unfortunately, when using SYCL, all models generate gibberish.

@Jacoby1218

> Update: unfortunately, when using SYCL, all models generate gibberish.

That's happening on Linux too. Not sure when it broke, but it did. That said, performance at least looks similar to Linux, which is good.

@airMeng (Collaborator) commented Jan 31, 2024

> Update: unfortunately, when using SYCL, all models generate gibberish.
>
> That's happening on Linux too. Not sure when it broke, but it did. That said, performance at least looks similar to Linux, which is good.

Hi @Jacoby1218, I don't know the standard test method. I ran some cases manually:

gta@DUT109DG2MRB:~/llama.cpp/build$ GGML_SYCL_DEVICE=0 ./bin/main -m ~/llama-2-7b.Q4_K_S.gguf -p "Once upon a time, there existed a little girl, who liked to have adventures. She wanted to go to places and meet new people, and have fun" -n 128 -e -ngl 33 --no-mmap
...
 Once upon a time, there existed a little girl, who liked to have adventures. She wanted to go to places and meet new people, and have fun. She was never quite happy with her life the way it was, because she knew that there were bigger things out there waiting for her.
The problem was that when you’re ten years old, you don’t know how to find these adventures on your own, or how to ask for them. She often told stories of what she wanted and needed, but no one listened. They just said “Oh, you’ll be fine”, or “You have a good life here.”
That was what they all thought: that she had a good life here, with her friends in the neighborhood, with the games and to
llama_print_timings:        load time =   10103.39 ms
llama_print_timings:      sample time =      21.70 ms /   128 runs   (    0.17 ms per token,  5899.71 tokens per second)
llama_print_timings: prompt eval time =     757.28 ms /    33 tokens (   22.95 ms per token,    43.58 tokens per second)
llama_print_timings:        eval time =    5357.27 ms /   127 runs   (   42.18 ms per token,    23.71 tokens per second)
llama_print_timings:       total time =    6189.14 ms /   160 tokens
Log end

Can you share how to reproduce it?

@NeoZhangJianyu (Collaborator, Author)

> Update: unfortunately, when using SYCL, all models generate gibberish.

Thank you for sharing!

What's the input in your test case that produces gibberish? What's your test command? The unit test cases pass, so the results should be correct as well.

@NeoZhangJianyu (Collaborator, Author)

> I tried to compile this with win-build-sycl, but it's still not working. See the attached compilelog.txt.

Please clean the build environment before trying again.
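For example, a full clean rebuild might look like this (folder layout and script path are assumptions; win-build-sycl is the script mentioned above):

```bat
:: remove the stale CMake cache and all build artifacts, then rebuild from scratch
rmdir /s /q build
examples\sycl\win-build-sycl.bat
```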

@abhilash1910 (Collaborator) commented Jan 31, 2024

@sorasoras @characharm @Jacoby1218 I think the build works correctly; please let us know if you face issues compiling or building. Could you share the test cases you were running?
Since this PR addresses the Windows build, it will be merged into master as completed.
cc @ggerganov

@abhilash1910 merged commit 0168413 into ggerganov:master on Jan 31, 2024. 52 checks passed.
@Nuullll commented Jan 31, 2024

In case anyone is experiencing the CMake error `No rule to make target 'CMakeFiles/ggml.dir/ggml.c.obj', needed by 'llama.lib'` like me, the solution is to disable ccache by specifying `-DLLAMA_CCACHE=OFF` to the cmake command.
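As a sketch, the reconfigure step could look like this; every flag except `-DLLAMA_CCACHE=OFF` is carried over from the build commands discussed earlier and remains an assumption:

```bat
:: reconfigure with ccache disabled, then rebuild
cmake .. -G "Ninja" -DLLAMA_SYCL=ON -DLLAMA_CCACHE=OFF -DCMAKE_C_COMPILER=icx -DCMAKE_CXX_COMPILER=icx -DCMAKE_BUILD_TYPE=Release
cmake --build . --config Release -j
```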

@Nuullll commented Jan 31, 2024

> In case anyone is experiencing the CMake error `No rule to make target 'CMakeFiles/ggml.dir/ggml.c.obj', needed by 'llama.lib'` like me, the solution is to disable ccache by specifying `-DLLAMA_CCACHE=OFF` to the cmake command.

@sorasoras Ahh, that's exactly what you hit in compilelog.txt. Give it a try with ccache disabled.

@characharm commented Jan 31, 2024

> What's the input in your test case that produces gibberish? What's your test command?

Using the server yields the same result. I've tried various models with different parameter counts and quantization methods.
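A server invocation along these lines shows the same problem (binary name and port are assumptions; the model path matches the main command below):

```bat
set GGML_SYCL_DEVICE=0
.\build\bin\server.exe -m I:\deepseek-coder-33B-base-GGUF\deepseek-coder-7b-instruct-v1.5-Q8_0.gguf -ngl 33 --host 127.0.0.1 --port 8080
```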

In the Intel oneAPI command prompt for Intel 64 for Visual Studio 2022:
@call "I:\Tools\Intel\oneAPI\setvars.bat" intel64 --force
set GGML_SYCL_DEVICE=0

/bin/main -m I:\deepseek-coder-33B-base-GGUF\deepseek-coder-7b-instruct-v1.5-Q8_0.gguf -p "Once upon a time, there existed a little girl, who liked to have adventures. She wanted to go to places and meet new people, and have fun" -n 128 -e -ngl 33 --no-mmap

Log start

main: build = 2036 (47cba0d)
main: built with IntelLLVM 2024.0.2 for
main: seed = 1706702147
GGML_SYCL_DEBUG=0
ggml_init_sycl: GGML_SYCL_FP16: no
ggml_init_sycl: SYCL_USE_XMX: yes
found 4 SYCL devices:
Device 0: Intel(R) Arc(TM) A770 Graphics, compute capability 1.3,
max compute_units 512, max work group size 1024, max sub group size 32, global mem size 3819835392
Device 1: Intel(R) FPGA Emulation Device, compute capability 1.2,
max compute_units 12, max work group size 67108864, max sub group size 64, global mem size 4218748928
Device 2: AMD Ryzen 5 3600X 6-Core Processor , compute capability 3.0,
max compute_units 12, max work group size 8192, max sub group size 64, global mem size 4218748928
Device 3: Intel(R) Arc(TM) A770 Graphics, compute capability 3.0,
max compute_units 512, max work group size 1024, max sub group size 32, global mem size 3819835392
Using device 0 (Intel(R) Arc(TM) A770 Graphics) as main device
llama_model_loader: loaded meta data with 24 key-value pairs and 273 tensors from I:\deepseek-coder-33B-base-GGUF\deepseek-coder-7b-instruct-v1.5-Q8_0.gguf (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv 0: general.architecture str = llama
llama_model_loader: - kv 1: general.name str = LLaMA v2
llama_model_loader: - kv 2: llama.context_length u32 = 4096
llama_model_loader: - kv 3: llama.embedding_length u32 = 4096
llama_model_loader: - kv 4: llama.block_count u32 = 30
llama_model_loader: - kv 5: llama.feed_forward_length u32 = 11008
llama_model_loader: - kv 6: llama.rope.dimension_count u32 = 128
llama_model_loader: - kv 7: llama.attention.head_count u32 = 32
llama_model_loader: - kv 8: llama.attention.head_count_kv u32 = 32
llama_model_loader: - kv 9: llama.attention.layer_norm_rms_epsilon f32 = 0.000001
llama_model_loader: - kv 10: llama.rope.freq_base f32 = 10000.000000
llama_model_loader: - kv 11: general.file_type u32 = 7
llama_model_loader: - kv 12: tokenizer.ggml.model str = gpt2
llama_model_loader: - kv 13: tokenizer.ggml.tokens arr[str,102400] = ["!", """, "#", "$", "%", "&", "'", ...
llama_model_loader: - kv 14: tokenizer.ggml.scores arr[f32,102400] = [0.000000, 0.000000, 0.000000, 0.0000...
llama_model_loader: - kv 15: tokenizer.ggml.token_type arr[i32,102400] = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
llama_model_loader: - kv 16: tokenizer.ggml.merges arr[str,99757] = ["Ġ Ġ", "Ġ t", "Ġ a", "i n", "h e...
llama_model_loader: - kv 17: tokenizer.ggml.bos_token_id u32 = 100000
llama_model_loader: - kv 18: tokenizer.ggml.eos_token_id u32 = 100015
llama_model_loader: - kv 19: tokenizer.ggml.padding_token_id u32 = 100001
llama_model_loader: - kv 20: tokenizer.ggml.add_bos_token bool = true
llama_model_loader: - kv 21: tokenizer.ggml.add_eos_token bool = false
llama_model_loader: - kv 22: tokenizer.chat_template str = {% if not add_generation_prompt is de...
llama_model_loader: - kv 23: general.quantization_version u32 = 2
llama_model_loader: - type f32: 61 tensors
llama_model_loader: - type q8_0: 212 tensors
llm_load_vocab: mismatch in special tokens definition ( 2387/102400 vs 2400/102400 ).
llm_load_print_meta: format = GGUF V3 (latest)
llm_load_print_meta: arch = llama
llm_load_print_meta: vocab type = BPE
llm_load_print_meta: n_vocab = 102400
llm_load_print_meta: n_merges = 99757
llm_load_print_meta: n_ctx_train = 4096
llm_load_print_meta: n_embd = 4096
llm_load_print_meta: n_head = 32
llm_load_print_meta: n_head_kv = 32
llm_load_print_meta: n_layer = 30
llm_load_print_meta: n_rot = 128
llm_load_print_meta: n_embd_head_k = 128
llm_load_print_meta: n_embd_head_v = 128
llm_load_print_meta: n_gqa = 1
llm_load_print_meta: n_embd_k_gqa = 4096
llm_load_print_meta: n_embd_v_gqa = 4096
llm_load_print_meta: f_norm_eps = 0.0e+00
llm_load_print_meta: f_norm_rms_eps = 1.0e-06
llm_load_print_meta: f_clamp_kqv = 0.0e+00
llm_load_print_meta: f_max_alibi_bias = 0.0e+00
llm_load_print_meta: n_ff = 11008
llm_load_print_meta: n_expert = 0
llm_load_print_meta: n_expert_used = 0
llm_load_print_meta: rope scaling = linear
llm_load_print_meta: freq_base_train = 10000.0
llm_load_print_meta: freq_scale_train = 1
llm_load_print_meta: n_yarn_orig_ctx = 4096
llm_load_print_meta: rope_finetuned = unknown
llm_load_print_meta: model type = ?B
llm_load_print_meta: model ftype = Q8_0
llm_load_print_meta: model params = 6.91 B
llm_load_print_meta: model size = 6.84 GiB (8.50 BPW)
llm_load_print_meta: general.name = LLaMA v2
llm_load_print_meta: BOS token = 100000 '<|begin▁of▁sentence|>'
llm_load_print_meta: EOS token = 100015 '<|EOT|>'
llm_load_print_meta: PAD token = 100001 '<|end▁of▁sentence|>'
llm_load_print_meta: LF token = 126 'Ä'
llm_load_tensors: ggml ctx size = 0.21 MiB
llm_load_tensors: offloading 30 repeating layers to GPU
llm_load_tensors: offloading non-repeating layers to GPU
llm_load_tensors: offloaded 31/31 layers to GPU
llm_load_tensors: buffer size = 6577.84 MiB
llm_load_tensors: CPU buffer size = 425.00 MiB
.........................................................................................
llama_new_context_with_model: n_ctx = 512
llama_new_context_with_model: freq_base = 10000.0
llama_new_context_with_model: freq_scale = 1
llama_kv_cache_init: KV buffer size = 240.00 MiB
llama_new_context_with_model: KV self size = 240.00 MiB, K (f16): 120.00 MiB, V (f16): 120.00 MiB
llama_new_context_with_model: CPU input buffer size = 9.01 MiB
llama_new_context_with_model: compute buffer size = 228.80 MiB
llama_new_context_with_model: CPU compute buffer size = 8.80 MiB
llama_new_context_with_model: graph splits (measure): 3

system_info: n_threads = 6 / 12 | AVX = 1 | AVX_VNNI = 0 | AVX2 = 1 | AVX512 = 0 | AVX512_VBMI = 0 | AVX512_VNNI = 0 | FMA = 1 | NEON = 0 | ARM_FMA = 0 | F16C = 1 | FP16_VA = 0 | WASM_SIMD = 0 | BLAS = 1 | SSE3 = 1 | SSSE3 = 1 | VSX = 0 |
sampling:
repeat_last_n = 64, repeat_penalty = 1.100, frequency_penalty = 0.000, presence_penalty = 0.000
top_k = 40, tfs_z = 1.000, top_p = 0.950, min_p = 0.050, typical_p = 1.000, temp = 0.800
mirostat = 0, mirostat_lr = 0.100, mirostat_ent = 5.000
sampling order:
CFG -> Penalties -> top_k -> tfs_z -> typical_p -> top_p -> min_p -> temp
generate: n_ctx = 512, n_batch = 512, n_predict = 128, n_keep = 0

Once upon a time, there existed a little girl, who liked to have adventures. She wanted to go to places and meet new people, and have fun""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""
llama_print_timings: load time = 25822.93 ms
llama_print_timings: sample time = 33.06 ms / 128 runs ( 0.26 ms per token, 3871.40 tokens per second)
llama_print_timings: prompt eval time = 194.71 ms / 32 tokens ( 6.08 ms per token, 164.35 tokens per second)
llama_print_timings: eval time = 5405.04 ms / 127 runs ( 42.56 ms per token, 23.50 tokens per second)
llama_print_timings: total time = 5689.59 ms / 159 tokens
Log end

@NeoZhangJianyu (Collaborator, Author)

> Using the server yields the same result. I've tried various models with different parameter counts and quantization methods.

I tested with the same input and got correct output.
Please check:
1. oneAPI version: 2024.0.1
2. Model: llama-2-7b.Q4_0.gguf
3. Remove the "build" folder and build again.

/build/bin/main -m models/llama-2-7b.Q4_0.gguf -p "Once upon a time, there existed a little girl, who liked to have adventures. She wanted to go to places and meet new people, and have fun" -n 128 -e -ngl 33 --no-mmap
Log start
main: build = 2029 (d62520e)
main: built with Intel(R) oneAPI DPC++/C++ Compiler 2024.0.0 (2024.0.0.20231017) for x86_64-unknown-linux-gnu
main: seed  = 1706777444
GGML_SYCL_DEBUG=0
ggml_init_sycl: GGML_SYCL_FP16:   no
ggml_init_sycl: SYCL_USE_XMX: yes
found 6 SYCL devices:
  Device 0: Intel(R) Arc(TM) A770 Graphics,	compute capability 1.3,
	max compute_units 512,	max work group size 1024,	max sub group size 32,	global mem size 16225243136
  Device 1: Intel(R) FPGA Emulation Device,	compute capability 1.2,
	max compute_units 24,	max work group size 67108864,	max sub group size 64,	global mem size 67065057280
  Device 2: 13th Gen Intel(R) Core(TM) i7-13700K,	compute capability 3.0,
	max compute_units 24,	max work group size 8192,	max sub group size 64,	global mem size 67065057280
  Device 3: Intel(R) Arc(TM) A770 Graphics,	compute capability 3.0,
	max compute_units 512,	max work group size 1024,	max sub group size 32,	global mem size 16225243136
  Device 4: Intel(R) UHD Graphics 770,	compute capability 3.0,
	max compute_units 32,	max work group size 512,	max sub group size 32,	global mem size 53652045824
  Device 5: Intel(R) UHD Graphics 770,	compute capability 1.3,
	max compute_units 32,	max work group size 512,	max sub group size 32,	global mem size 53652045824
Using device 0 (Intel(R) Arc(TM) A770 Graphics) as main device
llama_model_loader: loaded meta data with 19 key-value pairs and 291 tensors from models/llama-2-7b.Q4_0.gguf (version GGUF V2)
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv   0:                       general.architecture str              = llama
llama_model_loader: - kv   1:                               general.name str              = LLaMA v2
llama_model_loader: - kv   2:                       llama.context_length u32              = 4096
llama_model_loader: - kv   3:                     llama.embedding_length u32              = 4096
llama_model_loader: - kv   4:                          llama.block_count u32              = 32
llama_model_loader: - kv   5:                  llama.feed_forward_length u32              = 11008
llama_model_loader: - kv   6:                 llama.rope.dimension_count u32              = 128
llama_model_loader: - kv   7:                 llama.attention.head_count u32              = 32
llama_model_loader: - kv   8:              llama.attention.head_count_kv u32              = 32
llama_model_loader: - kv   9:     llama.attention.layer_norm_rms_epsilon f32              = 0.000010
llama_model_loader: - kv  10:                          general.file_type u32              = 2
llama_model_loader: - kv  11:                       tokenizer.ggml.model str              = llama
llama_model_loader: - kv  12:                      tokenizer.ggml.tokens arr[str,32000]   = ["<unk>", "<s>", "</s>", "<0x00>", "<...
llama_model_loader: - kv  13:                      tokenizer.ggml.scores arr[f32,32000]   = [0.000000, 0.000000, 0.000000, 0.0000...
llama_model_loader: - kv  14:                  tokenizer.ggml.token_type arr[i32,32000]   = [2, 3, 3, 6, 6, 6, 6, 6, 6, 6, 6, 6, ...
llama_model_loader: - kv  15:                tokenizer.ggml.bos_token_id u32              = 1
llama_model_loader: - kv  16:                tokenizer.ggml.eos_token_id u32              = 2
llama_model_loader: - kv  17:            tokenizer.ggml.unknown_token_id u32              = 0
llama_model_loader: - kv  18:               general.quantization_version u32              = 2
llama_model_loader: - type  f32:   65 tensors
llama_model_loader: - type q4_0:  225 tensors
llama_model_loader: - type q6_K:    1 tensors
llm_load_vocab: special tokens definition check successful ( 259/32000 ).
llm_load_print_meta: format           = GGUF V2
llm_load_print_meta: arch             = llama
llm_load_print_meta: vocab type       = SPM
llm_load_print_meta: n_vocab          = 32000
llm_load_print_meta: n_merges         = 0
llm_load_print_meta: n_ctx_train      = 4096
llm_load_print_meta: n_embd           = 4096
llm_load_print_meta: n_head           = 32
llm_load_print_meta: n_head_kv        = 32
llm_load_print_meta: n_layer          = 32
llm_load_print_meta: n_rot            = 128
llm_load_print_meta: n_embd_head_k    = 128
llm_load_print_meta: n_embd_head_v    = 128
llm_load_print_meta: n_gqa            = 1
llm_load_print_meta: n_embd_k_gqa     = 4096
llm_load_print_meta: n_embd_v_gqa     = 4096
llm_load_print_meta: f_norm_eps       = 0.0e+00
llm_load_print_meta: f_norm_rms_eps   = 1.0e-05
llm_load_print_meta: f_clamp_kqv      = 0.0e+00
llm_load_print_meta: f_max_alibi_bias = 0.0e+00
llm_load_print_meta: n_ff             = 11008
llm_load_print_meta: n_expert         = 0
llm_load_print_meta: n_expert_used    = 0
llm_load_print_meta: rope scaling     = linear
llm_load_print_meta: freq_base_train  = 10000.0
llm_load_print_meta: freq_scale_train = 1
llm_load_print_meta: n_yarn_orig_ctx  = 4096
llm_load_print_meta: rope_finetuned   = unknown
llm_load_print_meta: model type       = 7B
llm_load_print_meta: model ftype      = Q4_0
llm_load_print_meta: model params     = 6.74 B
llm_load_print_meta: model size       = 3.56 GiB (4.54 BPW) 
llm_load_print_meta: general.name     = LLaMA v2
llm_load_print_meta: BOS token        = 1 '<s>'
llm_load_print_meta: EOS token        = 2 '</s>'
llm_load_print_meta: UNK token        = 0 '<unk>'
llm_load_print_meta: LF token         = 13 '<0x0A>'
llm_load_tensors: ggml ctx size =    0.22 MiB
llm_load_tensors: offloading 32 repeating layers to GPU
llm_load_tensors: offloading non-repeating layers to GPU
llm_load_tensors: offloaded 33/33 layers to GPU
llm_load_tensors:            buffer size =  3577.56 MiB
llm_load_tensors:        CPU buffer size =    70.31 MiB
.................................................................................................
llama_new_context_with_model: n_ctx      = 512
llama_new_context_with_model: freq_base  = 10000.0
llama_new_context_with_model: freq_scale = 1
llama_kv_cache_init:            KV buffer size =   256.00 MiB
llama_new_context_with_model: KV self size  =  256.00 MiB, K (f16):  128.00 MiB, V (f16):  128.00 MiB
llama_new_context_with_model:        CPU input buffer size   =     9.01 MiB
llama_new_context_with_model:            compute buffer size =    77.55 MiB
llama_new_context_with_model:        CPU compute buffer size =     8.80 MiB
llama_new_context_with_model: graph splits (measure): 3

system_info: n_threads = 12 / 24 | AVX = 1 | AVX_VNNI = 1 | AVX2 = 1 | AVX512 = 0 | AVX512_VBMI = 0 | AVX512_VNNI = 0 | FMA = 1 | NEON = 0 | ARM_FMA = 0 | F16C = 1 | FP16_VA = 0 | WASM_SIMD = 0 | BLAS = 1 | SSE3 = 1 | SSSE3 = 1 | VSX = 0 | 
sampling: 
	repeat_last_n = 64, repeat_penalty = 1.100, frequency_penalty = 0.000, presence_penalty = 0.000
	top_k = 40, tfs_z = 1.000, top_p = 0.950, min_p = 0.050, typical_p = 1.000, temp = 0.800
	mirostat = 0, mirostat_lr = 0.100, mirostat_ent = 5.000
sampling order: 
CFG -> Penalties -> top_k -> tfs_z -> typical_p -> top_p -> min_p -> temp 
generate: n_ctx = 512, n_batch = 512, n_predict = 128, n_keep = 0


 Once upon a time, there existed a little girl, who liked to have adventures. She wanted to go to places and meet new people, and have fun with animals and learn about the world around her. One day, she decided she was going to be an author. She would write books that were filled with magic and mystery and wonder. And maybe just a little bit of danger.
The girl had a younger brother who wanted to do everything she did but never seemed able to stay up late enough to go on adventures until the wee hours of the morning, so he'd often accompany her by reading books about their adventures instead.
She didn't know that one day, this little boy would grow into a man with his own book and that she'd meet
llama_print_timings:        load time =    5496.59 ms

@Jacoby1218

I was able to trigger this reliably: #5250

@sorasoras

> In case anyone is experiencing the CMake error `No rule to make target 'CMakeFiles/ggml.dir/ggml.c.obj', needed by 'llama.lib'` like me, the solution is to disable ccache by specifying `-DLLAMA_CCACHE=OFF` to the cmake command.
>
> @sorasoras Ahh, that's exactly what you hit in compilelog.txt. Give it a try with ccache disabled.

That works for me lol.

@characharm

Confirmed: #5250

jordankanter pushed a commit to jordankanter/llama.cpp that referenced this pull request Feb 3, 2024
* support SYCL backend windows build

* add windows build in CI

* add for win build CI

* correct install oneMKL

* fix install issue

* fix ci

* fix install cmd

* fix install cmd

* fix install cmd

* fix install cmd

* fix install cmd

* fix win build

* fix win build

* fix win build

* restore other CI part

* restore as base

* rm no new line

* fix no new line issue, add -j

* fix grammer issue

* allow to trigger manually, fix format issue

* fix format

* add newline

* fix format

* fix format

* fix format issuse

---------

Co-authored-by: Abhilash Majumder <30946547+abhilash1910@users.noreply.github.com>
hodlen pushed a commit to hodlen/llama.cpp that referenced this pull request Apr 1, 2024