CONTRIBUTING.md (+6 -6)
@@ -1,6 +1,6 @@
-# Contributing to CacheFlow
+# Contributing to vLLM
 
-Thank you for your interest in contributing to CacheFlow!
+Thank you for your interest in contributing to vLLM!
 Our community is open to everyone and welcomes all kinds of contributions, no matter how small or large.
 There are several ways you can contribute to the project:
 
@@ -11,9 +11,9 @@ There are several ways you can contribute to the project:
 However, remember that contributions aren't just about code.
 We believe in the power of community support; thus, answering queries, assisting others, and enhancing the documentation are highly regarded and beneficial contributions.
 
-Finally, one of the most impactful ways to support us is by raising awareness about CacheFlow.
+Finally, one of the most impactful ways to support us is by raising awareness about vLLM.
 Talk about it in your blog posts, highlighting how it's driving your incredible projects.
-Express your support on Twitter if CacheFlow aids you, or simply offer your appreciation by starring our repository.
+Express your support on Twitter if vLLM aids you, or simply offer your appreciation by starring our repository.
 
 
 ## Setup for development
@@ -70,5 +70,5 @@ If a comment isn't clear or you disagree with a suggestion, feel free to ask for
 
 ### Thank You
 
-Finally, thank you for taking the time to read these guidelines and for your interest in contributing to CacheFlow.
-Your contributions make CacheFlow a great tool for everyone!
+Finally, thank you for taking the time to read these guidelines and for your interest in contributing to vLLM.
+Your contributions make vLLM a great tool for everyone!
csrc/attention/attention_generic.cuh (+3 -3)
@@ -1,6 +1,6 @@
 /*
  * Adapted from https://github.com/NVIDIA/FasterTransformer/blob/release/v5.3_tag/src/fastertransformer/kernels/decoder_masked_multihead_attention_utils.h
- * Copyright (c) 2023, The CacheFlow team.
+ * Copyright (c) 2023, The vLLM team.
  * Copyright (c) 2020-2023, NVIDIA CORPORATION. All rights reserved.
  *
  * Licensed under the Apache License, Version 2.0 (the "License");
csrc/attention/attention_kernels.cu (+4 -4)
@@ -1,6 +1,6 @@
 /*
  * Adapted from https://github.com/NVIDIA/FasterTransformer/blob/release/v5.3_tag/src/fastertransformer/kernels/decoder_masked_multihead_attention/decoder_masked_multihead_attention_template.hpp
- * Copyright (c) 2023, The CacheFlow team.
+ * Copyright (c) 2023, The vLLM team.
  * Copyright (c) 2020-2023, NVIDIA CORPORATION. All rights reserved.
  *
  * Licensed under the Apache License, Version 2.0 (the "License");
csrc/attention/attention_utils.cuh (+3 -3)
@@ -1,6 +1,6 @@
 /*
  * Adapted from https://github.com/NVIDIA/FasterTransformer/blob/release/v5.3_tag/src/fastertransformer/kernels/decoder_masked_multihead_attention/decoder_masked_multihead_attention_template.hpp
- * Copyright (c) 2023, The CacheFlow team.
+ * Copyright (c) 2023, The vLLM team.
  * Copyright (c) 2020-2023, NVIDIA CORPORATION. All rights reserved.
  *
  * Licensed under the Apache License, Version 2.0 (the "License");
@@ -22,7 +22,7 @@
 #include <float.h>
 #include <type_traits>
 
-namespace cacheflow {
+namespace vllm {
 
 // Q*K^T operation.
 template<int THREAD_GROUP_SIZE, typename Vec, int N>
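The second hunk above is only a namespace rename, but its context lines (the `// Q*K^T operation.` comment and the template parameterized by THREAD_GROUP_SIZE, Vec, and N) point at a dot-product helper computed cooperatively by a small group of threads. Below is a minimal, self-contained sketch of that pattern for plain floats, assuming a warp-shuffle butterfly reduction; the name `thread_group_qk_dot` and its signature are illustrative assumptions, not vLLM's actual API.

// Illustrative sketch only -- not the vLLM kernel code. Assumes each thread in a
// group of THREAD_GROUP_SIZE threads holds N float elements of the query and key.
#include <cuda_runtime.h>

template <int THREAD_GROUP_SIZE, int N>
__device__ float thread_group_qk_dot(const float (&q)[N], const float (&k)[N]) {
  static_assert((THREAD_GROUP_SIZE & (THREAD_GROUP_SIZE - 1)) == 0,
                "THREAD_GROUP_SIZE must be a power of two");
  // Per-thread partial dot product over the elements this thread owns.
  float qk = 0.0f;
#pragma unroll
  for (int i = 0; i < N; ++i) {
    qk += q[i] * k[i];
  }
  // Butterfly reduction: after log2(THREAD_GROUP_SIZE) shuffle steps, every
  // thread in the group holds the full Q*K^T partial sum for the group.
#pragma unroll
  for (int mask = THREAD_GROUP_SIZE / 2; mask >= 1; mask /= 2) {
    qk += __shfl_xor_sync(0xffffffffu, qk, mask);
  }
  return qk;
}

Because the butterfly reduction leaves the same scalar in every lane of the group, any thread can go on to apply the attention scaling without an extra broadcast step.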
csrc/attention/dtype_bfloat16.cuh (+3 -3)
@@ -1,7 +1,7 @@
 /*
  * Adapted from https://github.com/NVIDIA/FasterTransformer/blob/release/v5.3_tag/src/fastertransformer/kernels/decoder_masked_multihead_attention/decoder_masked_multihead_attention_template.hpp
  * and https://github.com/NVIDIA/FasterTransformer/blob/release/v5.3_tag/src/fastertransformer/kernels/decoder_masked_multihead_attention_utils.h
- * Copyright (c) 2023, The CacheFlow team.
+ * Copyright (c) 2023, The vLLM team.
  * Copyright (c) 2020-2023, NVIDIA CORPORATION. All rights reserved.
  *
  * Licensed under the Apache License, Version 2.0 (the "License");
csrc/attention/dtype_float16.cuh (+3 -3)
@@ -1,7 +1,7 @@
 /*
  * Adapted from https://github.com/NVIDIA/FasterTransformer/blob/release/v5.3_tag/src/fastertransformer/kernels/decoder_masked_multihead_attention/decoder_masked_multihead_attention_template.hpp
  * and https://github.com/NVIDIA/FasterTransformer/blob/release/v5.3_tag/src/fastertransformer/kernels/decoder_masked_multihead_attention_utils.h
- * Copyright (c) 2023, The CacheFlow team.
+ * Copyright (c) 2023, The vLLM team.
  * Copyright (c) 2020-2023, NVIDIA CORPORATION. All rights reserved.
  *
  * Licensed under the Apache License, Version 2.0 (the "License");
csrc/attention/dtype_float32.cuh (+3 -3)
@@ -1,7 +1,7 @@
 /*
  * Adapted from https://github.com/NVIDIA/FasterTransformer/blob/release/v5.3_tag/src/fastertransformer/kernels/decoder_masked_multihead_attention/decoder_masked_multihead_attention_template.hpp
  * and https://github.com/NVIDIA/FasterTransformer/blob/release/v5.3_tag/src/fastertransformer/kernels/decoder_masked_multihead_attention_utils.h
- * Copyright (c) 2023, The CacheFlow team.
+ * Copyright (c) 2023, The vLLM team.
  * Copyright (c) 2020-2023, NVIDIA CORPORATION. All rights reserved.
  *
  * Licensed under the Apache License, Version 2.0 (the "License");