
[Usage]: doubt on computational complexity #4620

Closed
Juelianqvq opened this issue May 6, 2024 · 2 comments
Labels: usage (How to use vllm)

Comments

Juelianqvq (Contributor) commented May 6, 2024

# TODO: The current hashing function is O(L^2). We should optimize this in the future.
return hash((tuple(self.data.get_token_ids()[0:num_tokens]), self.lora_int_id))

May I ask why this is O(L^2)?

Juelianqvq added the usage label on May 6, 2024
simon-mo (Collaborator) commented May 6, 2024

For a string prompt, we do not hash it incrementally.

For example (for illustrative purposes), the string ABCD is hashed as hash("A"), hash("AB"), hash("ABC"), hash("ABCD"), producing 4 hash values that are used for prefix detection.
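
A minimal sketch of that naive scheme in Python (illustrative only, not vLLM's actual code):

```python
# Hash every prefix of a token sequence from scratch.
tokens = ["A", "B", "C", "D"]
prefix_hashes = [hash(tuple(tokens[:i])) for i in range(1, len(tokens) + 1)]
# hash(tuple(tokens[:i])) walks over i tokens, so the total work is
# 1 + 2 + ... + L = O(L^2).
```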

This is O(L^2) because each hash is O(L) and there are L prefixes. However, the hash can be computed incrementally so that each hash is O(1), namely: hash("AB") = incremental_hash("B", hash("A"))
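
As a sketch of the incremental approach, assuming a simple polynomial rolling hash (the name incremental_hash comes from the line above; BASE, MOD, and the scheme itself are assumptions for illustration, not vLLM's implementation):

```python
BASE = 1_000_003
MOD = (1 << 61) - 1  # a large Mersenne prime modulus, chosen arbitrarily here

def incremental_hash(token: str, prev_hash: int) -> int:
    # Extend the hash of a prefix by one token in O(1).
    return (prev_hash * BASE + hash(token)) % MOD

h = 0
prefix_hashes = []
for t in "ABCD":
    h = incremental_hash(t, h)  # O(1) per prefix extension
    prefix_hashes.append(h)     # hashes of "A", "AB", "ABC", "ABCD"
# Total work over all L prefixes is O(L) instead of O(L^2).
```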

cadedaniel (Collaborator) commented

btw if you're interested in fixing this, see #4536
