
[Bugfix] Fix speculative decode seeded test #6733

Closed

Conversation

@tdoublep (Member) commented Jul 24, 2024

I was looking into some issues that @njhill was facing with #6449 and realized that the test that verifies per-request seeding for spec decode is not functioning correctly. This is because the baseline_llm_generator and the test_llm_generator currently set the same global seed.

To ensure that the test covers the desired functionality, the baseline and test runs need to be seeded differently, which is what this PR implements.
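
For illustration, here is a minimal sketch of the intended test setup, assuming pytest-style fixtures (the fixture names and values are hypothetical, not the actual vLLM test code). The point is that the two runs receive different global seeds, so matching outputs can only come from the per-request seed:

    # Hypothetical sketch of differently-seeded baseline/test runs.
    import pytest

    @pytest.fixture
    def baseline_llm_kwargs():
        # Global seed for the baseline run.
        return {"seed": 1}

    @pytest.fixture
    def test_llm_kwargs():
        # Deliberately different global seed for the test run.
        return {"seed": 100}

    def test_seeds_differ(baseline_llm_kwargs, test_llm_kwargs):
        # If the two runs shared a global seed, the per-request seeding
        # path would never actually be exercised by the comparison.
        assert baseline_llm_kwargs["seed"] != test_llm_kwargs["seed"]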

I have verified that applying either of the following two changes, each of which disables the per-request generator plumbing, now causes the test to fail:

diff --git a/vllm/spec_decode/spec_decode_worker.py b/vllm/spec_decode/spec_decode_worker.py
index 8cf0aa5b..17e54a31 100644
--- a/vllm/spec_decode/spec_decode_worker.py
+++ b/vllm/spec_decode/spec_decode_worker.py
@@ -598,11 +598,14 @@ class SpecDecodeWorker(LoraNotSupportedWorkerBase):
             # Get sequence group state
             generators = []
             for seq_group_metadata in seq_group_metadata_list:
+                '''
                 if (seq_group_metadata.state is not None
                         and seq_group_metadata.state.generator is not None):
                     generators.append(seq_group_metadata.state.generator)
                 else:
                     generators.append(None)
+                '''
+                generators.append(None)
 
             sampler_extra_kwargs["generators"] = generators

or

diff --git a/vllm/spec_decode/batch_expansion.py b/vllm/spec_decode/batch_expansion.py
index 41f0aebf..df160134 100644
--- a/vllm/spec_decode/batch_expansion.py
+++ b/vllm/spec_decode/batch_expansion.py
@@ -293,6 +293,7 @@ class BatchExpansionTop1Scorer(SpeculativeScorer):
         for data in new_seq_data_dict.values():
             data.update_num_computed_tokens(data.get_len() - 1)
 
+        '''
         if (seq_group_metadata.state is not None
                 and seq_group_metadata.state.generator is not None):
             generator = torch.Generator(
@@ -301,6 +302,8 @@ class BatchExpansionTop1Scorer(SpeculativeScorer):
             state = SequenceGroupState(generator=generator)
         else:
             state = None
+        '''
+        state = None

… global seeds.
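
To make the property under test concrete, here is a rough standalone sketch of per-request seeding, modeled on the torch.Generator construction in the batch_expansion.py diff above (make_request_generator is a hypothetical helper, not a vLLM API). With the same per-request seed, two runs sample identical tokens even under different global seeds:

    import torch

    def make_request_generator(seed: int, device: str = "cpu") -> torch.Generator:
        # Each request gets its own torch.Generator, so its sampling is
        # reproducible regardless of the process-wide global seed.
        generator = torch.Generator(device=device)
        generator.manual_seed(seed)
        return generator

    # Different global seeds, same per-request seed: samples should match.
    torch.manual_seed(1)
    sample_a = torch.multinomial(torch.ones(10), 3,
                                 generator=make_request_generator(seed=42))

    torch.manual_seed(100)
    sample_b = torch.multinomial(torch.ones(10), 3,
                                 generator=make_request_generator(seed=42))

    assert torch.equal(sample_a, sample_b)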

Signed-off-by: Thomas Parnell <tpa@zurich.ibm.com>

👋 Hi! Thank you for contributing to the vLLM project.
Just a reminder: PRs do not trigger a full CI run by default. Instead, only the fastcheck CI runs, which consists of a small and essential subset of CI tests to quickly catch errors. You can run other CI tests on top of the default ones by unblocking the steps in your fastcheck build in the Buildkite UI.

Once the PR is approved and ready to go, please make sure to run the full CI, as it is required to merge (or just use auto-merge).

To run full CI, you can do one of these:

  • Comment /ready on the PR
  • Add the ready label to the PR
  • Enable auto-merge

🚀

@tdoublep (Member, Author) commented Jul 24, 2024

Closing, as this will be addressed by #6743.
