Fix typo in Llama docstrings #24020
Conversation
Thanks for fixing!
Have you run the doc examples locally to confirm they still work and the generated responses are still the same (with the typo fixed)?
I don't think we can pass in the inputs directly like this, as `generate` expects a tensor as its first argument and `inputs` is a dictionary.
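For context, a minimal sketch of the point being made here: `tokenizer(..., return_tensors="pt")` returns a dict-like `BatchEncoding`, so it has to be unpacked (or its `input_ids` tensor passed explicitly) when calling `generate`. The checkpoint below is illustrative, not necessarily one of those in the affected docstrings:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Illustrative checkpoint; the docstrings under discussion use model-specific ones.
checkpoint = "facebook/opt-350m"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForCausalLM.from_pretrained(checkpoint)

prompt = "Hey, are you conscious? Can you talk to me?"
inputs = tokenizer(prompt, return_tensors="pt")  # BatchEncoding (dict-like), not a tensor

# model.generate(inputs) would be wrong: generate expects a tensor as its first argument.
generate_ids = model.generate(inputs.input_ids, max_length=30)
# equivalently: generate_ids = model.generate(**inputs, max_length=30)

print(tokenizer.batch_decode(generate_ids, skip_special_tokens=True)[0])
```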
@amyeroberts yes, I ran it locally, got the error, and fixed it. Here are the types in my code:
@Kh4L Out of interest - could you share the checkpoint being used? Could you also run this snippet with the checkpoint and share the output?

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained(MODEL_CHECKPOINT)

prompt = "Hey, are you conscious? Can you talk to me?"
inputs = tokenizer(prompt, return_tensors="pt")
print(inputs)
print(type(inputs))
print(type(tokenizer))
```

The current changes need to be checked with a standard checkpoint for all the models affected here. For instance, if I run the snippet with the OPT checkpoint in the example, I get the following output:
@amyeroberts The checkpoint is the 7B Llama converted to HF - I get the same output!
Signed-off-by: Serge Panev <spanev@nvidia.com>
When changing doc examples or tests, it's important to run them to make sure they still pass and the outputs are as expected.
Llama and Open Llama don't have default checkpoints, so they can't be checked, but the output for the OPT model does change with the spelling fix, so it needs to be updated.
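Presumably the check looks something like the following: re-run the doc example deterministically and paste the new generation back into the docstring's expected output. This is a sketch using the OPT checkpoint from the example; the exact generated text depends on the library version:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained("facebook/opt-350m")
tokenizer = AutoTokenizer.from_pretrained("facebook/opt-350m")

# The corrected prompt; with greedy decoding the generated text changes too,
# so the docstring's expected output must be regenerated, not just the prompt edited.
prompt = "Hey, are you conscious? Can you talk to me?"
inputs = tokenizer(prompt, return_tensors="pt")

generate_ids = model.generate(inputs.input_ids, max_length=30)
print(tokenizer.batch_decode(generate_ids, skip_special_tokens=True,
                             clean_up_tokenization_spaces=False)[0])
```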
Thanks for fixing these!
The style checks are currently failing - you'll need to run `make style` and push any changes made by the linter. Once the CI is green we can merge :)
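(For reference, `make style` is run from the repo root and applies the repository's automated formatters; assuming the quality extras are installed, e.g. via `pip install -e ".[quality]"`, the reformatted files just need to be committed and pushed to re-trigger CI.)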
Signed-off-by: Serge Panev <spanev@nvidia.com>
Thanks for the detailed review!
The documentation is not available anymore as the PR was closed or merged.
@Kh4L GitHub PRs were down for part of yesterday - I think it was just that. I can see the commit now and all tests are passing :)
* Fix typo in Llama docstrings

Signed-off-by: Serge Panev <spanev@nvidia.com>

* Update

Signed-off-by: Serge Panev <spanev@nvidia.com>

* make style

Signed-off-by: Serge Panev <spanev@nvidia.com>

---------

Signed-off-by: Serge Panev <spanev@nvidia.com>
What does this PR do?
Fix typos in docs.
Before submitting