Add time to first token for llama runner #2141

Closed
wants to merge 1 commit

Conversation

@vmpuri commented Feb 26, 2024

Summary: Add time to first generated token.

Differential Revision: D54223564


pytorch-bot bot commented Feb 26, 2024

🔗 Helpful Links

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/executorch/2141

Note: Links to docs will display an error until the docs builds have been completed.

✅ No Failures

As of commit 6c28e7f with merge base 3e414fb:
💚 Looks good so far! There are no failures yet. 💚

This comment was automatically generated by Dr. CI and updates every 15 minutes.

@facebook-github-bot added the CLA Signed label (this label is managed by the Facebook bot; authors need to sign the CLA before a PR can be reviewed) on Feb 26, 2024
@facebook-github-bot
Contributor

This pull request was exported from Phabricator. Differential Revision: D54223564

vmpuri pushed a commit to vmpuri/executorch-1 that referenced this pull request Mar 6, 2024
Summary:

Add time to first generated token & other features.

Since we're measuring the first-token time, the token rate is measured both for the first token and for the remaining generated tokens, using the timers below (a rough sketch of where these timestamps could sit follows this commit message):

* Model load time - a timer around ET_CHECK_OK_OR_RETURN_ERROR(load());
* Total inference time - from immediately after model load until the end of the inference loop
  * First token time - from immediately after model load until the first generated (not prompt) token is printed
    * Prompt eval (comparable to llama.cpp prompt_eval_time) - prompt array allocation and tokenization; ends right before the inference loop starts
  * Remaining tokens - from immediately after the first token is emitted until the end of the inference loop
  * Net eval time (comparable to llama.cpp eval_time) - total time spent generating tokens

To implement:
* Sample time - time spent sampling per token (present in llama.cpp)

Differential Revision: D54223564
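As a minimal, self-contained illustration (not the actual ExecuTorch runner code), the sketch below shows how timestamps like the ones described above could be placed around a load call and a generation loop using std::chrono; load(), tokenize(), and step() are hypothetical placeholders standing in for the real runner hooks.

```cpp
// Hypothetical sketch, not the actual ExecuTorch llama runner: where the
// timers described in the commit message could be placed.
#include <chrono>
#include <cstdio>

using Clock = std::chrono::steady_clock;

static long long ms_between(Clock::time_point a, Clock::time_point b) {
  return std::chrono::duration_cast<std::chrono::milliseconds>(b - a).count();
}

int main() {
  const auto start = Clock::now();

  // Model load time: in the real runner this would wrap
  // ET_CHECK_OK_OR_RETURN_ERROR(load());.
  /* load(); */
  const auto after_load = Clock::now();

  // Prompt eval: prompt array allocation + tokenization, ending right
  // before the inference loop starts.
  /* tokenize(prompt); */
  const auto after_prompt_eval = Clock::now();

  Clock::time_point first_token_time = after_load;
  bool first_token_seen = false;
  const int max_new_tokens = 16;  // stand-in for the real generation budget

  for (int i = 0; i < max_new_tokens; ++i) {
    /* print(step()); */  // generate and print one token
    if (!first_token_seen) {
      first_token_time = Clock::now();  // first *generated* (non-prompt) token
      first_token_seen = true;
    }
  }
  const auto end = Clock::now();

  std::printf("model load         : %lld ms\n", ms_between(start, after_load));
  std::printf("total inference    : %lld ms\n", ms_between(after_load, end));
  std::printf("time to first token: %lld ms\n", ms_between(after_load, first_token_time));
  std::printf("prompt eval        : %lld ms\n", ms_between(after_load, after_prompt_eval));
  std::printf("remaining tokens   : %lld ms\n", ms_between(first_token_time, end));
  return 0;
}
```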
@vmpuri force-pushed the export-D54223564 branch 2 times, most recently from 5f0067c to d0e6269 on March 13, 2024 at 17:15
vmpuri pushed a commit to vmpuri/executorch-1 that referenced this pull request Mar 14, 2024
Summary:

Add time to first generated token & other features.

Since we're measuring the first-token time, the token rate is measured both for the first token and for the remaining generated tokens, using the timers below:

* Model load time - a timer around ET_CHECK_OK_OR_RETURN_ERROR(load());
* Total inference time - from immediately after model load until the end of the inference loop
  * First token time - from immediately after model load until the first generated (not prompt) token is printed
    * Prompt eval (comparable to llama.cpp prompt_eval_time) - prompt array allocation and tokenization; ends right before the inference loop starts
  * Remaining tokens - from immediately after the first token is emitted until the end of the inference loop
  * Net eval time (comparable to llama.cpp eval_time) - total time spent generating tokens
* Sample time - time spent sampling per token (present in llama.cpp); a sketch of this counter follows this commit message

bypass-github-executorch-ci-checks
bypass-github-pytorch-ci-checks

Reviewed By: digantdesai, Jack-Khuu

Differential Revision: D54223564
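For the sample-time metric, a common pattern (shown here only as a hedged illustration, not the PR's actual implementation) is to accumulate the time spent in the sampling call across generated tokens and report totals and a per-token average; sample(logits) below is a hypothetical placeholder for the real sampler.

```cpp
// Hypothetical sketch of a "sample time" counter: sum the time spent in the
// sampling step across generated tokens and report total and per-token time.
#include <chrono>
#include <cstdio>

using Clock = std::chrono::steady_clock;

int main() {
  Clock::duration sample_time = Clock::duration::zero();
  int sampled_tokens = 0;
  const int max_new_tokens = 16;  // stand-in for the real generation budget

  for (int i = 0; i < max_new_tokens; ++i) {
    // The forward pass for this position would run here (not counted as sampling).
    const auto t0 = Clock::now();
    /* int tok = sample(logits); */  // placeholder for the sampling call
    sample_time += Clock::now() - t0;
    ++sampled_tokens;
  }

  const double total_ms =
      std::chrono::duration<double, std::milli>(sample_time).count();
  std::printf("sample time: %.3f ms total, %.3f ms/token over %d tokens\n",
              total_ms, total_ms / sampled_tokens, sampled_tokens);
  return 0;
}
```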
@facebook-github-bot
Contributor

This pull request has been merged in caee336.

Labels
CLA Signed, fb-exported, Merged