Use shared benchmark CI #105
Changes from all commits: 87f6d96, ccce6c5, 9fd02a1, 4ea418c, c2e0e5d, 381b328, 2b05d52, fa0919f, 7846524
Changed file: .github/workflows/benchmark.yml
@@ -1,6 +1,8 @@

```yaml
name: benchmark
on:
  workflow_dispatch:
  push:
    branches: [main]
  pull_request_review:
    types: [submitted]
  pull_request:
```

@@ -12,172 +14,7 @@ on:

This hunk removes the old self-contained benchmark job:

```yaml
      - .github/workflows/benchmark.yml

jobs:
  benchmark-vs-thresholds:
    # Run the job only if it's a manual workflow dispatch, if this event is a pull-request approval event, or if someone has rerun the job.
    if: github.event_name == 'workflow_dispatch' || github.event.review.state == 'approved' || github.run_attempt > 1

    # https://runs-on.com/features/custom-runners/
    runs-on:
      labels:
        - runs-on
        - runner=2cpu-4ram
        - run-id=${{ github.run_id }}

    container: swift:noble

    defaults:
      run:
        shell: bash

    env:
      PR_COMMENT: null # will be populated later

    steps:
      - name: Configure RunsOn
        uses: runs-on/action@v1

      - name: Check out code
        uses: actions/checkout@v4
        with:
          fetch-depth: 0

      - name: Configure git
        run: git config --global --add safe.directory "${GITHUB_WORKSPACE}"

      # jemalloc is a dependency of the Benchmarking package.
      # actions/cache will detect zstd and become much faster.
      - name: Install jemalloc, curl, jq and zstd
        run: |
          set -eu

          apt-get update -y
          apt-get install -y libjemalloc-dev curl jq zstd

      - name: Restore .build
        id: restore-cache
        uses: actions/cache/restore@v4
        with:
          path: Benchmarks/.build
          key: "swiftpm-benchmark-build-${{ runner.os }}-${{ github.event.pull_request.base.sha || github.event.after }}"
          restore-keys: "swiftpm-benchmark-build-${{ runner.os }}-"

      - name: Run benchmarks for branch '${{ github.head_ref || github.ref_name }}'
        run: |
          swift package -c release --disable-sandbox \
            --package-path Benchmarks \
            benchmark baseline update \
            '${{ github.head_ref || github.ref_name }}'

      - name: Read benchmark result
        id: read-benchmark
        run: |
          set -eu

          swift package -c release --disable-sandbox \
            --package-path Benchmarks \
            benchmark baseline read \
            '${{ github.head_ref || github.ref_name }}' \
            --no-progress \
            --format markdown \
            >> result.text

          # Read the result into the output of the step
          echo 'result<<EOF' >> "${GITHUB_OUTPUT}"
          cat result.text >> "${GITHUB_OUTPUT}"
          echo 'EOF' >> "${GITHUB_OUTPUT}"

      - name: Compare branch '${{ github.head_ref || github.ref_name }}' against thresholds
        id: compare-benchmark
        run: |
          set -eu

          TIMESTAMP=$(date -u +"%Y-%m-%d %H:%M:%S UTC")
          ENCODED_TIMESTAMP=$(date -u +"%Y-%m-%dT%H%%3A%M%%3A%SZ")
          TIMESTAMP_LINK="https://www.timeanddate.com/worldclock/fixedtime.html?iso=$ENCODED_TIMESTAMP"
          echo "## Benchmark check running at [$TIMESTAMP]($TIMESTAMP_LINK)" >> summary.text

          # Disable 'set -e' to prevent the script from exiting on non-zero exit codes
          set +e
          swift package -c release --disable-sandbox \
            --package-path Benchmarks \
            benchmark thresholds check \
            '${{ github.head_ref || github.ref_name }}' \
            --path "$PWD/Benchmarks/Thresholds/" \
            --no-progress \
            --format markdown \
            >> summary.text
          echo "exit-status=$?" >> "${GITHUB_OUTPUT}"
          set -e

          echo 'summary<<EOF' >> "${GITHUB_OUTPUT}"
          cat summary.text >> "${GITHUB_OUTPUT}"
          echo 'EOF' >> "${GITHUB_OUTPUT}"

      - name: Cache .build
        if: steps.restore-cache.outputs.cache-hit != 'true'
        uses: actions/cache/save@v4
        with:
          path: Benchmarks/.build
          key: "swiftpm-benchmark-build-${{ runner.os }}-${{ github.event.pull_request.base.sha || github.event.after }}"

      - name: Construct comment
        run: |
          set -eu

          EXIT_CODE='${{ steps.compare-benchmark.outputs.exit-status }}'

          echo 'PR_COMMENT<<EOF' >> "${GITHUB_ENV}"

          echo '## [Benchmark](https://github.com/${{ github.repository }}/actions/runs/${{ github.run_id }}) Report' >> "${GITHUB_ENV}"

          case "${EXIT_CODE}" in
            0)
              echo '**✅ Pull request has no significant performance differences ✅**' >> "${GITHUB_ENV}"
              ;;
            1)
              echo '**❌ Pull request has significant performance differences 📊**' >> "${GITHUB_ENV}"
              ;;
            2)
              echo '**❌ Pull request has significant performance regressions 📉**' >> "${GITHUB_ENV}"
              ;;
            4)
              echo '**❌ Pull request has significant performance improvements 📈**' >> "${GITHUB_ENV}"
              ;;
            *)
              # Double quotes so the exit code is actually interpolated into the message
              echo "**❌ Benchmark comparison failed to complete properly with exit code ${EXIT_CODE} ❌**" >> "${GITHUB_ENV}"
              ;;
          esac

          echo '<details>' >> "${GITHUB_ENV}"
          echo ' <summary> Click to expand comparison result </summary>' >> "${GITHUB_ENV}"
          echo '' >> "${GITHUB_ENV}"
          echo '${{ steps.compare-benchmark.outputs.summary }}' >> "${GITHUB_ENV}"
          echo '' >> "${GITHUB_ENV}"
          echo '</details>' >> "${GITHUB_ENV}"

          echo '' >> "${GITHUB_ENV}"

          echo '<details>' >> "${GITHUB_ENV}"
          echo ' <summary> Click to expand benchmark result </summary>' >> "${GITHUB_ENV}"
          echo '' >> "${GITHUB_ENV}"
          echo '${{ steps.read-benchmark.outputs.result }}' >> "${GITHUB_ENV}"
          echo '' >> "${GITHUB_ENV}"
          echo '</details>' >> "${GITHUB_ENV}"

          echo 'EOF' >> "${GITHUB_ENV}"

      - name: Output the comment as job summary
        run: echo '${{ env.PR_COMMENT }}' >> "${GITHUB_STEP_SUMMARY}"

      - name: Comment in PR
        if: startsWith(github.event_name, 'pull_request')
        uses: thollander/actions-comment-pull-request@v3
        with:
          message: ${{ env.PR_COMMENT }}
          comment-tag: benchmark-ci-comment

      - name: Exit with correct status
        run: |
          EXIT_CODE='${{ steps.compare-benchmark.outputs.exit-status }}'
          echo "Previous exit code was: $EXIT_CODE"
          exit $EXIT_CODE
```
The replacement job added by this PR calls the shared benchmark workflow instead:

```yaml
  benchmark:
    if: github.run_attempt > 1 || github.event.review.state == 'approved' || startsWith(github.event_name, 'pull_request') != true

    uses: vapor/ci/.github/workflows/run-benchmark.yml@main
    secrets: inherit
```

Review thread on the `if:` condition of the new job:

> Keeping the if condition here ... should I move it to the reusable CI?

> No, this should probably be in the reusable CI.

> Disagree - different repos may not want to use the same conditions.

> Hmm, hard decision. We generally don't add the if condition to the reusable CI, so I guess we can keep it at that.
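For context on the trade-off being discussed, the alternative would be to push the condition down into the reusable workflow so every caller inherits it. A minimal sketch of what that could look like, assuming a hypothetical job layout inside `run-benchmark.yml` (the actual contents of that workflow are not shown in this PR):

```yaml
# Hypothetical sketch of vapor/ci/.github/workflows/run-benchmark.yml carrying
# the condition itself; the job name and steps below are assumptions.
on:
  workflow_call:

jobs:
  benchmark:
    # Reusable workflows run in the caller's event context, so the same
    # expression can be evaluated here instead of in each caller.
    if: github.run_attempt > 1 || github.event.review.state == 'approved' || startsWith(github.event_name, 'pull_request') != true
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      # ... benchmark steps ...
```

Keeping the condition in the caller, as this PR does, lets each repo choose its own gating rules at the cost of repeating the expression.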
Comment on the new `push: branches: [main]` trigger:

> Added this. It helps with caching, as well as keeping a history of the benchmarks.
>
> Caches have branch rules, meaning you can only retrieve a cache if it was saved by the current branch or by the primary branch of the repo. So the first benchmark run in CI won't have a cache if we don't run benchmarks on push to main.
>
> Though we could work around this even without push to main, for example by manually running the workflow on the main branch so it saves a base cache.
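To make that concrete, the trigger block from the first hunk above is what seeds the shared cache: a run on `main` saves a `Benchmarks/.build` cache that later pull-request runs can restore, because cache lookups fall back to the default branch. A sketch of the relevant trigger, matching the diff:

```yaml
# Trigger block as added in this PR; running the benchmark workflow on pushes
# to main saves a base cache that feature-branch runs can then restore.
on:
  workflow_dispatch:   # also enables the manual workaround: a one-off run
                       # on main, triggered by hand, seeds the cache
  push:
    branches: [main]
  pull_request_review:
    types: [submitted]
  pull_request:
```

The manual workaround mentioned above would be a one-off dispatch on `main`, e.g. `gh workflow run benchmark.yml --ref main`.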