perf(cmd-api-server): add continuous benchmarking with JMeter #2672

Closed
petermetz opened this issue Sep 10, 2023 · 3 comments · Fixed by #3073
Labels
API_Server Besu Corda P1 Priority 1: Highest Performance Everything related to how fast/efficient the software or its tooling (e.g. build) is.

Comments

@petermetz
Contributor

Description

https://github.com/benchmark-action/github-action-benchmark

Acceptance Criteria

  1. Performance regressions are reported for each pull request.
  2. Besu and Corda plugins' endpoints are covered by the benchmark.
  3. Use JMeter for the performance tests (open to other implementations, but the Node.js package ecosystem seems very immature compared to JMeter in this regard).
@petermetz petermetz added API_Server Corda Besu Performance Everything related to how fast/efficient the software or its tooling (e.g. build) is. P1 Priority 1: Highest labels Sep 10, 2023
@petermetz petermetz added this to the v2.0.0 milestone Sep 10, 2023
@petermetz petermetz self-assigned this Sep 10, 2023
@petermetz petermetz removed their assignment Oct 2, 2023
@ruzell22
Contributor

Hello @petermetz, benchmark.js works fine when we benchmark something simple like a Fibonacci sequence. However, when we try to use it to performance-test the Besu connector contract, which is written in TypeScript, the .ts files have to be transpiled to .js before they can be used. Transpiling is not ideal, because it would have to be repeated for every contract or package we want to test.

We also looked for a JMeter-based library that supports TypeScript, but found none. What would you suggest: should we look for a different library that can consume the contract in TypeScript directly, or should we park the ticket for now? Thank you.

@petermetz petermetz self-assigned this Jan 29, 2024
@petermetz
Contributor Author

@ruzell22 No worries, I'll help out!

petermetz referenced this issue in petermetz/cacti Jan 30, 2024
Primary change:
---------------

This is the ice-breaker for some work that got stuck related to this issue:
https://github.com/hyperledger/cacti/issues/2672

The benchmarking library used (benchmark.js) is old but solid and has
almost no dependencies, which means we'll be in the clear longer term
when it comes to CVEs popping up.

The benchmarks added here are very simple and measure the throughput of
the API server's OpenAPI-spec-providing endpoints.

The GitHub action that we use is designed to do regression detection and
reporting, but this is hard to verify before actually putting it in place
because we cannot simulate the CI environment's clone in a local environment.

The hope is that if someone submits a code change that significantly
lowers performance, we can catch it at the review stage instead of
having to find out later.

Secondary change:
-----------------

1. Started using tsx in favor of ts-node because it appears to be about
5% faster when looking at the benchmark execution. tsx also claims to have
fewer problems with ESM than ts-node, so if this initial trial goes well
we could later decide to swap out ts-node project-wide.

Signed-off-by: Peter Somogyvari <peter.somogyvari@accenture.com>
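
To make the primary change above concrete, here is a minimal sketch of such a throughput benchmark. It assumes benchmark.js, Node 18+ (for the global fetch), an API server already listening locally, and a hypothetical /api/v1/api-server/get-open-api-spec route; the real port and path may differ. Because the file is plain TypeScript it can be executed directly with tsx (e.g. npx tsx get-open-api-spec.benchmark.ts), which also sidesteps the transpile-before-benchmarking issue raised in the comment above.

```ts
// get-open-api-spec.benchmark.ts -- sketch only; base URL and route are illustrative assumptions.
import Benchmark from "benchmark";

const API_SERVER_BASE_URL = "http://127.0.0.1:4000"; // assumed locally running API server
const SPEC_PATH = "/api/v1/api-server/get-open-api-spec"; // hypothetical spec-providing route

const suite = new Benchmark.Suite("cmd-api-server");

suite
  .add("GET get-open-api-spec", {
    defer: true, // the HTTP round-trip is asynchronous, so use a deferred benchmark
    fn: async (deferred: { resolve: () => void }) => {
      const res = await fetch(`${API_SERVER_BASE_URL}${SPEC_PATH}`);
      await res.json(); // consume the body so JSON parsing cost is included in the measurement
      deferred.resolve();
    },
  })
  .on("cycle", (event: { target: unknown }) => {
    // Prints the ops/sec summary line, e.g. "GET get-open-api-spec x 812 ops/sec ±1.95%"
    console.log(String(event.target));
  })
  .run({ async: true });
```

Throughput numbers like these are what a regression-detection step such as github-action-benchmark compares across commits.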
@petermetz
Contributor Author

@ruzell22 Please work backwards from this: https://github.com/hyperledger/cacti/pull/3007

I highly recommend using your fork for testing changes:

  1. Disable all the GitHub workflows in the web GUI except for ci.yaml.
  2. Add Besu and Corda benchmark tests, using the linked PR as inspiration (a minimal sketch follows below).
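
To make item 2 above concrete, a Besu connector benchmark could mirror the API server benchmark shown earlier, just pointed at one of the connector's HTTP endpoints. A minimal sketch follows; the base URL, route, and request body are illustrative placeholders, not the connector's actual API, and would need to be replaced with a real endpoint and payload taken from the linked PR.

```ts
// besu-connector.benchmark.ts -- sketch only; endpoint path and payload are placeholders.
import Benchmark from "benchmark";

const CONNECTOR_BASE_URL = "http://127.0.0.1:4100"; // assumed Besu connector host/port
const INVOKE_PATH = "/api/v1/plugins/besu/invoke-contract"; // hypothetical route

const suite = new Benchmark.Suite("plugin-ledger-connector-besu");

suite
  .add("POST invoke-contract", {
    defer: true,
    fn: async (deferred: { resolve: () => void }) => {
      const res = await fetch(`${CONNECTOR_BASE_URL}${INVOKE_PATH}`, {
        method: "POST",
        headers: { "Content-Type": "application/json" },
        // Placeholder payload -- substitute a real contract invocation request.
        body: JSON.stringify({ contractName: "HelloWorld", methodName: "sayHello", params: [] }),
      });
      await res.json();
      deferred.resolve();
    },
  })
  .on("cycle", (event: { target: unknown }) => console.log(String(event.target)))
  .run({ async: true });
```

A Corda benchmark would follow the same shape against the Corda connector's endpoints.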

ruzell22 added a commit to ruzell22/cactus that referenced this issue Mar 8, 2024
Primary Changes
---------------

1. Added continuous benchmarking using JMeter that reports the performance
of cactus-plugin-ledger-connector-besu using one of its endpoints.

fixes: hyperledger-cacti#2672

Signed-off-by: ruzell22 <ruzell.vince.aquino@accenture.com>
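
For reference, a JMeter benchmark like the one described above is normally driven by running JMeter in non-GUI mode against a .jmx test plan. The sketch below shows one possible way to wire such a run into a Node/TypeScript harness; the test-plan path, results file, and property name are illustrative assumptions, not the exact layout used in the PR.

```ts
// run-jmeter-benchmark.ts -- sketch only; assumes the JMeter CLI is installed and on PATH.
import { spawn } from "node:child_process";

// Illustrative locations -- the real test plan lives wherever the PR placed it.
const TEST_PLAN = "./benchmark/besu-invoke-contract.jmx";
const RESULTS_FILE = "./benchmark/results/besu-invoke-contract.jtl";

// -n: non-GUI mode, -t: test plan, -l: results log, -J: set a JMeter property
const jmeter = spawn(
  "jmeter",
  ["-n", "-t", TEST_PLAN, "-l", RESULTS_FILE, "-Jconnector.host=127.0.0.1"],
  { stdio: "inherit" },
);

jmeter.on("exit", (code) => {
  if (code !== 0) {
    // Fail the process so a CI job surfaces the problem at review time.
    process.exitCode = code ?? 1;
  }
});
```

In CI, the resulting .jtl file (or a summary derived from it) is what a regression-reporting step would then consume.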