diff --git a/.github/workflows/release-go-module.yml b/.github/workflows/release-go-module.yml index fb5c26d10..5dc08839d 100644 --- a/.github/workflows/release-go-module.yml +++ b/.github/workflows/release-go-module.yml @@ -136,6 +136,6 @@ jobs: goarch: ${{ matrix.goarch }} binary_name: ${{ env.PACKAGE_NAME }} release_name: ${{ env.PACKAGE_NAME }} - release_tag: ${{ env.PACKAGE_NAME}}-${{ env.VERSION }} + release_tag: ${{ env.PACKAGE_NAME}}/${{ env.VERSION }} project_path: ${{ env.PACKAGE_NAME }}/cmd asset_name: ${{ env.PACKAGE_NAME }}-${{ env.VERSION }}-${{ matrix.platform }}-${{ matrix.goarch }} diff --git a/RELEASE.md b/RELEASE.md deleted file mode 100644 index ae3bb8f88..000000000 --- a/RELEASE.md +++ /dev/null @@ -1,52 +0,0 @@ -## Releasing Go modules - -The Chainlink Testing Framework (CTF) repository contains multiple independent modules. To release any of them, we follow some best practices about breaking changes. - -### Release strategy - -Use only [lightweight tags](https://git-scm.com/book/en/v2/Git-Basics-Tagging) - -**Do not move tags between commits. If something need to be fixed increment patch or minor version.** - -Steps to release: - -- When all your PRs are merged to `main` check the `main` branch [breaking changes badge](https://github.com/smartcontractkit/chainlink-testing-framework/actions/workflows/rc-breaking-changes.yaml) -- If there are no breaking changes for external methods, create a branch and explain all your module changes in `vX.X.X.md` committed under `.changeset` dir in your module. 
If changes are really short, and you run the [script](#check-breaking-changes-locally) locally you can push `.changeset` as a part of your final feature PR -- If there are accidental breaking changes, and it is possible to make them backward compatible - fix them -- If there are breaking changes, and we must release them change `go.mod` path, add prefix `/vX`, merge your PR(s) -- When all the changes are merged, and there are no breaking changes in the [pipeline](https://github.com/smartcontractkit/chainlink-testing-framework/actions/workflows/rc-breaking-changes.yaml) then proceed with releasing -- Tag `main` branch in format `$pkg/$subpkg/vX.X.X` according to your changes and push it, example: - ``` - git tag $pkg/$subpkg/v1.1.0 && git push --tags - git tag $pkg/$subpkg/v1.1.1 && git push --tags - git tag $pkg/$subpkg/v2.0.0 && git push --tags - ``` -- Check the [release page](https://github.com/smartcontractkit/chainlink-testing-framework/releases) - -### Binary releases - -If your module have `cmd/main.go` we build binary automatically for various platforms and attach it to the release page. - -## Debugging release pipeline and `gorelease` tool - -Checkout `dummy-release-branch` and release it: - -- `git tag dummy-module/v0.X.0` -- Add `vX.X.X.md` in `.changeset` -- `git push --no-verify --force && git push --tags` -- Check [releases](https://github.com/smartcontractkit/chainlink-testing-framework/releases) - -Pipelines: - -- [Main branch breaking changes](https://github.com/smartcontractkit/chainlink-testing-framework/actions/workflows/rc-breaking-changes.yaml) -- [Pipeline for releasing Go modules](.github/workflows/release-go-module.yml) - -## Check breaking changes locally - -We have a simple wrapper to check breaking changes for all the packages. 
Commit all your changes and run: - -``` -go run ./tools/breakingchanges/cmd/main.go -go run ./tools/breakingchanges/cmd/main.go --subdir wasp # check recursively starting with subdir -go run ./tools/breakingchanges/cmd/main.go --ignore tools,wasp,havoc,seth # to ignore some packages -``` diff --git a/SECRETS.md b/SECRETS.md deleted file mode 100644 index d407b1ac5..000000000 --- a/SECRETS.md +++ /dev/null @@ -1,24 +0,0 @@ -## Using AWSSecretsManager from code - -`client/secretsmanager.go` has a simple API to read/write/delete secrets. - -It uses a struct to protect such secrets from accidental printing or marshalling, see an [example](client/secretsmanager_test.go) test - -## Using AWSSecretsManager via CLI - -To create a static secret use `aws cli` - -``` -aws --region us-west-2 secretsmanager create-secret \ - --name MyTestSecret \ - --description "My test secret created with the CLI." \ - --secret-string "{\"user\":\"diegor\",\"password\":\"EXAMPLE-PASSWORD\"}" -``` - -Example of reading the secret - -``` -aws --region us-west-2 secretsmanager get-secret-value --secret-id MyTestSecret -``` - -For more information check [AWS CLI Reference](https://docs.aws.amazon.com/cli/v1/userguide/cli_secrets-manager_code_examples.html) diff --git a/book/src/SUMMARY.md b/book/src/SUMMARY.md index b6ec24e3c..dd94cbfb1 100644 --- a/book/src/SUMMARY.md +++ b/book/src/SUMMARY.md @@ -14,6 +14,7 @@ - [CLI](./framework/cli.md) - [Configuration](./framework/configuration.md) - [Test Configuration](./framework/test_configuration_overrides.md) + - [Caching](framework/components/caching.md) - [Secrets]() - [Observability Stack](framework/observability/observability_stack.md) - [Metrics]() @@ -34,10 +35,12 @@ - [Chainlink]() - [RPC]() - [Loki]() +- [Continuous Integration](ci/ci.md) - [Libraries](./libraries.md) - [Seth](./libs/seth.md) - [WASP](./libs/wasp.md) - [Havoc](./libs/havoc.md) + - [K8s Test Runner](k8s-test-runner/k8s-test-runner.md) --- @@ -45,5 +48,13 @@ - 
[Components](developing/developing_components.md) - [Releasing modules](releasing_modules.md) +--- +- [Lib (*Deprecated*)](lib.md) + - [Blockchain](lib/blockchain.md) + - [Kubernetes](lib/k8s/KUBERNETES.md) + - [K8s Remote Run](lib/k8s/REMOTE_RUN.md) + - [K8s Tutorial](lib/k8s/TUTORIAL.md) + - [Config](lib/config/config.md) + - [CRIB Connector](lib/crib.md) --- - [Build info](build_info.md) diff --git a/book/src/ci/ci.md b/book/src/ci/ci.md new file mode 100644 index 000000000..358932886 --- /dev/null +++ b/book/src/ci/ci.md @@ -0,0 +1,5 @@ +# Continuous Integration + +Here we describe our good practices for structuring different types of tests in Continuous Integration (GitHub Actions). + +Follow [this](https://github.com/smartcontractkit/.github/tree/main/.github/workflows) guide. \ No newline at end of file diff --git a/book/src/developing/developing_components.md b/book/src/developing/developing_components.md index 761e69f72..4d9c49e4d 100644 --- a/book/src/developing/developing_components.md +++ b/book/src/developing/developing_components.md @@ -49,11 +49,11 @@ Each component can define inputs and outputs, following these rules: ### Docker components good practices for [testcontainers-go](https://golang.testcontainers.org/): -An example [simple component](../../../../framework/components/blockchain/anvil.go) +An example [simple component](https://github.com/smartcontractkit/chainlink-testing-framework/blob/main/framework/components/blockchain/anvil.go) -An example of [complex component](../../../../framework/components/clnode/clnode.go) +An example of [complex component](https://github.com/smartcontractkit/chainlink-testing-framework/blob/main/framework/components/clnode/clnode.go) -An example of [composite component](../../../../framework/components/simple_node_set/node_set.go) +An example of [composite component](https://github.com/smartcontractkit/chainlink-testing-framework/blob/main/framework/components/simple_node_set/node_set.go) - Inputs should include 
at least `image`, `tag` and `pull_image` fields ```golang diff --git a/book/src/framework/components/caching.md index d1be1b73a..4700a68fc 100644 --- a/book/src/framework/components/caching.md +++ b/book/src/framework/components/caching.md @@ -2,9 +2,9 @@ We use component caching to accelerate test development and enforce idempotent test actions. -Each component is designed to be pure with defined inputs/outputs. +Each component is isolated by means of inputs and outputs. -You can use an environment variable to skip deployment steps and use cached outputs if your infrastructure is already running (locally or remotely): +If the cached config has any outputs with `use_cache = true`, they will be used instead of deploying the component again. ``` export CTF_CONFIGS=smoke-cache.toml @@ -26,4 +26,4 @@ http_url = 'http://127.0.0.1:33447' docker_internal_ws_url = 'ws://anvil-3716a:8900' docker_internal_http_url = 'http://anvil-3716a:8900' ``` -Set flag `use_cache = true` on any component output and run your test again \ No newline at end of file +Set the flag `use_cache = true` on any component output, change output fields as needed, and run your test again. 
\ No newline at end of file diff --git a/book/src/framework/getting_started.md b/book/src/framework/getting_started.md index 443942e09..5a9d7ab16 100644 --- a/book/src/framework/getting_started.md +++ b/book/src/framework/getting_started.md @@ -2,10 +2,11 @@ To start writing tests create a directory for your project with `go.mod` and pull the framework ``` -go get github.com/smartcontractkit/chainlink-testing-framework/framework@ac819d889f97e0f5c04aee3212454ad1f8b6f4ef +go get github.com/smartcontractkit/chainlink-testing-framework/framework ``` Then download the CLI (runs from directory where you have `go.mod`) +Make sure you have your GOPATH set: `export GOPATH=$HOME/go && export PATH=$PATH:$GOPATH/bin` ``` go get github.com/smartcontractkit/chainlink-testing-framework/framework/cmd && \ go install github.com/smartcontractkit/chainlink-testing-framework/framework/cmd && \ diff --git a/book/src/framework/nodeset_capabilities.md b/book/src/framework/nodeset_capabilities.md index 02a43eed9..7c07c11ac 100644 --- a/book/src/framework/nodeset_capabilities.md +++ b/book/src/framework/nodeset_capabilities.md @@ -16,8 +16,6 @@ Create a configuration file `smoke.toml` port = "8545" type = "anvil" -[contracts] - [data_provider] port = 9111 diff --git a/book/src/framework/nodeset_compatibility.md b/book/src/framework/nodeset_compatibility.md index 45186a5e4..3f23a3668 100644 --- a/book/src/framework/nodeset_compatibility.md +++ b/book/src/framework/nodeset_compatibility.md @@ -85,13 +85,9 @@ package capabilities_test import ( "fmt" "github.com/smartcontractkit/chainlink-testing-framework/framework" - "github.com/smartcontractkit/chainlink-testing-framework/framework/clclient" "github.com/smartcontractkit/chainlink-testing-framework/framework/components/blockchain" "github.com/smartcontractkit/chainlink-testing-framework/framework/components/fake" ns "github.com/smartcontractkit/chainlink-testing-framework/framework/components/simple_node_set" - 
"github.com/smartcontractkit/chainlink-testing-framework/seth" - burn_mint_erc677 "github.com/smartcontractkit/chainlink/e2e/capabilities/components/gethwrappers" - "github.com/smartcontractkit/chainlink/e2e/capabilities/components/onchain" "github.com/stretchr/testify/require" "os" "testing" diff --git a/book/src/framework/nodeset_environment.md b/book/src/framework/nodeset_environment.md index 3e95dd76a..1446b07e7 100644 --- a/book/src/framework/nodeset_environment.md +++ b/book/src/framework/nodeset_environment.md @@ -38,7 +38,6 @@ import ( "github.com/smartcontractkit/chainlink-testing-framework/framework/components/blockchain" "github.com/smartcontractkit/chainlink-testing-framework/framework/components/fake" ns "github.com/smartcontractkit/chainlink-testing-framework/framework/components/simple_node_set" - "github.com/smartcontractkit/chainlink/e2e/capabilities/components/onchain" "github.com/stretchr/testify/require" "testing" ) diff --git a/book/src/framework/test_configuration_overrides.md b/book/src/framework/test_configuration_overrides.md index d32908bd9..711a1fca0 100644 --- a/book/src/framework/test_configuration_overrides.md +++ b/book/src/framework/test_configuration_overrides.md @@ -4,8 +4,28 @@ To override any test configuration, we merge multiple files into a single struct You can specify multiple file paths using `CTF_CONFIGS=path1,path2,path3`. -The framework will apply these configurations from right to left. +The framework will apply these configurations from right to left and marshal them to a single test config structure. -> [!NOTE] -> When override slices remember that you should replace the full slice, it won't be extended by default! 
+Use it to structure the variations of your test, e.g.: +``` +export CTF_CONFIGS=smoke-test-feature-a-simulated-network.toml +export CTF_CONFIGS=smoke-test-feature-a-simulated-network.toml,smoke-test-feature-a-testnet.toml + +export CTF_CONFIGS=smoke-test-feature-a.toml +export CTF_CONFIGS=smoke-test-feature-a.toml,smoke-test-feature-b.toml + +export CTF_CONFIGS=load-profile-api-service-1.toml +export CTF_CONFIGS=load-profile-api-service-1.toml,load-profile-api-service-2.toml +``` +This helps reduce duplication in the configuration. + +> [!NOTE] +> We designed overrides to be as simple as possible, as frameworks like [envconfig](https://github.com/kelseyhightower/envconfig) and [viper](https://github.com/spf13/viper) offer extensive flexibility but can lead to inconsistent configurations prone to drift. +> +> This feature is meant to override test setup configurations, not test logic. Avoid using TOML to alter test logic. +> +> Tests should remain straightforward, readable, and perform a single set of actions (potentially across different CI/CD environments). If variations in test logic are required, consider splitting them into separate tests. + +> [!WARNING] +> When overriding slices, remember that you must replace the full slice; it won't be extended by default! diff --git a/book/src/k8s-test-runner/k8s-test-runner.md new file mode 100644 index 000000000..29ba35ee0 --- /dev/null +++ b/book/src/k8s-test-runner/k8s-test-runner.md @@ -0,0 +1,140 @@ +## Preparing to Run Tests on Staging + +Ensure you complete the following steps before executing tests on the staging environment: + +1. **Connect to the VPN** + +2. **AWS Login with Staging Profile** + + Authenticate to AWS using your staging profile, specifically with the `StagingEKSAdmin` role. Execute the following command: + + ```sh + aws sso login --profile staging + ``` + +3. 
**Verify Authorization** + + Confirm your authorization status by listing the namespaces in the staging cluster. Run `kubectl get namespaces`. If you see a list of namespaces, this indicates successful access to the staging cluster. + +## Running Tests + +### Creating an Image with the Test Binary + +Before running tests, you must create a Docker image containing the test binary. To do this, execute the `create-test-image` command and provide the path to the test folder you wish to package. This command: + +1. Compiles the test binary under `` +2. Creates a Docker image with the test binary +3. Pushes the Docker image to the image registry (e.g. Staging ECR) + +```sh +go run ./cmd/main.go create-test-image --image-registry-url --image-tag "" "" +``` + +Where `image-tag` should be a descriptive name for your test, such as "mercury-load-tests". + +### Running the Test in Kubernetes + +If a Docker image containing the test binary is available in an image registry (such as staging ECR), use the `run` command to execute the test in K8s. 
+ +``` +go run ./cmd/main.go run -c "" +``` + +The TOML config should specify the test runner configuration as follows: + +``` +namespace = "e2e-tests" +rbac_role_name = "" # RBAC role name for the chart +image_registry_url = "" # URL to the ECR containing the test binary image, e.g., staging ECR URL +image_name = "k8s-test-runner" +image_tag = "" # The image tag to use, like "mercury-load-tests" (see readme above) +job_count = "1" +test_name = "TestMercuryLoad/all_endpoints" +test_timeout = "24h" +test_config_base64_env_name = "LOAD_TEST_BASE64_TOML_CONTENT" +test_config_file_path = "/Users/lukasz/Documents/test-configs/load-staging-testnet.toml" +resources_requests_cpu = "1000m" +resources_requests_memory = "512Mi" +resources_limits_cpu = "2000m" +resources_limits_memory = "1024Mi" +[envs] +WASP_LOG_LEVEL = "info" +TEST_LOG_LEVEL = "info" +MERCURY_TEST_LOG_LEVEL = "info" +``` + +Where: + +- `test_name` is the name of the test to run (must be included in the test binary). +- `test_config_base64_env_name` is the name of the environment variable used to provide the test configuration for the test (optional). +- `test_config_file_path` is the path to the configuration file for the test (optional). + +## Using K8s Test Runner on CI + +### Example + +This example demonstrates the process step by step. First, it shows how to download the Kubernetes Test Runner. Next, it details the use of the Test Runner to create a test binary specifically for the Mercury "e2e_tests/staging_prod/tests/load" test package. Finally, it describes executing the test in Kubernetes using a customized test runner configuration. 
+ +``` +- name: Download K8s Test Runner + run: | + mkdir -p k8s-test-runner + cd k8s-test-runner + curl -L -o k8s-test-runner.tar.gz https://github.com/smartcontractkit/chainlink-testing-framework/releases/download/v0.2.4/test-runner.tar.gz + tar -xzf k8s-test-runner.tar.gz + chmod +x k8s-test-runner-linux-amd64 +``` + +Alternatively, you can place the k8s-test-runner package within your repository and unpack it: + +``` +- name: Unpack K8s Test Runner + run: | + cd e2e_tests + mkdir -p k8s-test-runner + tar -xzf k8s-test-runner-v0.0.1.tar.gz -C k8s-test-runner + chmod +x k8s-test-runner/k8s-test-runner-linux-amd64 +``` + +Then: + +``` +- name: Build K8s Test Runner Image + if: github.event.inputs.test-type == 'load' && github.event.inputs.rebuild-test-image == 'yes' + run: | + cd e2e_tests/k8s-test-runner + + ./k8s-test-runner-linux-amd64 create-test-image --image-registry-url "${{ secrets.AWS_ACCOUNT_ID_STAGING }}.dkr.ecr.${{ secrets.AWS_REGION }}.amazonaws.com" --image-tag "mercury-load-test" "../staging_prod/tests/load" + +- name: Run Test in K8s + run: | + cd e2e_tests/k8s-test-runner + + cat << EOF > config.toml + namespace = "e2e-tests" + rbac_role_name = "" # RBAC role name for the chart + image_registry_url = "${{ secrets.AWS_ACCOUNT_ID_STAGING }}.dkr.ecr.${{ secrets.AWS_REGION }}.amazonaws.com" + image_name = "k8s-test-runner" + image_tag = "mercury-load-test" + job_count = "1" + chart_path = "./chart" + test_name = "TestMercuryLoad/all_endpoints" + test_timeout = "24h" + resources_requests_cpu = "1000m" + resources_requests_memory = "512Mi" + resources_limits_cpu = "2000m" + resources_limits_memory = "1024Mi" + test_config_base64_env_name = "LOAD_TEST_BASE64_TOML_CONTENT" + test_config_base64 = "${{ steps.conditional-env-vars.outputs.LOAD_TEST_BASE64_TOML_CONTENT }}" + [envs] + WASP_LOG_LEVEL = "info" + TEST_LOG_LEVEL = "info" + MERCURY_TEST_LOG_LEVEL = "info" + EOF + + ./k8s-test-runner-linux-amd64 run -c config.toml +``` + +## Release + +Run `./package 
` diff --git a/book/src/lib.md b/book/src/lib.md new file mode 100644 index 000000000..90e633166 --- /dev/null +++ b/book/src/lib.md @@ -0,0 +1,460 @@ +
+ +# Framework v1 (Deprecated) + +[![Lib tag](https://img.shields.io/github/v/tag/smartcontractkit/chainlink-testing-framework?filter=%2Alib%2A)](https://github.com/smartcontractkit/chainlink-testing-framework/tags) +[![Go Report Card](https://goreportcard.com/badge/github.com/smartcontractkit/chainlink-testing-framework)](https://goreportcard.com/report/github.com/smartcontractkit/chainlink-testing-framework) +[![Go Reference](https://pkg.go.dev/badge/github.com/smartcontractkit/chainlink-testing-framework.svg)](https://pkg.go.dev/github.com/smartcontractkit/chainlink-testing-framework) +[![Go Version](https://img.shields.io/github/go-mod/go-version/smartcontractkit/chainlink-testing-framework?filename=./lib/go.mod)](https://go.dev/) +![Tests](https://github.com/smartcontractkit/chainlink-testing-framework/actions/workflows/test.yaml/badge.svg) +![Lint](https://github.com/smartcontractkit/chainlink-testing-framework/actions/workflows/lint.yaml/badge.svg) + +
+ +**DEPRECATED: This is the v1 version and it is not actively maintained** + +The purpose of this framework is to: +- Interact with different blockchains +- Configure CL jobs +- Deploy using `docker` +- Deploy using `k8s` + +If you're looking to implement a new chain integration for the testing framework, head over to the [blockchain](lib/blockchain.md) directory for more info. + +## k8s package + +We have a k8s package that we use in tests; it provides: + +- [cdk8s](https://cdk8s.io/) based wrappers +- High-level k8s API +- Automatic port forwarding + +You can also use this package to spin up standalone environments. + +### Local k8s cluster + +Read [here](lib/k8s/KUBERNETES.md) about how to spin up a local cluster + +#### Install + +Set up dependencies; you need to have `node 14.x.x`, [helm](https://helm.sh/docs/intro/install/) and [yarn](https://classic.yarnpkg.com/lang/en/docs/install/#mac-stable) + +Then use + +```shell +make install_deps +``` + +##### Optional Nix + +We have set up a nix shell which produces a reliable environment that behaves the same locally and in CI. 
To use it instead of the above you will need to [install nix](https://nixos.org/download/) + +To start the nix shell run: + +```shell +make nix_shell +``` + +If you install [direnv](https://github.com/direnv/direnv/blob/master/docs/installation.md) you will be able to have your environment start the nix shell as soon as you cd into it once you have allowed the directory via: + +```shell +direnv allow +``` + +### Running tests in k8s + +To read how to run a test in k8s, read [here](lib/k8s/REMOTE_RUN.md) + +### Usage + +#### With env vars (deprecated) + +Create an env in a separate file and run it + +```sh +export CHAINLINK_IMAGE="public.ecr.aws/chainlink/chainlink" +export CHAINLINK_TAG="1.4.0-root" +export CHAINLINK_ENV_USER="Satoshi" +go run k8s/examples/simple/env.go +``` + +For more features follow [tutorial](lib/k8s/TUTORIAL.md) + +#### With TOML config + +It should be noted that using env vars for configuring CL nodes in k8s is deprecated. TOML config should be used instead: + +```toml +[ChainlinkImage] +image="public.ecr.aws/chainlink/chainlink" +version="v2.8.0" +``` + +Check the example here: [env.go](https://github.com/smartcontractkit/chainlink-testing-framework/blob/main/lib/k8s/examples/simple_toml/env_toml_config.go) + +### Development + +#### Running standalone example environment + +```shell +go run k8s/examples/simple/env.go +``` + +If you have another env of that type, you can connect by overriding environment name + +```sh +ENV_NAMESPACE="..." 
go run k8s/examples/chainlink/env.go +``` + +Add more presets [here](https://github.com/smartcontractkit/chainlink-testing-framework/blob/main/lib/k8s/presets) + +Add more programmatic examples [here](../../lib/k8s/examples/) + +If you have [chaosmesh](https://chaos-mesh.org/) installed in your cluster, you can pull and generate CRDs in Go like this + +```sh +make chaosmesh +``` + +If you need to check your system tests coverage, use [that](../../lib/k8s/TUTORIAL.md#coverage) + +# Chainlink Charts + +This repository contains helm charts used by the chainlink organization, mostly in QA. + +## Chart Repository + +You can add the published chart repository by pointing helm to the `gh-pages` branch with a personal access token (PAT) that has at least read-only access to the repository. + +```sh +helm repo add chainlink-qa https://raw.githubusercontent.com/smartcontractkit/qa-charts/gh-pages/ +helm search repo chainlink +``` + +## Releasing Charts + +The following cases will trigger a chart release once a PR is merged into the `main` branch. +Modified packages or new packages get added and pushed to the `gh-pages` branch of the [qa-charts](https://github.com/smartcontractkit/qa-charts) repository. + +- An existing chart is version bumped +- A new chart is added + +Removed charts do not trigger a re-publish; the packages have to be removed and the index file regenerated in the `gh-pages` branch of the [qa-charts](https://github.com/smartcontractkit/qa-charts) repository. + +Note: The qa-charts repository is scheduled to look for changes to the charts once every hour. This can be expedited by going to that repo and running the CD action via the GitHub UI. + +# Simulated EVM chains + +We have extended support for execution layer clients in simulated networks. The following ones are supported: + +- `Geth` +- `Nethermind` +- `Besu` +- `Erigon` + +When it comes to the consensus layer, we currently support only `Prysm`. + +The easiest way to start a simulated network is to use a builder. 
It allows you to configure the network in a fluent way and then start it. For example: + +```go +builder := NewEthereumNetworkBuilder() +cfg, err := builder. + WithEthereumVersion(EthereumVersion_Eth2). + WithExecutionLayer(ExecutionLayer_Geth). + Build() +``` + +Since we support both `eth1` (aka pre-Merge) and `eth2` (aka post-Merge) client versions, you need to specify which one you want to use. You can do that by calling the `WithEthereumVersion` method. There's no default provided. The only exception is when you use custom docker images (instead of default ones), because then we can determine which version it is based on the image version. + +If you want your test to execute as fast as possible go for `eth1`, since it's either using a fake PoW or PoA consensus and is much faster than `eth2`, which uses PoS consensus (where there is a minimum viable length of slot/block, which is 4 seconds; for `eth1` it's 1 second). If you want to test the latest features, changes or forks in the Ethereum network and have your tests running on a network which is as close as possible to Ethereum Mainnet, go for `eth2`. + +Every component has some default Docker image it uses, but the builder has a method that allows you to pass a custom one: + +```go +builder := NewEthereumNetworkBuilder() +cfg, err := builder. + WithEthereumVersion(EthereumVersion_Eth2). + WithConsensusLayer(ConsensusLayer_Prysm). + WithExecutionLayer(ExecutionLayer_Geth). + WithCustomDockerImages(map[ContainerType]string{ + ContainerType_Geth: "my-custom-geth-pos-image:my-version"}). + Build() +``` + +When using a custom image you can simplify the builder even further by calling only the `WithCustomDockerImages` method. Based on the image name and version we will determine which execution layer client it is and whether it's an `eth1` or `eth2` client: + +```go +builder := NewEthereumNetworkBuilder() +cfg, err := builder. + WithCustomDockerImages(map[ContainerType]string{ + ContainerType_Geth: "ethereum/client-go:v1.13.10"}). 
+ Build() +``` + +In the case above we would launch a `Geth` client with an `eth2` network and the `Prysm` consensus layer. + +You can also configure epochs at which hardforks will happen. Currently only `Deneb` is supported. The epoch must be >= 1. Example: + +```go +builder := NewEthereumNetworkBuilder() +cfg, err := builder. + WithConsensusType(ConsensusType_PoS). + WithConsensusLayer(ConsensusLayer_Prysm). + WithExecutionLayer(ExecutionLayer_Geth). + WithEthereumChainConfig(EthereumChainConfig{ + HardForkEpochs: map[string]int{"Deneb": 1}, + }). + Build() +``` + +## Command line + +You can start a simulated network with a single command: + +```sh +go run docker/test_env/cmd/main.go start-test-env private-chain +``` + +By default it will start a network with 1 node running `Geth` and `Prysm`. It will use a default chain id of `1337` and won't wait for the chain to finalize at least one epoch. Once the chain is started it will save the network configuration in a `JSON` file, which you can then use in your tests to connect to that chain (and thus save the time it takes to start a new chain each time you run your test). + +The following command line flags are available: + +```sh + -c, --chain-id int chain id (default 1337) + -l, --consensus-layer string consensus layer (prysm) (default "prysm") + -t, --consensus-type string consensus type (pow or pos) (default "pos") + -e, --execution-layer string execution layer (geth, nethermind, besu or erigon) (default "geth") + -w, --wait-for-finalization wait for finalization of at least 1 epoch (might take up to 5 minutes) + --consensus-client-image string custom Docker image for consensus layer client + --execution-layer-image string custom Docker image for execution layer client + --validator-image string custom Docker image for validator +``` + +To connect to that environment in your tests use the following code: + +```go + builder := NewEthereumNetworkBuilder() + cfg, err := builder. + WithExistingConfigFromEnvVar(). 
+ Build() + + if err != nil { + return err + } + + net, rpc, err := cfg.Start() + if err != nil { + return err + } +``` + +The builder will read the location of the chain configuration from an env var named `PRIVATE_ETHEREUM_NETWORK_CONFIG_PATH` (it will be printed in the console once the chain starts). + +`net` is an instance of `blockchain.EVMNetwork`, which contains characteristics of the network and can be used to connect to it using an EVM client. The `rpc` variable contains arrays of public and private RPC endpoints, where "private" means a URL that's accessible from the same Docker network as the chain is running in. + +# Using LogStream + +LogStream is a package that allows you to connect to a Docker container and then flush logs to configured targets. Currently 3 targets are supported: + +- `file` - saves logs to a file in the `./logs` folder +- `loki` - sends logs to Loki +- `in-memory` - stores logs in memory + +It can be configured to use multiple targets at once. If no target is specified, it becomes a no-op. + +LogStream has to be configured by passing an instance of `LoggingConfig` to the constructor. + +When you connect a container, LogStream will create a new consumer and start a detached goroutine that listens to logs emitted by that container and which reconnects and re-requests logs if listening fails for whatever reason. Retry limit and timeout can both be configured using functional options. In most cases one container should have one consumer, but it's possible to have multiple consumers for one container. + +LogStream stores all logs in a temporary gob file. To actually send/save them, you need to flush them. When you do it, LogStream will decode the file and send logs to configured targets. If log handling results in an error it won't be retried and processing of logs for a given consumer will stop (if you think we should add a retry mechanism please let us know). + +_Important:_ Flushing and accepting logs is a blocking operation. 
That's because they both share the same cursor to the temporary file, and otherwise its position would be racy and could result in mixed-up logs. + +## Configuration + +The basic `LogStream` TOML configuration is as follows: + +```toml +[LogStream] +log_targets=["file"] +log_producer_timeout="10s" +log_producer_retry_limit=10 +``` + +You can find it here: [logging_default.toml](https://github.com/smartcontractkit/chainlink-testing-framework/blob/main/lib/config/tomls/logging_default.toml) + +When using the `in-memory` or `file` targets, no other configuration variables are required. When using the `loki` target, the following ones must be set: + +```toml +[Logging.Loki] +tenant_id="promtail" +url="https://change.me" +basic_auth_secret="my-secret-auth" +bearer_token_secret="bearer-token" +``` + +Also, remember that a different URL should be used when running in CI than everywhere else. In CI it should be a public endpoint, while in a local environment it should be a private one. + +If your test has a Grafana dashboard, provide the following config so that the URL is printed correctly: + +```toml +[Logging.Grafana] +url="http://grafana.somewhere.com/my_dashboard" +``` + +## Initialisation + +First you need to create a new instance: + +```golang +// t - instance of *testing.T (can be nil) +// testConfig.Logging - pointer to logging part of TestConfig +ls := logstream.NewLogStream(t, testConfig.Logging) +``` + +## Listening to logs + +If using `testcontainers-go` Docker containers, it is recommended to use lifecycle hooks for connecting and disconnecting LogStream from the container. 
You can do that when creating `ContainerRequest` in the following way: + +```golang +containerRequest := &tc.ContainerRequest{ + LifecycleHooks: []tc.ContainerLifecycleHooks{ + {PostStarts: []tc.ContainerHook{ + func(ctx context.Context, c tc.Container) error { + if ls != nil { + return ls.ConnectContainer(ctx, c, "custom-container-prefix-can-be-empty") + } + return nil + }, + }, + PostStops: []tc.ContainerHook{ + func(ctx context.Context, c tc.Container) error { + if ls != nil { + return ls.DisconnectContainer(c) + } + return nil + }, + }}, + }, + } +``` + +You can print the log location for each target using this function: `(m *LogStream) PrintLogTargetsLocations()`. For the `file` target it will print a relative folder path; for `loki` it will print the URL of a Grafana dashboard scoped to the current execution and container ids. For the `in-memory` target it's a no-op. + +It is recommended to shut down LogStream at the end of your tests. Here's an example: + +```golang +t.Cleanup(func() { + l.Warn().Msg("Shutting down Log Stream") + + if t.Failed() || os.Getenv("TEST_LOG_COLLECT") == "true" { + // we can't do much if this fails, so we just log the error + _ = logStream.FlushLogsToTargets() + // this will log log locations for each target (for file it will be a folder, for Loki a Grafana dashboard -- remember to provide its url in the config!) + logStream.PrintLogTargetsLocations() + // this will save log locations in the test summary, so that they can be easily accessed in GH's step summary + logStream.SaveLogLocationInTestSummary() + } + + // we can't do much if this fails, so we just log the error + _ = logStream.Shutdown(testcontext.Get(t)) + }) +``` + +or in a bit shorter way: + +```golang +t.Cleanup(func() { + l.Warn().Msg("Shutting down Log Stream") + + if t.Failed() || os.Getenv("TEST_LOG_COLLECT") == "true" { + // this will log log locations for each target (for file it will be a folder, for Loki a Grafana dashboard -- remember to provide its url in the config!) 
+		logStream.PrintLogTargetsLocations()
+		// this will save log locations in the test summary, so that they can be easily accessed in GH's step summary
+		logStream.SaveLogLocationInTestSummary()
+	}
+
+	// we can't do much if this fails, so we just ignore the error
+	_ = logStream.FlushAndShutdown()
+})
+```
+
+## Grouping test execution
+
+When running tests in CI you're probably interested in grouping logs by test execution, so that you can easily find them in Loki. To do that, your job should set the `RUN_ID` environment variable. In GHA it's recommended to set it to the workflow id. If that variable is not set, a run id will be automatically generated and saved in a `.run.id` file, so that it can be shared by tests that are part of the same execution but run in different processes.
+
+## Test Summary
+
+To facilitate displaying information in GH's step summary, the `testsummary` package was added. It exposes a single function: `AddEntry(testName, key string, value interface{})`. When you call it, it either creates a test summary JSON file or appends to it. The result is a map of keys with values.
+
+Example:
+
+```JSON
+{
+  "file":[
+    {
+      "test_name":"TestOCRv2Basic",
+      "value":"./logs/TestOCRv2Basic-2023-12-01T18-00-59-TestOCRv2Basic-38ac1e52-d0a6-48"
+    }
+  ],
+  "loki":[
+    {
+      "test_name":"TestOCRv2Basic",
+      "value":"https://grafana.ops.prod.cldev.sh/d/ddf75041-1e39-42af-aa46-361fe4c36e9e/ci-e2e-tests-logs?orgId=1\u0026var-run_id=TestOCRv2Basic-38ac1e52-d0a6-48\u0026var-container_id=cl-node-a179ca7d\u0026var-container_id=cl-node-76798f87\u0026var-container_id=cl-node-9ff7c3ae\u0026var-container_id=cl-node-43409b09\u0026var-container_id=cl-node-3b6810bd\u0026var-container_id=cl-node-69fed256\u0026from=1701449851165\u0026to=1701450124925"
+    }
+  ]
+}
+```
+
+In GHA, after tests have ended, we can use tools like `jq` to extract the information we need and display it in the step summary.
+
+# TOML Config
+
+Basic and universal building blocks for TOML-based config are provided by the `config` package.
For more information, read [this](lib/config/config.md).
+
+# ECR Mirror
+
+An ECR mirror can be used to mirror images we use often, in order to bypass rate-limit issues from Docker Hub. The list of image mirrors can be found in the [matrix here](https://github.com/smartcontractkit/chainlink-testing-framework/blob/main/.github/workflows/update-internal-mirrors.yaml). This currently works only with Docker Hub images that have version numbers, so putting `latest` will not work; support for GCR is coming in the future. We have a separate list for one-offs that can be added to [here](https://github.com/smartcontractkit/chainlink-testing-framework/blob/main/.github/actions/update-internal-mirrors/scripts/mirror.json), and it does work with GCR and `latest` images. Note, however, that a `latest` image will only be pulled once and will not be updated in our mirror if `latest` on the public repository changes; in this case it is preferable to update it manually when you know that you need the new `latest` and the update will not break your tests.
+
+For images in the mirrors you can use the `INTERNAL_DOCKER_REPO` environment variable when running tests, and it will use that mirror in place of the public repository.
+
+We have two ways to add new images to the ECR. In both cases you must first create the ECR repository in AWS with the same name as the one in Docker Hub, and then add that ECR to the infra permissions (ask TT if you don't know how to do this).
+
+1. If the image does not have version numbers, or is on GCR, you can add it [here](https://github.com/smartcontractkit/chainlink-testing-framework/blob/main/.github/actions/update-internal-mirrors/scripts/mirror.json)
+2. You can add the new image name, and an expression that selects the latest versions, to the [mirror matrix](https://github.com/smartcontractkit/chainlink-testing-framework/blob/main/.github/actions/update-internal-mirrors/scripts/update_mirrors.sh); they will be picked up when the workflow runs.
You can check the postgres entry in there for an example. Basically, the expression should filter out only the latest image (or two) for that particular version when calling the Docker Hub endpoint. Example curl call: `curl -s "https://hub.docker.com/v2/repositories/${image_name}/tags/?page_size=100" | jq -r '.results[].name' | grep -E ${image_expression}`, where `image_name` could be `library/postgres` and `image_expression` could be `'^[0-9]+\.[0-9]+$'`. Adding your ECR to this matrix should make sure we always have the latest versions for that expression.
+
+## Debugging HTTP and RPC calls
+
+```bash
+export SETH_LOG_LEVEL=info
+export RESTY_DEBUG=true
+```
+
+## Loki Client
+
+The `LokiClient` allows you to easily query Loki logs from your tests. It supports basic authentication and custom queries, and can be configured for (Resty) debug mode.
+
+### Debugging Resty and Loki Client
+
+```bash
+export LOKI_CLIENT_LOG_LEVEL=info
+export RESTY_DEBUG=true
+```
+
+### Example usage:
+
+```go
+auth := LokiBasicAuth{
+	Username: os.Getenv("LOKI_LOGIN"),
+	Password: os.Getenv("LOKI_PASSWORD"),
+}
+
+queryParams := LokiQueryParams{
+	Query:     `{namespace="test"} |= "test"`,
+	StartTime: time.Now().AddDate(0, 0, -1),
+	EndTime:   time.Now(),
+	Limit:     100,
+}
+
+lokiClient := NewLokiClient("https://loki.api.url", "my-tenant", auth, queryParams)
+logEntries, err := lokiClient.QueryLogs(context.Background())
+```
diff --git a/book/src/lib/blockchain.md b/book/src/lib/blockchain.md
new file mode 100644
index 000000000..5100655fa
--- /dev/null
+++ b/book/src/lib/blockchain.md
@@ -0,0 +1,45 @@
+# Blockchain Clients
+
+<div class="warning">
+ +This documentation is deprecated, we are using it in [Chainlink Integration Tests](https://github.com/smartcontractkit/chainlink/tree/develop/integration-tests) + +If you want to test our new products use [v2](../framework/overview.md) +
+ +This folder contains the bulk of code that handles integrating with different EVM chains. If you're looking to run tests on a new EVM chain, and are having issues with the default implementation, you've come to the right place. + +### Some Terminology + +- [L2 Chain](https://ethereum.org/en/layer-2/): A Layer 2 chain "branching" off Ethereum. +- [EVM](https://ethereum.org/en/developers/docs/evm/): Ethereum Virtual Machine that underpins the Ethereum blockchain. +- [EVM Compatible](https://blog.thirdweb.com/evm-compatible-blockchains-and-ethereum-virtual-machine/#:~:text=What%20does%20'EVM%20compatibility'%20mean,significant%20changes%20to%20their%20code.): A chain that has some large, underlying differences from how base Ethereum works, but can still be interacted with largely the same way as Ethereum. +- [EIP-1559](https://eips.ethereum.org/EIPS/eip-1559): The Ethereum Improvement Proposal that changed how gas fees are calculated and paid on Ethereum +- Legacy Transactions: Transactions that are sent using the old gas fee calculation method, the one used before EIP-1559. +- Dynamic Fee Transaction: Transactions that are sent using the new gas fee calculation method, the one used after EIP-1559. + +## How Client Integrations Work + +In order to test Chainlink nodes, the `chainlink-testing-framework` needs to be able to interact with the chain that the node is running on. This is done through the `blockchain.EVMClient` interface. The `EVMClient` interface is a wrapper around [geth](https://geth.ethereum.org/) to interact with the blockchain. We conduct all our testing blockchain operations through this wrapper, like sending transactions and monitoring on-chain events. The primary implementation of this wrapper is built for [Ethereum](./ethereum.go). Most others, like the [Metis](./metis.go) and [Optimism](./optimism.go) integrations, extend and modify the base Ethereum implementation. + +## Do I Need a New Integration? + +If you're reading this, probably. 
The default EVM integration is designed to work with mainnet Ethereum, which covers most other EVM chain interactions, but it's not guaranteed to work with all of them. If you're on a new chain and the test framework is throwing errors while doing basic things like sending transactions, receiving new headers, or deploying contracts, you'll likely need to create a new integration. The most common issues with new chains (especially L2s) are gas estimation and lack of support for dynamic fee transactions.
+
+## Creating a New Integration
+
+Take a look at the [Metis](./metis.go) integration as an example. Metis is an L2, EVM-compatible chain. It's largely the same as the base Ethereum integration, so we'll extend from that.
+
+```go
+type MetisMultinodeClient struct {
+	*EthereumMultinodeClient
+}
+
+type MetisClient struct {
+	*EthereumClient
+}
+```
+
+Now we need to let other libraries (like our tests in the main Chainlink repo) know that this integration exists. So we add the new implementation to the [known_networks.go](https://github.com/smartcontractkit/chainlink-testing-framework/blob/main/lib/blockchain/known_networks.go) file. We can then add that network to our tests' own [known_networks.go](https://github.com/smartcontractkit/chainlink/blob/develop/integration-tests/known_networks.go) file (it's annoying; there are plans to simplify this).
+
+Now our Metis integration is exactly the same as our base Ethereum one, which doesn't do us much good. I'm assuming you came here to make some changes, so first let's find out what we need to change. This is a mix of reading developer documentation on the chain you're testing and trial and error (mostly the latter in later stages). In the case of Metis, like many L2s, they [have their own spin on gas fees](https://docs.metis.io/dev/protocol-in-detail/transaction-fees-on-the-metis-platform). They also only support Legacy transactions.
So we'll need to override any methods that deal with gas estimations, `Fund`, `DeployContract`, and `ReturnFunds`. diff --git a/book/src/lib/config/config.md b/book/src/lib/config/config.md new file mode 100644 index 000000000..33c574d52 --- /dev/null +++ b/book/src/lib/config/config.md @@ -0,0 +1,377 @@ +# TOML Config + +
+ +This documentation is deprecated, we are using it in [Chainlink Integration Tests](https://github.com/smartcontractkit/chainlink/tree/develop/integration-tests) + +If you want to test our new products use [v2](../framework/overview.md) +
+ +These basic building blocks can be used to create a TOML config file. For example: + +```golang +import ( + ctf_config "github.com/smartcontractkit/chainlink-testing-framework/config" + ctf_test_env "github.com/smartcontractkit/chainlink-testing-framework/docker/test_env" +) + +type TestConfig struct { + ChainlinkImage *ctf_config.ChainlinkImageConfig `toml:"ChainlinkImage"` + ChainlinkUpgradeImage *ctf_config.ChainlinkImageConfig `toml:"ChainlinkUpgradeImage"` + Logging *ctf_config.LoggingConfig `toml:"Logging"` + Network *ctf_config.NetworkConfig `toml:"Network"` + Pyroscope *ctf_config.PyroscopeConfig `toml:"Pyroscope"` + PrivateEthereumNetwork *ctf_test_env.EthereumNetwork `toml:"PrivateEthereumNetwork"` +} +``` + +It's up to the user to provide a way to read the config from file and unmarshal it into the struct. You can check [testconfig.go](https://github.com/smartcontractkit/chainlink-testing-framework/blob/main/lib/config/examples/testconfig.go) to see one way it could be done. + +`Validate()` should be used to ensure that the config is valid. Some of the building blocks have also a `Default()` method that can be used to get default values. + +Also, you might find `BytesToAnyTomlStruct(logger zerolog.Logger, filename, configurationName string, target any, content []byte) error` utility method useful for unmarshalling TOMLs read from env var or files into a struct + +## Test Secrets + +Test secrets are not stored directly within the `TestConfig` TOML due to security reasons. Instead, they are passed into `TestConfig` via environment variables. Below is a list of all available secrets. Set only the secrets required for your specific tests, like so: `E2E_TEST_CHAINLINK_IMAGE=qa_ecr_image_url`. + +### Default Secret Loading + +By default, secrets are loaded from the `~/.testsecrets` dotenv file. 
Example of a local `~/.testsecrets` file: + +```bash +E2E_TEST_CHAINLINK_IMAGE=qa_ecr_image_url +E2E_TEST_CHAINLINK_UPGRADE_IMAGE=qa_ecr_image_url +E2E_TEST_ARBITRUM_SEPOLIA_WALLET_KEY=wallet_key +``` + +### All E2E Test Secrets + +| Secret | Env Var | Example | +| ----------------------------- | ------------------------------------------------------------------- | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | +| Chainlink Image | `E2E_TEST_CHAINLINK_IMAGE` | `E2E_TEST_CHAINLINK_IMAGE=qa_ecr_image_url` | +| Chainlink Upgrade Image | `E2E_TEST_CHAINLINK_UPGRADE_IMAGE` | `E2E_TEST_CHAINLINK_UPGRADE_IMAGE=qa_ecr_image_url` | +| Wallet Key per network | `E2E_TEST_(.+)_WALLET_KEY` or `E2E_TEST_(.+)_WALLET_KEY_(\d+)$` | `E2E_TEST_ARBITRUM_SEPOLIA_WALLET_KEY=wallet_key` or `E2E_TEST_ARBITRUM_SEPOLIA_WALLET_KEY_1=wallet_key_1`, `E2E_TEST_ARBITRUM_SEPOLIA_WALLET_KEY_2=wallet_key_2` for multiple keys per network | +| RPC HTTP URL per network | `E2E_TEST_(.+)_RPC_HTTP_URL` or `E2E_TEST_(.+)_RPC_HTTP_URL_(\d+)$` | `E2E_TEST_ARBITRUM_SEPOLIA_RPC_HTTP_URL=url` or `E2E_TEST_ARBITRUM_SEPOLIA_RPC_HTTP_URL_1=url`, `E2E_TEST_ARBITRUM_SEPOLIA_RPC_HTTP_URL_2=url` for multiple http urls per network | +| RPC WebSocket URL per network | `E2E_TEST_(.+)_RPC_WS_URL` or `E2E_TEST_(.+)_RPC_WS_URL_(\d+)$` | `E2E_TEST_ARBITRUM_RPC_WS_URL=ws_url` or `E2E_TEST_ARBITRUM_RPC_WS_URL_1=ws_url_1`, `E2E_TEST_ARBITRUM_RPC_WS_URL_2=ws_url_2` for multiple ws urls per network | +| Loki Tenant ID | `E2E_TEST_LOKI_TENANT_ID` | `E2E_TEST_LOKI_TENANT_ID=tenant_id` | +| Loki Endpoint | `E2E_TEST_LOKI_ENDPOINT` | `E2E_TEST_LOKI_ENDPOINT=url` | +| Loki Basic Auth | `E2E_TEST_LOKI_BASIC_AUTH` | `E2E_TEST_LOKI_BASIC_AUTH=token` | +| Loki Bearer Token | `E2E_TEST_LOKI_BEARER_TOKEN` | `E2E_TEST_LOKI_BEARER_TOKEN=token` | +| Grafana Bearer Token | 
`E2E_TEST_GRAFANA_BEARER_TOKEN` | `E2E_TEST_GRAFANA_BEARER_TOKEN=token` | +| Pyroscope Server URL | `E2E_TEST_PYROSCOPE_SERVER_URL` | `E2E_TEST_PYROSCOPE_SERVER_URL=url` | +| Pyroscope Key | `E2E_TEST_PYROSCOPE_KEY` | `E2E_TEST_PYROSCOPE_KEY=key` | + +### Run GitHub Workflow with Your Test Secrets + +By default, GitHub workflows execute with a set of predefined secrets. However, you can use custom secrets by specifying a unique identifier for your secrets when running the `gh workflow` command. + +#### Steps to Use Custom Secrets + +1. **Upload Local Secrets to GitHub Secrets Vault:** + + - **Install `ghsecrets` tool:** + Install the `ghsecrets` tool to manage GitHub Secrets more efficiently. + + ```bash + go install github.com/smartcontractkit/chainlink-testing-framework/tools/ghsecrets@latest + ``` + + If you use `asdf`, run `asdf reshim` + + - **Upload Secrets:** + Run `ghsecrets set` from local core repo to upload the content of your `~/.testsecrets` file to the GitHub Secrets Vault and generate a unique identifier (referred to as `your_ghsecret_id`). + + ```bash + cd path-to-chainlink-core-repo + ``` + + ```bash + ghsecrets set + ``` + + For more details about `ghsecrets`, visit https://github.com/smartcontractkit/chainlink-testing-framework/tree/main/tools/ghsecrets#faq + +2. **Execute the Workflow with Custom Secrets:** + - To use the custom secrets in your GitHub Actions workflow, pass the `-f test_secrets_override_key={your_ghsecret_id}` flag when running the `gh workflow` command. + ```bash + gh workflow run -f test_secrets_override_key={your_ghsecret_id} + ``` + +#### Default Secrets Handling + +If the `test_secrets_override_key` is not provided, the workflow will default to using the secrets preconfigured in the CI environment. 
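Since only `E2E_TEST_`-prefixed variables are treated as test secrets, it can help to sanity-check your dotenv file before uploading it. A minimal sketch (the helper name is ours, not part of the framework):

```bash
# Print entries of a dotenv secrets file that lack the E2E_TEST_ prefix;
# such entries will not be picked up as test secrets.
check_secret_prefixes() {
  local file="$1"
  # keep lines that are not comments, not blank, and not E2E_TEST_-prefixed
  grep -vE '^(#|$|E2E_TEST_)' "$file" || true
}

# demo on a throwaway file; point it at ~/.testsecrets in practice
tmp=$(mktemp)
printf 'E2E_TEST_CHAINLINK_IMAGE=qa_ecr_image_url\nMY_LOCAL_VAR=1\n' > "$tmp"
check_secret_prefixes "$tmp"
rm -f "$tmp"
```

Anything the helper prints is silently ignored by the secret loading, so either rename it with the `E2E_TEST_` prefix or remove it.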
+
+### Creating New Test Secrets in TestConfig
+
+When adding a new secret to the `TestConfig`, such as a token or other sensitive information, the method `ReadConfigValuesFromEnvVars()` in `config/testconfig.go` must be extended to include the new secret. Ensure that the new environment variable starts with the `E2E_TEST_` prefix. This prefix is crucial for ensuring that the secret is correctly propagated to Kubernetes tests when using the Remote Runner.
+
+Here’s a quick checklist for adding a new test secret:
+
+- Add the secret to `~/.testsecrets` with the `E2E_TEST_` prefix to ensure proper handling.
+- Extend the `config/testconfig.go:ReadConfigValuesFromEnvVars()` method to load the secret into `TestConfig`
+- Add the secret to the [All E2E Test Secrets](#all-e2e-test-secrets) table.
+
+## Working example
+
+For a full working example making use of all the building blocks see [testconfig.go](https://github.com/smartcontractkit/chainlink-testing-framework/blob/main/lib/config/examples/testconfig.go). It provides methods for reading TOML, applying overrides and validating non-empty config blocks. It supports 4 levels of overrides, in order of precedence:
+
+- `BASE64_CONFIG_OVERRIDE` env var
+- `overrides.toml`
+- `[product_name].toml`
+- `default.toml`
+
+All you need to do now to get the config is execute `func GetConfig(configurationName string, product string) (TestConfig, error)`. It will first look for a folder containing a `.root_dir` file and from there it will look for config files in all subfolders, so you can place the config files in whatever folder(s) work for you. It assumes that all configuration versions for a single product are kept in `[product_name].toml` under different configuration names (which can represent anything you want: a single test, a test type, a test group, etc).
+
+Overrides of config files are done in a very simple way: we try to unmarshal consecutive files into the same struct.
Since it's all pointer-based, only non-nil keys are overwritten.
+
+## IMPORTANT!
+
+It is **required** to add `overrides.toml` to `.gitignore` in your project, so that you don't accidentally commit it, as it might contain secrets.
+
+## Network config (and default RPC endpoints)
+
+Some more explanation is needed for the `NetworkConfig`:
+
+```golang
+type NetworkConfig struct {
+	// list of networks that should be used for testing
+	SelectedNetworks []string `toml:"selected_networks"`
+	// map of network name to EVMNetwork where key is network name and value is a pointer to EVMNetwork
+	// if not set, it will try to find the network among the networks defined in MappedNetworks under known_networks.go
+	// it doesn't matter if you use `arbitrum_sepolia` or `ARBITRUM_SEPOLIA` or even `arbitrum_SEPOLIA` as the key,
+	// as all keys are uppercased when loading the Default config
+	EVMNetworks map[string]*blockchain.EVMNetwork `toml:"EVMNetworks,omitempty"`
+	// map of network name to ForkConfig where key is network name and value is a pointer to ForkConfig
+	// only used if a network fork is needed; if provided, the network will be forked with the given config
+	// the network name is looked up first in EVMNetworks and,
+	// if not defined there, among the networks defined in MappedNetworks under known_networks.go
+	ForkConfigs map[string]*ForkConfig `toml:"ForkConfigs,omitempty"`
+	// map of network name to RPC endpoints where key is network name and value is a list of RPC HTTP endpoints
+	RpcHttpUrls map[string][]string `toml:"RpcHttpUrls"`
+	// map of network name to RPC endpoints where key is network name and value is a list of RPC WS endpoints
+	RpcWsUrls map[string][]string `toml:"RpcWsUrls"`
+	// map of network name to wallet keys where key is network name and value is a list of private keys (aka funding keys)
+	WalletKeys map[string][]string `toml:"WalletKeys"`
+}
+
+func (n *NetworkConfig) Default() error {
+	...
+}
+```
+
+Sample TOML config:
+
+```toml
+selected_networks = ["arbitrum_goerli", "optimism_goerli", "new_network"]
+
+[EVMNetworks.new_network]
+evm_name = "new_test_network"
+evm_chain_id = 100009
+evm_simulated = true
+evm_chainlink_transaction_limit = 5000
+evm_minimum_confirmations = 1
+evm_gas_estimation_buffer = 10000
+client_implementation = "Ethereum"
+evm_supports_eip1559 = true
+evm_default_gas_limit = 6000000
+
+[ForkConfigs.new_network]
+url = "ws://localhost:8546"
+block_number = 100
+
+[RpcHttpUrls]
+arbitrum_goerli = ["https://devnet-2.mt/ABC/rpc/"]
+new_network = ["http://localhost:8545"]
+
+[RpcWsUrls]
+arbitrum_goerli = ["wss://devnet-2.mt/ABC/ws/"]
+new_network = ["ws://localhost:8546"]
+
+[WalletKeys]
+arbitrum_goerli = ["1810868fc221b9f50b5b3e0186d8a5f343f892e51ce12a9e818f936ec0b651ed"]
+optimism_goerli = ["1810868fc221b9f50b5b3e0186d8a5f343f892e51ce12a9e818f936ec0b651ed"]
+new_network = ["ac0974bec39a17e36ba4a6b4d238ff944bacb478cbed5efcae784d7bf4f2ff80"]
+```
+
+Whenever you are adding a new EVMNetwork to the config, you can either:
+
+- provide the RPCs and wallet keys in `RpcHttpUrls`, `RpcWsUrls` and `WalletKeys`, like in the example above, where `new_network` is added to `selected_networks` and `EVMNetworks`, and its RPCs and wallet keys are provided in `RpcHttpUrls`, `RpcWsUrls` and `WalletKeys` respectively; or
+- provide the RPCs and wallet keys in the `EVMNetworks` entry itself, like in the example below, where `new_network` is added to `selected_networks` and `EVMNetworks`, and its RPCs and wallet keys are provided in `EVMNetworks` itself.
+
+```toml
+selected_networks = ["new_network"]
+
+[EVMNetworks.new_network]
+evm_name = "new_test_network"
+evm_chain_id = 100009
+evm_urls = ["ws://localhost:8546"]
+evm_http_urls = ["http://localhost:8545"]
+evm_keys = ["ac0974bec39a17e36ba4a6b4d238ff944bacb478cbed5efcae784d7bf4f2ff80"]
+evm_simulated = true
+evm_chainlink_transaction_limit = 5000
+evm_minimum_confirmations = 1
+evm_gas_estimation_buffer = 10000
+client_implementation = "Ethereum"
+evm_supports_eip1559 = true
+evm_default_gas_limit = 6000000
+```
+
+If your config struct looks like this:
+
+```golang
+type TestConfig struct {
+	Network *ctf_config.NetworkConfig `toml:"Network"`
+}
+```
+
+then your TOML file should look like this:
+
+```toml
+[Network]
+selected_networks = ["arbitrum_goerli","new_network"]
+
+[Network.EVMNetworks.new_network]
+evm_name = "new_test_network"
+evm_chain_id = 100009
+evm_simulated = true
+evm_chainlink_transaction_limit = 5000
+evm_minimum_confirmations = 1
+evm_gas_estimation_buffer = 10000
+client_implementation = "Ethereum"
+evm_supports_eip1559 = true
+evm_default_gas_limit = 6000000
+
+[Network.RpcHttpUrls]
+arbitrum_goerli = ["https://devnet-2.mt/ABC/rpc/"]
+new_network = ["http://localhost:8545"]
+
+[Network.RpcWsUrls]
+arbitrum_goerli = ["ws://devnet-2.mt/ABC/rpc/"]
+new_network = ["ws://localhost:8546"]
+
+[Network.WalletKeys]
+arbitrum_goerli = ["1810868fc221b9f50b5b3e0186d8a5f343f892e51ce12a9e818f936ec0b651ed"]
+new_network = ["ac0974bec39a17e36ba4a6b4d238ff944bacb478cbed5efcae784d7bf4f2ff80"]
+```
+
+If in your product config you want to support case-insensitive network names and map keys, remember to run `NetworkConfig.UpperCaseNetworkNames()` on your config before using it.
+
+## Providing custom values in the CI
+
+Up to this point, when we wanted to modify some dynamic test parameters in CI, we would simply set env vars. That approach won't work anymore.
The way around it is to build a TOML file, `base64` it, mask it, and then set it as the `BASE64_CONFIG_OVERRIDE` env var that will be read by tests. Here's an example of how that could look:
+
+```bash
+convert_to_toml_array() {
+  local IFS=','
+  local input_array=($1)
+  local toml_array_format="["
+
+  for element in "${input_array[@]}"; do
+    toml_array_format+="\"$element\","
+  done
+
+  toml_array_format="${toml_array_format%,}]"
+  echo "$toml_array_format"
+}
+
+selected_networks=$(convert_to_toml_array "$SELECTED_NETWORKS")
+log_targets=$(convert_to_toml_array "$LOGSTREAM_LOG_TARGETS")
+
+if [ -n "$PYROSCOPE_SERVER" ]; then
+  pyroscope_enabled=true
+else
+  pyroscope_enabled=false
+fi
+
+if [ -n "$ETH2_EL_CLIENT" ]; then
+  execution_layer="$ETH2_EL_CLIENT"
+else
+  execution_layer="geth"
+fi
+
+if [ -n "$TEST_LOG_COLLECT" ]; then
+  test_log_collect=true
+else
+  test_log_collect=false
+fi
+
+cat << EOF > config.toml
+[Network]
+selected_networks=$selected_networks
+
+[ChainlinkImage]
+image="$CHAINLINK_IMAGE"
+version="$CHAINLINK_VERSION"
+
+[Pyroscope]
+enabled=$pyroscope_enabled
+server_url="$PYROSCOPE_SERVER"
+environment="$PYROSCOPE_ENVIRONMENT"
+key_secret="$PYROSCOPE_KEY"
+
+[Logging]
+test_log_collect=$test_log_collect
+run_id="$RUN_ID"
+
+[Logging.LogStream]
+log_targets=$log_targets
+
+[Logging.Loki]
+tenant_id="$LOKI_TENANT_ID"
+url="$LOKI_URL"
+basic_auth_secret="$LOKI_BASIC_AUTH"
+bearer_token_secret="$LOKI_BEARER_TOKEN"
+
+[Logging.Grafana]
+url="$GRAFANA_URL"
+EOF
+
+BASE64_CONFIG_OVERRIDE=$(cat config.toml | base64 -w 0)
+echo ::add-mask::$BASE64_CONFIG_OVERRIDE
+echo "BASE64_CONFIG_OVERRIDE=$BASE64_CONFIG_OVERRIDE" >> $GITHUB_ENV
+```
+
+**These two lines in that very order are super important**
+
+```bash
+BASE64_CONFIG_OVERRIDE=$(cat config.toml | base64 -w 0)
+echo ::add-mask::$BASE64_CONFIG_OVERRIDE
+```
+
+`::add-mask::` has to be called only after the env var has been set to its final value; otherwise it won't be recognized
and masked properly and secrets will be exposed in the logs. + +## Providing custom values for local execution + +For local execution it's best to put custom variables in `overrides.toml` file. + +## Providing custom values in k8s + +It's easy. All you need to do is: + +- Create TOML file with these values +- Base64 it: `cat your.toml | base64` +- Set the base64 result as `BASE64_CONFIG_OVERRIDE` environment variable. + +`BASE64_CONFIG_OVERRIDE` will be automatically forwarded to k8s (as long as it is set and available to the test process), when creating the environment programmatically via `environment.New()`. + +Quick example: + +```bash +BASE64_CONFIG_OVERRIDE=$(cat your.toml | base64) go test your-test-that-runs-in-k8s ./file/with/your/test +``` + +# Not moved to TOML + +Not moved to TOML: + +- `SLACK_API_KEY` +- `SLACK_USER` +- `SLACK_CHANNEL` +- `TEST_LOG_LEVEL` +- `CHAINLINK_ENV_USER` +- `DETACH_RUNNER` +- `ENV_JOB_IMAGE` +- most of k8s-specific env variables were left untouched diff --git a/book/src/lib/crib.md b/book/src/lib/crib.md new file mode 100644 index 000000000..63537d63d --- /dev/null +++ b/book/src/lib/crib.md @@ -0,0 +1,7 @@ +### CRIB Connector + +
+ +`GAPv1` won't be supported in the future, you can still use [this example](https://github.com/smartcontractkit/chainlink/tree/develop/integration-tests/crib), [CI run](https://github.com/smartcontractkit/chainlink/actions/workflows/crib-integration-test.yml) but expect this to be changed. + +
diff --git a/lib/k8s/KUBERNETES.md b/book/src/lib/k8s/KUBERNETES.md similarity index 77% rename from lib/k8s/KUBERNETES.md rename to book/src/lib/k8s/KUBERNETES.md index 676999174..a80756a00 100644 --- a/lib/k8s/KUBERNETES.md +++ b/book/src/lib/k8s/KUBERNETES.md @@ -1,5 +1,13 @@ # Kubernetes + +
+ +Managing k8s is challenging, so we've decided to separate `k8s` deployments here - [CRIB](https://github.com/smartcontractkit/crib) + +This documentation is outdated, and we are using it only internally to run our soak tests. For `v2` tests please check [this example](../crib.md) and read [CRIB docs](https://github.com/smartcontractkit/crib) +
+ We run our software in Kubernetes. ### Local k3d setup diff --git a/lib/k8s/REMOTE_RUN.md b/book/src/lib/k8s/REMOTE_RUN.md similarity index 69% rename from lib/k8s/REMOTE_RUN.md rename to book/src/lib/k8s/REMOTE_RUN.md index fb2e75e75..6453c5cea 100644 --- a/lib/k8s/REMOTE_RUN.md +++ b/book/src/lib/k8s/REMOTE_RUN.md @@ -1,7 +1,15 @@ ## How to run the same environment deployment inside k8s +
+ +Managing k8s is challenging, so we've decided to separate `k8s` deployments here - [CRIB](https://github.com/smartcontractkit/crib) + +This documentation is outdated, and we are using it only internally to run our soak tests. For `v2` tests please check [this example](../crib.md) and read [CRIB docs](https://github.com/smartcontractkit/crib) +
+ + You can build a `Dockerfile` to run exactly the same environment interactions inside k8s in case you need to run long-running tests -Base image is [here](Dockerfile.base) +Base image is [here](https://github.com/smartcontractkit/chainlink-testing-framework/blob/main/lib/k8s/Dockerfile.base) ```Dockerfile FROM .dkr.ecr.us-west-2.amazonaws.com/test-base-image:latest diff --git a/lib/k8s/TUTORIAL.md b/book/src/lib/k8s/TUTORIAL.md similarity index 98% rename from lib/k8s/TUTORIAL.md rename to book/src/lib/k8s/TUTORIAL.md index db85091a2..f5f3ba471 100644 --- a/lib/k8s/TUTORIAL.md +++ b/book/src/lib/k8s/TUTORIAL.md @@ -1,5 +1,13 @@ # How to create environments +
+ +Managing k8s is challenging, so we've decided to separate `k8s` deployments here - [CRIB](https://github.com/smartcontractkit/crib) + +This documentation is outdated, and we are using it only internally to run our soak tests. For `v2` tests please check [this example](../crib.md) and read [CRIB docs](https://github.com/smartcontractkit/crib) +
+ + - [Getting started](#getting-started) - [Connect to environment](#connect-to-environment) - [Creating environments](#creating-environments) diff --git a/book/src/secrets.md b/book/src/secrets.md index d407b1ac5..ae0ef7eec 100644 --- a/book/src/secrets.md +++ b/book/src/secrets.md @@ -2,7 +2,7 @@ `client/secretsmanager.go` has a simple API to read/write/delete secrets. -It uses a struct to protect such secrets from accidental printing or marshalling, see an [example](client/secretsmanager_test.go) test +It uses a struct to protect such secrets from accidental printing or marshalling, see an [example](../../lib/client/secretsmanager_test.go) test ## Using AWSSecretsManager via CLI diff --git a/framework/README.md b/framework/README.md index 93deeb164..0f4b80b15 100644 --- a/framework/README.md +++ b/framework/README.md @@ -1,31 +1,5 @@ -## Chainlink Testing Framework Harness +# Framework - -* [CLI](./cmd/README.md) -* [Components](./COMPONENTS.md) -* [Configuration](./CONFIGURATION.md) -* [Caching](./CACHING.md) -* [Local Observability Stack](./cmd/observability/README.md) -* [Examples](https://github.com/smartcontractkit/chainlink/tree/8e8597aa14c39c48ed4b3261f6080fa43b5d7cd0/e2e/capabilities) - +Modular and data-driven harness for Chainlink on-chain and off-chain components. -This module includes the CTFv2 harness, a lightweight, modular, and data-driven framework designed for combining off-chain and on-chain components while implementing best practices for end-to-end system-level testing: - -- **Non-nil configuration**: All test variables must have defaults, automatic validation. -- **Component isolation**: Components are decoupled via input/output structs, without exposing internal details. -- **Modular configuration**: No arcane knowledge of framework settings is required; the config is simply a reflection of the components being used in the test. Components declare their own configuration—'what you see is what you get.' 
-- **Replaceability and extensibility**: Since components are decoupled via outputs, any deployment component can be swapped with a real service without altering the test code. -- **Caching**: any component can use cached configs to skip environment setup for faster test development -- **Integrated observability stack**: use `ctf obs up` to spin up a local observability stack. - - -### Environment variables (Tests, when using in Go code) -| Name | Description | Possible values | Default | Required? | -|:----------------------------:|:------------------------------------------------------------------------------------------------------------------------------------------------------:|-------------------------:|:-------:|:------------------------:| -| CTF_CONFIGS | Path(s) to test config files.
Can be more than one, ex.: smoke.toml,smoke_1.toml,smoke_2.toml.
First filepath will hold all the merged values | Any valid TOML file path | | ✅ |
-| CTF_LOG_LEVEL | Harness log level | `info`, `debug`, `trace` | `info` | 🚫 |
-| CTF_LOKI_STREAM | Streams all components logs to `Loki`, see params below | `true`, `false` | `false` | 🚫 |
-| LOKI_URL | URL to `Loki` push api, should be like`${host}/loki/api/v1/push` | URL | - | If you use `Loki` then ✅ |
-| LOKI_TENANT_ID | Streams all components logs to `Loki`, see params below | `true`, `false` | - | If you use `Loki` then ✅ |
-| TESTCONTAINERS_RYUK_DISABLED | Testcontainers-Go reaper container, removes all the containers after the test exit | `true`, `false` | `false` | 🚫 |
-| RESTY_DEBUG | Log all Resty client HTTP calls | `true`, `false` | `false` | 🚫 |
\ No newline at end of file
+[![Documentation](https://img.shields.io/badge/Documentation-MDBook-blue?style=for-the-badge)](https://smartcontractkit.github.io/chainlink-testing-framework/framework/overview.html)
\ No newline at end of file
diff --git a/framework/cmd/README.md b/framework/cmd/README.md
deleted file mode 100644
index 8ea0ec309..000000000
--- a/framework/cmd/README.md
+++ /dev/null
@@ -1,11 +0,0 @@
-## CLI
-### Install
-```
-go get github.com/smartcontractkit/chainlink-testing-framework/framework/cmd && \
-go install github.com/smartcontractkit/chainlink-testing-framework/framework/cmd && \
-mv ~/go/bin/cmd ~/go/bin/ctf
-```
-### Usage
-```
-ctf -h
-```
\ No newline at end of file
diff --git a/framework/cmd/observability/README.md b/framework/cmd/observability/README.md
deleted file mode 100644
index a20e74e99..000000000
--- a/framework/cmd/observability/README.md
+++ /dev/null
@@ -1,11 +0,0 @@
-## Observability tools
-We have some observability tools we use with our harness, you can use them by calling
-```
-ctf obs up
-```
-Change your `Loki` config in your `.envrc` you use to run tests
-```
-export LOKI_TENANT_ID=promtail
-export LOKI_URL=http://host.docker.internal:3030/loki/api/v1/push
-```
-Then check 
[Loki](http://localhost:3000/explore?panes=%7B%220EE%22:%7B%22datasource%22:%22P8E80F9AEF21F6940%22,%22queries%22:%5B%7B%22refId%22:%22A%22,%22expr%22:%22%7Bjob%3D%5C%22ctf%5C%22%7D%22,%22queryType%22:%22range%22,%22datasource%22:%7B%22type%22:%22loki%22,%22uid%22:%22P8E80F9AEF21F6940%22%7D,%22editorMode%22:%22code%22%7D%5D,%22range%22:%7B%22from%22:%22now-5m%22,%22to%22:%22now%22%7D%7D%7D&schemaVersion=1&orgId=1) logs \ No newline at end of file diff --git a/havoc/README.md b/havoc/README.md index 0a0aaff3c..f15820690 100644 --- a/havoc/README.md +++ b/havoc/README.md @@ -1,211 +1,5 @@ ## Havoc -The `havoc` package is a Go library designed to facilitate chaos testing within Kubernetes environments using Chaos Mesh. It offers a structured way to define, execute, and manage chaos experiments as code, directly integrated into Go applications or testing suites. This package simplifies the creation and control of Chaos Mesh experiments, including network chaos, pod failures, and stress testing on Kubernetes clusters. +The `havoc` package is a Go library designed to facilitate chaos testing within Kubernetes environments using Chaos Mesh. -### Features - -- **Chaos Object Management:** Easily create, update, pause, resume, and delete chaos experiments using Go structures and methods. -- **Lifecycle Hooks:** Utilize chaos listeners to hook into lifecycle events of chaos experiments, such as creation, start, pause, resume, and finish. -- **Support for Various Chaos Experiments:** Create and manage different types of chaos experiments like NetworkChaos, IOChaos, StressChaos, PodChaos, and HTTPChaos. -- **Chaos Experiment Status Monitoring:** Monitor and react to the status of chaos experiments programmatically. - -### Installation - -To use `havoc` in your project, ensure you have a Go environment setup. 
Then, install the package using go get: - -``` -go get -u github.com/smartcontractkit/chainlink-testing-framework/havoc -``` - -Ensure your Kubernetes cluster is accessible and that you have Chaos Mesh installed and configured. - -### Monitoring and Observability in Chaos Experiments - -`havoc` enhances chaos experiment observability through structured logging and Grafana annotations, facilitated by implementing the ChaosListener interface. This approach allows for detailed monitoring, debugging, and visual representation of chaos experiments' impact. - -#### Structured Logging with ChaosLogger - -`ChaosLogger` leverages the zerolog library to provide structured, queryable logging of chaos events. It automatically logs key lifecycle events such as creation, start, pause, and termination of chaos experiments, including detailed contextual information. - -Instantiate `ChaosLogger` and register it as a listener to your chaos experiments: - -``` -logger := havoc.NewChaosLogger() -chaos.AddListener(logger) -``` - -### Default package logger - -`havoc/logger.go` contains default `Logger` instance for the package. - -#### Visual Monitoring with Grafana Annotations - -`SingleLineGrafanaAnnotator` is a `ChaosListener` that annotates Grafana dashboards with chaos experiment events. This visual representation helps correlate chaos events with their effects on system metrics and logs. - -Initialize `SingleLineGrafanaAnnotator` with your Grafana instance details and register it alongside `ChaosLogger`: - -``` -annotator := havoc.NewSingleLineGrafanaAnnotator( - "http://grafana-instance.com", - "grafana-access-token", - "dashboard-uid", -) -chaos.AddListener(annotator) -``` - -### Creating a Chaos Experiment - -To create a chaos experiment, define the chaos object options, initialize a chaos experiment with NewChaos, and then call Create to start the experiment. 
- -Here is an example of creating and starting a PodChaos experiment: - -``` -package main - -import ( - "context" - "github.com/smartcontractkit/chainlink-testing-framework/havoc" - "github.com/chaos-mesh/chaos-mesh/api/v1alpha1" - "sigs.k8s.io/controller-runtime/pkg/client" - "time" -) - -func main() { - // Initialize dependencies - client, err := havoc.NewChaosMeshClient() - if err != nil { - panic(err) - } - logger := havoc.NewChaosLogger() - annotator := havoc.NewSingleLineGrafanaAnnotator( - "http://grafana-instance.com", - "grafana-access-token", - "dashboard-uid", - ) - - // Define chaos experiment - podChaos := &v1alpha1.PodChaos{ /* PodChaos spec */ } - chaos, err := havoc.NewChaos(havoc.ChaosOpts{ - Object: podChaos, - Description: "Pod failure example", - DelayCreate: 5 * time.Second, - Client: client, - }) - if err != nil { - panic(err) - } - - // Register listeners - chaos.AddListener(logger) - chaos.AddListener(annotator) - - // Start chaos experiment - chaos.Create(context.Background()) - - // Manage chaos lifecycle... 
-} -``` - -### Test Example - -``` -func TestChaosDON(t *testing.T) { - testDuration := time.Minute * 60 - - // Load test config - cfg := &config.MercuryQAEnvChaos{} - - // Define chaos experiments and their schedule - - k8sClient, err := havoc.NewChaosMeshClient() - require.NoError(t, err) - - // Test 3.2: Disable 2 nodes simultaneously - - podFailureChaos4, err := k8s_chaos.MercuryPodChaosSchedule(k8s_chaos.MercuryScheduledPodChaosOpts{ - Name: "schedule-don-ocr-node-failure-4", - Description: "Disable 2 nodes (clc-ocr-mercury-arb-testnet-qa-nodes-3 and clc-ocr-mercury-arb-testnet-qa-nodes-4)", - DelayCreate: time.Minute * 0, - Duration: time.Minute * 20, - Namespace: cfg.ChaosNodeNamespace, - PodSelector: v1alpha1.PodSelector{ - Mode: v1alpha1.AllMode, - Selector: v1alpha1.PodSelectorSpec{ - GenericSelectorSpec: v1alpha1.GenericSelectorSpec{ - Namespaces: []string{cfg.ChaosNodeNamespace}, - ExpressionSelectors: v1alpha1.LabelSelectorRequirements{ - { - Key: "app.kubernetes.io/instance", - Operator: "In", - Values: []string{ - "clc-ocr-mercury-arb-testnet-qa-nodes-3", - "clc-ocr-mercury-arb-testnet-qa-nodes-4", - }, - }, - }, - }, - }, - }, - Client: k8sClient, - }) - require.NoError(t, err) - - // Test 3.3: Disable 3 nodes simultaneously - - podFailureChaos5, err := k8s_chaos.MercuryPodChaosSchedule(k8s_chaos.MercuryScheduledPodChaosOpts{ - Name: "schedule-don-ocr-node-failure-5", - Description: "Disable 3 nodes (clc-ocr-mercury-arb-testnet-qa-nodes-3, clc-ocr-mercury-arb-testnet-qa-nodes-4 and clc-ocr-mercury-arb-testnet-qa-nodes-5)", - DelayCreate: time.Minute * 40, - Duration: time.Minute * 20, - Namespace: cfg.ChaosNodeNamespace, - PodSelector: v1alpha1.PodSelector{ - Mode: v1alpha1.AllMode, - Selector: v1alpha1.PodSelectorSpec{ - GenericSelectorSpec: v1alpha1.GenericSelectorSpec{ - Namespaces: []string{cfg.ChaosNodeNamespace}, - ExpressionSelectors: v1alpha1.LabelSelectorRequirements{ - { - Key: "app.kubernetes.io/instance", - Operator: "In", - Values: 
[]string{ - "clc-ocr-mercury-arb-testnet-qa-nodes-3", - "clc-ocr-mercury-arb-testnet-qa-nodes-4", - "clc-ocr-mercury-arb-testnet-qa-nodes-5", - }, - }, - }, - }, - }, - }, - Client: k8sClient, - }) - require.NoError(t, err) - - chaosList := []havoc.ChaosEntity{ - podFailureChaos4, - podFailureChaos5, - } - - for _, chaos := range chaosList { - chaos.AddListener(havoc.NewChaosLogger()) - chaos.AddListener(havoc.NewSingleLineGrafanaAnnotator(cfg.GrafanaURL, cfg.GrafanaToken, cfg.GrafanaDashboardUID)) - - // Fail the test if the chaos object already exists - exists, err := havoc.ChaosObjectExists(chaos.GetObject(), k8sClient) - require.NoError(t, err) - require.False(t, exists, "chaos object already exists: %s. Delete it before starting the test", chaos.GetChaosName()) - - chaos.Create(context.Background()) - } - - t.Cleanup(func() { - for _, chaos := range chaosList { - // Delete chaos object if it still exists - chaos.Delete(context.Background()) - } - }) - - // Simulate user activity/load for the duration of the chaos experiments - runUserLoad(t, cfg, testDuration) -} -``` +[![Documentation](https://img.shields.io/badge/Documentation-MDBook-blue?style=for-the-badge)](https://smartcontractkit.github.io/chainlink-testing-framework/libs/havoc.html) \ No newline at end of file diff --git a/k8s-test-runner/README.md b/k8s-test-runner/README.md index 29ba35ee0..f12380ca1 100644 --- a/k8s-test-runner/README.md +++ b/k8s-test-runner/README.md @@ -1,140 +1,5 @@ -## Preparing to Run Tests on Staging +# K8s Test Runner -Ensure you complete the following steps before executing tests on the staging environment: +A tool to build and run arbitrary Go code in `k8s` the easy way. -1. **Connect to the VPN** - -2. **AWS Login with Staging Profile** - - Authenticate to AWS using your staging profile, specifically with the `StagingEKSAdmin` role. Execute the following command: - - ```sh - aws sso login --profile staging - ``` - -3. 
**Verify Authorization** - - Confirm your authorization status by listing the namespaces in the staging cluster. Run `kubectl get namespaces`. If you see a list of namespaces, this indicates successful access to the staging cluster. - -## Running Tests - -### Creating an Image with the Test Binary - -Before running tests, you must create a Docker image containing the test binary. To do this, execute the `create-test-image` command and provide the path to the test folder you wish to package. This command: - -1. Compiles test binary under `` -2. Creates a docker image with the test binary -3. Pushes the docker image to the image registry (e.g. Staging ECR) - -```sh -go run ./cmd/main.go create-test-image --image-registry-url --image-tag "" "" -``` - -Where `image-tag` should be a descriptive name for your test, such as "mercury-load-tests". - -### Running the Test in Kubernetes - -If a Docker image containing the test binary is available in an image registry (such as staging ECR), use `run` command to execute the test in K8s. 
- -``` -go run ./cmd/main.go run -c "" -``` - -The TOML config should specify the test runner configuration as follows: - -``` -namespace = "e2e-tests" -rbac_role_name = "" # RBAC role name for the chart -image_registry_url = "" # URL to the ECR containing the test binary image, e.g., staging ECR URL -image_name = "k8s-test-runner" -image_tag = "" # The image tag to use, like "mercury-load-tests" (see readme above) -job_count = "1" -test_name = "TestMercuryLoad/all_endpoints" -test_timeout = "24h" -test_config_base64_env_name = "LOAD_TEST_BASE64_TOML_CONTENT" -test_config_file_path = "/Users/lukasz/Documents/test-configs/load-staging-testnet.toml" -resources_requests_cpu = "1000m" -resources_requests_memory = "512Mi" -resources_limits_cpu = "2000m" -resources_limits_memory = "1024Mi" -[envs] -WASP_LOG_LEVEL = "info" -TEST_LOG_LEVEL = "info" -MERCURY_TEST_LOG_LEVEL = "info" -``` - -Where: - -- `test_name` is the name of the test to run (must be included in the test binary). -- `test_config_env_name` is the name of the environment variable used to provide the test configuration for the test (optional). -- `test_config_file_path` is the path to the configuration file for the test (optional). - -## Using K8s Test Runner on CI - -### Example - -This example demonstrates the process step by step. First, it shows how to download the Kubernetes Test Runner. Next, it details the use of the Test Runner to create a test binary specifically for the Mercury "e2e_tests/staging_prod/tests/load" test package. Finally, it describes executing the test in Kubernetes using a customized test runner configuration. 
- -``` -- name: Download K8s Test Runner - run: | - mkdir -p k8s-test-runner - cd k8s-test-runner - curl -L -o k8s-test-runner.tar.gz https://github.com/smartcontractkit/chainlink-testing-framework/releases/download/v0.2.4/test-runner.tar.gz - tar -xzf k8s-test-runner.tar.gz - chmod +x k8s-test-runner-linux-amd64 -``` - -Alternatively, you can place the k8s-test-runner package within your repository and unpack it: - -``` -- name: Unpack K8s Test Runner - run: | - cd e2e_tests - mkdir -p k8s-test-runner - tar -xzf k8s-test-runner-v0.0.1.tar.gz -C k8s-test-runner - chmod +x k8s-test-runner/k8s-test-runner-linux-amd64 -``` - -Then: - -``` -- name: Build K8s Test Runner Image - if: github.event.inputs.test-type == 'load' && github.event.inputs.rebuild-test-image == 'yes' - run: | - cd e2e_tests/k8s-test-runner - - ./k8s-test-runner-linux-amd64 create-test-image --image-registry-url "${{ secrets.AWS_ACCOUNT_ID_STAGING }}.dkr.ecr.${{ secrets.AWS_REGION }}.amazonaws.com" --image-tag "mercury-load-test" "../staging_prod/tests/load" - -- name: Run Test in K8s - run: | - cd e2e_tests/k8s-test-runner - - cat << EOF > config.toml - namespace = "e2e-tests" - rbac_role_name = "" # RBAC role name for the chart - image_registry_url = "${{ secrets.AWS_ACCOUNT_ID_STAGING }}.dkr.ecr.${{ secrets.AWS_REGION }}.amazonaws.com" - image_name = "k8s-test-runner" - image_tag = "mercury-load-test" - job_count = "1" - chart_path = "./chart" - test_name = "TestMercuryLoad/all_endpoints" - test_timeout = "24h" - resources_requests_cpu = "1000m" - resources_requests_memory = "512Mi" - resources_limits_cpu = "2000m" - resources_limits_memory = "1024Mi" - test_config_base64_env_name = "LOAD_TEST_BASE64_TOML_CONTENT" - test_config_base64 = "${{ steps.conditional-env-vars.outputs.LOAD_TEST_BASE64_TOML_CONTENT }}" - [envs] - WASP_LOG_LEVEL = "info" - TEST_LOG_LEVEL = "info" - MERCURY_TEST_LOG_LEVEL = "info" - EOF - - ./k8s-test-runner-linux-amd64 run -c config.toml -``` - -## Release - -Run `./package 
` +[![Documentation](https://img.shields.io/badge/Documentation-MDBook-blue?style=for-the-badge)](https://smartcontractkit.github.io/chainlink-testing-framework/k8s-test-runner/k8s-test-runner.html) \ No newline at end of file diff --git a/lib/README.md b/lib/README.md index 186d71dae..5532781a4 100644 --- a/lib/README.md +++ b/lib/README.md @@ -1,462 +1,3 @@ -
+# Framework v1 (Deprecated) -# Framework - -[![Lib tag](https://img.shields.io/github/v/tag/smartcontractkit/chainlink-testing-framework?filter=%2Alib%2A)](https://github.com/smartcontractkit/chainlink-testing-framework/tags) -[![Go Report Card](https://goreportcard.com/badge/github.com/smartcontractkit/chainlink-testing-framework)](https://goreportcard.com/report/github.com/smartcontractkit/chainlink-testing-framework) -[![Go Reference](https://pkg.go.dev/badge/github.com/smartcontractkit/chainlink-testing-framework.svg)](https://pkg.go.dev/github.com/smartcontractkit/chainlink-testing-framework) -[![Go Version](https://img.shields.io/github/go-mod/go-version/smartcontractkit/chainlink-testing-framework?filename=./lib/go.mod)](https://go.dev/) -![Tests](https://github.com/smartcontractkit/chainlink-testing-framework/actions/workflows/test.yaml/badge.svg) -![Lint](https://github.com/smartcontractkit/chainlink-testing-framework/actions/workflows/lint.yaml/badge.svg) - -
- -The purpose of this framework is to: -- Interact with different blockchains -- Configure CL jobs -- Deploy using `docker` -- Deploy using `k8s` - -If you're looking to implement a new chain integration for the testing framework, head over to the [blockchain](./blockchain/) directory for more info. - -## k8s package - -We have a k8s package we are using in tests, it provides: - -- [cdk8s](https://cdk8s.io/) based wrappers -- High-level k8s API -- Automatic port forwarding - -You can also use this package to spin up standalone environments. - -### Local k8s cluster - -Read [here](./k8s/KUBERNETES.md) about how to spin up a local cluster - -#### Install - -Set up deps, you need to have `node 14.x.x`, [helm](https://helm.sh/docs/intro/install/) and [yarn](https://classic.yarnpkg.com/lang/en/docs/install/#mac-stable) - -Then use - -```shell -make install_deps -``` - -##### Optional Nix - -We have setup a nix shell which will produce a reliable environment that will behave the same locally and in ci. To use it instead of the above you will need to [install nix](https://nixos.org/download/) - -To start the nix shell run: - -```shell -make nix_shell -``` - -If you install [direnv](https://github.com/direnv/direnv/blob/master/docs/installation.md) you will be able to have your environment start the nix shell as soon as you cd into it once you have allowed the directory via: - -```shell -direnv allow -``` - -### Running tests in k8s - -To read how to run a test in k8s, read [here](./k8s/REMOTE_RUN.md) - -### Usage - -#### With env vars (deprecated) - -Create an env in a separate file and run it - -```sh -export CHAINLINK_IMAGE="public.ecr.aws/chainlink/chainlink" -export CHAINLINK_TAG="1.4.0-root" -export CHAINLINK_ENV_USER="Satoshi" -go run k8s/examples/simple/env.go -``` - -For more features follow [tutorial](./k8s/TUTORIAL.md) - -#### With TOML config - -It should be noted that using env vars for configuring CL nodes in k8s is deprecated. 
TOML config should be used instead: - -```toml -[ChainlinkImage] -image="public.ecr.aws/chainlink/chainlink" -version="v2.8.0" -``` - -Check the example here: [env.go](./k8s/examples/simple_toml/env_toml_config.go) - -### Development - -#### Running standalone example environment - -```shell -go run k8s/examples/simple/env.go -``` - -If you have another env of that type, you can connect by overriding environment name - -```sh -ENV_NAMESPACE="..." go run k8s/examples/chainlink/env.go -``` - -Add more presets [here](./k8s/presets) - -Add more programmatic examples [here](./k8s/examples/) - -If you have [chaosmesh](https://chaos-mesh.org/) installed in your cluster you can pull and generated CRD in go like that - -```sh -make chaosmesh -``` - -If you need to check your system tests coverage, use [that](./k8s/TUTORIAL.md#coverage) - -# Chainlink Charts - -This repository contains helm charts used by the chainlink organization mostly in QA. - -## Chart Repository - -You can add the published chart repository by pointing helm to the `gh-pages` branch with a personal access token (PAT) that has at least read-only access to the repository. - -```sh -helm repo add chainlink-qa https://raw.githubusercontent.com/smartcontractkit/qa-charts/gh-pages/ -helm search repo chainlink -``` - -## Releasing Charts - -The following cases will trigger a chart release once a PR is merged into the `main` branch. -Modified packages or new packages get added and pushed to the `gh-pages` branch of the [qa-charts](https://github.com/smartcontractkit/qa-charts) repository. - -- An existing chart is version bumped -- A new chart is added - -Removed charts do not trigger a re-publish, the packages have to be removed and the index file regenerated in the `gh-pages` branch of the [qa-charts](https://github.com/smartcontractkit/qa-charts) repository. - -Note: The qa-charts repository is scheduled to look for changes to the charts once every hour. 
This can be expedited by going to that repo and running the cd action via github UI. - -# Simulated EVM chains - -We have extended support for execution layer clients in simulated networks. Following ones are supported: - -- `Geth` -- `Nethermind` -- `Besu` -- `Erigon` - -When it comes to consensus layer we currently support only `Prysm`. - -The easiest way to start a simulated network is to use a builder. It allows to configure the network in a fluent way and then start it. For example: - -```go -builder := NewEthereumNetworkBuilder() -cfg, err: = builder. - WithEthereumVersion(EthereumVersion_Eth2). - WithExecutionLayer(ExecutionLayer_Geth). - Build() -``` - -Since we support both `eth1` (aka pre-Merge) and `eth2` (aka post-Merge) client versions, you need to specify which one you want to use. You can do that by calling `WithEthereumVersion` method. There's no default provided. The only exception is when you use custom docker images (instead of default ones), because then we can determine which version it is based on the image version. - -If you want your test to execute as fast as possible go for `eth1` since it's either using a fake PoW or PoA consensus and is much faster than `eth2` which uses PoS consensus (where there is a minimum viable length of slot/block, which is 4 seconds; for `eth1` it's 1 second). If you want to test the latest features, changes or forks in the Ethereum network and have your tests running on a network which is as close as possible to Ethereum Mainnet, go for `eth2`. - -Every component has some default Docker image it uses, but builder has a method that allows to pass custom one: - -```go -builder := NewEthereumNetworkBuilder() -cfg, err: = builder. - WithEthereumVersion(EthereumVersion_Eth2). - WithConsensusLayer(ConsensusLayer_Prysm). - WithExecutionLayer(ExecutionLayer_Geth). - WithCustomDockerImages(map[ContainerType]string{ - ContainerType_Geth: "my-custom-geth-pos-image:my-version"}). 
- Build() -``` - -When using a custom image you can even further simplify the builder by calling only `WithCustomDockerImages` method. Based on the image name and version we will determine which execution layer client it is and whether it's `eth1` or `eth2` client: - -```go -builder := NewEthereumNetworkBuilder() -cfg, err: = builder. - WithCustomDockerImages(map[ContainerType]string{ - ContainerType_Geth: "ethereum/client-go:v1.13.10"}). - Build() -``` - -In the case above we would launch a `Geth` client with `eth2` network and `Prysm` consensus layer. - -You can also configure epochs at which hardforks will happen. Currently only `Deneb` is supported. Epoch must be >= 1. Example: - -```go -builder := NewEthereumNetworkBuilder() -cfg, err: = builder. - WithConsensusType(ConsensusType_PoS). - WithConsensusLayer(ConsensusLayer_Prysm). - WithExecutionLayer(ExecutionLayer_Geth). - WithEthereumChainConfig(EthereumChainConfig{ - HardForkEpochs: map[string]int{"Deneb": 1}, - }). - Build() -``` - -## Command line - -You can start a simulated network with a single command: - -```sh -go run docker/test_env/cmd/main.go start-test-env private-chain -``` - -By default it will start a network with 1 node running `Geth` and `Prysm`. It will use default chain id of `1337` and won't wait for the chain to finalize at least one epoch. Once the chain is started it will save the network configuration in a `JSON` file, which then you can use in your tests to connect to that chain (and thus save time it takes to start a new chain each time you run your test). 
- -Following cmd line flags are available: - -```sh - -c, --chain-id int chain id (default 1337) - -l, --consensus-layer string consensus layer (prysm) (default "prysm") - -t, --consensus-type string consensus type (pow or pos) (default "pos") - -e, --execution-layer string execution layer (geth, nethermind, besu or erigon) (default "geth") - -w, --wait-for-finalization wait for finalization of at least 1 epoch (might take up to 5 minutes) - --consensus-client-image string custom Docker image for consensus layer client - --execution-layer-image string custom Docker image for execution layer client - --validator-image string custom Docker image for validator -``` - -To connect to that environment in your tests use the following code: - -```go - builder := NewEthereumNetworkBuilder() - cfg, err := builder. - WithExistingConfigFromEnvVar(). - Build() - - if err != nil { - return err - } - - net, rpc, err := cfg.Start() - if err != nil { - return err - } -``` - -Builder will read the location of chain configuration from env var named `PRIVATE_ETHEREUM_NETWORK_CONFIG_PATH` (it will be printed in the console once the chain starts). - -`net` is an instance of `blockchain.EVMNetwork`, which contains characteristics of the network and can be used to connect to it using an EVM client. `rpc` variable contains arrays of public and private RPC endpoints, where "private" means URL that's accessible from the same Docker network as the chain is running in. - -# Using LogStream - -LogStream is a package that allows to connect to a Docker container and then flush logs to configured targets. Currently 3 targets are supported: - -- `file` - saves logs to a file in `./logs` folder -- `loki` - sends logs to Loki -- `in-memory` - stores logs in memory - -It can be configured to use multiple targets at once. If no target is specified, it becomes a no-op. - -LogStream has to be configured by passing an instance of `LoggingConfig` to the constructor. 
- -When you connect a container LogStream will create a new consumer and start a detached goroutine that listens to logs emitted by that container and which reconnects and re-requests logs if listening fails for whatever reason. Retry limit and timeout can both be configured using functional options. In most cases one container should have one consumer, but it's possible to have multiple consumers for one container. - -LogStream stores all logs in gob temporary file. To actually send/save them, you need to flush them. When you do it, LogStream will decode the file and send logs to configured targets. If log handling results in an error it won't be retried and processing of logs for given consumer will stop (if you think we should add a retry mechanism please let us know). - -_Important:_ Flushing and accepting logs is blocking operation. That's because they both share the same cursor to temporary file and otherwise it's position would be racey and could result in mixed up logs. - -## Configuration - -Basic `LogStream` TOML configuration is following: - -```toml -[LogStream] -log_targets=["file"] -log_producer_timeout="10s" -log_producer_retry_limit=10 -``` - -You can find it here: [logging_default.toml](config/tomls/logging_default.toml) - -When using `in-memory` or `file` target no other configuration variables are required. When using `loki` target, following ones must be set: - -```toml -[Logging.Loki] -tenant_id="promtail" -url="https://change.me" -basic_auth_secret="my-secret-auth" -bearer_token_secret="bearer-token" -``` - -Also, do remember that different URL should be used when running in CI and everywhere else. In CI it should be a public endpoint, while in local environment it should be a private one. 
- -If your test has a Grafana dashboard in order for the url to be correctly printed you should provide the following config: - -```toml -[Logging.Grafana] -url="http://grafana.somwhere.com/my_dashboard" -``` - -## Initialisation - -First you need to create a new instance: - -```golang -// t - instance of *testing.T (can be nil) -// testConfig.Logging - pointer to logging part of TestConfig -ls := logstream.NewLogStream(t, testConfig.Logging) -``` - -## Listening to logs - -If using `testcontainers-go` Docker containers it is recommended to use life cycle hooks for connecting and disconnecting LogStream from the container. You can do that when creating `ContainerRequest` in the following way: - -```golang -containerRequest := &tc.ContainerRequest{ - LifecycleHooks: []tc.ContainerLifecycleHooks{ - {PostStarts: []tc.ContainerHook{ - func(ctx context.Context, c tc.Container) error { - if ls != nil { - return n.ls.ConnectContainer(ctx, c, "custom-container-prefix-can-be-empty") - } - return nil - }, - }, - PostStops: []tc.ContainerHook{ - func(ctx context.Context, c tc.Container) error { - if ls != nil { - return n.ls.DisconnectContainer(c) - } - return nil - }, - }}, - }, - } -``` - -You can print log location for each target using this function: `(m *LogStream) PrintLogTargetsLocations()`. For `file` target it will print relative folder path, for `loki` it will print URL of a Grafana Dashboard scoped to current execution and container ids. For `in-memory` target it's no-op. - -It is recommended to shutdown LogStream at the end of your tests. Here's an example: - -```golang -t.Cleanup(func() { - l.Warn().Msg("Shutting down Log Stream") - - if t.Failed() || os.Getenv("TEST_LOG_COLLECT") == "true" { - // we can't do much if this fails, so we just log the error - _ = logStream.FlushLogsToTargets() - // this will log log locations for each target (for file it will be a folder, for Loki Grafana dashboard -- remember to provide it's url in config!) 
- logStream.PrintLogTargetsLocations() - // this will save log locations in test summary, so that they can be easily accessed in GH's step summary - logStream.SaveLogLocationInTestSummary() - } - - // we can't do much if this fails, so we just log the error - _ = logStream.Shutdown(testcontext.Get(b.t)) - }) -``` - -or in a bit shorter way: - -```golang -t.Cleanup(func() { - l.Warn().Msg("Shutting down Log Stream") - - if t.Failed() || os.Getenv("TEST_LOG_COLLECT") == "true" { - // this will log log locations for each target (for file it will be a folder, for Loki Grafana dashboard -- remember to provide it's url in config!) - logStream.PrintLogTargetsLocations() - // this will save log locations in test summary, so that they can be easily accessed in GH's step summary - } - - // we can't do much if this fails - _ = logStream.FlushAndShutdown() - }) -``` - -## Grouping test execution - -When running tests in CI you're probably interested in grouping logs by test execution, so that you can easily find the logs in Loki. To do that your job should set `RUN_ID` environment variable. In GHA it's recommended to set it to workflow id. If that variable is not set, then a run id will be automatically generated and saved in `.run.id` file, so that it can be shared by tests that are part of the same execution, but are running in different processes. - -## Test Summary - -In order to facilitate displaying information in GH's step summary `testsummary` package was added. It exposes a single function `AddEntry(testName, key string, value interface{}) `. When you call it, it either creates a test summary JSON file or appends to it. The result is is a map of keys with values. 
- -Example: - -```JSON -{ - "file":[ - { - "test_name":"TestOCRv2Basic", - "value":"./logs/TestOCRv2Basic-2023-12-01T18-00-59-TestOCRv2Basic-38ac1e52-d0a6-48" - } - ], - "loki":[ - { - "test_name":"TestOCRv2Basic", - "value":"https://grafana.ops.prod.cldev.sh/d/ddf75041-1e39-42af-aa46-361fe4c36e9e/ci-e2e-tests-logs?orgId=1\u0026var-run_id=TestOCRv2Basic-38ac1e52-d0a6-48\u0026var-container_id=cl-node-a179ca7d\u0026var-container_id=cl-node-76798f87\u0026var-container_id=cl-node-9ff7c3ae\u0026var-container_id=cl-node-43409b09\u0026var-container_id=cl-node-3b6810bd\u0026var-container_id=cl-node-69fed256\u0026from=1701449851165\u0026to=1701450124925" - } - ] -} -``` - -In GHA after tests have ended we can use tools like `jq` to extract the information we need and display it in step summary. - -# TOML Config - -Basic and universal building blocks for TOML-based config are provided by `config` package. For more information do read [this](./config/README.md). - -# ECR Mirror - -An ecr mirror can be used to push images used often in order to bypass rate limit issues from dockerhub. The list of image mirrors can be found in the [matrix here](./.github/workflows/update-internal-mirrors.yaml). This currently works with images with version numbers in dockerhub. Support for gcr is coming in the future. The images must also have a version number so putting `latest` will not work. We have a separate list for one offs we want that can be added to [here](./scripts/mirror.json) that does work with gcr and latest images. Note however for `latest` it will only pull it once and will not update it in our mirror if the latest on the public repository has changed, in this case it is preferable to update it manually when you know that you need the new latest and the update will not break your tests. - -For images in the mirrors you can use the INTERNAL_DOCKER_REPO environment variable when running tests and it will use that mirror in place of the public repository. 
-
-There are two ways to add new images to the ECR. In both cases you must first create the ECR repository in AWS with the same name as the Docker Hub one, and then add that ECR to the infra permissions (ask TT if you don't know how to do this).
-
-1. If the image does not have version numbers, or comes from GCR, you can add it [here](./scripts/mirror.json)
-2. You can add the new image name, together with an expression that selects the latest versions, to the [mirror matrix](./.github/workflows/update-internal-mirrors.yaml); new versions are picked up when the workflow runs. Check the postgres entry there for an example. Basically, the expression should filter out only the latest image or two for that particular version when calling the Docker Hub endpoint, e.g. `curl -s "https://hub.docker.com/v2/repositories/${image_name}/tags/?page_size=100" | jq -r '.results[].name' | grep -E ${image_expression}`, where `image_name` could be `library/postgres` and `image_expression` could be `'^[0-9]+\.[0-9]+$'`. Adding your ECR to this matrix should ensure we always have the latest versions for that expression.
-
-## Debugging HTTP and RPC calls
-
-```bash
-export SETH_LOG_LEVEL=info
-export RESTY_DEBUG=true
-```
-
-## Using AWS Secrets Manager
-
-Check the [docs](SECRETS.md)
-
-## Loki Client
-
-The `LokiClient` allows you to easily query Loki logs from your tests. It supports basic authentication, custom queries, and can be configured for (Resty) debug mode.
- -### Debugging Resty and Loki Client - -```bash -export LOKI_CLIENT_LOG_LEVEL=info -export RESTY_DEBUG=true -``` - -### Example usage: - -```go -auth := LokiBasicAuth{ - Username: os.Getenv("LOKI_LOGIN"), - Password: os.Getenv("LOKI_PASSWORD"), -} - -queryParams := LokiQueryParams{ - Query: `{namespace="test"} |= "test"`, - StartTime: time.Now().AddDate(0, 0, -1), - EndTime: time.Now(), - Limit: 100, - } - -lokiClient := NewLokiClient("https://loki.api.url", "my-tenant", auth, queryParams) -logEntries, err := lokiClient.QueryLogs(context.Background()) -``` +[![Documentation](https://img.shields.io/badge/Documentation-MDBook-blue?style=for-the-badge)](https://smartcontractkit.github.io/chainlink-testing-framework/lib.html) diff --git a/lib/SECRETS.md b/lib/SECRETS.md deleted file mode 100644 index d407b1ac5..000000000 --- a/lib/SECRETS.md +++ /dev/null @@ -1,24 +0,0 @@ -## Using AWSSecretsManager from code - -`client/secretsmanager.go` has a simple API to read/write/delete secrets. - -It uses a struct to protect such secrets from accidental printing or marshalling, see an [example](client/secretsmanager_test.go) test - -## Using AWSSecretsManager via CLI - -To create a static secret use `aws cli` - -``` -aws --region us-west-2 secretsmanager create-secret \ - --name MyTestSecret \ - --description "My test secret created with the CLI." \ - --secret-string "{\"user\":\"diegor\",\"password\":\"EXAMPLE-PASSWORD\"}" -``` - -Example of reading the secret - -``` -aws --region us-west-2 secretsmanager get-secret-value --secret-id MyTestSecret -``` - -For more information check [AWS CLI Reference](https://docs.aws.amazon.com/cli/v1/userguide/cli_secrets-manager_code_examples.html) diff --git a/lib/config/README.md b/lib/config/README.md index 685d76041..94fbe0983 100644 --- a/lib/config/README.md +++ b/lib/config/README.md @@ -1,370 +1,3 @@ # TOML Config -These basic building blocks can be used to create a TOML config file. 
For example:
-
-```golang
-import (
-    ctf_config "github.com/smartcontractkit/chainlink-testing-framework/config"
-    ctf_test_env "github.com/smartcontractkit/chainlink-testing-framework/docker/test_env"
-)
-
-type TestConfig struct {
-    ChainlinkImage         *ctf_config.ChainlinkImageConfig `toml:"ChainlinkImage"`
-    ChainlinkUpgradeImage  *ctf_config.ChainlinkImageConfig `toml:"ChainlinkUpgradeImage"`
-    Logging                *ctf_config.LoggingConfig        `toml:"Logging"`
-    Network                *ctf_config.NetworkConfig        `toml:"Network"`
-    Pyroscope              *ctf_config.PyroscopeConfig      `toml:"Pyroscope"`
-    PrivateEthereumNetwork *ctf_test_env.EthereumNetwork    `toml:"PrivateEthereumNetwork"`
-}
-```
-
-It's up to the user to provide a way to read the config from a file and unmarshal it into the struct. You can check [testconfig.go](../config/examples/testconfig.go) to see one way it could be done.
-
-`Validate()` should be used to ensure that the config is valid. Some of the building blocks also have a `Default()` method that can be used to get default values.
-
-You might also find the `BytesToAnyTomlStruct(logger zerolog.Logger, filename, configurationName string, target any, content []byte) error` utility method useful for unmarshalling TOMLs read from an env var or from files into a struct.
-
-## Test Secrets
-
-Test secrets are not stored directly within the `TestConfig` TOML for security reasons. Instead, they are passed into `TestConfig` via environment variables. Below is a list of all available secrets. Set only the secrets required for your specific tests, like so: `E2E_TEST_CHAINLINK_IMAGE=qa_ecr_image_url`.
-
-### Default Secret Loading
-
-By default, secrets are loaded from the `~/.testsecrets` dotenv file.
Example of a local `~/.testsecrets` file: - -```bash -E2E_TEST_CHAINLINK_IMAGE=qa_ecr_image_url -E2E_TEST_CHAINLINK_UPGRADE_IMAGE=qa_ecr_image_url -E2E_TEST_ARBITRUM_SEPOLIA_WALLET_KEY=wallet_key -``` - -### All E2E Test Secrets - -| Secret | Env Var | Example | -| ----------------------------- | ------------------------------------------------------------------- | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | -| Chainlink Image | `E2E_TEST_CHAINLINK_IMAGE` | `E2E_TEST_CHAINLINK_IMAGE=qa_ecr_image_url` | -| Chainlink Upgrade Image | `E2E_TEST_CHAINLINK_UPGRADE_IMAGE` | `E2E_TEST_CHAINLINK_UPGRADE_IMAGE=qa_ecr_image_url` | -| Wallet Key per network | `E2E_TEST_(.+)_WALLET_KEY` or `E2E_TEST_(.+)_WALLET_KEY_(\d+)$` | `E2E_TEST_ARBITRUM_SEPOLIA_WALLET_KEY=wallet_key` or `E2E_TEST_ARBITRUM_SEPOLIA_WALLET_KEY_1=wallet_key_1`, `E2E_TEST_ARBITRUM_SEPOLIA_WALLET_KEY_2=wallet_key_2` for multiple keys per network | -| RPC HTTP URL per network | `E2E_TEST_(.+)_RPC_HTTP_URL` or `E2E_TEST_(.+)_RPC_HTTP_URL_(\d+)$` | `E2E_TEST_ARBITRUM_SEPOLIA_RPC_HTTP_URL=url` or `E2E_TEST_ARBITRUM_SEPOLIA_RPC_HTTP_URL_1=url`, `E2E_TEST_ARBITRUM_SEPOLIA_RPC_HTTP_URL_2=url` for multiple http urls per network | -| RPC WebSocket URL per network | `E2E_TEST_(.+)_RPC_WS_URL` or `E2E_TEST_(.+)_RPC_WS_URL_(\d+)$` | `E2E_TEST_ARBITRUM_RPC_WS_URL=ws_url` or `E2E_TEST_ARBITRUM_RPC_WS_URL_1=ws_url_1`, `E2E_TEST_ARBITRUM_RPC_WS_URL_2=ws_url_2` for multiple ws urls per network | -| Loki Tenant ID | `E2E_TEST_LOKI_TENANT_ID` | `E2E_TEST_LOKI_TENANT_ID=tenant_id` | -| Loki Endpoint | `E2E_TEST_LOKI_ENDPOINT` | `E2E_TEST_LOKI_ENDPOINT=url` | -| Loki Basic Auth | `E2E_TEST_LOKI_BASIC_AUTH` | `E2E_TEST_LOKI_BASIC_AUTH=token` | -| Loki Bearer Token | `E2E_TEST_LOKI_BEARER_TOKEN` | `E2E_TEST_LOKI_BEARER_TOKEN=token` | -| Grafana Bearer Token | 
`E2E_TEST_GRAFANA_BEARER_TOKEN` | `E2E_TEST_GRAFANA_BEARER_TOKEN=token` | -| Pyroscope Server URL | `E2E_TEST_PYROSCOPE_SERVER_URL` | `E2E_TEST_PYROSCOPE_SERVER_URL=url` | -| Pyroscope Key | `E2E_TEST_PYROSCOPE_KEY` | `E2E_TEST_PYROSCOPE_KEY=key` | - -### Run GitHub Workflow with Your Test Secrets - -By default, GitHub workflows execute with a set of predefined secrets. However, you can use custom secrets by specifying a unique identifier for your secrets when running the `gh workflow` command. - -#### Steps to Use Custom Secrets - -1. **Upload Local Secrets to GitHub Secrets Vault:** - - - **Install `ghsecrets` tool:** - Install the `ghsecrets` tool to manage GitHub Secrets more efficiently. - - ```bash - go install github.com/smartcontractkit/chainlink-testing-framework/tools/ghsecrets@latest - ``` - - If you use `asdf`, run `asdf reshim` - - - **Upload Secrets:** - Run `ghsecrets set` from local core repo to upload the content of your `~/.testsecrets` file to the GitHub Secrets Vault and generate a unique identifier (referred to as `your_ghsecret_id`). - - ```bash - cd path-to-chainlink-core-repo - ``` - - ```bash - ghsecrets set - ``` - - For more details about `ghsecrets`, visit https://github.com/smartcontractkit/chainlink-testing-framework/tree/main/tools/ghsecrets#faq - -2. **Execute the Workflow with Custom Secrets:** - - To use the custom secrets in your GitHub Actions workflow, pass the `-f test_secrets_override_key={your_ghsecret_id}` flag when running the `gh workflow` command. - ```bash - gh workflow run -f test_secrets_override_key={your_ghsecret_id} - ``` - -#### Default Secrets Handling - -If the `test_secrets_override_key` is not provided, the workflow will default to using the secrets preconfigured in the CI environment. 
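Putting the two steps together, here is a minimal sketch of the flow. The workflow file name and the id below are placeholders — `ghsecrets set` prints your actual id:

```bash
# placeholders: use the id printed by `ghsecrets set` and your real workflow file
ghsecret_id="your_ghsecret_id"
workflow_file="e2e-tests.yml"

# compose the dispatch command that runs the workflow with your custom secrets
dispatch_cmd="gh workflow run ${workflow_file} -f test_secrets_override_key=${ghsecret_id}"
echo "$dispatch_cmd"
```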
-
-### Creating New Test Secrets in TestConfig
-
-When adding a new secret to the `TestConfig`, such as a token or other sensitive information, the method `ReadConfigValuesFromEnvVars()` in `config/testconfig.go` must be extended to include the new secret. Ensure that the new environment variable starts with the `E2E_TEST_` prefix. This prefix is crucial for ensuring that the secret is correctly propagated to Kubernetes tests when using the Remote Runner.
-
-Here’s a quick checklist for adding a new test secret:
-
-- Add the secret to `~/.testsecrets` with the `E2E_TEST_` prefix to ensure proper handling.
-- Extend the `config/testconfig.go:ReadConfigValuesFromEnvVars()` method to load the secret into `TestConfig`.
-- Add the secret to the [All E2E Test Secrets](https://github.com/smartcontractkit/chainlink-testing-framework/blob/main/config/README.md#all-e2e-test-secrets) table.
-
-## Working example
-
-For a full working example making use of all the building blocks see [testconfig.go](../config/examples/testconfig.go). It provides methods for reading TOML, applying overrides and validating non-empty config blocks. It supports 4 levels of overrides, in order of precedence:
-
-- `BASE64_CONFIG_OVERRIDE` env var
-- `overrides.toml`
-- `[product_name].toml`
-- `default.toml`
-
-All you need to do now to get the config is execute `func GetConfig(configurationName string, product string) (TestConfig, error)`. It will first look for a folder containing a `.root_dir` file, and from there it will look for config files in all subfolders, so that you can place the config files in whatever folder(s) work for you. It assumes that all configuration versions for a single product are kept in `[product_name].toml` under different configuration names (which can represent anything you want: a single test, a test type, a test group, etc).
-
-Overrides of config files are done in a super-simple way: we try to unmarshal consecutive files into the same struct.
Since it's all pointer-based, only non-nil keys are overwritten.
-
-## IMPORTANT!
-
-It is **required** to add `overrides.toml` to `.gitignore` in your project, so that you don't accidentally commit it, as it might contain secrets.
-
-## Network config (and default RPC endpoints)
-
-Some more explanation is needed for the `NetworkConfig`:
-
-```golang
-type NetworkConfig struct {
-	// list of networks that should be used for testing
-	SelectedNetworks []string `toml:"selected_networks"`
-	// map of network name to EVMNetworks where key is network name and value is a pointer to EVMNetwork
-	// if not set, it will try to find the network from defined networks in MappedNetworks under known_networks.go
-	// it doesn't matter if you use `arbitrum_sepolia` or `ARBITRUM_SEPOLIA` or even `arbitrum_SEPOLIA` as key
-	// as all keys will be uppercased when loading the Default config
-	EVMNetworks map[string]*blockchain.EVMNetwork `toml:"EVMNetworks,omitempty"`
-	// map of network name to ForkConfigs where key is network name and value is a pointer to ForkConfig
-	// only used if network fork is needed, if provided, the network will be forked with the given config
-	// network name is fetched first from the EVMNetworks and
-	// if not defined with EVMNetworks, it will try to find the network from defined networks in MappedNetworks under known_networks.go
-	ForkConfigs map[string]*ForkConfig `toml:"ForkConfigs,omitempty"`
-	// map of network name to RPC endpoints where key is network name and value is a list of RPC HTTP endpoints
-	RpcHttpUrls map[string][]string `toml:"RpcHttpUrls"`
-	// map of network name to RPC endpoints where key is network name and value is a list of RPC WS endpoints
-	RpcWsUrls map[string][]string `toml:"RpcWsUrls"`
-	// map of network name to wallet keys where key is network name and value is a list of private keys (aka funding keys)
-	WalletKeys map[string][]string `toml:"WalletKeys"`
-}
-
-func (n *NetworkConfig) Default() error {
-    ...
-} -``` - -Sample TOML config: - -```toml -selected_networks = ["arbitrum_goerli", "optimism_goerli", "new_network"] - -[EVMNetworks.new_network] -evm_name = "new_test_network" -evm_chain_id = 100009 -evm_simulated = true -evm_chainlink_transaction_limit = 5000 -evm_minimum_confirmations = 1 -evm_gas_estimation_buffer = 10000 -client_implementation = "Ethereum" -evm_supports_eip1559 = true -evm_default_gas_limit = 6000000 - -[ForkConfigs.new_network] -url = "ws://localhost:8546" -block_number = 100 - -[RpcHttpUrls] -arbitrum_goerli = ["https://devnet-2.mt/ABC/rpc/"] -new_network = ["http://localhost:8545"] - -[RpcWsUrls] -arbitrum_goerli = ["wss://devnet-2.mt/ABC/ws/"] -new_network = ["ws://localhost:8546"] - -[WalletKeys] -arbitrum_goerli = ["1810868fc221b9f50b5b3e0186d8a5f343f892e51ce12a9e818f936ec0b651ed"] -optimism_goerli = ["1810868fc221b9f50b5b3e0186d8a5f343f892e51ce12a9e818f936ec0b651ed"] -new_network = ["ac0974bec39a17e36ba4a6b4d238ff944bacb478cbed5efcae784d7bf4f2ff80"] -``` - -Whenever you are adding a new EVMNetwork to the config, you can either - -- provide the rpcs and wallet keys in RpcUrls and WalletKeys. Like in the example above, you can see that `new_network` is added to the `selected_networks` and `EVMNetworks` and then the rpcs and wallet keys are provided in `RpcHttpUrls`, `RpcWsUrls` and `WalletKeys` respectively. -- provide the rpcs and wallet keys in the `EVMNetworks` itself. Like in the example below, you can see that `new_network` is added to the `selected_networks` and `EVMNetworks` and then the rpcs and wallet keys are provided in `EVMNetworks` itself. 
-
-```toml
-
-selected_networks = ["new_network"]
-
-[EVMNetworks.new_network]
-evm_name = "new_test_network"
-evm_chain_id = 100009
-evm_urls = ["ws://localhost:8546"]
-evm_http_urls = ["http://localhost:8545"]
-evm_keys = ["ac0974bec39a17e36ba4a6b4d238ff944bacb478cbed5efcae784d7bf4f2ff80"]
-evm_simulated = true
-evm_chainlink_transaction_limit = 5000
-evm_minimum_confirmations = 1
-evm_gas_estimation_buffer = 10000
-client_implementation = "Ethereum"
-evm_supports_eip1559 = true
-evm_default_gas_limit = 6000000
-```
-
-If your config struct looks like this:
-
-```golang
-
-type TestConfig struct {
-	Network *ctf_config.NetworkConfig `toml:"Network"`
-}
-```
-
-then your TOML file should look like this:
-
-```toml
-[Network]
-selected_networks = ["arbitrum_goerli","new_network"]
-
-[Network.EVMNetworks.new_network]
-evm_name = "new_test_network"
-evm_chain_id = 100009
-evm_simulated = true
-evm_chainlink_transaction_limit = 5000
-evm_minimum_confirmations = 1
-evm_gas_estimation_buffer = 10000
-client_implementation = "Ethereum"
-evm_supports_eip1559 = true
-evm_default_gas_limit = 6000000
-
-[Network.RpcHttpUrls]
-arbitrum_goerli = ["https://devnet-2.mt/ABC/rpc/"]
-new_network = ["http://localhost:8545"]
-
-[Network.RpcWsUrls]
-arbitrum_goerli = ["ws://devnet-2.mt/ABC/rpc/"]
-new_network = ["ws://localhost:8546"]
-
-[Network.WalletKeys]
-arbitrum_goerli = ["1810868fc221b9f50b5b3e0186d8a5f343f892e51ce12a9e818f936ec0b651ed"]
-new_network = ["ac0974bec39a17e36ba4a6b4d238ff944bacb478cbed5efcae784d7bf4f2ff80"]
-```
-
-If in your product config you want to support case-insensitive network names and map keys, remember to run `NetworkConfig.UpperCaseNetworkNames()` on your config before using it.
-
-## Providing custom values in the CI
-
-Up to this point, when we wanted to modify some dynamic test parameters in the CI, we would simply set env vars. That approach won't work anymore.
The way around it is to build a TOML file, `base64` it, mask it, and then set it as the `BASE64_CONFIG_OVERRIDE` env var that will be read by tests. Here's an example of a working snippet of how that could look:
-
-```bash
-convert_to_toml_array() {
-	local IFS=','
-	local input_array=($1)
-	local toml_array_format="["
-
-	for element in "${input_array[@]}"; do
-		toml_array_format+="\"$element\","
-	done
-
-	toml_array_format="${toml_array_format%,}]"
-	echo "$toml_array_format"
-}
-
-selected_networks=$(convert_to_toml_array "$SELECTED_NETWORKS")
-log_targets=$(convert_to_toml_array "$LOGSTREAM_LOG_TARGETS")
-
-if [ -n "$PYROSCOPE_SERVER" ]; then
-	pyroscope_enabled=true
-else
-	pyroscope_enabled=false
-fi
-
-if [ -n "$ETH2_EL_CLIENT" ]; then
-	execution_layer="$ETH2_EL_CLIENT"
-else
-	execution_layer="geth"
-fi
-
-if [ -n "$TEST_LOG_COLLECT" ]; then
-	test_log_collect=true
-else
-	test_log_collect=false
-fi
-
-cat << EOF > config.toml
-[Network]
-selected_networks=$selected_networks
-
-[ChainlinkImage]
-image="$CHAINLINK_IMAGE"
-version="$CHAINLINK_VERSION"
-
-[Pyroscope]
-enabled=$pyroscope_enabled
-server_url="$PYROSCOPE_SERVER"
-environment="$PYROSCOPE_ENVIRONMENT"
-key_secret="$PYROSCOPE_KEY"
-
-[Logging]
-test_log_collect=$test_log_collect
-run_id="$RUN_ID"
-
-[Logging.LogStream]
-log_targets=$log_targets
-
-[Logging.Loki]
-tenant_id="$LOKI_TENANT_ID"
-url="$LOKI_URL"
-basic_auth_secret="$LOKI_BASIC_AUTH"
-bearer_token_secret="$LOKI_BEARER_TOKEN"
-
-[Logging.Grafana]
-url="$GRAFANA_URL"
-EOF
-
-BASE64_CONFIG_OVERRIDE=$(cat config.toml | base64 -w 0)
-echo ::add-mask::$BASE64_CONFIG_OVERRIDE
-echo "BASE64_CONFIG_OVERRIDE=$BASE64_CONFIG_OVERRIDE" >> $GITHUB_ENV
-```
-
-**These two lines, in that very order, are super important:**
-
-```bash
-BASE64_CONFIG_OVERRIDE=$(cat config.toml | base64 -w 0)
-echo ::add-mask::$BASE64_CONFIG_OVERRIDE
-```
-
-`::add-mask::` has to be called only after the env var has been set to its final value, otherwise it won't be recognized
and masked properly, and secrets will be exposed in the logs.
-
-## Providing custom values for local execution
-
-For local execution, it's best to put custom variables in an `overrides.toml` file.
-
-## Providing custom values in k8s
-
-It's easy. All you need to do is:
-
-- Create a TOML file with these values
-- Base64 it: `cat your.toml | base64`
-- Set the base64 result as the `BASE64_CONFIG_OVERRIDE` environment variable.
-
-`BASE64_CONFIG_OVERRIDE` will be automatically forwarded to k8s (as long as it is set and available to the test process) when creating the environment programmatically via `environment.New()`.
-
-Quick example:
-
-```bash
-BASE64_CONFIG_OVERRIDE=$(cat your.toml | base64) go test your-test-that-runs-in-k8s ./file/with/your/test
-```
-
-# Not moved to TOML
-
-The following were not moved to TOML:
-
-- `SLACK_API_KEY`
-- `SLACK_USER`
-- `SLACK_CHANNEL`
-- `TEST_LOG_LEVEL`
-- `CHAINLINK_ENV_USER`
-- `DETACH_RUNNER`
-- `ENV_JOB_IMAGE`
-- most k8s-specific env variables were left untouched
+[![Documentation](https://img.shields.io/badge/Documentation-MDBook-blue?style=for-the-badge)](https://smartcontractkit.github.io/chainlink-testing-framework/lib/config/config.html)
\ No newline at end of file
diff --git a/lib/crib/README.md b/lib/crib/README.md
index 34f9a5f7c..22e01cdf4 100644
--- a/lib/crib/README.md
+++ b/lib/crib/README.md
@@ -1,25 +1,3 @@
 ### CRIB Connector
-This is a simple CRIB connector for OCRv1 CRIB.
-This code is temporary and may be removed in the future if connection logic is simplified with [ARC](https://github.com/actions/actions-runner-controller)
-
-## Example
-
-Go to the [CRIB](https://github.com/smartcontractkit/crib) repository and spin up a cluster.
- -```shell -./scripts/cribbit.sh crib-oh-my-crib -devspace deploy --debug --profile local-dev-simulated-core-ocr1 -``` - -## Run an example test - -```shell -export CRIB_NAMESPACE=crib-oh-my-crib -export CRIB_NETWORK=geth # only "geth" is supported for now -export CRIB_NODES=5 # min 5 nodes -#export SETH_LOG_LEVEL=debug # these two can be enabled to debug connection issues -#export RESTY_DEBUG=true -export GAP_URL=https://localhost:8080/primary # only applicable in CI, unset the var to connect locally -go test -v -run TestCRIB -``` +[![Documentation](https://img.shields.io/badge/Documentation-MDBook-blue?style=for-the-badge)](https://smartcontractkit.github.io/chainlink-testing-framework/lib/crib/crib.html) diff --git a/lib/k8s/README.md b/lib/k8s/README.md new file mode 100644 index 000000000..1a25a5e59 --- /dev/null +++ b/lib/k8s/README.md @@ -0,0 +1,3 @@ +# K8s Deployment (Deprecated) + +[![Documentation](https://img.shields.io/badge/Documentation-MDBook-blue?style=for-the-badge)](https://smartcontractkit.github.io/chainlink-testing-framework/lib/k8s/KUBERNETES.html) \ No newline at end of file diff --git a/seth/README.md b/seth/README.md index 86fb8d484..72ac5fc46 100644 --- a/seth/README.md +++ b/seth/README.md @@ -1,841 +1,5 @@ # Seth -Reliable and debug-friendly Ethereum client +Reliable and debug-friendly Ethereum client. 
-[![Go Report Card](https://goreportcard.com/badge/github.com/smartcontractkit/chainlink-testing-framework/seth)](https://goreportcard.com/report/github.com/smartcontractkit/chainlink-testing-framework/seth) -[![Decoding tests](https://github.com/smartcontractkit/chainlink-testing-framework/actions/workflows/seth-test-decode.yml/badge.svg)](https://github.com/smartcontractkit/seth/actions/workflows/test_decode.yml) -[![Tracing tests](https://github.com/smartcontractkit/chainlink-testing-framework/actions/workflows/seth-test-trace.yml/badge.svg)](https://github.com/smartcontractkit/seth/actions/workflows/test_trace.yml) -[![Gas bumping tests](https://github.com/smartcontractkit/chainlink-testing-framework/actions/workflows/seth-test-bumping.yml/badge.svg)](https://github.com/smartcontractkit/seth/actions/workflows/test_cli.yml) -[![API tests](https://github.com/smartcontractkit/chainlink-testing-framework/actions/workflows/seth-test-api.yml/badge.svg)](https://github.com/smartcontractkit/seth/actions/workflows/test_api.yml) -[![CLI tests](https://github.com/smartcontractkit/chainlink-testing-framework/actions/workflows/seth-test-cli.yml/badge.svg)](https://github.com/smartcontractkit/seth/actions/workflows/test_cli.yml) -[![Integration tests (testnets)](https://github.com/smartcontractkit/chainlink-testing-framework/actions/workflows/seth-test-decode-testnet.yml/badge.svg)](https://github.com/smartcontractkit/seth/actions/workflows/test_decode_testnet.yml) -
- -# Content - -1. [Goals](#goals) -2. [Features](#features) -3. [Examples](#examples) -4. [Setup](#setup) - 1. [Building test contracts](#building-test-contracts) - 2. [Testing](#testing) -5. [Configuration](#config) - 1. [Simplified configuration](#simplified-configuration) - 2. [ClientBuilder](#clientbuilder) - 3. [Supported env vars](#supported-env-vars) - 4. [TOML configuration](#toml-configuration) -6. [Automated gas price estimation](#automatic-gas-estimator) -7. [DOT Graphs of transactions](#dot-graphs) -8. [Using multiple private keys](#using-multiple-keys) -9. [Experimental features](#experimental-features) -10. [Gas bumping for slow transactions](#gas-bumping-for-slow-transactions) -11. [CLI](#cli) -12. [Manual gas price estimation](#manual-gas-price-estimation) -13. [Block Stats](#block-stats) -14. [Single transaction tracing](#single-transaction-tracing) -15. [Bulk transaction tracing](#bulk-transaction-tracing) -16. [RPC traffic logging](#rpc-traffic-logging) -17. [Read-only mode](#read-only-mode) - -## Goals - -- Be a thin, debuggable and battle tested wrapper on top of `go-ethereum` -- Decode all transaction inputs/outputs/logs for all ABIs you are working with, automatically -- Simple synchronous API -- Do not handle `nonces` on the client side, trust the server -- Do not wrap `bind` generated contracts, small set of additional debug API -- Resilient: should execute transactions even if there is a gas spike or an RPC outage (failover) -- Well tested: should provide a suite of e2e tests that can be run on testnets to check integration - -## Features - -- [x] Decode named inputs -- [x] Decode named outputs -- [x] Decode anonymous outputs -- [x] Decode logs -- [x] Decode indexed logs -- [x] Decode old string reverts -- [x] Decode new typed reverts -- [x] EIP-1559 support -- [x] Multi-keys client support -- [x] CLI to manipulate test keys -- [x] Simple manual gas price estimation -- [ ] Fail over client logic -- [ ] Decode collided event hashes -- [x] 
Tracing support (4byte)
-- [x] Tracing support (callTracer)
-- [ ] Tracing support (prestate)
-- [x] Tracing decoding
-- [x] Tracing tests
-- [ ] More tests for corner cases of decoding/tracing
-- [x] Saving of deployed contracts mapping (`address -> ABI_name`) for live networks
-- [x] Reading of deployed contracts mappings for live networks
-- [x] Automatic gas estimator (experimental)
-- [x] Block stats CLI
-- [x] Check if address has a pending nonce (transaction) and panic if it does
-- [x] DOT graph output for tracing
-- [x] Gas bumping for slow transactions
-
-You can read more about how ABI finding and the contract map work [here](./docs/abi_finder_contract_map.md), and about the contract store [here](./docs/contract_store.md).
-
-## Examples
-
-Check the [examples](./examples) folder.
-
-The lib provides a small set of decoding helpers that you can use with vanilla `go-ethereum`-generated wrappers:
-
-```go
-// Decode waits for the transaction and decodes all the data/errors
-Decode(tx *types.Transaction, txErr error) (*DecodedTransaction, error)
-
-// NewTXOpts returns a new sequential transaction options wrapper,
-// sets opts.GasPrice and opts.GasLimit from seth.toml or override with options
-NewTXOpts(o ...TransactOpt) *bind.TransactOpts
-
-// NewCallOpts returns a new call options wrapper
-NewCallOpts(o ...CallOpt) *bind.CallOpts
-```
-
-By default, we use the `root` key `0`, but you can also use any of the private keys passed as part of the `Network` configuration in `seth.toml`, or ephemeral keys.
- -```go -// NewCallKeyOpts returns a new sequential call options wrapper from the key N -NewCallKeyOpts(keyNum int, o ...CallOpt) *bind.CallOpts - -// NewTXKeyOpts returns a new transaction options wrapper called from the key N -NewTXKeyOpts(keyNum int, o ...TransactOpt) *bind.TransactOpts -``` - -Start `Geth` in a separate terminal, then run the examples - -```sh -make GethSync -cd examples -go test -v -``` - -## Setup - -We are using [nix](https://nixos.org/) - -Enter the shell - -```sh -nix develop -``` - -## Building test contracts - -We have `go-ethereum` and [foundry](https://github.com/foundry-rs/foundry) tools inside `nix` shell - -```sh -make build -``` - -## Testing - -To run tests on a local network, first start it - -```sh -make AnvilSync -``` - -Or use latest `Geth` - -```sh -make GethSync -``` - -You can use default `hardhat` key `ac0974bec39a17e36ba4a6b4d238ff944bacb478cbed5efcae784d7bf4f2ff80` to run tests - -Run the [decode](./client_decode_test.go) tests - -```sh -make network=Anvil root_private_key=ac0974bec39a17e36ba4a6b4d238ff944bacb478cbed5efcae784d7bf4f2ff80 test -make network=Geth root_private_key=ac0974bec39a17e36ba4a6b4d238ff944bacb478cbed5efcae784d7bf4f2ff80 test -``` - -Check other params in [seth.toml](./seth.toml), select any network and use your key for testnets - -User facing API tests are [here](./client_api_test.go) - -```sh -make network=Anvil root_private_key=ac0974bec39a17e36ba4a6b4d238ff944bacb478cbed5efcae784d7bf4f2ff80 test_api -make network=Geth root_private_key=ac0974bec39a17e36ba4a6b4d238ff944bacb478cbed5efcae784d7bf4f2ff80 test_api -``` - -CLI tests - -```sh -make network=Anvil root_private_key=ac0974bec39a17e36ba4a6b4d238ff944bacb478cbed5efcae784d7bf4f2ff80 test_cli -make network=Geth root_private_key=ac0974bec39a17e36ba4a6b4d238ff944bacb478cbed5efcae784d7bf4f2ff80 test_cli -``` - -Tracing tests - -```sh -make network=Anvil root_private_key=ac0974bec39a17e36ba4a6b4d238ff944bacb478cbed5efcae784d7bf4f2ff80 test_trace 
-make network=Geth root_private_key=ac0974bec39a17e36ba4a6b4d238ff944bacb478cbed5efcae784d7bf4f2ff80 test_trace
-```
-
-# Config
-
-### Simplified configuration
-
-If you do not want to set all the parameters, you can use a simplified programmatic configuration. Here's an example:
-
-```go
-cfg := seth.DefaultConfig("ws://localhost:8546", []string{"ac0974bec39a17e36ba4a6b4d238ff944bacb478cbed5efcae784d7bf4f2ff80"})
-client, err := seth.NewClientWithConfig(cfg)
-if err != nil {
-    log.Fatal(err)
-}
-```
-
-This config uses what we consider reasonable defaults, such as:
-
-- 5 minute transaction confirmation timeout
-- 1 minute RPC node dial timeout
-- enabled EIP-1559 dynamic fees and automatic gas price estimation (with 200 blocks of history; will auto-disable itself if the RPC doesn't support EIP-1559)
-- tracing of reverted transactions only, to console and DOT graphs
-- checking of RPC node health on client creation
-- no ephemeral keys
-
-### ClientBuilder
-
-You can also use a `ClientBuilder` to build a config programmatically. Here's an extensive example:
-
-```go
-client, err := NewClientBuilder().
-	// network
-	WithNetworkName("my network").
-	// if empty we will ask the RPC node for the chain ID
-	WithNetworkChainId(1337).
-	WithRpcUrl("ws://localhost:8546").
-	WithPrivateKeys([]string{"ac0974bec39a17e36ba4a6b4d238ff944bacb478cbed5efcae784d7bf4f2ff80"}).
-	WithRpcDialTimeout(10*time.Second).
-	WithTransactionTimeouts(1*time.Minute).
-	// addresses
-	WithEphemeralAddresses(10, 10).
-	// tracing
-	WithTracing(seth.TracingLevel_All, []string{seth.TraceOutput_Console}).
-	// protections
-	WithProtections(true, true, seth.MustMakeDuration(2*time.Minute)).
-	// artifacts folder
-	WithArtifactsFolder("some_folder").
-	// folder with gethwrappers for ABI decoding
-	WithGethWrappersFolders([]string{"./gethwrappers/ccip", "./gethwrappers/keystone"}).
-	// nonce manager
-	WithNonceManager(10, 3, 60, 5).
-	// EIP-1559 and gas estimations
-	WithEIP1559DynamicFees(true).
-	WithDynamicGasPrices(120_000_000_000, 44_000_000_000).
-	WithGasPriceEstimations(true, 10, seth.Priority_Fast).
-	// gas bumping: retries, max gas price, bumping strategy function
-	WithGasBumping(5, 100_000_000_000, PriorityBasedGasBumpingStrategyFn).
-	Build()
-
-if err != nil {
-	log.Fatal(err)
-}
-```
-
-By default, it uses the same values as the simplified configuration, but you can override them by calling the appropriate methods. The builder includes only the options we thought most useful; it's not a 1:1 mapping of all fields in the `Config` struct. Therefore, if you need to set some more advanced options, you should create the `Config` struct directly, use the TOML config, or manually set the fields on the `Config` struct returned by the builder.
-
-It's also possible to use the builder to create a new config from an existing one:
-
-```go
-client, err := NewClientBuilderWithConfig(&existingConfig).
-	UseNetworkWithChainId(1337).
-	WithEIP1559DynamicFees(false).
-	Build()
-
-if err != nil {
-	log.Fatal(err)
-}
-```
-
-This can be useful if you already have a config but want to modify it slightly. It can also be useful if you read a TOML config with multiple `Networks` and want to specify which one to use.
-
-### Supported env vars
-
-Some crucial data is stored in env vars; create an `.envrc` and `source .envrc`, or use `direnv`:
-
-```sh
-export SETH_LOG_LEVEL=info # global logger level
-export SETH_CONFIG_PATH=seth.toml # path to the toml config
-export SETH_NETWORK=Geth # selected network
-export SETH_ROOT_PRIVATE_KEY=ac0974bec39a17e36ba4a6b4d238ff944bacb478cbed5efcae784d7bf4f2ff80 # root private key
-
-alias seth="SETH_CONFIG_PATH=seth.toml go run cmd/seth/seth.go" # useful alias for CLI
-```
-
-> Find the log level options [here](https://github.com/rs/zerolog?tab=readme-ov-file#leveled-logging)
-
-Alternatively, if you don't have a network defined in the TOML, you can still use the CLI by providing these two key env vars:
-
-```sh
-export SETH_URL=https://rpc.fuji.testnet.anyswap.exchange
-export SETH_CHAIN_ID=43113
-
-go run cmd/seth/seth.go ... # your command
-```
-
-In that case you should still pass the network name with the `-n` flag.
-
-### TOML configuration
-
-Set up your ABI directory (relative to `seth.toml`):
-
-```toml
-abi_dir = "contracts/abi"
-```
-
-Set up your BIN directory (relative to `seth.toml`):
-
-```toml
-bin_dir = "contracts/bin"
-```
-
-Decide whether you want to generate any `ephemeral` keys:
-
-```toml
-# Set number of ephemeral keys to be generated (0 for no ephemeral keys). Each key will receive a proportion of native tokens from root private key's balance with the value equal to `(root_balance / ephemeral_keys_number) - transfer_fee * ephemeral_keys_number`.
-ephemeral_addresses_number = 10
-```
-
-You can enable auto-tracing for all transactions meeting the configured level, which means that every time you use `Decode()` we will decode the transaction and also trace all calls made within the transaction, together with all inputs, outputs, logs and events.
Three tracing levels are available:
-
-- `all` - trace all transactions
-- `reverted` - trace only reverted transactions (the default setting used if you don't set `tracing_level`)
-- `none` - don't trace any transactions
-
-Example:
-
-```toml
-tracing_level = "reverted"
-```
-
-Additionally, you can decide where tracing/decoding data goes to. There are three options:
-
-- `console` - we will print all tracing data to the console
-- `json` - we will save tracing data for each transaction to a JSON file
-- `dot` - we will save tracing data for each transaction to a DOT file (graph)
-
-```toml
-trace_outputs = ["console", "json", "dot"]
-```
-
-For info on viewing DOT files, please check the [DOT graphs](#dot-graphs) section below.
-
-Example:
-![image](./docs/tracing_example.png)
-The `json` and `dot` outputs should be used with care when `tracing_level` is set to `all`, as they might generate a lot of data.
-
-If you want to check whether the RPC is healthy on start, you can enable it with:
-
-```toml
-check_rpc_health_on_start = true
-```
-
-It will execute a simple check of transferring 10k wei from the root key to itself and verify that the transaction was successful.
-
-You can also enable pending nonce protection, which will check whether a given key has any pending transactions. By default, we will wait 1 minute for all transactions to be mined. If any of them is still pending, we will panic. 
You can enable it with:
-```toml
-pending_nonce_protection_enabled = true
-pending_nonce_protection_timeout = "5m"
-```
-
-You can add more networks like this:
-
-```toml
-[[Networks]]
-name = "Fuji"
-transaction_timeout = "30s"
-# gas limit should be explicitly set only if you are connecting to a node that's incapable of estimating gas limit itself (should only happen for very old versions)
-# gas_limit = 9_000_000
-# hardcoded gas limit for sending funds that will be used if estimation of gas limit fails
-transfer_gas_fee = 21_000
-# legacy transactions
-gas_price = 1_000_000_000
-# EIP-1559 transactions
-eip_1559_dynamic_fees = true
-gas_fee_cap = 25_000_000_000
-gas_tip_cap = 1_800_000_000
-urls_secret = ["..."]
-# if set to true we will dynamically estimate gas for every transaction (explained in more detail below)
-gas_price_estimation_enabled = true
-# how many of the last blocks to use when estimating gas for a transaction
-gas_price_estimation_blocks = 1000
-# priority of the transaction, can be "fast", "standard" or "slow" (the higher the priority, the higher adjustment factor and buffer will be used for gas estimation) [default: "standard"]
-gas_price_estimation_tx_priority = "slow"
-```
-
-If you don't add any, we will use the default settings for the `Default` network.
-
-ChainID is not needed, as it's fetched from the node.
-
-If you want to save the addresses of deployed contracts, you can enable it with:
-
-```toml
-save_deployed_contracts_map = true
-```
-
-If you want to re-use previously deployed contracts, you can indicate the file name in `seth.toml`:
-
-```toml
-contract_map_file = "deployed_contracts_mumbai.toml"
-```
-
-Both features work only for live networks; for simulated networks they are ignored, and nothing is saved or read.
-
-### Automatic Gas Estimator
-
-This section explains how to configure and understand the automatic gas estimator, which is crucial for executing transactions on Ethereum-based networks. 
Here’s what you need to know:
-
-#### Configuration Requirements
-
-Before using the automatic gas estimator, it's essential to set the default gas-related parameters for your network:
-
-- **Non-EIP-1559 Networks**: Set the `gas_price` to define the cost per unit of gas if your network doesn't support EIP-1559.
-- **EIP-1559 Networks**: If your network supports EIP-1559, set the following:
- - `eip_1559_dynamic_fees`: Enables the dynamic fee structure.
- - `gas_fee_cap`: The maximum fee you're willing to pay per gas.
- - `gas_tip_cap`: An optional tip to prioritize your transaction within a block (although if it's set to `0` there's a high chance your transaction will take longer to execute, as it will be less attractive to miners, so do set it).
-
-These settings act as a fallback if the gas estimation fails. Additionally, always specify `transfer_gas_fee` for the fee associated with token transfers.
-
-If you don't know whether your network supports EIP-1559 but want to give it a try, it's recommended that you also set `gas_price` as a fallback. If we try to use EIP-1559 during gas price estimation and it fails, we will fall back to non-EIP-1559 logic. If that fails as well, we will use the hardcoded `gas_price` value.
-
-#### How Gas Estimation Works
-
-Gas estimation varies based on whether the network is a private Ethereum network or a live network.
-
-- **Private Ethereum Networks**: no estimation is needed. We always use hardcoded values.
-
-For real networks, the estimation process differs for legacy transactions and those compliant with EIP-1559:
-
-##### Legacy Transactions
-
-1. **Initial Price**: Query the network node for the current suggested gas price.
-2. **Priority Adjustment**: Modify the initial price based on `gas_price_estimation_tx_priority`. Higher priority increases the price to ensure faster inclusion in a block.
-3. 
**Congestion Analysis**: Examine the last X blocks (as specified by `gas_price_estimation_blocks`) to determine network congestion, calculating the usage rate of gas in each block and giving recent blocks more weight. Disabled if `gas_price_estimation_blocks` equals `0`.
-4. **Buffering**: Add a buffer to the adjusted gas price to increase transaction reliability during high congestion.
-
-##### EIP-1559 Transactions
-
-1. **Tip Fee Query**: Ask the node for the current recommended tip fee.
-2. **Fee History Analysis**: Gather the base fee and tip history from recent blocks to establish a fee baseline.
-3. **Fee Selection**: Use the greater of the node's suggested tip or the historical average tip for upcoming calculations.
-4. **Priority and Adjustment**: Increase the base and tip fees based on transaction priority (`gas_price_estimation_tx_priority`), which influences how much you are willing to spend to expedite your transaction.
-5. **Final Fee Calculation**: Sum the base fee and adjusted tip to set the `gas_fee_cap`.
-6. **Congestion Buffer**: Similar to legacy transactions, analyze congestion and apply a buffer to both the fee cap and the tip to secure transaction inclusion.
-
-Understanding and setting these parameters correctly ensures that your transactions are processed efficiently and cost-effectively on the network.
-
-When fetching historical base fee and tip data, we will use the last `gas_price_estimation_blocks` blocks. If it's set to `0`, we will default to the last `100` blocks. If the blockchain has fewer than `100` blocks, we will use all of them.
-
-Finally, `gas_price_estimation_tx_priority` is also used when deciding which percentile of the historical fee data to use for the base fee and tip. 
Here's how that looks:
-
-```go
-case Priority_Fast:
- baseFee = stats.GasPrice.Perc99
- historicalGasTipCap = stats.TipCap.Perc99
-case Priority_Standard:
- baseFee = stats.GasPrice.Perc50
- historicalGasTipCap = stats.TipCap.Perc50
-case Priority_Slow:
- baseFee = stats.GasPrice.Perc25
- historicalGasTipCap = stats.TipCap.Perc25
-```
-
-##### Adjustment factor
-
-All values are multiplied by the adjustment factor, which is calculated based on `gas_price_estimation_tx_priority`:
-
-```go
-case Priority_Fast:
- return 1.2
-case Priority_Standard:
- return 1.0
-case Priority_Slow:
- return 0.8
-```
-
-For fast transactions we will increase the gas price by 20%, for standard we will use the value as is, and for slow we will decrease it by 20%.
-
-##### Buffer percents
-
-If `gas_price_estimation_blocks` is higher than `0`, we further adjust the gas price by adding a buffer based on the congestion rate:
-
-```go
-case Congestion_Low:
- return 1.10, nil
-case Congestion_Medium:
- return 1.20, nil
-case Congestion_High:
- return 1.30, nil
-case Congestion_VeryHigh:
- return 1.40, nil
-```
-
-For a low congestion rate we will increase the gas price by 10%, for medium by 20%, for high by 30% and for very high by 40%. We cache block header data in an in-memory cache, so we don't have to fetch it every time we estimate gas. The cache has a capacity equal to `gas_price_estimation_blocks`, and every time we add a new element, we remove the one that is least frequently used and oldest (with block numbers constantly increasing, it makes no sense to keep old blocks). It's important to know that in order to use congestion metrics we need to fetch at least 80% of the requested blocks; if that fails, we will skip this part of the estimation and only adjust the gas price based on priority.
-For both transaction types, if any of the steps fails, we fall back to hardcoded values. 
-
-### DOT graphs
-
-There are multiple ways of visualising DOT graphs:
-
-- `xdot` application [recommended]
-- VSCode Extensions
-- online viewers
-
-### xdot
-
-To install, simply run `brew install xdot` and then run `xdot <path-to-your-dot-file>`. This tool seems to be the best for the job, since the viewer is interactive and supports tooltips, which in our case contain extra tracing information.
-
-### VSCode Extensions
-
-There are multiple extensions that can be used to view DOT files in VSCode. We recommend using [Graphviz Preview](https://marketplace.visualstudio.com/items?itemName=EFanZh.graphviz-preview). The downside is that it doesn't support tooltips.
-
-### Goland
-
-We were unable to find any (working) plugins for DOT graph visualization. If you do know any, please let us know.
-
-### Online viewers
-
-There are at least a dozen of them available, but none support tooltips and most can't handle our multi-line labels. These two are known to work, though:
-
-- [Devtools/daily](https://www.devtoolsdaily.com/graphviz/)
-- [Sketchviz](https://sketchviz.com/)
-
-### Using multiple keys
-
-If you want to use multiple existing keys (instead of ephemeral ones), you can pass them as part of the network configuration. In that case it's recommended **not** to read them from a TOML file. If you need to read them from the filesystem/OS, it's best to use environment variables.
-Once you've read them in a safe manner, you should programmatically add them to Seth's Config struct (the safe parts of which can be read from a TOML file). You can either add them directly to `Network`, if it's already set up, or add them, in the `Networks` slice, to the network you intend to use.
-
-For example, you could start by reading the TOML configuration first:
-
-```go
-cfg, err := seth.ReadCfg()
-if err != nil {
- log.Fatal(err)
-}
-```
-
-Then read the private keys in a safe manner. 
For example, from a secure vault or environment variables:
-
-```go
-var privateKeys []string
-var err error
-privateKeys, err = some_utils.ReadPrivateKeysFromEnv()
-if err != nil {
- log.Fatal(err)
-}
-```
-
-and then add them to the `Network` you plan to use. Let's assume it's called `Sepolia`:
-
-```go
-for i, network := range cfg.Networks {
- if network.Name == "Sepolia" {
- cfg.Networks[i].PrivateKeys = privateKeys
- }
-}
-```
-
-Or if you aren't using `[[Networks]]` in your TOML config and have just a single `Network`:
-
-```go
-cfg.Network.PrivateKeys = privateKeys
-```
-
-Or... you can use the convenience function `AppendPksToNetwork()` to have them added to both the `Network` and the `Networks` slice:
-
-```go
-added := cfg.AppendPksToNetwork(privateKeys, "Sepolia")
-if !added {
- log.Fatal("Network Sepolia not found in the config")
-}
-```
-
-Finally, proceed to create a new Seth instance:
-
-```go
-seth, err := seth.NewClientWithConfig(cfg)
-if err != nil {
- log.Fatal(err)
-}
-```
-
-A working example can be found [here](examples/example_test.go) as the `TestSmokeExampleMultiKeyFromEnv` test.
-
-Currently, there's no safe way to pass multiple keys to the CLI. In that case TOML is the only way to go, but be mindful that if you commit a TOML file with keys in it, you should assume they are compromised and all funds on them lost.
-
-### Experimental features
-
-In order to enable an experimental feature, you need to pass its name in the config. It's a global setting; you cannot enable it per network. Example:
-
-```toml
-# other settings before...
-tracing_level = "reverted"
-trace_outputs = ["console"]
-experiments_enabled = ["slow_funds_return", "eip_1559_fee_equalizer"]
-```
-
-Here's what they do:
-
-- `slow_funds_return` will work only in `core` and when enabled it changes the tx priority to `slow` and increases the transaction timeout to 30 minutes. 
-
-- `eip_1559_fee_equalizer` applies to EIP-1559 transactions: if it detects that the historical base fee and the suggested/historical tip are more than 3 orders of magnitude apart, it will use the higher value for both (this helps in cases where the base fee is almost 0 and the transaction is never processed).
-
-## Gas bumping for slow transactions
-
-Seth has a built-in gas bumping mechanism for slow transactions. If a transaction is not mined within a certain time frame (the `Network`'s transaction timeout), Seth will automatically bump the gas price and resubmit the transaction. This feature is disabled by default and can be enabled by setting `[gas_bumps] retries` to a non-zero number:
-
-```toml
-[gas_bumps]
-retries = 5
-```
-
-Once enabled, the amount by which the gas price is bumped depends by default on the `gas_price_estimation_tx_priority` setting and is calculated as follows:
-
-- `Priority_Fast`: 30% increase
-- `Priority_Standard`: 15% increase
-- `Priority_Slow`: 5% increase
-- everything else: no increase
-
-You can cap the max gas price (in wei) with:
-
-```toml
-[gas_bumps]
-max_gas_price = 1000000000000
-```
-
-Once a gas price bump would go above the limit, we stop bumping and use the last gas price that was below the limit.
-
-How the gas price is calculated depends on the transaction type:
-
-- for legacy transactions it's just the gas price
-- for EIP-1559 transactions it's the sum of the gas fee cap and the tip cap
-- for Blob transactions (EIP-4844) it's the sum of the gas fee cap, the tip cap and the max fee per blob
-- for AccessList transactions (EIP-2930) it's just the gas price
-
-Please note that Blob and AccessList support remains experimental and is not tested.
-
-If you want to use a custom bumping strategy, you can use a function with the [GasBumpStrategyFn](retry.go) type. 
Here's an example of a custom strategy that bumps the gas price by 100% for every retry:
-
-```go
-var customGasBumpStrategyFn = func(gasPrice *big.Int) *big.Int {
- return new(big.Int).Mul(gasPrice, big.NewInt(2))
-}
-```
-
-To use this strategy, you need to pass it to the `WithGasBumping` function in the `ClientBuilder`:
-
-```go
-var hundredGwei int64 = 100_000_000_000
-client, err := builder.
- // other settings...
- WithGasBumping(5, hundredGwei, customGasBumpStrategyFn).
- Build()
-```
-
-Or set it directly on Seth's config:
-
-```go
-// assuming sethClient is already created
-sethClient.Config.GasBumps.StrategyFn = customGasBumpStrategyFn
-```
-
-Since the strategy function only accepts a single parameter, if you want to base its behaviour on anything else, you will need to capture those values from the context in which you define the strategy function. For example, you can use a closure to capture a gas oracle client:
-
-```go
-gasOracleClient := NewGasOracleClient()
-
-var oracleGasBumpStrategyFn = func(gasPrice *big.Int) *big.Int {
- // get the current gas price from the oracle
- suggestedGasPrice := gasOracleClient.GetCurrentGasPrice()
-
- // if oracle suggests a higher gas price, use it
- if suggestedGasPrice.Cmp(gasPrice) == 1 {
- return suggestedGasPrice
- }
-
- // otherwise bump by 100%
- return new(big.Int).Mul(gasPrice, big.NewInt(2))
-}
-```
-
-The same strategy is applied to all types of transactions, regardless of whether it's the gas price, gas fee cap, gas tip cap or max blob fee.
-
-When enabled, gas bumping is used in two places:
-
-- during contract deployment via the `DeployContract` function
-- inside the `Decode()` function
-
-It is recommended to decrease the transaction timeout when using gas bumping, as it will effectively be multiplied by the number of retries. So if you were running with a 5-minute timeout and 0 retries, you should set it to 1 minute and 5 retries,
-or 30 seconds and 10 retries. 
-
-Don't worry if the previous transaction gets mined while the bumping logic executes. In that case, sending the replacement transaction with a higher gas price will fail (because it uses the same nonce as the original transaction) and we will go back to waiting for the original transaction to be mined.
-
-**Gas bumping is only applied to submitted transactions. If a transaction was rejected by the node (e.g. because of a too-low base fee) we will not bump the gas price or try to resubmit it, because the original transaction submission happens outside of Seth.**
-
-## CLI
-
-You can either define the network you want to interact with in your TOML config and then reference it in the CLI command, or you can pass all network parameters via env vars. Most of the examples below use the former approach.
-
-### Manual gas price estimation
-
-In order to adjust the gas price for a transaction, you can use the `seth gas` command:
-
-```sh
-seth -n Fuji gas -b 10000 -tp 0.99
-```
-
-This will analyze the last 10k blocks and give you the 25/50/75/99th/Max percentiles for base fees and tip fees.
-
-`-tp 0.99` requests the 99th tip percentile across all the transactions in one block and calculates the 25/50/75/99th/Max percentiles across all blocks.
-
-### Block Stats
-
-If you need to get some insights into network stats and create a realistic load/chaos profile with simulators (`anvil` as an example), you can use the `stats` CLI command.
-
-#### Define your network in `seth.toml`
-
-Edit your `seth.toml`:
-
-```toml
-[[networks]]
-name = "MyCustomNetwork"
-urls_secret = ["..."]
-
-[block_stats]
-rpc_requests_per_second_limit = 5
-```
-
-Then check the stats for the last N blocks:
-
-```sh
-seth -n MyCustomNetwork stats -s -10
-```
-
-To check stats for the interval (A, B):
-
-```sh
-seth -n MyCustomNetwork stats -s A -e B
-```
-
-#### Pass all network parameters via env vars
-
-If you don't have a network defined in the TOML, you can still use the CLI by providing the RPC URL via a cmd arg. 
-
-Then check the stats for the last N blocks:
-
-```sh
-seth -u "https://my-rpc.network.io" stats -s -10
-```
-
-To check stats for the interval (A, B):
-
-```sh
-seth -u "https://my-rpc.network.io" stats -s A -e B
-```
-
-Results can help you understand whether the network is stable, and what the average block time, gas price, block utilization and transactions per second are.
-
-```toml
-# Stats
-perc_95_tps = 8.0
-perc_95_block_duration = '3s'
-perc_95_block_gas_used = 1305450
-perc_95_block_gas_limit = 15000000
-perc_95_block_base_fee = 25000000000
-avg_tps = 2.433333333333333
-avg_block_duration = '2s'
-avg_block_gas_used = 493233
-avg_block_gas_limit = 15000000
-avg_block_base_fee = 25000000000
-
-# Recommended performance/chaos test parameters
-duration = '2m0s'
-block_gas_base_fee_initial_value = 25000000000
-block_gas_base_fee_bump_percentage = '100.00% (no bump required)'
-block_gas_usage_percentage = '3.28822000% gas used (no congestion)'
-avg_tps = 3.0
-max_tps = 8.0
-```
-
-### Single transaction tracing
-
-You can trace a single transaction using the `seth trace` command. Example with the `seth` alias mentioned before:
-
-```sh
-seth -u "https://my-rpc.network.io" trace -t 0x4c21294bf4c0a19de16e0fca74e1ea1687ba96c3cab64f6fca5640fb7b84df65
-```
-
-or if you want to use a predefined network:
-
-```sh
-seth -n=Geth trace -t 0x4c21294bf4c0a19de16e0fca74e1ea1687ba96c3cab64f6fca5640fb7b84df65
-```
-
-### Bulk transaction tracing
-
-You can trace multiple transactions at once using the `seth trace` command for a predefined network named `Geth`. Example:
-
-```sh
-seth -n=Geth trace -f reverted_transactions.json
-```
-
-or by passing all the RPC parameters with a flag:
-
-```sh
-seth -u "https://my-rpc.network.io" trace -f reverted_transactions.json
-```
-
-You need to pass a file with a list of transaction hashes to trace. The file should be a JSON array of transaction hashes, like this:
-
-```json
-[
- "0x...",
- "0x...",
- "0x...",
- ... 
-]
-```
-
-(Note that currently Seth automatically creates `reverted_transactions__.json` with all reverted transactions, so you can use this file as input for the `trace` command.)
-
-### RPC Traffic logging
-With `SETH_LOG_LEVEL=trace` we will also log to the console all traffic between Seth and the RPC node. This can be useful for debugging, as you can see all the requests and responses.
-
-### Read-only mode
-It's possible to use Seth in read-only mode, only for transaction confirmation and tracing. The following operations will fail:
-* contract deployment (we need a pk to sign the transaction)
-* new transaction options (we need the pk/address to check the nonce)
-* RPC health check (we need a pk to send a transaction to ourselves)
-* pending nonce protection (we need an address to check pending transactions)
-* ephemeral keys (we need a pk to fund them)
-* gas bumping (we need a pk to sign the transaction)
-
-The easiest way to enable read-only mode is to build the client via the `ClientBuilder`:
-```go
- client, err := builder.
- WithNetworkName("my network").
- WithRpcUrl("ws://localhost:8546").
- WithEphemeralAddresses(10, 1000).
- WithPrivateKeys([]string{"ac0974bec39a17e36ba4a6b4d238ff944bacb478cbed5efcae784d7bf4f2ff80"}).
- WithReadOnlyMode().
- Build()
-```
-
-When the builder is called with `WithReadOnlyMode()`, it will disable all the operations mentioned above and all the configuration settings related to them. 
-
-Additionally, when the client is built and `cfg.ReadOnly = true` is set, we will validate that:
-* no addresses or private keys are passed
-* no ephemeral addresses are to be created
-* RPC health check is disabled
-* pending nonce protection is disabled
-* gas bumping is disabled
+[![Documentation](https://img.shields.io/badge/Documentation-MDBook-blue?style=for-the-badge)](https://smartcontractkit.github.io/chainlink-testing-framework/libs/seth.html)
\ No newline at end of file
diff --git a/wasp/README.md b/wasp/README.md
index 948943d75..baab394de 100644
--- a/wasp/README.md
+++ b/wasp/README.md
@@ -1,171 +1,3 @@
-

- wasp -

-
+## Scalable protocol-agnostic load testing library -[![Go Report Card](https://goreportcard.com/badge/github.com/smartcontractkit/wasp)](https://goreportcard.com/report/github.com/smartcontractkit/wasp) -[![Component Tests](https://github.com/smartcontractkit/chainlink-testing-framework/actions/workflows/wasp-test.yml/badge.svg)](https://github.com/smartcontractkit/chainlink-testing-framework/actions/workflows/wasp-test.yml) -[![E2E tests](https://github.com/smartcontractkit/chainlink-testing-framework/actions/workflows/wasp-test-e2e.yml/badge.svg)](https://github.com/smartcontractkit/chainlink-testing-framework/actions/workflows/wasp-test-e2e.yml) -![gopherbadger-tag-do-not-edit](https://img.shields.io/badge/Go%20Coverage-80%25-brightgreen.svg?longCache=true&style=flat) - -Scalable protocol-agnostic load testing library for `Go` - -
-
-## Goals
-- Easy to reuse any custom client `Go` code
-- Easy to grasp
-- Have a slim codebase (500-1k loc)
-- No test harness or CLI, easy to integrate and run with plain `go test`
-- Have a predictable performance footprint
-- Easy to create synthetic or user-based scenarios
-- Scalable in `k8s` without complicated configuration or vendored UI interfaces
-- Non-opinionated reporting, push any data to `Loki`
-
-## Setup
-We use `nix` for dependencies; see the [installation](https://nixos.org/manual/nix/stable/installation/installation.html) guide
-```bash
-nix develop
-```
-
-
-## Run example tests with Grafana + Loki
-```bash
-make start
-```
-Insert the `GRAFANA_TOKEN` created by the previous command:
-```bash
-export LOKI_TOKEN=
-export LOKI_URL=http://localhost:3030/loki/api/v1/push
-export GRAFANA_URL=http://localhost:3000
-export GRAFANA_TOKEN=
-export DATA_SOURCE_NAME=Loki
-export DASHBOARD_FOLDER=LoadTests
-export DASHBOARD_NAME=Wasp
-
-make dashboard
-```
-Run some tests:
-```
-make test_loki
-```
-Open your [Grafana dashboard](http://localhost:3000/d/wasp/wasp-load-generator?orgId=1&refresh=5s)
-
-If you deploy to your own Grafana, check `DASHBOARD_FOLDER` and `DASHBOARD_NAME`; the defaults are the `LoadTests` folder and a dashboard called `Wasp`.
-
-Remove the environment:
-```bash
-make stop
-```
-
-## Test Layout and examples
-Check the [examples](examples/README.md) to understand the easiest way to structure your tests and run them both locally and remotely, at scale, inside `k8s`
-
-## Run pyroscope test
-```
-make pyro_start
-make test_pyro_rps
-make test_pyro_vu
-make pyro_stop
-```
-Open [pyroscope](http://localhost:4040/)
-
-You can also use `trace.out` in the root folder with `Go`'s default tracing UI
-
-## How it works
-![img.png](docs/how-it-works.png)
-
-Check this [doc](./HOW_IT_WORKS.md) for more examples and a project overview
-
-## Loki debug
-You can check all the messages the tool sends with the env var `WASP_LOG_LEVEL=trace`
-
-If the Loki client fails to deliver a 
batch, the test will proceed. If you experience Loki issues, consider setting `Timeout` in `LokiConfig` or set `MaxErrors: 10` to return an error after N Loki errors.
-
-`MaxErrors: -1` can be used to ignore all errors.
-
-Default Promtail settings are:
-```
-&LokiConfig{
- TenantID: os.Getenv("LOKI_TENANT_ID"),
- URL: os.Getenv("LOKI_URL"),
- Token: os.Getenv("LOKI_TOKEN"),
- BasicAuth: os.Getenv("LOKI_BASIC_AUTH"),
- MaxErrors: 10,
- BatchWait: 5 * time.Second,
- BatchSize: 500 * 1024,
- Timeout: 20 * time.Second,
- DropRateLimitedBatches: false,
- ExposePrometheusMetrics: false,
- MaxStreams: 600,
- MaxLineSize: 999999,
- MaxLineSizeTruncate: false,
-}
-```
-If you see errors like
-```
-ERR Malformed promtail log message, skipping Line=["level",{},"component","client","host","...","msg","batch add err","tenant","","error",{}]
-```
-try increasing `MaxStreams` even more or check your `Loki` configuration.
-
-
-## WASP Dashboard
-
-Basic [dashboard](dashboard/dashboard.go):
-
-![dashboard_img](./docs/dashboard_basic.png)
-
-### Reusing Dashboard Components
-
-You can integrate components from the WASP dashboard into your custom dashboards.
-
-Example:
-
-```
-import (
- waspdashboard "github.com/smartcontractkit/wasp/dashboard"
-)
-
-func BuildCustomLoadTestDashboard(dashboardName string) (dashboard.Builder, error) {
- // Custom key,value used to query for panels
- panelQuery := map[string]string{
- "branch": `=~"${branch:pipe}"`,
- "commit": `=~"${commit:pipe}"`,
- "network_type": `="testnet"`,
- }
-
- return dashboard.New(
- dashboardName,
- waspdashboard.WASPLoadStatsRow("Loki", panelQuery),
- waspdashboard.WASPDebugDataRow("Loki", panelQuery, true),
- // other options
- )
-}
-```
-
-## Annotate Dashboards and Monitor Alerts
-
-To enable dashboard annotations and alert monitoring, use the `WithGrafana()` function in conjunction with `wasp.Profile`. This approach allows for the integration of dashboard annotations and the evaluation of dashboard alerts. 
- -Example: - -``` -_, err = wasp.NewProfile(). - WithGrafana(grafanaOpts). - Add(wasp.NewGenerator(getLatestReportByTimestampCfg)). - Run(true) -require.NoError(t, err) -``` - -Where: - -``` -type GrafanaOpts struct { - GrafanaURL string `toml:"grafana_url"` - GrafanaToken string `toml:"grafana_token_secret"` - WaitBeforeAlertCheck time.Duration `toml:"grafana_wait_before_alert_check"` // Cooldown period to wait before checking for alerts - AnnotateDashboardUIDs []string `toml:"grafana_annotate_dashboard_uids"` // Grafana dashboardUIDs to annotate start and end of the run - CheckDashboardAlertsAfterRun []string `toml:"grafana_check_alerts_after_run_on_dashboard_uids"` // Grafana dashboardIds to check for alerts after run -} - -``` +[![Documentation](https://img.shields.io/badge/Documentation-MDBook-blue?style=for-the-badge)](https://smartcontractkit.github.io/chainlink-testing-framework/libs/wasp.html) \ No newline at end of file diff --git a/wasp/SECURITY.md b/wasp/SECURITY.md deleted file mode 100644 index b0ee4c04d..000000000 --- a/wasp/SECURITY.md +++ /dev/null @@ -1,11 +0,0 @@ -# Security Policy - -## Supported Versions - -| Version | Supported | -| ------- | ------------------ | -| 0.1.x | :white_check_mark: | - -## Reporting a Vulnerability - -Open an issue, we'll review it and get back to you promptly.