
Support provider test lifecycle hooks #468

Open
mefellows opened this issue Oct 8, 2024 · 0 comments
Labels
enhancement Indicates new feature requests

Comments

@mefellows
Member

It is common in testing frameworks to be able to hook into the lifecycle of tests and modify the state of the system in a fine-grained way. Current Pact SDK clients can support this fairly easily on the consumer side, because each test is separately authored and has its own set of FFI lifecycle functions. On the provider side, however, verification is essentially a single process - all tests run within the pactffi_verifier_execute function.

This presents a challenge for clients wanting to provide a native testing experience around the verification because there is no straightforward way to hook into the lifecycle.

JS/Go have proxy layers that were introduced for other reasons, but they have proven handy in enabling such behaviour:

Examples:

This does create additional client complexity, however, and may not be workable for all languages.

Related issues:

Potential approaches

  1. HTTP "hooks" that fire, consistent with the current approach for state setup/teardown
    1. A before hook would run before any state setup calls are made, and an after hook would run after all teardown calls have been made
  2. FFI callback functions

Another potential benefit of the hooks approach would be to allow SDKs to create "on-demand" test cases in the target frameworks. As it stands, most clients run all provider tests within a single "test" case. Hooks that mark the start and end of each test case would allow clients to create those tests dynamically.

I think this is probably best handled in another way, however (e.g. perhaps sending a structured JSON object back at the end of the verification, along with any related errors for each scenario, for the client to parse).
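The structured-JSON idea might be consumed on the client side along these lines. The field names (`provider`, `results`, `description`, `success`, `error`) are invented for illustration - the actual shape the verifier would emit is exactly what this issue leaves open.

```go
package main

import "encoding/json"

// interactionResult is a hypothetical per-interaction outcome the verifier
// could report at the end of a run.
type interactionResult struct {
	Description string `json:"description"`
	Success     bool   `json:"success"`
	Error       string `json:"error,omitempty"`
}

// verificationReport is a hypothetical top-level report for one provider.
type verificationReport struct {
	Provider string              `json:"provider"`
	Results  []interactionResult `json:"results"`
}

// parseReport parses the verifier's JSON output so the SDK can surface
// each interaction as a native test case with its own pass/fail status.
func parseReport(raw []byte) (verificationReport, error) {
	var rep verificationReport
	err := json.Unmarshal(raw, &rep)
	return rep, err
}
```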
