[Feature Request]: Add support for full evaluation of files including dependencies. #54
Comments
I'm not sure about the title "...does no longer work". This definitely never worked. Tests are discovered via one of two possible modes.
There is no mode and no support for fully evaluating files with all their dependencies imported. Besides some security and performance concerns, it might also be rather tricky to fully evaluate each file. Hence I would not consider this a "bug", but simply as designed and supported. But the request is legitimate and we should convert this into a "feature request" instead. Another test discovery mode with "full evaluation" should be technically feasible, but it is quite a bit of effort to respect all project settings correctly (tsconfig, package.json, etc.). Can you explain a bit more why such a code setup really makes sense? The example seems a bit artificial: putting a test into a separate file and importing it for execution. But I guess you have a more realistic setup in your project? I want to be sure we support your use case correctly.
My bad, it seems I mixed up my memory with other extensions I tried out.
I can see that. A workaround for at least being able to run all tests from an import seems to be to wrap the imported tests in a `describe` block.
Yes, definitely not a bug, since I wrongly thought it previously did work.
In bigger projects where you are working on different implementations of a protocol, you usually have a dependency on a test suite that many implementations re-use, in order to assert that all implementations behave the same way. In my case I am building a database where I can support different backends: SQLite, PG, an in-memory db, etc. But I only want to write the tests once in a separate place and make sure each implementation (a sub-package in a monorepo) passes all of them. Here is an example from
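A minimal sketch of that pattern (the package name `shared-tests`, the `KeyValueStore` interface, `storeComplianceTests`, and the backend classes are hypothetical; mocha globals are assumed via `@types/mocha`):

```ts
// shared-tests/index.ts – hypothetical shared package in the monorepo.
// It exports a suite generator that every backend implementation re-uses.
import { strict as assert } from 'assert';

export interface KeyValueStore {
  put(key: string, value: string): Promise<void>;
  get(key: string): Promise<string | undefined>;
}

export function storeComplianceTests(name: string, create: () => Promise<KeyValueStore>) {
  describe(`${name}: store compliance`, () => {
    it('returns what was stored', async () => {
      const store = await create();
      await store.put('a', '1');
      assert.equal(await store.get('a'), '1');
    });

    it('returns undefined for missing keys', async () => {
      const store = await create();
      assert.equal(await store.get('missing'), undefined);
    });
  });
}
```

```ts
// packages/sqlite/test/compliance.test.ts – per-backend test file.
// It only imports the shared suite and parameterizes it with its own backend.
import { storeComplianceTests } from 'shared-tests';
import { SqliteStore } from '../src';

storeComplianceTests('sqlite', async () => new SqliteStore(':memory:'));
```

The per-backend file contains no literal `describe`/`it` calls of its own, so a discovery mode that does not evaluate imports finds nothing there.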
Thanks a lot for the insights, that makes a lot more sense now. 😁 I have some ideas for how it could be added. The biggest challenge will be to inject a correct module resolver into the evaluation context: https://github.com/CoderLine/mocha-vscode/blob/main/src%2Fdiscoverer%2Fevaluate.ts#L195 I am also thinking that I should inject some environment variable during this evaluation; this would allow devs to prevent undesired code execution during test discovery.
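To sketch the environment-variable idea (nothing here is implemented in the extension; the variable name `MOCHA_VSCODE_TEST_DISCOVERY` and the setup helper are purely illustrative):

```ts
// A test file could gate expensive, side-effecting setup on a (hypothetical)
// environment variable that the extension would set while evaluating the file
// purely to discover tests.
const isDiscovery = process.env.MOCHA_VSCODE_TEST_DISCOVERY === 'true';

// Illustrative stand-in for whatever heavyweight setup a project performs.
async function startDatabaseContainer(): Promise<void> {
  // start a container, run migrations, etc.
}

describe('database integration', () => {
  if (!isDiscovery) {
    // Only spin up real infrastructure for an actual test run, not while the
    // extension is merely listing the tests in this file.
    before(async () => {
      await startDatabaseContainer();
    });
  }

  it('connects', () => {
    // assertions against the running database would go here
  });
});
```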
@marcus-pousette You can check out v1.1.0 and change your test evaluation mode to `evaluation-cjs-full`:
```json
{
  "mocha-vscode.extractSettings": {
    "suite": ["describe", "suite"],
    "test": ["it", "test"],
    "extractWith": "evaluation-cjs-full",
    "extractTimeout": 10000
  }
}
```
Checklist
Actual behavior
Importing tests does not work
Expected behavior
I want to define a function that generates tests in another file and call it from a test file to generate the tests. For example, this is useful when you have a test suite and there are different input parameters you want to run the suite with.
Minimal, Reproducible Example
If I define something like the following:
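(The original snippet is not preserved in this thread; the following is an illustrative sketch with hypothetical file names, assuming mocha globals via `@types/mocha`.)

```ts
// generateTests.ts – exports a function that registers tests at runtime.
import { strict as assert } from 'assert';

export function generateTests(inputs: number[]) {
  describe('generated tests', () => {
    for (const input of inputs) {
      it(`handles input ${input}`, () => {
        assert.ok(input >= 0); // placeholder assertion
      });
    }
  });
}
```

```ts
// index.test.ts – the file the extension scans; it only calls the generator.
import { generateTests } from './generateTests';

generateTests([1, 2, 3]);
```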
No tests show up in the sidebar. Previously I could run the tests from the sidebar by clicking the play button on the root folder, and by doing so the "runtime" tests would be discovered.
The current behaviour is that no tests are discovered at all.
Output
Plugin Version Details
6d5e329
VS Code Version Details
Version: 1.87
Further details
No response