
test: PURL generation for language parsers #3961

Closed
terriko opened this issue Mar 20, 2024 · 6 comments · Fixed by #4332
Comments

terriko commented Mar 20, 2024

We're in the process of adding PURL generation to our language parsers, but we aren't using it for anything yet. Eventually we will, as part of the planned GSoC project described in #3771. But while we're waiting for bigger things, we could definitely stand to have some unit tests!

To avoid wasting time parsing files twice, we may want to add the generate_purl tests into the existing test_language_package code and write really basic unit tests that throw both valid and invalid vendor, package, and version info into generate_purl.
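As a rough illustration of what such a test could look like (a sketch only: the PythonParser import path, its constructor arguments, the generate_purl signature, and the ValueError failure mode are all assumptions to check against the real API, not the project's confirmed code):

```python
# Sketch only: import path, constructor arguments, the
# generate_purl(product, version) signature, and the failure mode
# are assumptions, not cve-bin-tool's confirmed API.
import pytest
from packageurl import PackageURL  # provided by the packageurl-python package

from cve_bin_tool.parsers.python import PythonParser  # assumed module path


@pytest.mark.parametrize(
    "product, version, expected",
    [
        ("requests", "2.31.0", "pkg:pypi/requests@2.31.0"),
        ("flask", "3.0.0", "pkg:pypi/flask@3.0.0"),
    ],
)
def test_generate_purl_valid(product, version, expected):
    parser = PythonParser(cve_db=None, logger=None)  # hypothetical arguments
    purl = parser.generate_purl(product, version)
    assert isinstance(purl, PackageURL)
    assert str(purl) == expected


@pytest.mark.parametrize("product", ["", "not a valid name!"])
def test_generate_purl_invalid(product):
    parser = PythonParser(cve_db=None, logger=None)  # hypothetical arguments
    # Assumed failure mode; the real function may instead return None.
    with pytest.raises(ValueError):
        parser.generate_purl(product, "1.0.0")
```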

terriko added the tests label Mar 20, 2024
joydeep049 commented Mar 20, 2024

I would like to work on this, as I'm already working on PURL generation. It would be a good learning experience.
@terriko @anthonyharrison

joydeep049 commented:

Also, would the generate_purl tests be aimed at testing the generate_purl function of each language parser, or the base version written in __init__.py?
And would we take normalization into consideration when testing the function?
@terriko @anthonyharrison

joydeep049 commented:

Eagerly waiting for a response, @terriko :)

terriko commented Apr 8, 2024

The goal is always for us to get as close as possible to 100% test coverage, so definitely both. If you've never worked with code coverage tools like codecov, they can help you figure out which parts of the code your tests cover and which parts they don't.
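For a quick local check, one way to see uncovered lines is the pytest-cov plugin (the module and test paths below are illustrative, not the repo's confirmed layout):

```sh
pytest --cov=cve_bin_tool.parsers --cov-report=term-missing test/test_language_package.py
```

The term-missing report lists the line numbers the tests never reached, which is exactly what you need for targeting the generate_purl branches.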

terriko commented Apr 8, 2024

Note that I don't actually expect us to get to 100% over all of cve-bin-tool: that's asymptotically hard, some of the code has to be tested against real data, and some of the tests would take a long time to run. So we aim to hover around 80%, mostly as a balance of coverage versus practicality. But for something like this, where you're literally just building a string and it'll execute in a few microseconds, there's no reason not to test every possible code path. That is, unless upon testing it you realize that a lot of those paths could be collapsed into a single code path, so you don't have to write as many tests. Refactoring is a valid way to improve code coverage too. 📈
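To make the "collapse the code paths" idea concrete, here is a minimal sketch (an assumption-laden illustration, not cve-bin-tool's actual code): if every language parser assembles its PURL the same way apart from the type string, one shared helper replaces many near-identical branches:

```python
# Illustrative sketch only; the helper's name, signature, and failure
# mode are assumptions, not cve-bin-tool's confirmed implementation.
from packageurl import PackageURL  # provided by the packageurl-python package


def generate_purl(purl_type: str, product: str, version: str) -> PackageURL:
    """Build a PURL such as pkg:pypi/requests@2.31.0 for any language type."""
    if not product:
        raise ValueError("product name is required")
    return PackageURL(type=purl_type, name=product, version=version)
```

With this shape, a single parametrized test over (purl_type, product, version) tuples exercises every language through one code path.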

terriko commented Aug 7, 2024

Tagging @inosmeet to take a look at these tests.

terriko pushed a commit that referenced this issue Aug 12, 2024
* fixes #3961

Signed-off-by: Meet Soni <meetsoni3017@gmail.com>