Publish test results and logs #2707
Conversation
@@ -22,20 +18,17 @@
    """,
    requirement=simple_requirement(unsupported_os=[]),
)
class AgentBvt(TestSuite):
Given the limitations of LISA's TestSuite, I am now modeling a DCR scenario as a TestSuite with a single test (main) that invokes each of the steps in the scenario.
I also moved the logic common to all suites to a new class, AgentTestSuite.
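In plain Python, the re-organization described above might look roughly like this; the step names and method signatures are illustrative assumptions, not LISA's actual API:

```python
# Hypothetical sketch (not LISA's actual API): a DCR scenario modeled as a
# suite with a single test, "main", that invokes each step in order.
class AgentTestSuite:
    """Logic common to all agent suites (setup, log collection, ...)."""
    def setup(self):
        self.executed = ["setup"]

    def collect_logs(self):
        self.executed.append("collect_logs")


class AgentBvt(AgentTestSuite):
    """The agent BVTs re-organized as a single-test scenario."""
    def main(self):
        self.setup()
        # Each step is a method; the single test case runs them all.
        for step in (self.check_agent_version, self.check_extension_operations):
            step()
        self.collect_logs()
        return self.executed

    def check_agent_version(self):
        self.executed.append("check_agent_version")

    def check_extension_operations(self):
        self.executed.append("check_extension_operations")


result = AgentBvt().main()
```

Moving the shared logic into AgentTestSuite keeps each scenario's file down to the steps that are specific to it.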
@@ -0,0 +1,53 @@
from pathlib import Path, PurePath
Currently just a skeleton; it needs many improvements.
@@ -0,0 +1,17 @@
#!/usr/bin/env bash
Executed on the test VM to collect agent and system logs into a compressed tarball.
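A minimal sketch of such a collection script; it uses a scratch directory and made-up file names in place of the real log paths, which are not shown in this excerpt:

```shell
#!/usr/bin/env bash
# Hedged sketch: the actual script's paths and file list may differ.
set -euo pipefail

# Use a scratch directory so the sketch is self-contained.
logdir=$(mktemp -d)
echo "agent log" > "$logdir/waagent.log"
echo "system log" > "$logdir/syslog"

tarball="$logdir/logs.tgz"
# -C keeps the archive paths relative instead of embedding $logdir.
tar -czf "$tarball" -C "$logdir" waagent.log syslog
tar -tzf "$tarball"
```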
Codecov Report
@@ Coverage Diff @@
## develop #2707 +/- ##
===========================================
- Coverage 71.95% 71.94% -0.01%
===========================================
Files 104 104
Lines 15765 15765
Branches 2244 2244
===========================================
- Hits 11343 11342 -1
Misses 3906 3906
- Partials 516 517 +1
# Remove the first 2 levels of the tree (which indicate the time of the test run) to make navigation
# in the Azure Pipelines UI easier.
#
mv "$BUILD_ARTIFACTSTAGINGDIRECTORY"/[0-9][0-9][0-9][0-9][0-9][0-9][0-9][0-9]/*/* "$BUILD_ARTIFACTSTAGINGDIRECTORY"
Good point; I noticed this.
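The flattening done by the mv above can be sketched with a scratch directory standing in for $BUILD_ARTIFACTSTAGINGDIRECTORY (an Azure Pipelines predefined variable); the date/time directory names are invented for the example:

```shell
# Sketch of the artifact-tree flattening; $staging stands in for the real
# staging directory, and the date/time values are illustrative.
set -euo pipefail
staging=$(mktemp -d)

# Simulate the tree: an 8-digit date directory, then a time directory.
mkdir -p "$staging/20230101/120000"
echo "log" > "$staging/20230101/120000/agent.log"

# Remove the first two levels so the artifacts sit at the top of the tree.
mv "$staging"/[0-9][0-9][0-9][0-9][0-9][0-9][0-9][0-9]/*/* "$staging"
ls "$staging"
```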
@@ -32,3 +32,14 @@ jobs:
AZURE_CLIENT_SECRET: $(AZURE-CLIENT-SECRET)
AZURE_TENANT_ID: $(AZURE-TENANT-ID)
SUBSCRIPTION_ID: $(SUBSCRIPTION-ID)

- task: PublishTestResults@2
I have a general question regarding JUnit test results. Based on the current design, when do we say a test case failed: when all of the steps in the test case failed, or when any of the steps failed?
Like DCR today, do we show a warning/failed status on the test case if any of its steps failed, so that it's easy to navigate to that test case and dig into the logs to see the errors?
Each scenario needs to define which conditions should be treated as errors (i.e. the test case must fail) and which as warnings (i.e. the test passes, but there are some warning messages in the logs).
DCR is too limited in that it does not make this distinction, and it is hard to know what is an actual failure without checking the logs in detail.
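One way to encode that distinction, sketched in plain Python (the function and field names are illustrative; this is not LISA's result model):

```python
# Hedged sketch: map step outcomes to a single pass/fail test case result,
# keeping warnings as log annotations only. Not LISA's actual API.
def evaluate_scenario(steps):
    """steps: list of (name, severity, ok) tuples; severity is 'error' or 'warning'."""
    errors = [name for name, severity, ok in steps if severity == "error" and not ok]
    warnings = [name for name, severity, ok in steps if severity == "warning" and not ok]
    # Only error-severity failures fail the JUnit test case.
    result = "failed" if errors else "passed"
    return result, warnings


result, warnings = evaluate_scenario([
    ("provision_vm", "error", True),
    ("check_agent_log", "warning", False),  # warning only: the case still passes
])
```

Under this scheme the JUnit result stays binary (pass/fail), while the warnings survive in the collected logs for anyone investigating a failure.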
Yes, but how do you pass this information to the JUnit output? The JUnit output is now controlled by LISA, and LISA determines the test case result.
Just pass or fail the test; I'm not sure I get your question.
I understand pass or fail; that's straightforward. I'm curious about the warning case: if we make a test case pass when it has warnings, then the Azure pipeline shows green, and it's hard to know that a particular test case has warnings without going into each test case's log.
A warning would be something that does not prevent a test from executing/succeeding; when a test fails, warnings can help determine why.
I'm not sure if there is a way to report them to the pipeline, but I'm also not sure we want to, since they are only interesting when something fails.
If we treat one step's failure as a warning in order to continue executing the rest of the steps, we still need to follow up on why that particular step produced a warning, to make sure there are no regressions.
I'm just raising the point; this is something to think about in the grand scheme of things.
I'm not sure I understand what you are trying to achieve, but you can always reach out offline.
Publish test results and logs to the Azure Pipeline.
Also, given the limitations of LISA's TestSuite, re-organized the agent BVTs.