Adding unit tests for utilities.py #415
Foresight Summary

✅ CI workflow has finished in 37 seconds (2 minutes 30 seconds less than

| Job | Failed Steps | Tests |
|---|---|---|
| tests-unit | - | N/A |
| lint-pylint | - | N/A |
| lint-black | - | N/A |
Nice!
Yes, these are really unit tests, since you're testing a specific function.
Move them under the /unit/ dir.
Also, rename utilities_test.py -> test_utilities.py.
See https://github.com/bitovi/bitops/tree/main/scripts/tests/unit for the existing structure.
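For context on why the rename matters: `unittest` (and pytest) discover test modules by a `test*.py` filename pattern by default, so `utilities_test.py` is skipped by discovery while `test_utilities.py` is picked up. A quick stdlib check:

```python
import fnmatch
import inspect
import unittest

# unittest discovery defaults to the 'test*.py' filename pattern.
pattern = inspect.signature(unittest.TestLoader.discover).parameters["pattern"].default
print(pattern)  # → test*.py

# Discovery matches filenames with fnmatch, so:
print(fnmatch.fnmatch("test_utilities.py", pattern))  # → True
print(fnmatch.fnmatch("utilities_test.py", pattern))  # → False
```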
So the BITOPS_RUN_MODE = 'testing' doesn't work yet, but it does need to. Other than that, these tests were created using OpenAI. The prompt goes as follows:
These still need to be cleaned up. The first one that should be cleaned up is run_cmd, as I believe it will close a few of the handle_hook test failures.
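One hedged sketch of that cleanup direction: make the command runner injectable so the handle_hook tests never shell out. The names (`run_cmd`, `handle_hook`) mirror the discussion, but the signatures here are assumptions, not the actual BitOps API:

```python
import unittest
from unittest import mock

# Hypothetical module under test; real BitOps signatures may differ.
def run_cmd(command):
    raise RuntimeError("should be mocked in unit tests")

def handle_hook(script_path, runner=run_cmd):
    # Delegate to the injected runner so tests can substitute a fake.
    return runner(["bash", script_path])

class HandleHookTest(unittest.TestCase):
    def test_handle_hook_invokes_runner(self):
        fake = mock.Mock(return_value=0)
        self.assertEqual(handle_hook("hook.sh", runner=fake), 0)
        fake.assert_called_once_with(["bash", "hook.sh"])
```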
utilities.py
See https://github.com/bitovi/bitops/tree/main/scripts/tests#running-the-tests, which gives an example command with all the configuration/environment.
tox.ini
Outdated

```ini
setenv =
    BITOPS_LOGGING_FILENAME=
```
Unit tests ideally shouldn't write to a file for logging.
I saw you tried to disable the logging in another PR, but this does it in a more native way.
This fixes the unit test run in this PR without relying on a chain of PRs.
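To illustrate the "native" idea (a sketch only, not the real BitOps logging module): treat an empty `BITOPS_LOGGING_FILENAME` as "no file handler", so the empty assignment in tox.ini disables file logging with no code changes elsewhere:

```python
import logging
import os

def build_handlers():
    """Console handler always; file handler only for a non-empty env var.

    Hypothetical sketch of the approach, not the actual BitOps logger setup.
    """
    # An empty string (as set by tox.ini's `BITOPS_LOGGING_FILENAME=`)
    # is falsy, so it collapses to None here.
    filename = os.environ.get("BITOPS_LOGGING_FILENAME") or None
    handlers = [logging.StreamHandler()]
    if filename:
        handlers.append(logging.FileHandler(filename))
    return handlers
```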
Making our code testable triggered a lot of these changes.
As we're moving the exception handling outside of the function, it's better to also log errors outside of the function.
Example pattern:

```python
# The function itself should raise an exception on failure
def example():
    if ok:            # placeholder condition
        do_thing()    # placeholder action
        logger.debug("Some debugging information in the function is OK")
    else:
        raise ExampleException("Custom error here goes outside as an Exception")

# The caller in the main code should catch the exception and handle
# the error logging and return codes, i.e. full error handling.
try:
    example()
except ExampleException as e:
    # error handling is moved outside of the function
    logger.error(f"An error occurred while trying to run an example: {e}")
    sys.exit(1)
```
This also fixes VS Code test autodiscovery.
Updated it to be ready for merge.
Many things needed more work after the AI touch.
@PhillypHenning Let me know in case of any questions and let's merge it.
```python
    result = run_cmd([plugin_install_language, plugin_install_script_path])
except Exception as e:
    logger.error(
        f"Failed to run plugin install script: [{plugin_install_script_path}]. Error: [{e}]"
```
Error message for deploy and install should be similar.
This one comes from the Exception, e.g. when something goes wrong on the Popen system side, like a timeout or a permissions issue.
What would be a more expected error message to you?
Feel free to push directly to the branch, or you can also propose a code suggestion
(https://egghead.io/lessons/github-add-suggestions-in-a-github-pr-review) for better clarity.
scripts/plugins/config/parser.py
Outdated

```python
        f"Required config file was not found. \
        To fix this please add the following file: [{e.filename}]"
    )
    sys.exit(2)
```
The run_cmd try/except blocks exit with 1xx; should we consider making this a 1xx exit as well? What denotes a 2 vs. a 101?
That's how you added it here initially: eaeba19#diff-ec9057b7cfb0579064835fe003bf275eb479b2910d976987366051d5d7593c9dR24, so that's more a question for you.
There is no difference to me, as long as we're returning a non-zero exit code.
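Whichever code is chosen, a nice side effect of this PR's test setup is that the exit code itself becomes testable. A hedged sketch (the `load_config_or_exit` stand-in is hypothetical, and 101 assumes the 1xx convention discussed above):

```python
import unittest

def load_config_or_exit(path):
    # Hypothetical stand-in for the parser.py behavior discussed above:
    # exit with 101 when a required config file is missing.
    raise SystemExit(101)

class ExitCodeTest(unittest.TestCase):
    def test_missing_config_exits_with_101(self):
        # sys.exit(n) raises SystemExit, so the code is assertable directly
        with self.assertRaises(SystemExit) as ctx:
            load_config_or_exit("missing.config.yaml")
        self.assertEqual(ctx.exception.code, 101)
```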
Updated in 92ffbd1
```diff
- sys.exit(2)
+ sys.exit(101)
```
Adds to #286
Description
Adding unit tests for the utilities.py file
Type of change
How Has This Been Tested?
It is the testing 👍
```shell
python3 -m unittest scripts/tests/unit/test_utilities.py
```
Checklist: