
List of improvements #7

Closed
1 task
ehmatthes opened this issue Oct 24, 2021 · 11 comments
ehmatthes commented Oct 24, 2021

Rather than opening a bunch of issues, this is a single issue to keep track of the many smaller things that should be done if anyone starts using this project.

  • Go through all notes in here and move them to a notes file, or fresher issues.

ehmatthes commented Oct 24, 2021

General improvements

  • Expand documentation.
    • On root README.
    • In docs/.
    • State the explicit scope of this project. Support a variety of dependency management approaches and consider supporting any PaaS platform that offers a free deployment option, but don't try to support advanced use cases like AWS buckets, payment systems, etc. The core goal is to make it easy for people who've built initial local functionality to push their project to an initial deployment as simply as possible.
    • Add a TOC to root readme.
    • Create a small set of tags for issues: testing, docs, heroku, azure, etc.
    • Revisit labels: src
    • List all CLI options when running manage.py simple_deploy.
  • Support other dependency management approaches besides requirements.txt.
    • Support Pipenv. Should be straightforward, because I've done this before in a custom buildpack.
    • Support Poetry.
  • Consider making a terminal recording of deploying a demo project.
  • Consider stating in the output that git checkout . should bring your project back to the state it was in before attempting to configure for deployment.
  • Address any DEV: note in the codebase.
  • Do all of these approaches work with psycopg3 instead of psycopg2? (Not at this point; see Upgrade to psycopg2 #46.)
  • Pad third party terminal output, so it stays nested in the sequence of steps.
    • Capture output and rewrite it to terminal, line by line, padded?
    • How much of a performance hit is this?
  • Add a TOC to root README.
  • This assumes the user is already using Git to track the project. Should we check for a .git directory?
  • On Heroku, add a runtime.txt using latest supported Python version.
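The third-party output padding above could be sketched like this. A minimal sketch only: the helper name run_padded and the four-space pad are assumptions, not simple_deploy's actual implementation, and the per-line overhead should be negligible compared to the command itself.

```python
import subprocess

def run_padded(cmd, pad="    "):
    """Run a command, capture its output, and re-emit each line indented.

    Hypothetical helper: streams line by line so long-running third-party
    commands still show progress while staying nested in the step output.
    """
    proc = subprocess.Popen(
        cmd, stdout=subprocess.PIPE, stderr=subprocess.STDOUT, text=True
    )
    padded_lines = []
    for line in proc.stdout:
        padded = pad + line.rstrip("\n")
        print(padded)
        padded_lines.append(padded)
    proc.wait()
    return padded_lines
```

Streaming through the pipe line by line also makes it easy to log the same padded lines later, if that's useful.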


ehmatthes commented Oct 27, 2021

Testing

  • Consider renaming the test scripts to reflect this project, rather than the buildpack project they're based on.
  • If staying with bash for integration testing, build a proper help function.
  • Explore bash functions?
  • Find a way to keep the poetry test run from polluting the django-simple-deploy project environment. How can it install what it needs to in a new venv that's been activated in the tmp directory?
    • Would it work any better with a set of pinned versions? Was a poetry caching issue.
  • Sometimes poetry test fails unless cache is cleared manually before running test. Keep an eye on this issue. poetry cache clear --all pypi. See Fix poetry cache issue when testing #22.
    • I have no idea why clearing the cache from within the script is not working. Maybe because it runs in a different environment? Look at cache options, and try modifying the script call to omit the --no-interaction flag. This is hard to test, because I don't always know what makes a run successful.
    • See open issue about cache clearing.
  • Sometimes the Heroku-deployed project shows "no app here yet" for a little while longer than usual. If there are no errors in the test output, wait a moment and refresh the app page? Am I hitting some Heroku usage limit?
  • Testing automate_all is interrupted by the prompt confirming whether to automate committing and pushing. Could consider a --no-interactions flag for simple_deploy.py.
  • Start testing failure modes.
    • Test that a non-empty ALLOWED_HOSTS generates the appropriate error message. Should be a unit test, because this should be detected before deploying.
  • Testing of deployed app could be pulled back out of test_heroku_deployment.sh, as long as it's passed the URL of the app to test.
  • Exceptions in .py files don't cause bash test to stop running. Do a better job of reporting these issues; right now I have to scan output for them.
  • Automate-all should probably be a simple -a flag. Not -o automate_all.
  • Once the blog project is part of this project, refactor the test_deployed_app_functionality.py script.
  • Figure out whether pytest, or a different approach than a simple bash script, will manage integration testing better.
  • Verify that an appropriate branch is pushed.
    • This could be in integration testing, but it might be a simpler configuration unit test.
  • Build a starting set of unit tests for simple_deploy.
  • Local testing
    • Test that secret key setting is correct in heroku settings.
  • Use Pytest for integration testing. I just commented out most tests so I could run only the newest test, and felt ridiculous doing that.
    • Look at pytest tmp dir support. But, be careful - we want users to be able to drop into the tmp dir and work with the deployed app, so it shouldn't be too hidden.
  • Should integration test look for log files at all?
  • Testing invalid calls, ie omitting the --platform flag:
    • I am certain there's a better approach to testing the results of invalid calls. What is a better approach? I assume there's an efficient way to make each kind of invalid call test its own function. (Currently this is just an assert statement in a single function in conftest.py.) See approach to modifying ALLOWED_HOSTS during testing.
    • Also, test for specific error messages.
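One way to give each kind of invalid call its own test, as wondered about above, is pytest's parametrize. A minimal sketch, in which the helper function, the supported-platform set, and the error messages are all assumptions rather than simple_deploy's actual strings:

```python
import pytest

SUPPORTED_PLATFORMS = {"fly_io", "platform_sh", "heroku"}  # assumed set

def validate_platform_arg(platform):
    """Hypothetical stand-in for simple_deploy's --platform validation."""
    if platform is None:
        return "The --platform flag is required."
    if platform not in SUPPORTED_PLATFORMS:
        return f"{platform} is not a supported platform."
    return ""

# Each invalid call becomes its own test case, rather than one assert
# buried in a single conftest.py function. This also covers testing for
# specific error messages.
@pytest.mark.parametrize(
    "platform, expected_msg",
    [
        (None, "--platform flag is required"),
        ("aws", "not a supported platform"),
    ],
)
def test_invalid_platform_call(platform, expected_msg):
    assert expected_msg in validate_platform_arg(platform)
```

Adding a new invalid-call case is then one new tuple in the parametrize list.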


ehmatthes commented Oct 29, 2021

Refactoring

  • In simple_deploy.py, could probably have a method _add_pkg(), which decides whether to call _add_req_txt_pkg(), _add_pipenv_pkg(), etc.
    • Add a version parameter to _add_req_txt_pkg(), so calls have same format as _add_pipenv_pkg().
  • There's a lot of refactoring that can be done in simple_deploy.py, including breaking it into multiple files.
  • confirm_automate_all() can probably be moved back to simple_deploy.py, and use the --platform arg to know which messages to display.
  • Many methods in simple_deploy.Command are helper methods, because the project started in this one class. They should probably have the leading underscore removed now that they're called by other classes.
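The _add_pkg() dispatcher mentioned above might look roughly like this. A sketch under stated assumptions: the method bodies just record what would be written, rather than touching real requirements files, and the pkg_manager values are assumed names.

```python
class Command:
    """Sketch of a dispatcher for simple_deploy.py (names assumed above)."""

    def __init__(self, pkg_manager):
        self.pkg_manager = pkg_manager

    def _add_pkg(self, package, version=""):
        # Decide which dependency-management approach is in use, then
        # delegate; callers never need to know the package manager.
        # The version parameter gives all three calls the same format.
        if self.pkg_manager == "req_txt":
            self._add_req_txt_pkg(package, version)
        elif self.pkg_manager == "pipenv":
            self._add_pipenv_pkg(package, version)
        elif self.pkg_manager == "poetry":
            self._add_poetry_pkg(package, version)
        else:
            raise ValueError(f"Unknown package manager: {self.pkg_manager}")

    def _add_req_txt_pkg(self, package, version):
        self.added = f"{package}{version}"

    def _add_pipenv_pkg(self, package, version):
        self.added = f'{package} = "{version or "*"}"'

    def _add_poetry_pkg(self, package, version):
        self.added = f'{package} = "{version or "*"}"'
```

With this shape, adding support for a new package manager means one new branch and one new helper method.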


ehmatthes commented Oct 31, 2021

More general improvements

  • After generating requirements.txt file when using Poetry, add a comment at the top of the file that it was generated by simple_deploy.
  • Clean up formatting of final output when --automate-all flag is used.
  • If --automate-all runs into "There's nothing here yet", a slight pause (sleep) might avoid it.
  • In simple_deploy.py, execute_command() only captures stderr.
  • Consider a --quiet flag, but that should probably be strongly discouraged for most users. Probably only useful with testing?
  • Refactor the subprocess and Popen calls, which are currently repetitive, along with the logging work. Probably want two custom helpers: execute_command_subprocess() and execute_command_popen().
  • The test for which system we're running on should use sys.platform or platform.system(). See the SO conversation and the platform.system() documentation.
  • Reorganize simple_deploy/templates/ using one directory per platform.
  • In simple_deploy.py, in _find_git_dir(), shouldn't need any of the Path() wrappers, especially after ensuring that self.project_root is a Path object in Error detecting BASE_DIR as string #175.
  • Consider adding a new method add_requirements() to simple_deploy.py. Accepts a list of packages, like add_requirements() in fly and platformsh deploy.py files.
  • For projects that use a bare requirements.txt file, should we be running pip freeze at some point? We lose the comments I believe, but we gain the specific version numbers. Maybe suggest this as a followup step in success message/ friendly summary?
  • Can logging/ writing output use tee at all? Most of the work is writing to output and to a file.
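The two subprocess helpers proposed above might be sketched like this, with the logging factored into one place. The helper names come from the note; everything else (logger setup, capture behavior) is an assumption.

```python
import logging
import shlex
import subprocess

logger = logging.getLogger(__name__)

def execute_command_subprocess(cmd):
    """Run a short command to completion, logging the call and its output.

    Captures stdout as well as stderr, unlike the current execute_command().
    """
    logger.info("Running: %s", cmd)
    result = subprocess.run(shlex.split(cmd), capture_output=True, text=True)
    if result.stdout:
        logger.info(result.stdout)
    if result.stderr:
        logger.error(result.stderr)
    return result

def execute_command_popen(cmd):
    """Stream a long-running command's output line by line, logging as we go."""
    logger.info("Running (streamed): %s", cmd)
    proc = subprocess.Popen(
        shlex.split(cmd),
        stdout=subprocess.PIPE,
        stderr=subprocess.STDOUT,
        text=True,
    )
    for line in proc.stdout:
        logger.info(line.rstrip())
    proc.wait()
    return proc.returncode
```

Routing everything through the logger also answers the tee question above in spirit: one write path feeds both the console handler and the file handler.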


ehmatthes commented Nov 7, 2021

Heroku deployment


ehmatthes commented Feb 19, 2022

Improving output

  • Command errors are written twice; I believe this has to do with how logging is set up. Log errors, but don't write them to the console myself; let the exception handling path take care of that. Implemented at 018507461365398d12c8a7078069be806067f782.
  • Consider adding information to error output.
    • ie, omitting --platform flag message could say more about how to evaluate platforms, once that's been documented. Could also recommend running --help, once that's more fully implemented.


ehmatthes commented Jul 12, 2022

Release process

  • Have someone more familiar with Python packaging, especially Django packaging, look over the release process. This should include setup.py, setup.cfg, MANIFEST.in, and pyproject.toml.
    • Are any of these redundant?
    • Are these current best practices?
    • Are these files being used correctly?
    • Should I be including docs/ in the build?
    • See, ie, Clean up manifest.in #73.


Documentation

  • Consider a matrix of how well-supported each platform is.
    • Whether --automate-all is supported.
    • Notes about any known issues or limitations.
    • Whether support is planned for other platforms.
    • Links to specific issues relevant to each platform.


ehmatthes commented Jul 13, 2022

More testing improvements

  • Move shell scripts to a utils/ directory.
  • In unit tests for platform.sh, the test_x_yaml_file() functions can be refactored.
  • Can some platform-agnostic testing be pulled out from the platform-specific test files?
    • Start with tests like test_log_dir().
    • Can't just make a separate, generic test_logs.py file. Would need to be called for each platform being tested.
      • Maybe make that file, but don't start with test_; start with something that isn't called automatically, but can be called by each specific script.
    • There's probably not much to pull out now, but it would be good to start identifying generic tests, and add to these as they are recognized.
  • Update conftest.py to use the currently recommended tmp_path_factory.
  • Integration testing should use a proper tmp directory.
    • See here for notes about creating a temp directory from bash.
    • Consider how well this would work on Windows.
    • Consider the advantage of the current approach, that if the script crashes it's easier to find the tmp directory through a file browser and destroy it manually.
  • Write a unit test that fails for this now? (from Don't raise unclean git status error if only uncommitted change is adding simple_deploy to INSTALLED_APPS. #110)
    • No, not right now. I'm not sure of the best way to test this. Currently, simple_deploy is added to settings.py in unit_tests/setup_project.sh, then a commit is made. I'm not quite sure how to efficiently set up a test without this commit. Either get feedback around DjangoCon, or rethink this on my own a little later. Probably set up a unit testing workflow where a number of things are tested over the course of the test run. For example, call setup_project.sh without adding simple_deploy, run some tests; then add simple_deploy, run some tests; then make a commit, run some tests; etc.
  • Consider using playwright for testing deployed app. relevant twitter conversation
  • Note benchmarks somewhere; for now see first attempts in Update flyio Dockerfile #115.
  • Retry final test of deployed app after 10s if there's an initial failure.
    • Common issue with integration testing is the deployed project is not quite ready fast enough to pass the final tests that are run in the script. But, not destroying assets immediately and manually running the test (python test_deployed_app_functionality.py) from the tmp dir works.
  • Refactor local functionality testing block out of integration tests.
  • Consider moving some content in unit_tests/platforms/platform_name/reference_files/ to an examples/ directory, if we want them more public facing. See related thread.
  • Consider writing an auto-run fixture to give every test function the sd_root_dir location.


ehmatthes commented Nov 9, 2022

Even more testing improvements

  • In unit tests for logs, check for specific lines such as "Generated Dockerfile."
    • May be better to store a reference log file and compare against that file, ignoring timestamps.
  • Clean up the commented-out test_heroku_host_in_allowed_hosts() in test_heroku_config.py, which also relates to the crufty modify_allowed_hosts.sh in unit_tests/platforms/heroku/.
  • Update all shell scripts that use getopts-style OPTARG parsing to use positional args; that's much easier for people less familiar with bash to follow.
  • Unit test documentation
    • Add section about extending unit tests.
    • Add section about drafting an initial set of unit tests for a newly-supported platform.
  • See Sentry's article about improving test efficiency from ~40 min to ~5 min
  • Is there any way to allow setting the pkg_manager parameters from the command line?
    • Need for this depends on how long the test suite takes overall.
    • See Dynamic scope for fixtures. I wonder if there's a similar way to dynamically set the parameters from the command line? That is, if a CLI pkg_manager is passed, use that; otherwise, use all existing values. Could limit the CLI option to specifying one specific pkg_manager.
    • This would allow people to run tests for one specific package manager. See Unit test all dependency management approaches #205.
    • See SO How to pass arguments in pytest by command line
    • This would be helpful when generating new reference files, ie run a test against a specific package manager, go into the test temp dir and grab the generated files.
  • Figure out how to make the pkg_manager fixture in conftest.py an autouse fixture. When I tried it originally, it was not available to individual test functions. Maybe an ordering issue? autouse fixtures generally run first, so maybe we need to force it to run after reset_test_project? See Pytest sources in Unit test all dependency management approaches #205.
  • Excellent article on when to use xfail and when to use skip.
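The command-line pkg_manager idea above could use pytest's pytest_addoption and pytest_generate_tests hooks. A sketch in which the option name, the manager list, and the select helper are assumptions, not existing code:

```python
# conftest.py sketch: let the CLI narrow the pkg_manager parametrization,
# defaulting to all supported managers when no option is passed.
ALL_PKG_MANAGERS = ["req_txt", "pipenv", "poetry"]

def select_pkg_managers(chosen):
    """Return the managers to test: just the chosen one, or all of them."""
    return [chosen] if chosen else ALL_PKG_MANAGERS

def pytest_addoption(parser):
    parser.addoption(
        "--pkg-manager",
        action="store",
        default=None,
        help="Run tests against a single package manager.",
    )

def pytest_generate_tests(metafunc):
    # Parametrize any test that requests a pkg_manager fixture argument.
    if "pkg_manager" in metafunc.fixturenames:
        chosen = metafunc.config.getoption("--pkg-manager")
        metafunc.parametrize("pkg_manager", select_pkg_managers(chosen))
```

Running `pytest --pkg-manager poetry` would then exercise only the Poetry cases, which also helps when regenerating reference files for one specific package manager.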


Dependency management systems

  • From Clean up dependency management work #202
    • Be consistent about terminology, ie "package" vs "requirement".
    • Write an ADR about all of this package management/ requirements thinking, including terminology.
