This is the base repository for Nanopore analyses from the Dampier lab. It includes docker containers, conda environments, analysis scripts, and results notebooks. All data is kept in a separate location to avoid overloading git.
- Will Dampier (@judowill)
Create a new environment file that defines the full paths to this directory and to the formatted data directory, and the port you would like Jupyter to listen on. Here is mine:
```
JUPYTER_PORT=8888
NANOPORE_DATA=/deepdata/nanopore
NANOPORE_CODE=/home/will/DamLabResources/nanopore-projects
```
Once properly set and saved as `.env` in this directory, you can use:
```
docker-compose build
```
to fully create this environment. Then use:
```
docker-compose up -d
```
to create a running JupyterLab environment at `JUPYTER_PORT`.
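On first startup, JupyterLab prints an access token to the container logs. A minimal sketch for retrieving it, assuming the default docker-compose log output:

```
# Find the JupyterLab login URL (with token) in the service logs
docker-compose logs | grep token
```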
If you intend to run with the GPU, note that docker-compose currently does not support starting NVIDIA containers. If you need GPU capacity you'll need to use the original docker commands:
```
docker run -p 8888:8888 \
    -v /deepdata/nanopore:/home/jovyan/data \
    -v /home/will/DamLabResources/nanopore-projects:/home/jovyan/ \
    nanopore/develop
```
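Depending on your Docker version, you may also need to expose the GPU explicitly. This is a minimal sketch assuming Docker 19.03+ with the NVIDIA container toolkit installed (older setups used `nvidia-docker` or `--runtime=nvidia` instead):

```
# Same run command, explicitly requesting all host GPUs (Docker >= 19.03)
docker run --gpus all -p 8888:8888 \
    -v /deepdata/nanopore:/home/jovyan/data \
    -v /home/will/DamLabResources/nanopore-projects:/home/jovyan/ \
    nanopore/develop
```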
The development container is set up as a fully functional data-science notebook with GPU libraries and most scientific/ML packages pre-installed. Docker and Singularity are also loaded, so you should be able to launch Singularity-based Snakemake runs. Git is installed as well, along with jupyterlab-git ... but that doesn't seem to be working right now.

More stuff to follow.
If you simply want to use this workflow, download and extract the latest release. If you intend to modify and further extend this workflow or want to work under version control, fork this repository as outlined in Advanced. The latter way is recommended.
In any case, if you use this workflow in a paper, don't forget to give credit to the authors by citing the URL of this repository and, if available, its DOI (see above).
Configure the workflow according to your needs by editing the file `config.yaml`.
Test your configuration by performing a dry-run via
```
snakemake --use-conda -n
```
Execute the workflow locally via
```
snakemake --use-conda --cores $N
```
using `$N` cores, or run it in a cluster environment via
```
snakemake --use-conda --cluster qsub --jobs 100
```
or
```
snakemake --use-conda --drmaa --jobs 100
```
If you not only want to fix the software stack but also the underlying OS, use
```
snakemake --use-conda --use-singularity
```
in combination with any of the modes above. See the Snakemake documentation for further details.
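For example, a local run that fixes both the software stack and the underlying OS would combine the flags above (with `$N` again being your core count):

```
snakemake --use-conda --use-singularity --cores $N
```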
After successful execution, you can create a self-contained interactive HTML report with all results via:
```
snakemake --report report.html
```
This report can, e.g., be forwarded to your collaborators.
The following recipe provides established best practices for running and extending this workflow in a reproducible way.
- Fork the repo to a personal or lab account.
- Clone the fork to the desired working directory for the concrete project/run on your machine (see the sketch after this list).
- Create a new branch (the project-branch) within the clone and switch to it. The branch will contain any project-specific modifications (e.g. to configuration, but also to code).
- Modify the config, and any necessary sheets (and probably the workflow) as needed.
- Commit any changes and push the project-branch to your fork on GitHub.
- Run the analysis.
- Optional: Merge back any valuable and generalizable changes to the upstream repo via a pull request. This would be greatly appreciated.
- Optional: Push results (plots/tables) to the remote branch on your fork.
- Optional: Create a self-contained workflow archive for publication along with the paper (`snakemake --archive`).
- Optional: Delete the local clone/workdir to free space.
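The git steps above might look like the following; `<user>` and `my-analysis` are hypothetical placeholders for your own fork and project name:

```
# Clone your fork (replace <user> with your GitHub account)
git clone git@github.com:<user>/nanopore-projects.git my-analysis
cd my-analysis

# Create and switch to the project-specific branch
git checkout -b my-analysis

# ... edit config.yaml, sample sheets, and workflow code as needed ...
git commit -am "Configure workflow for my-analysis"
git push -u origin my-analysis
```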
Test cases are in the subfolder `.test`. They are automatically executed via continuous integration with Travis CI.
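To try the test cases locally before pushing, a dry-run against the `.test` directory is a reasonable sketch (`--directory` is a standard Snakemake flag; the exact invocation used by CI may differ):

```
# Dry-run the workflow against the bundled test data and config
snakemake --use-conda -n --directory .test
```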