genbank submission updates #71

Merged
merged 15 commits on May 15, 2020
64 changes: 50 additions & 14 deletions README.md
@@ -1,32 +1,68 @@
[![Build Status](https://travis-ci.com/broadinstitute/viral-pipelines.svg?branch=master)](https://travis-ci.com/broadinstitute/viral-pipelines)
[![Documentation Status](https://readthedocs.org/projects/viral-pipelines/badge/?version=latest)](http://viral-pipelines.readthedocs.io/en/latest/?badge=latest)

# viral-pipelines

A set of scripts and tools for the analysis of viral NGS data.

Workflows are written in [WDL](https://github.com/openwdl/wdl) format. This is a portable workflow language that allows for easy execution on a wide variety of platforms:
- on individual machines (using [miniWDL](https://github.com/chanzuckerberg/miniwdl) or [Cromwell](https://github.com/broadinstitute/cromwell) to execute)
- on commercial cloud platforms like GCP, AWS, or Azure (using [Cromwell](https://github.com/broadinstitute/cromwell) or [CromwellOnAzure](https://github.com/microsoft/CromwellOnAzure))
- on institutional HPC systems (using [Cromwell](https://github.com/broadinstitute/cromwell))
- on commercial platform as a service vendors (like [DNAnexus](https://dnanexus.com/))
- on academic cloud platforms (like [Terra](https://app.terra.bio/))


## Obtaining the latest WDL workflows

Workflows from this repository are continuously deployed to [Dockstore](https://dev.dockstore.net/organizations/BroadInstitute/collections/pgs), a GA4GH Tool Repository Service. They can then be easily imported to any bioinformatic compute platform that utilizes the TRS API and understands WDL (this includes Terra, DNAnexus, DNAstack, etc).

Flattened workflows are also continuously deployed to a GCS bucket: [gs://viral-ngs-wdl](https://console.cloud.google.com/storage/browser/viral-ngs-wdl?forceOnBucketsSortingFiltering=false&organizationId=548622027621&project=gcid-viral-seq) and can be downloaded for local use.

Workflows are also available in the [Terra featured workspace](https://app.terra.bio/#workspaces/pathogen-genomic-surveillance/COVID-19).

Workflows are continuously deployed to a [DNAnexus CI project](https://platform.dnanexus.com/projects/F8PQ6380xf5bK0Qk0YPjB17P).

Continuous deploy to [Dockstore](https://dockstore.org/) is pending.

## Basic execution

The easiest way to get started is on a single, Docker-capable machine (your laptop, shared workstation, or virtual machine) using [miniWDL](https://github.com/chanzuckerberg/miniwdl). MiniWDL can be installed either via `pip` or `conda` (via conda-forge). After confirming that it works (`miniwdl run_self_test`), you can use [miniwdl run](https://github.com/chanzuckerberg/miniwdl#miniwdl-run) to invoke WDL workflows from this repository.

For example, to list the inputs for the assemble_refbased workflow:

```
miniwdl run https://storage.googleapis.com/viral-ngs-wdl/quay.io/broadinstitute/viral-pipelines/2.0.21.3/assemble_refbased.wdl
```

This will emit:
```
missing required inputs for assemble_refbased: reads_unmapped_bams, reference_fasta

required inputs:
Array[File]+ reads_unmapped_bams
File reference_fasta

optional inputs:
<really long list>

outputs:
<really long list>
```

To then execute this workflow on your local machine, invoke it like this:
```
miniwdl run \
https://storage.googleapis.com/viral-ngs-wdl/quay.io/broadinstitute/viral-pipelines/2.0.21.3/assemble_refbased.wdl \
reads_unmapped_bams=PatientA_library1.bam \
reads_unmapped_bams=PatientA_library2.bam \
reference_fasta=/refs/NC_045512.2.fasta \
trim_coords_bed=/refs/NC_045512.2-artic_primers-3.bed \
sample_name=PatientA
```

In the above example, reads from two sequencing runs are aligned and merged together before consensus calling. The optional bed file provided turns on primer trimming at the given coordinates.
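The same inputs can also be collected in a JSON file and passed with miniwdl's `-i`/`--input` option, which is easier to version-control than a long command line. The sketch below reproduces the example inputs above; the Cromwell-style `assemble_refbased.` key prefix is our assumption about the expected naming, so check against `miniwdl run` output on your version.

```shell
# Write the example inputs to a JSON file (Cromwell-style namespaced keys;
# adjust the "assemble_refbased." prefix if your miniwdl version expects
# bare input names).
cat > inputs.json <<'EOF'
{
  "assemble_refbased.reads_unmapped_bams": ["PatientA_library1.bam",
                                            "PatientA_library2.bam"],
  "assemble_refbased.reference_fasta": "/refs/NC_045512.2.fasta",
  "assemble_refbased.trim_coords_bed": "/refs/NC_045512.2-artic_primers-3.bed",
  "assemble_refbased.sample_name": "PatientA"
}
EOF

# Then (not run here):
# miniwdl run assemble_refbased.wdl -i inputs.json
```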


## Available workflows

The workflows provided here are more fully documented at our [ReadTheDocs](https://viral-pipelines.readthedocs.io/) page.
1 change: 1 addition & 0 deletions docs/index.rst
@@ -16,4 +16,5 @@ Contents

description
pipes-wdl
ncbi_submission
workflows
108 changes: 108 additions & 0 deletions docs/ncbi_submission.rst
@@ -0,0 +1,108 @@
Submitting viral sequences to NCBI
==================================

Register your BioProject
------------------------
*If you want to add samples to an existing BioProject, skip to Step 2.*

1. Go to: https://submit.ncbi.nlm.nih.gov and log in (new users: create a new login).
#. Go to the Submissions tab and select BioProject - click on New Submission.
#. Follow the onscreen instructions and then click submit - you will receive a BioProject ID (``PRJNA###``) via email almost immediately.


Register your BioSamples
------------------------

1. Go to: https://submit.ncbi.nlm.nih.gov and log in.
#. Go to the Submissions tab and select BioSample - click on New Submission.
#. Follow instructions, selecting "batch submission type" where applicable.
#. The metadata template to use is likely: "Pathogen affecting public health".
#. Follow template instructions (careful about date formatting) and submit as .txt file.
#. You will receive BioSamples IDs (``SAMN####``) via email (often 1-2 days later).


Set up an NCBI author template
------------------------------
*If different author lists are used for different sets of samples, create a new .sbt file for each list.*

1. Go to: https://submit.ncbi.nlm.nih.gov/genbank/template/submission/
#. Fill out the form including all authors and submitter information (if unpublished, the reference title can be just a general description of the project).
#. At the end of the form, include the BioProject number from Step 1 but NOT the BioSample number.
#. Click "create template", which will download an .sbt file to your computer.
#. Save file as "authors.sbt" or similar. If you have multiple author files, give each file a different name and prep your submissions as separate batches, one for each authors.sbt file.


Set up the BioSample map file
-----------------------------

1. Set up an Excel spreadsheet in exactly the format below:

========= =============
sample BioSample
sample1-1 SAMNxxxxxxxxx
sample2-1 SAMNxxxxxxxxx
========= =============

2. The BioSample is the BioSample number (i.e., ``SAMNxxxxxxxx``) given to you by NCBI.
3. The sample name should match the FASTA header (not necessarily the file name).
a. Make sure your FASTA headers include segment numbers (e.g., IRF001-1) -- viral-ngs will fail otherwise!
b. If submitting a segmented virus (e.g., Lassa virus), each line should be a different segment; see the example below (which assumes sample2 is a 2-segmented virus).
c. For samples with multiple segments, the BioSample number should be the same for all segments.

========= =============
sample BioSample
sample1-1 SAMN04488486
sample2-1 SAMN04488657
sample2-2 SAMN04488657
sample3-1 SAMN04489002
========= =============

4. Save the file as a tab-delimited text file (e.g., "biosample-map.txt").
5. If preparing the file on a Mac in Microsoft Excel (which saves tab-delimited files with legacy OS 9 line endings), ensure that tabs and newlines are written correctly by opening the file in a command-line editor such as Nano and unchecking the [Mac Format] option (in Nano: edit the file, save it, then press Option-M). You can also create this file directly in a text editor, ensuring there is exactly one tab character between columns (i.e., sample<tab>BioSample in the first row). Command-line converters such as ``mac2unix`` also work.
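As a sketch of the text-editor route, the map can also be written straight from the shell, which guarantees literal tab characters and Unix newlines; the sample names and accessions below are the example values from the table above.

```shell
# Build biosample-map.txt with literal tabs and Unix newlines,
# sidestepping Excel's Mac-format line-ending issues entirely.
printf 'sample\tBioSample\n'       >  biosample-map.txt
printf 'sample1-1\tSAMN04488486\n' >> biosample-map.txt
printf 'sample2-1\tSAMN04488657\n' >> biosample-map.txt
printf 'sample2-2\tSAMN04488657\n' >> biosample-map.txt
printf 'sample3-1\tSAMN04489002\n' >> biosample-map.txt

# If you already have a classic-Mac-format file, carriage returns can be
# converted to newlines without mac2unix:
# tr '\r' '\n' < mac-format.txt > biosample-map.txt
```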


Set up the metadata file (aka Source Modifier Table)
----------------------------------------------------
1. Set up an Excel spreadsheet in exactly the format below
a. This example shows sample2 as a 2-segmented virus.
b. All data should be on the same line (there are 9 columns). Here they are shown as separate tables simply for space reasons.
c. The "Sequence_ID" should match the "sample" field in the BioSample map (see Step 4). Note that this should match the FASTA header.
d. Shown are some of the fields we typically use in NCBI submissions, but fields can be added or removed to suit your samples. Other fields we often include are "isolation_source" (e.g., serum), "collected_by" (e.g., Redeemer's University), and "genotype". Here is the full list of fields accepted by NCBI: https://www.ncbi.nlm.nih.gov/WebSub/html/help/genbank-source-table.html.
e. The database cross-reference (db_xref) field number can be obtained by navigating to https://www.ncbi.nlm.nih.gov/taxonomy, searching for the organism of interest, and copying the "Taxonomy ID" number from the webpage.

=========== =============== ======= ============================================= ===================== ========== ============ ============ ====================================================================================
Sequence_ID collection_date country isolate organism lab_host host db_xref note
sample1-1 10-Mar-2014 Nigeria Ebola virus/H.sapiens-tc/GIN/2014/Makona-C05 Zaire ebolavirus Vero cells Homo sapiens taxon:186538 Harvest date: 01-Jan-2016; passaged 2x in cell culture (parent stock: SAMN01110234)
sample2-1 12-Mar-2014 Nigeria Lassa virus Macenta Lassa mammarenavirus Vero cells Homo sapiens taxon:11620
sample2-2 12-Mar-2014 Nigeria Lassa virus Macenta Lassa mammarenavirus Vero cells Homo sapiens taxon:11620
sample3-1 16-Mar-2014 Nigeria Ebola virus/H.sapiens-tc/GIN/2014/Makona-1121 Zaire ebolavirus Vero cells Homo sapiens taxon:186538 This sample was collected by Dr. Blood from a very sick patient.
=========== =============== ======= ============================================= ===================== ========== ============ ============ ====================================================================================

2. The data in this table is what actually shows up on NCBI with the genome. In many cases, it is a subset of the metadata you submitted when you registered the BioSamples.
3. Save this table as sample_meta.txt. If you make the file in Excel, double-check that the date formatting is preserved when you save -- it should be in dd-mmm-yyyy format.
4. If preparing the file on a Mac in Microsoft Excel (which saves tab-delimited files with legacy OS 9 line endings), ensure that tabs and newlines are written correctly by opening the file in a command-line editor such as Nano and unchecking the [Mac Format] option (in Nano: edit the file, save it, then press Option-M). You can also create this file directly in a text editor, ensuring there is exactly one tab character between columns. Command-line converters such as ``mac2unix`` also work.
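A quick shell sanity check for the date column is sketched below: it extracts everything matching the dd-mmm-yyyy pattern so the dates can be eyeballed. It is demonstrated on a tiny synthetic stand-in, since the real sample_meta.txt contents vary.

```shell
# Create a minimal two-line stand-in for sample_meta.txt (synthetic data):
printf 'Sequence_ID\tcollection_date\nsample1-1\t10-Mar-2014\n' > sample_meta.txt

# Pull out every properly formatted dd-mmm-yyyy date for review;
# rows whose dates do not appear in the output need fixing:
grep -oE '[0-9]{1,2}-[A-Z][a-z]{2}-[0-9]{4}' sample_meta.txt
# -> 10-Mar-2014
```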


Prepare requisite input files for your submission batches
---------------------------------------------------------

1. Stage the files you've prepared, along with the other requisite inputs, in the environment where you plan to execute the :doc:`genbank` WDL workflow. If that is Terra, push these files into the appropriate GCS bucket; if DNAnexus, upload your files there. If you plan to execute locally (e.g., with ``miniwdl run``), move the files to an appropriate directory on your machine. The files you will need are the following:
a. The files you prepared above: the submission template (authors.sbt), the biosample map (biosample-map.txt), and the source modifier table (sample_meta.txt)
#. All of the assemblies you want to submit. These should be in fasta files, one per genome. Multi-segment/multi-chromosome genomes (such as Lassa virus, Influenza A, etc) should contain all segments within one fasta file.
#. Your reference genome, as a fasta file. Multi-segment/multi-chromosome genomes should contain all segments within one fasta file. The fasta sequence headers should be Genbank accession numbers.
#. Your reference gene annotations, as a series of TBL files, one per segment/chromosome. These must correspond to the accessions in your reference genome.
#. A genome coverage table as a two-column tabular text file (optional, but helpful).
#. The organism name (which should match what NCBI taxonomy calls the species you are submitting for). This is a string input to the workflow, not a file.
#. The sequencing technology used. This is a string input, not a file.
#. The reference genome you provide should be annotated in the way you want your genomes annotated on NCBI. If one doesn't exist, see the addendum below about creating your own feature list.
#. Note that you will have to run the pipeline separately for each virus you are submitting AND separately for each author list.
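Before launching, a simple pre-flight check like the sketch below can confirm the prepared files are staged. The file names follow the earlier suggestions and are otherwise arbitrary, and the `genbank.wdl` path in the comment is a placeholder for wherever you obtained the workflow.

```shell
# Verify that each prepared input file is present before invoking the workflow.
for f in authors.sbt biosample-map.txt sample_meta.txt; do
  if [ -f "$f" ]; then
    echo "found: $f"
  else
    echo "MISSING: $f"
  fi
done

# Then launch locally, e.g. (placeholder path; invoking miniwdl with no
# inputs lists the required input names):
# miniwdl run genbank.wdl
```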


Run the genbank submission pipeline
-----------------------------------

1. Run the :doc:`genbank` WDL workflow. This performs the following steps: it aligns your assemblies against a Genbank reference sequence, transfers gene annotations from that Genbank reference into your assemblies' coordinate spaces, and then combines your genomes, the transferred annotations, and all of the sample metadata prepared above into zipped bundles. There are two: ``sequins_only.zip`` is the file to email to NCBI; ``all_files.zip`` contains the full set of files for your inspection prior to submission.
#. In the ``all_files.zip`` output, for each sample, you will see a ``.sqn``, ``.gbf``, ``.val``, and ``.tbl`` file. You should also see an ``errorsummary.val`` file that you can use to check for annotation errors (or you can check the ``.val`` file for each sample individually). Ideally, your samples should be error-free before you submit them to NCBI. For an explanation of the cryptic error messages, see: https://www.ncbi.nlm.nih.gov/genbank/genome_validation/.
#. Note: we've recently had trouble running tbl2asn with a molType specified. TO DO: describe how to deal with this.
#. Check your ``.gbf`` files for a preview of what your Genbank entries will look like. Once you are happy with your files, email the ``sequins_only.zip`` file to gb-sub@ncbi.nlm.nih.gov.
#. It often takes 2-8 weeks to receive a response and accession numbers for your samples. Do follow up if you haven’t heard anything for a few weeks!
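To machine-check the validation report rather than reading it by eye, something like the grep sketch below works. It is demonstrated on a synthetic ``errorsummary.val``, since the exact message wording varies by tbl2asn version.

```shell
# Synthetic stand-in for the real genbank workflow output:
mkdir -p genbank_out
printf 'WARNING: one cosmetic issue\nERROR: one annotation problem\n' \
  > genbank_out/errorsummary.val

# Count ERROR-level lines; a submission-ready batch reports 0:
grep -c '^ERROR' genbank_out/errorsummary.val
# -> 1
```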
9 changes: 2 additions & 7 deletions docs/pipes-wdl.rst
@@ -1,12 +1,7 @@
Using the WDL pipelines
=======================

Rather than chaining together viral-ngs pipeline steps as a series of tool commands called in isolation, it is possible to execute them as a complete automated pipeline, from processing raw sequencer output to creating files suitable for GenBank submission. This utilizes the Workflow Description Language, which is documented at:
https://github.com/openwdl/wdl

There are various methods for executing these workflows on your infrastructure which are more thoroughly documented in our `README <https://github.com/broadinstitute/viral-pipelines/blob/master/README.md#viral-pipelines>`_.
17 changes: 9 additions & 8 deletions docs/workflows.rst
@@ -1,12 +1,13 @@
WDL Workflows
=============

Documentation for each workflow is provided here. Although there are many workflows that serve different functions, some of the primary workflows we use most often include:

- :doc:`demux_plus` (on every sequencing run)
- :doc:`classify_krakenuniq` (included in demux_plus)
- :doc:`assemble_denovo` (for most viruses)
- :doc:`assemble_refbased` (for less diverse viruses, such as those from single point source human outbreaks)
- :doc:`build_augur_tree` (for nextstrain-based visualization of phylogeny)
- :doc:`genbank` (for NCBI Genbank submission)

.. toctree::
3 changes: 2 additions & 1 deletion pipes/WDL/tasks/tasks_interhost.wdl
@@ -4,7 +4,6 @@ task multi_align_mafft_ref {
input {
File reference_fasta
Array[File]+ assemblies_fasta # fasta files, one per sample, multiple chrs per file okay
String fasta_basename = basename(reference_fasta, '.fasta')
Int? mafft_maxIters
Float? mafft_ep
Float? mafft_gapOpeningPenalty
@@ -13,6 +12,8 @@
String docker="quay.io/broadinstitute/viral-phylo"
}

String fasta_basename = basename(reference_fasta, '.fasta')

command {
interhost.py --version | tee VERSION
interhost.py multichr_mafft \