Snakemake pipeline for paired-end Sanger sequences of 16S rRNA


This pipeline implements an analysis of bacterial 16S rRNA Sanger sequencing reads. It trims the reads, merges them, and generates a genus list using BLAST and the RDP project's classifier classify command. It also runs quality-control steps before and after trimming and summarizes them with MultiQC. See the picture of the DAG at the end of this document for more details.

Authors

  • Jose Maturana (@matrs)

Usage

Simple

Step 1: Install workflow

If you simply want to use this workflow, download and extract the latest release. If you intend to modify and further extend this workflow or want to work under version control, fork this repository as outlined in Advanced. The latter way is recommended.

In any case, if you use this workflow in a paper, don't forget to give credit to the authors by citing the URL of this repository and, if available, its DOI.

Step 2: Configure workflow

Configure the workflow according to your needs by editing the file config.yaml.
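
As a sketch of how such a config is typically consumed (the key names below are hypothetical; consult the shipped config.yaml for the real ones), the Snakefile loads it with the configfile directive:

configfile: "config.yaml"

# Hypothetical keys -- check config.yaml for the actual ones.
SAMPLES = config["samples"]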

Step 3: Execute workflow

Test your configuration by performing a dry-run via

snakemake --use-conda -n

Execute the workflow locally via

snakemake --use-conda --cores $N

using $N cores or run it in a cluster environment via

snakemake --use-conda --cluster qsub --jobs 100

or

snakemake --use-conda --drmaa --jobs 100

If you want to fix not only the software stack but also the underlying OS, use

snakemake --use-conda --use-singularity

in combination with any of the modes above. See the Snakemake documentation for further details.

Advanced

The following recipe provides established best practices for running and extending this workflow in a reproducible way.

  1. Fork the repo to a personal or lab account.

  2. Clone the fork to the desired working directory for the concrete project/run on your machine.

  3. Create a new branch (the project-branch) within the clone and switch to it. The branch will contain any project-specific modifications (e.g. to configuration, but also to code).

  4. Modify the config and any necessary sample sheets (and possibly the workflow itself) as needed.

  5. Commit any changes and push the project-branch to your fork on GitHub.

  6. Run the analysis.

  7. Optional: Merge back any valuable and generalizable changes to the upstream repo via a pull request. This would be greatly appreciated.

  8. Optional: Push results (plots/tables) to the remote branch on your fork.

  9. Optional: Create a self-contained workflow archive for publication along with the paper (snakemake --archive).

  10. Optional: Delete the local clone/workdir to free space.

Pipeline's directed acyclic graph

[Figure: DAG of the pipeline]

Code Snippets

ABI-to-FASTQ conversion:

script:
    "../scripts/abi_to_fastq.py"

BLAST top-hit extraction:

script:
    "../scripts/blast_top_hits.py"

Reverse-complement the reverse reads:

shell:
    "seqtk seq -r {input} > {output}"

FastQC (before trimming):

wrapper:
    "0.35.1/bio/fastqc"

Quality trimming and masking:

shell:
    "seqtk trimfq -q 0.05 {input} | seqtk seq -q 13 -n N > {output}"

FastQC (after trimming):

wrapper:
    "0.35.1/bio/fastqc"

MultiQC summary:

wrapper:
    "0.35.1/bio/multiqc"

Merger QC plot (from line 48 of rules/qc.smk):

script:
    "../scripts/merger_qc_plot.py"

RDP SequenceMatch (from line 10 of rules/rdp.smk):

shell:
    "SequenceMatch seqmatch {params.trainee} {input} > {output}"
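
Here {params.trainee} points to the RDP SequenceMatch training set. A hypothetical sketch of the enclosing rule (paths and config key assumed):

rule seqmatch:
    input:
        "merged/{sample}.fasta"         # assumed path
    output:
        "rdp/{sample}.seqmatch.txt"     # assumed path
    params:
        trainee=config["rdp_trainee"]   # assumed config key
    shell:
        "SequenceMatch seqmatch {params.trainee} {input} > {output}"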

RDP classifier (from line 22 of rules/rdp.smk):

shell:
    "classifier classify {input} -o {output[0]} -h {output[1]}"
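
(For the RDP classifier, -o is the per-sequence assignment output and -h the hierarchical results file.)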

Taxonomic tree of top SeqMatch hits (from line 38 of rules/rdp.smk):

script:
    "../scripts/tree_top_seqmatch.py"

scripts/abi_to_fastq.py:

from pathlib import Path

from Bio import SeqIO

# Convert every ABI trace file listed as input into FASTQ, writing the
# results into the `fastq/` directory.
for abi in snakemake.input:
    abi_name = Path(abi).name
    out_fastq = "fastq/{}".format(abi_name.replace("ab1", "fastq"))
    SeqIO.convert(abi, "abi", out_fastq, "fastq")
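
The conversion can be sanity-checked outside Snakemake; a minimal sketch (the file name is a placeholder):

from Bio import SeqIO

# Convert one trace file; "sample1.ab1" is a placeholder.
SeqIO.convert("sample1.ab1", "abi", "fastq/sample1.fastq", "fastq")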

scripts/blast_top_hits.py:

import pandas as pd


def Read_blast(blast_tab):
    """Read a tabular BLAST result (with `#` comment lines) into a DataFrame."""
    return pd.read_csv(
        blast_tab, sep="\t", comment="#",
        names=['QAcc', 'SubAcc', 'Perc_ident', 'Align_len', 'Num_mis',
               'Num_gaps', 'Q_start', 'Q_stop', 'Sub_start', 'Sub_end',
               'Evalue', 'Bitscore', 'Sub_len', 'Q_cov', 'Q_covhsp',
               'Q_covus', 'S_taxid', 'S_sci_names'])


table_name = snakemake.output[0]
df = Read_blast(snakemake.input[0])
print("table {} read".format(snakemake.input[0]))

# Keep the ten best hits, ranked by bitscore, query coverage, percent
# identity and alignment length.
df.sort_values(by=['Bitscore', 'Q_cov', 'Perc_ident', 'Align_len'],
               ascending=False).head(10).to_csv(
                   path_or_buf=table_name, sep='\t', index=False,
                   float_format="%.3f")
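
The 18 column names correspond to a custom tabular BLAST output whose #-prefixed comment lines are skipped by comment="#". A minimal usage sketch outside Snakemake (the path is a placeholder):

top_hits = Read_blast("results/sample1_blast.tab")
print(top_hits[["SubAcc", "Perc_ident", "Bitscore"]].head())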

scripts/merger_qc_plot.py:

import re
import sys
from collections import defaultdict
from pathlib import Path

import matplotlib
matplotlib.use("agg")  # headless backend, needed on servers without a display
import matplotlib.pyplot as plt
import pandas as pd
import seaborn as sns

# Redirect stdout to the rule's log file.
sys.stdout = open(snakemake.log[0], 'w')
print("pandas version:", pd.__version__)

merger_files = snakemake.input
pat = re.compile(r'([0-9]+\.[0-9]+)%')  # percentage fields in the report
parse_dic = defaultdict(list)

# Collect the three percentage values reported by EMBOSS merger for each
# merged read pair.
for f in merger_files:
    print("merger_files", repr(f))
    with open(f, mode='r', encoding='utf8') as fh:
        for line in fh:
            match = pat.search(line)
            if match:
                parse_dic[Path(f).name].append(match.group(1))
print(parse_dic)

df = pd.DataFrame.from_dict(
    parse_dic, orient='index',
    columns=["Identity %", "Similarity %", "number of Gaps"]).astype(float)
print(df)

fig, ax = plt.subplots(figsize=(8, 6))
sns.lineplot(data=df, sort=False, alpha=0.6)
locs, labels = plt.xticks()
plt.setp(labels, rotation=50)
plt.tight_layout()
fig.savefig(snakemake.output[0])
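
The regex picks up the percentage fields of the EMBOSS merger alignment report. A self-contained check (the report line below is illustrative, not real output):

import re

pat = re.compile(r'([0-9]+\.[0-9]+)%')
line = "# Identity:     742/760 (97.6%)"  # assumed report line format
print(pat.search(line).group(1))          # -> 97.6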

scripts/tree_top_seqmatch.py:

from ete3 import NCBITaxa

# The first time this runs, ete3 downloads the NCBI taxonomy database and
# saves a parsed copy in `~/.etetoolkit/taxa.sqlite`; this may take a few
# minutes.
ncbi = NCBITaxa()
print("ncbi.dbfile", ncbi.dbfile)

# The input is a plain-text genus list, one name per line.
with open(snakemake.input[0], 'r', encoding='utf8') as fh:
    genus_list = fh.read().strip().split('\n')

# Translate genus names to NCBI taxids and build the minimal topology
# connecting them.
genus_to_taxid = ncbi.get_name_translator(genus_list)
tax_id_vals = genus_to_taxid.values()

tree = ncbi.get_topology(
    [genus_id for subls in tax_id_vals for genus_id in subls],
    intermediate_nodes=True)

# `get_ascii()` has a bug: it prints the taxa above genus level without any
# separation between them. Requesting an extra attribute works around this;
# `dist` is the least invasive, and its constant "1.0," strings are then
# replaced by a dash.
with open(snakemake.output[0], mode='w', encoding='utf8') as fh:
    print(tree.get_ascii(attributes=["dist", "sci_name"]).replace('1.0,', '-'),
          file=fh)
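
The input list holds one genus name per line, and get_name_translator maps each name to a list of taxids. A minimal sketch:

from ete3 import NCBITaxa

ncbi = NCBITaxa()
print(ncbi.get_name_translator(["Escherichia", "Bacillus"]))
# -> {'Escherichia': [561], 'Bacillus': [1386]}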

The 0.35.1/bio/fastqc wrapper:
__author__ = "Julian de Ruiter"
__copyright__ = "Copyright 2017, Julian de Ruiter"
__email__ = "julianderuiter@gmail.com"
__license__ = "MIT"


from os import path
from tempfile import TemporaryDirectory

from snakemake.shell import shell

log = snakemake.log_fmt_shell(stdout=False, stderr=True)

def basename_without_ext(file_path):
    """Returns basename of file path, without the file extension."""

    base = path.basename(file_path)

    split_ind = 2 if base.endswith(".gz") else 1
    base = ".".join(base.split(".")[:-split_ind])

    return base


# Run fastqc in a temporary directory, since multiple jobs writing to the
# same fastqc output directory can cause race conditions.
with TemporaryDirectory() as tempdir:
    shell("fastqc {snakemake.params} --quiet "
          "--outdir {tempdir} {snakemake.input[0]}"
          " {log}")

    # Move outputs into proper position.
    output_base = basename_without_ext(snakemake.input[0])
    html_path = path.join(tempdir, output_base + "_fastqc.html")
    zip_path = path.join(tempdir, output_base + "_fastqc.zip")

    if snakemake.output.html != html_path:
        shell("mv {html_path} {snakemake.output.html}")

    if snakemake.output.zip != zip_path:
        shell("mv {zip_path} {snakemake.output.zip}")

The 0.35.1/bio/multiqc wrapper:
__author__ = "Julian de Ruiter"
__copyright__ = "Copyright 2017, Julian de Ruiter"
__email__ = "julianderuiter@gmail.com"
__license__ = "MIT"


from os import path

from snakemake.shell import shell


input_dirs = set(path.dirname(fp) for fp in snakemake.input)
output_dir = path.dirname(snakemake.output[0])
output_name = path.basename(snakemake.output[0])
log = snakemake.log_fmt_shell(stdout=True, stderr=True)

shell(
    "multiqc"
    " {snakemake.params}"
    " --force"
    " -o {output_dir}"
    " -n {output_name}"
    " {input_dirs}"
    " {log}")



URL: https://github.com/matrs/16s-rRNA-Sanger
Name: 16s-rrna-sanger
Version: 1
License: MIT License
