Renders a collection of sequences into a pangenome graph.

Introduction

nf-core/pangenome is a bioinformatics best-practice analysis pipeline for pangenome graph construction. The pipeline renders a collection of sequences into a pangenome graph. Its goal is to build a graph that is locally directed and acyclic while preserving large-scale variation. Maintaining local linearity is important for interpretation, visualization, mapping, comparative genomics, and reuse of pangenome graphs.

The pipeline is built using Nextflow, a workflow tool to run tasks across multiple compute infrastructures in a very portable manner. It uses Docker/Singularity containers, making installation trivial and results highly reproducible. The Nextflow DSL2 implementation of this pipeline uses one container per process, which makes it much easier to maintain and update software dependencies. Where possible, these processes have been submitted to and installed from nf-core/modules in order to make them available to all nf-core pipelines, and to everyone within the Nextflow community!

On release, automated continuous integration tests run the pipeline on a full-sized dataset on the AWS cloud infrastructure. This ensures that the pipeline runs on AWS, has sensible resource allocation defaults set to run on real-world datasets, and permits the persistent storage of results to benchmark between pipeline releases and other analysis sources. The results obtained from the full-sized test can be viewed on the nf-core website.

Pipeline summary

  • All-versus-all alignment (WFMASH)

  • Graph induction (SEQWISH)

  • Graph normalization (SMOOTHXG)

  • Remove redundancy (GFAFFIX)

  • Graph statistics and qualitative visualizations (ODGI)

  • Combine diagnostic information into a report (MULTIQC)
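Conceptually, the same graph-building chain can be reproduced by hand with the underlying tools. The sketch below strings the steps together using the long-form flags that appear in the code snippets further down; file names and thread counts are placeholders, and each tool's tuning options are omitted, so treat this as an illustration rather than the pipeline's exact invocation.

```shell
# Sketch of the pipeline's core chain run manually (file names are placeholders).

# 1. All-versus-all alignment of the bgzipped input FASTA with wfmash
wfmash input.fa.gz input.fa.gz --threads 8 > alignments.paf

# 2. Induce the raw variation graph from sequences + alignments with seqwish
seqwish --threads 8 --seqs=input.fa.gz --paf-alns=alignments.paf --gfa=raw.gfa

# 3. Normalize the graph with smoothxg
smoothxg --threads=8 --gfa-in=raw.gfa --smoothed-out=smoothed.gfa

# 4. Collapse redundant nodes with gfaffix
gfaffix smoothed.gfa -o final.gfa > affixes.txt

# 5. Load into odgi and compute graph statistics
odgi build --gfa final.gfa --out final.og
odgi stats --idx final.og
```

The pipeline adds container management, scheduling, and chunked parallelism on top of this chain; running the tools directly is mainly useful for debugging a single step.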

Usage

Note: If you are new to Nextflow and nf-core, please refer to this page on how to set up Nextflow. Make sure to test your setup with -profile test before running the workflow on actual data.

Now, you can run the pipeline using:

nextflow run nf-core/pangenome -r dev --input <BGZIPPED_FASTA> --n_haplotypes <NUM_HAPS_IN_FASTA> --outdir <OUTDIR> -profile <docker/singularity/podman/shifter/charliecloud/conda/institute>

Warning: Please provide pipeline parameters via the CLI or the Nextflow -params-file option. Custom config files, including those provided by the -c Nextflow option, can be used to provide any configuration except for parameters; see docs.
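For example, the parameters from the command above could instead live in a YAML file passed via -params-file; the file name and all values below are placeholders:

```yaml
# params.yaml -- used as: nextflow run nf-core/pangenome -r dev -params-file params.yaml
input: /path/to/sequences.fa.gz   # bgzipped multi-FASTA
n_haplotypes: 12                  # number of haplotypes in the FASTA
outdir: ./results
```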

For more details and further functionality, please refer to the usage documentation and the parameter documentation.

Advantages over PGGB

The major advantage of this Nextflow pipeline is that it can distribute the usually computationally heavy all-versus-all alignment step across a whole cluster. It splits the initial approximate alignments into problems of equal size and then distributes the base-level alignments across several processes. Assuming you have a cluster with 10 nodes all to yourself, we recommend setting --wfmash_chunks 10. If you have a cluster with 20 nodes but have to share it with others, --wfmash_chunks 10 may still be a good fit, because then you don't have to wait too long for your jobs to finish.
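The effect of splitting the alignment work into equally sized problems can be illustrated with a toy scheduler. The sketch below is hypothetical (it is not the pipeline's actual split_approx_mappings_in_chunks.py): it greedily assigns approximate mappings, largest first, to the requested number of chunks so that each chunk carries a similar amount of base-level alignment work.

```python
import heapq

def split_into_chunks(mapping_lengths, n_chunks):
    """Greedy longest-processing-time assignment: place each mapping
    (largest first) into the currently lightest chunk."""
    # min-heap of (total_length_so_far, chunk_index)
    heap = [(0, i) for i in range(n_chunks)]
    heapq.heapify(heap)
    chunks = [[] for _ in range(n_chunks)]
    for length in sorted(mapping_lengths, reverse=True):
        total, i = heapq.heappop(heap)
        chunks[i].append(length)
        heapq.heappush(heap, (total + length, i))
    return chunks

# Ten mappings of uneven size, distributed over 4 chunks:
chunks = split_into_chunks([90, 80, 70, 30, 30, 20, 20, 10, 10, 10], 4)
totals = sorted(sum(c) for c in chunks)  # [90, 90, 90, 100] -- near-equal load
```

With balanced chunks, the wall-clock time of the alignment stage approaches (total work) / (number of nodes), which is why matching --wfmash_chunks to the number of nodes you can realistically occupy pays off.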

Pipeline output

To see the results of an example test run with a full-sized dataset, refer to the results tab on the nf-core website pipeline page. For more details about the output files and reports, please refer to the output documentation.

Credits

nf-core/pangenome was originally adapted from PGGB by Simon Heumos and Michael Heuer.

Simon Heumos is currently the sole developer.

Many thanks to all who have helped out and contributed along the way, including (but not limited to)*:

  • Philipp Ehmele: Institute of Computational Biology, Helmholtz Zentrum München, Munich, Germany
  • Gisela Gabernet: Quantitative Biology Center (QBiC) Tübingen, University of Tübingen, Germany; Department of Pathology, Yale School of Medicine, New Haven, USA
  • Erik Garrison: The University of Tennessee Health Science Center, Memphis, TN, USA
  • Andrea Guarracino: Genomics Research Centre, Human Technopole, Milan, Italy; The University of Tennessee Health Science Center, Memphis, TN, USA
  • Friederike Hanssen: Quantitative Biology Center (QBiC) Tübingen, University of Tübingen, Germany; Biomedical Data Science, Department of Computer Science, University of Tübingen, Germany
  • Michael Heuer: Mammoth Biosciences, Inc., San Francisco, CA, USA
  • Lukas Heumos: Institute of Computational Biology, Helmholtz Zentrum München, Munich, Germany; Institute of Lung Biology and Disease and Comprehensive Pneumology Center, Helmholtz Zentrum München, Munich, Germany
  • Simon Heumos: Quantitative Biology Center (QBiC) Tübingen, University of Tübingen, Germany; Biomedical Data Science, Department of Computer Science, University of Tübingen, Germany
  • Susanne Jodoin: Quantitative Biology Center (QBiC) Tübingen, University of Tübingen, Germany
  • Júlia Mir Petrol: Quantitative Biology Center (QBiC) Tübingen, University of Tübingen, Germany

* Listed in alphabetical order

Contributions and Support

If you would like to contribute to this pipeline, please see the contributing guidelines .

For further information or help, don't hesitate to get in touch on the Slack #pangenome channel (you can join with this invite), or contact me, Simon Heumos.

Citations

An extensive list of references for the tools used by the pipeline can be found in the CITATIONS.md file.

You can cite the nf-core publication as follows:

The nf-core framework for community-curated bioinformatics pipelines.

Philip Ewels, Alexander Peltzer, Sven Fillinger, Harshil Patel, Johannes Alneberg, Andreas Wilm, Maxime Ulysse Garcia, Paolo Di Tommaso & Sven Nahnsen.

Nat Biotechnol. 2020 Feb 13. doi: 10.1038/s41587-020-0439-x.

Changelog

See the CHANGELOG.md file for release history.

Code Snippets

"""
samtools \\
    faidx \\
    $fasta \\
    \$(cat ${community}) > ${community}.fa
cat <<-END_VERSIONS > versions.yml
"${task.process}":
    samtools: \$(echo \$(samtools --version 2>&1) | sed 's/^.*samtools //; s/Using.*\$//')
END_VERSIONS
"""
"""
multiqc \\
    --force \\
    $args \\
    $config \\
    $extra_config \\
    .

cat <<-END_VERSIONS > versions.yml
"${task.process}":
    multiqc: \$( multiqc --version | sed -e "s/multiqc, version //g" )
END_VERSIONS
"""
"""
touch multiqc_data
touch multiqc_plots
touch multiqc_report.html

cat <<-END_VERSIONS > versions.yml
"${task.process}":
    multiqc: \$( multiqc --version | sed -e "s/multiqc, version //g" )
END_VERSIONS
"""
"""
net2communities.py \
-e ${prefix}.paf.edges.list.txt \
-w ${prefix}.paf.edges.weights.txt \
-n ${prefix}.paf.vertices.id2name.txt \
--accurate-detection \
--output-prefix ${prefix} \
    $args

cat <<-END_VERSIONS > versions.yml
"${task.process}":
    pggb: \$(pggb --version 2>&1 | grep -o 'pggb .*' | cut -f2 -d ' ')
END_VERSIONS
"""
"""
paf2net.py -p $paf \\
    $args

cat <<-END_VERSIONS > versions.yml
"${task.process}":
    pggb: \$(pggb --version 2>&1 | grep -o 'pggb .*' | cut -f2 -d ' ')
END_VERSIONS
"""
Nextflow, from line 23 of paf2net/main.nf
"""
split_approx_mappings_in_chunks.py $paf \\
    $args

cat <<-END_VERSIONS > versions.yml
"${task.process}":
    pggb: \$(pggb --version 2>&1 | grep -o 'pggb .*' | cut -f2 -d ' ')
END_VERSIONS
"""
"""
ref=\$(echo "$vcf_spec" | cut -f 1 -d:)
delim=\$(echo "$vcf_spec" | cut -f 2 -d:)
pop_length=\$(echo "$vcf_spec" | cut -f 3 -d:)

if [[ -z \$pop_length ]]; then
    pop_length=0
fi

vcf="${graph}".\$(echo \$ref | tr '/|' '_').vcf
vg deconstruct -P \$ref -H \$delim -e -a -t "${task.cpus}" "${graph}" > \$vcf
bcftools stats \$vcf > \$vcf.stats
if [[ \$pop_length -gt 0 ]]; then
    vcf_decomposed=${graph}.final.\$(echo \$ref | tr '/|' '_').decomposed.vcf
    vcf_decomposed_tmp=\$vcf_decomposed.tmp.vcf
    bgzip -c -@ ${task.cpus} \$vcf > \$vcf.gz
    vcfbub -l 0 -a \$pop_length --input \$vcf.gz | vcfwave -I 1000 -t ${task.cpus} > \$vcf_decomposed_tmp
    #TODO: to remove when vcfwave will be bug-free
    # The TYPE info sometimes is wrong/missing
    # There are variants without the ALT allele
    bcftools annotate -x INFO/TYPE \$vcf_decomposed_tmp  | awk '\$5 != "."' > \$vcf_decomposed
    rm \$vcf_decomposed_tmp \$vcf.gz
    bcftools stats \$vcf_decomposed > \$vcf_decomposed.stats
fi

cat <<-END_VERSIONS > versions.yml
"${task.process}":
    pggb: \$(pggb --version 2>&1 | grep -o 'pggb .*' | cut -f2 -d ' ')
END_VERSIONS
"""
"""
gfaffix \\
    $args \\
    $gfa \\
    -o ${prefix}.gfaffix.gfa > ${prefix}.affixes.txt

cat <<-END_VERSIONS > versions.yml
"${task.process}":
    gfaffix: \$(gfaffix --version 2>&1 | grep -o 'gfaffix .*' | cut -f2 -d ' ')
END_VERSIONS
"""
Nextflow, from line 25 of gfaffix/main.nf
"""
multiqc \\
    --force \\
    $args \\
    $config \\
    $extra_config \\
    .

cat <<-END_VERSIONS > versions.yml
"${task.process}":
    multiqc: \$( multiqc --version | sed -e "s/multiqc, version //g" )
END_VERSIONS
"""
"""
touch multiqc_data
touch multiqc_plots
touch multiqc_report.html

cat <<-END_VERSIONS > versions.yml
"${task.process}":
    multiqc: \$( multiqc --version | sed -e "s/multiqc, version //g" )
END_VERSIONS
"""
"""
odgi \\
    build \\
    --threads $task.cpus \\
    --gfa ${graph} \\
    --out ${prefix}.og \\
    $args

cat <<-END_VERSIONS > versions.yml
"${task.process}":
    odgi: \$(echo \$(odgi version 2>&1) | cut -f 1 -d '-' | cut -f 2 -d 'v')
END_VERSIONS
"""
"""
odgi \\
    draw \\
    --threads $task.cpus \\
    --idx ${graph} \\
    --coords-in ${lay} \\
    --png ${prefix}.png \\
    $args

cat <<-END_VERSIONS > versions.yml
"${task.process}":
    odgi: \$(echo \$(odgi version 2>&1) | cut -f 1 -d '-' | cut -f 2 -d 'v')
END_VERSIONS
"""
"""
odgi \\
    layout \\
    --threads $task.cpus \\
    --idx ${graph} \\
    $args
cat <<-END_VERSIONS > versions.yml
"${task.process}":
    odgi: \$(echo \$(odgi version 2>&1) | cut -f 1 -d '-' | cut -f 2 -d 'v')
END_VERSIONS
"""
"""
odgi \\
    sort \\
    --threads $task.cpus \\
    --idx ${graph} \\
    --out ${prefix}.og \\
    $args
cat <<-END_VERSIONS > versions.yml
"${task.process}":
    odgi: \$(echo \$(odgi version 2>&1) | cut -f 1 -d '-' | cut -f 2 -d 'v')
END_VERSIONS
"""
"""
ls *.og > files
odgi \\
    squeeze \\
    $args \\
    --threads $task.cpus \\
    --input-graphs files \\
    -o ${prefix}.og


cat <<-END_VERSIONS > versions.yml
"${task.process}":
    odgi: \$(echo \$(odgi version 2>&1) | cut -f 1 -d '-' | cut -f 2 -d 'v')
END_VERSIONS
"""
"""
odgi \\
    stats \\
    --threads $task.cpus \\
    --idx ${graph} \\
    $args > ${prefix}.$suffix

cat <<-END_VERSIONS > versions.yml
"${task.process}":
    odgi: \$(echo \$(odgi version 2>&1) | cut -f 1 -d '-' | cut -f 2 -d 'v')
END_VERSIONS
"""
"""
odgi \\
    unchop \\
    --threads $task.cpus \\
    --idx ${graph} \\
    --out ${prefix}.og \\
    $args

cat <<-END_VERSIONS > versions.yml
"${task.process}":
    odgi: \$(echo \$(odgi version 2>&1) | cut -f 1 -d '-' | cut -f 2 -d 'v')
END_VERSIONS
"""
"""
odgi \\
    view \\
    --threads $task.cpus \\
    --idx ${graph} \\
    --to-gfa \\
    $args > ${prefix}.gfa

cat <<-END_VERSIONS > versions.yml
"${task.process}":
    odgi: \$(echo \$(odgi version 2>&1) | cut -f 1 -d '-' | cut -f 2 -d 'v')
END_VERSIONS
"""
"""
odgi \\
    viz \\
    --threads $task.cpus \\
    --idx ${graph} \\
    --out ${prefix}.png \\
    $args

cat <<-END_VERSIONS > versions.yml
"${task.process}":
    odgi: \$(echo \$(odgi version 2>&1) | cut -f 1 -d '-' | cut -f 2 -d 'v')
END_VERSIONS
"""
"""
samtools \\
    faidx \\
    $args \\
    $fasta

cat <<-END_VERSIONS > versions.yml
"${task.process}":
    samtools: \$(echo \$(samtools --version 2>&1) | sed 's/^.*samtools //; s/Using.*\$//')
END_VERSIONS
"""
"""
touch ${fasta}.fai
cat <<-END_VERSIONS > versions.yml
"${task.process}":
    samtools: \$(echo \$(samtools --version 2>&1) | sed 's/^.*samtools //; s/Using.*\$//')
END_VERSIONS
"""
Nextflow, from line 36 of faidx/main.nf
"""
seqwish \\
    --threads $task.cpus \\
    --paf-alns=$input \\
    --seqs=$fasta \\
    --gfa=${prefix}.gfa \\
    $args

cat <<-END_VERSIONS > versions.yml
"${task.process}":
    seqwish: \$(echo \$(seqwish --version 2>&1) | cut -f 1 -d '-' | cut -f 2 -d 'v')
END_VERSIONS
"""
"""
smoothxg \\
    --threads=$task.cpus \\
    --gfa-in=${gfa} \\
    --smoothed-out=${prefix}.smoothxg.gfa \\
    $args

cat <<-END_VERSIONS > versions.yml
"${task.process}":
    smoothxg: \$(smoothxg --version 2>&1 | cut -f 1 -d '-' | cut -f 2 -d 'v')
END_VERSIONS
"""
"""
bgzip $command -c $args -@${task.cpus} $input > ${output}

cat <<-END_VERSIONS > versions.yml
"${task.process}":
    tabix: \$(echo \$(tabix -h 2>&1) | sed 's/^.*Version: //; s/ .*\$//')
END_VERSIONS
"""
Nextflow, from line 32 of bgzip/main.nf
"""
touch ${output}

cat <<-END_VERSIONS > versions.yml
"${task.process}":
    tabix: \$(echo \$(tabix -h 2>&1) | sed 's/^.*Version: //; s/ .*\$//')
END_VERSIONS
"""
Nextflow, from line 46 of bgzip/main.nf
"""
wfmash \\
    ${fasta_gz} \\
    $query \\
    $query_list \\
    --threads $task.cpus \\
    $paf_mappings \\
    $args > ${prefix}.paf


cat <<-END_VERSIONS > versions.yml
"${task.process}":
    wfmash: \$(echo \$(wfmash --version 2>&1) | cut -f 1 -d '-' | cut -f 2 -d 'v')
END_VERSIONS
"""
Nextflow, from line 29 of wfmash/main.nf
Created: 1yr ago
Updated: 1yr ago
Maintainers: Anas
URL: https://nf-co.re/pangenome
Name: pangenome
Version: dev
Copyright: Public Domain
License: None