Pipeline for RNA and DNA integrated analysis for somatic mutation detection


:warning: UNDER ACTIVE DEVELOPMENT :warning:

Hi beta-tester, thanks for agreeing to help out with this repo. I have created a project to track issues/tasks, so please feel free to use the issues tab. I am currently working on a methods draft, so if you end up using this pipeline or parts of it, I would appreciate a citation (to be added).

Note for beta testers

Profiles have not been tested yet, so please use the -c option, e.g. nextflow run main.nf -c conf/tcga_train_set.config. Adjust the config so it works on your end; I have left the one I use for my own analysis as an example. At the moment, to run the pipeline you will need:

  • DNA and RNA tumour BAM/FASTQ files and a DNA normal BAM file (this is what I have been testing so far)

  • Reference files

    • Use the same reference file you used for your BAM file

    • There are other reference files for some of the filtering steps. They are optional and not required for testing; however, if you want to run everything, let me know and I can provide them.

  • To get started, I recommend generating a profile similar to the example in conf/tcga_train_set.config and an input table like assets/TargetsFileTCGA1Sample.csv; see the example command after this list.

  • Note that RNA-only mode has not been tested yet. At the moment everything runs with DNA and RNA in parallel. I suspect it should be fine up to the consensus step, which might require some changes to support RNA-only data.
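
As a minimal sketch of a beta-test run (assuming the example config and input table mentioned above; --input and --outdir are the standard nf-core options used later in this README, and the paths should be adapted to your setup):

    # example only - adapt the config and samplesheet to your environment
    nextflow run main.nf \
        -c conf/tcga_train_set.config \
        --input assets/TargetsFileTCGA1Sample.csv \
        --outdir results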

Introduction

nf-core/rnadnavar is a bioinformatics best-practice analysis pipeline for integrated RNA and DNA analysis for somatic mutation detection.

Initially designed for cancer research, the pipeline uses several variant calling algorithms and applies a consensus approach. A final filtering stage provides a set of annotated somatic variants.

The pipeline is built using Nextflow, a workflow tool to run tasks across multiple compute infrastructures in a very portable manner. It uses Docker/Singularity containers, making installation trivial and results highly reproducible. The Nextflow DSL2 implementation of this pipeline uses one container per process, which makes it much easier to maintain and update software dependencies. Where possible, these processes have been submitted to and installed from nf-core/modules in order to make them available to all nf-core pipelines, and to everyone within the Nextflow community!

On release, automated continuous integration tests run the pipeline on a full-sized dataset on the AWS cloud infrastructure. This ensures that the pipeline runs on AWS, has sensible resource allocation defaults set to run on real-world datasets, and permits the persistent storage of results to benchmark between pipeline releases and other analysis sources. The results obtained from the full-sized test can be viewed on the nf-core website .

Pipeline summary

  1. Read QC (FastQC)

  2. Present QC for raw reads (MultiQC)

  3. Alignment (BWA/STAR)

  4. GATK pre-processing

  5. Variant calling

  6. Normalise calls

  7. Annotation

  8. Consensus

  9. Filtering

  10. Realignment (optional)

  11. RNA filtering

Quick Start

  1. Install Nextflow (>=21.10.3)

  2. Install any of Docker, Singularity (you can follow this tutorial), Podman, Shifter or Charliecloud for full pipeline reproducibility (you can use Conda both to install Nextflow itself and also to manage software within pipelines; please only use it within pipelines as a last resort; see docs).

  3. Download the pipeline and test it on a minimal dataset with a single command:

    # Not working/tested yet - it is on the todo list
    nextflow run nf-core/rnadnavar -profile test,YOURPROFILE --outdir <OUTDIR>
    

    Note that some form of configuration will be needed so that Nextflow knows how to fetch the required software. This is usually done in the form of a config profile (YOURPROFILE in the example command above). You can chain multiple config profiles in a comma-separated string.

    • The pipeline comes with config profiles called docker, singularity, podman, shifter, charliecloud and conda which instruct the pipeline to use the named tool for software management. For example, -profile test,docker.

    • Please check nf-core/configs to see if a custom config file to run nf-core pipelines already exists for your Institute. If so, you can simply use -profile <institute> in your command. This will enable either docker or singularity and set the appropriate execution settings for your local compute environment.

    • If you are using singularity, please use the nf-core download command to download images first, before running the pipeline. Setting the NXF_SINGULARITY_CACHEDIR or singularity.cacheDir Nextflow options enables you to store and re-use the images from a central location for future pipeline runs.

    • If you are using conda, it is highly recommended to use the NXF_CONDA_CACHEDIR or conda.cacheDir settings to store the environments in a central location for future pipeline runs (see the sketch below).
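
    As a minimal sketch (the paths below are placeholders to adapt to your system), both cache locations can be set as environment variables before launching the pipeline:

      # placeholder paths - point these at a shared, persistent cache location
      export NXF_SINGULARITY_CACHEDIR=/path/to/singularity-cache
      export NXF_CONDA_CACHEDIR=/path/to/conda-cache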

  4. Start running your own analysis!

    nextflow run nf-core/rnadnavar --input samplesheet.csv --outdir <OUTDIR> --genome GRCh38 -profile <docker/singularity/to/add/test/here>

Documentation

The nf-core/rnadnavar pipeline comes with documentation about the pipeline usage, parameters and output.

Credits

The nf-core/rnadnavar pipeline was originally written by Raquel Manzano Garcia at the Cancer Research UK Cambridge Institute, with the initial and continued support of Maxime U Garcia. The workflow is based on RNA-MuTect, originally published by Yizhak et al., 2019 (Science).

We thank the following people for their assistance in the development of this pipeline: TBC

Contributions and Support

If you would like to contribute to this pipeline, please see the contributing guidelines .

For further information or help, don't hesitate to get in touch on the Slack #rnadnavar channel (you can join with this invite).

Citations

An extensive list of references for the tools used by the pipeline can be found in the CITATIONS.md file.

You can cite the nf-core publication as follows:

The nf-core framework for community-curated bioinformatics pipelines.

Philip Ewels, Alexander Peltzer, Sven Fillinger, Harshil Patel, Johannes Alneberg, Andreas Wilm, Maxime Ulysse Garcia, Paolo Di Tommaso & Sven Nahnsen.

Nat Biotechnol. 2020 Feb 13. doi: 10.1038/s41587-020-0439-x.

Code Snippets

Create a genome-wide BED file from the FASTA index (gawk):
"""
awk -v FS='\t' -v OFS='\t' '{ print \$1, \"0\", \$2 }' ${fasta_fai} > ${fasta_fai.baseName}.bed

cat <<-END_VERSIONS > versions.yml
"${task.process}":
    gawk: \$(awk -Wversion | sed '1!d; s/.*Awk //; s/,.*//')
END_VERSIONS
"""

Split calling intervals into chunks balanced by estimated runtime (gawk):
"""
awk -vFS="\t" '{
    t = \$5  # runtime estimate
    if (t == "") {
        # no runtime estimate in this row, assume default value
        t = (\$3 - \$2) / ${params.nucleotides_per_second}
    }
    if (name == "" || (chunk > 600 && (chunk + t) > longest * 1.05)) {
        # start a new chunk
        name = sprintf("%s_%d-%d.bed", \$1, \$2+1, \$3)
        chunk = 0
        longest = 0
    }
    if (t > longest)
        longest = t
    chunk += t
    print \$0 > name
}' ${intervals}

cat <<-END_VERSIONS > versions.yml
"${task.process}":
    gawk: \$(awk -Wversion | sed '1!d; s/.*Awk //; s/,.*//')
END_VERSIONS
"""

Convert a GATK interval_list into one BED file per interval (gawk):
"""
grep -v '^@' ${intervals} | awk -vFS="\t" '{
    name = sprintf("%s_%d-%d", \$1, \$2, \$3);
    printf("%s\\t%d\\t%d\\n", \$1, \$2-1, \$3) > name ".bed"
}'

cat <<-END_VERSIONS > versions.yml
"${task.process}":
    gawk: \$(awk -Wversion | sed '1!d; s/.*Awk //; s/,.*//')
END_VERSIONS
"""

Convert chr:start-end intervals into one BED file per interval (gawk):
"""
awk -vFS="[:-]" '{
    name = sprintf("%s_%d-%d", \$1, \$2, \$3);
    printf("%s\\t%d\\t%d\\n", \$1, \$2-1, \$3) > name ".bed"
}' ${intervals}

cat <<-END_VERSIONS > versions.yml
"${task.process}":
    gawk: \$(awk -Wversion | sed '1!d; s/.*Awk //; s/,.*//')
END_VERSIONS
"""

Filter mutations in a MAF file (filter_mutations.py):
"""
filter_mutations.py -i $maf --output ${prefix}.maf --ref $ref $args
cat <<-END_VERSIONS > versions.yml
"${task.process}":
    python: \$(echo \$(python --version 2>&1) | sed 's/^.*Python (//;s/).*//')
END_VERSIONS
"""

Extract exon coordinates from a GTF into a BED file (Rscript + gawk):
"""
Rscript --no-save -<<'RCODE'
    gtf = read.table("${gtf}", sep="\t")
    gtf = subset(gtf, V3 == "exon")
    write.table(data.frame(chrom=gtf[,'V1'], start=gtf[,'V4'], end=gtf[,'V5']), "tmp.exome.bed", quote = F, sep="\t", col.names = F, row.names = F)
RCODE

awk '{print \$1 "\t" (\$2 - 1) "\t" \$3}' tmp.exome.bed > exome.bed
rm tmp.exome.bed

cat <<-END_VERSIONS > versions.yml
"${task.process}":
    Rscript: \$(echo \$(Rscript --version 2>&1) | sed 's/R scripting front-end version //')
END_VERSIONS
"""

RNA-specific filtering of mutations in a MAF file (filter_rna_mutations.py):
"""
filter_rna_mutations.py \\
    --maf $maf_first_pass \\
    --ref $fasta \\
    --output ${prefix}.maf \\
    $maf_second_opt \\
    $args
cat <<-END_VERSIONS > versions.yml
"${task.process}":
    python: \$(echo \$(python --version 2>&1) | sed 's/^.*Python (//;s/).*//')
END_VERSIONS
"""

Convert a VCF to MAF (vcf2maf.pl):
"""
gzip -d $vcf -c > ${vcf_decompressed}
vcf2maf.pl \\
    --input-vcf ${vcf_decompressed} \\
    --output-maf ${prefix}.maf \\
    --ref-fasta $fasta \\
    $args

cat <<-END_VERSIONS > versions.yml
"${task.process}":
    vcf2maf: $VERSION
END_VERSIONS
"""
From line 27 of vcf2maf/main.nf

Decompose/normalise variant calls with vt, fixing the GQ FORMAT type first (vt decompose):
"""
gzip -d $vcf -c > ${vcf_decompressed}
# GQ is a float when empty which can happen with some tools like freebayes - this is a fix
sed -i -E 's/(##FORMAT=<ID=GQ\\S+)(Integer)/\\1Float/' ${vcf_decompressed}

    vt \\
        decompose \\
        ${vcf_decompressed} \\
        $args \\
        -o ${prefix}.vcf 2> ${prefix}.stats
    gzip ${prefix}.vcf

cat <<-END_VERSIONS > versions.yml
"${task.process}":
    vt decompose: \$(vt decompose -? 2>&1 | head -n1 | sed 's/^.*decompose //; s/ .*\$//')
END_VERSIONS
"""

Validate the input samplesheet (check_samplesheet.py):
"""
check_samplesheet.py \\
    $samplesheet \\
    samplesheet.valid.csv

cat <<-END_VERSIONS > versions.yml
"${task.process}":
    python: \$(python --version | sed 's/Python //g')
END_VERSIONS
"""

Run FastQC on single-end reads:
"""
[ ! -f  ${prefix}.fastq.gz ] && ln -s $reads ${prefix}.fastq.gz
fastqc $args --threads $task.cpus ${prefix}.fastq.gz

cat <<-END_VERSIONS > versions.yml
"${task.process}":
    fastqc: \$( fastqc --version | sed -e "s/FastQC v//g" )
END_VERSIONS
"""

Run FastQC on paired-end reads:
"""
[ ! -f  ${prefix}_1.fastq.gz ] && ln -s ${reads[0]} ${prefix}_1.fastq.gz
[ ! -f  ${prefix}_2.fastq.gz ] && ln -s ${reads[1]} ${prefix}_2.fastq.gz
fastqc $args --threads $task.cpus ${prefix}_1.fastq.gz ${prefix}_2.fastq.gz

cat <<-END_VERSIONS > versions.yml
"${task.process}":
    fastqc: \$( fastqc --version | sed -e "s/FastQC v//g" )
END_VERSIONS
"""

Aggregate QC reports with MultiQC:
"""
multiqc -f $args .

cat <<-END_VERSIONS > versions.yml
"${task.process}":
    multiqc: \$( multiqc --version | sed -e "s/multiqc, version //g" )
END_VERSIONS
"""