GATK4 RNA variant calling pipeline


Introduction

nf-core/rnavar is a bioinformatics best-practice analysis pipeline for GATK4 RNA variant calling.

The pipeline is built using Nextflow, a workflow tool to run tasks across multiple compute infrastructures in a very portable manner. It uses Docker/Singularity containers, making installation trivial and results highly reproducible. The Nextflow DSL2 implementation of this pipeline uses one container per process, which makes it much easier to maintain and update software dependencies. Where possible, these processes have been submitted to and installed from nf-core/modules in order to make them available to all nf-core pipelines, and to everyone within the Nextflow community!

On release, automated continuous integration tests run the pipeline on a full-sized dataset on the AWS cloud infrastructure. This ensures that the pipeline runs on AWS, has sensible resource allocation defaults set to run on real-world datasets, and permits the persistent storage of results to benchmark between pipeline releases and other analysis sources. The results obtained from the full-sized test can be viewed on the nf-core website.

Pipeline summary

  1. Merge re-sequenced FastQ files (cat)

  2. Read QC (FastQC)

  3. Align reads to the reference genome (STAR)

  4. Sort and index alignments (SAMtools)

  5. Duplicate read marking (GATK4 MarkDuplicates)

  6. Split reads that contain Ns in their CIGAR string (GATK4 SplitNCigarReads)

  7. Estimate and correct systematic bias using base quality score recalibration (GATK4 BaseRecalibrator, GATK4 ApplyBQSR)

  8. Convert a BED file to a Picard Interval List (GATK4 BedToIntervalList)

  9. Scatter one interval list into many interval files (GATK4 IntervalListTools)

  10. Call SNPs and indels (GATK4 HaplotypeCaller)

  11. Merge multiple VCF files into one VCF (GATK4 MergeVCFs)

  12. Index the VCF (Tabix)

  13. Filter variant calls based on certain criteria (GATK4 VariantFiltration)

  14. Annotate variants (snpEff, Ensembl VEP)

  15. Present QC for raw read, alignment, gene biotype, sample similarity, and strand-specificity checks (MultiQC, R)

Summary of tools and versions used in the pipeline:

Tool Version
FastQC 0.11.9
STAR 2.7.9a
Samtools 1.15.1
GATK 4.2.6.1
Tabix 1.11
SnpEff 5.0
Ensembl VEP 104.3
MultiQC 1.12

Quick Start

  1. Install Nextflow (>=21.10.3)

  2. Install any of Docker, Singularity (you can follow this tutorial), Podman, Shifter or Charliecloud for full pipeline reproducibility (you can use Conda both to install Nextflow itself and also to manage software within pipelines. Please only use it within pipelines as a last resort; see docs).

  3. Download the pipeline and test it on a minimal dataset with a single command:

    nextflow run nf-core/rnavar -profile test,YOURPROFILE --outdir <OUTDIR>
    

    Note that some form of configuration will be needed so that Nextflow knows how to fetch the required software. This is usually done in the form of a config profile (YOURPROFILE in the example command above). You can chain multiple config profiles in a comma-separated string.

    • The pipeline comes with config profiles called docker, singularity, podman, shifter, charliecloud and conda which instruct the pipeline to use the named tool for software management. For example, -profile test,docker.

    • Please check nf-core/configs to see if a custom config file to run nf-core pipelines already exists for your Institute. If so, you can simply use -profile <institute> in your command. This will enable either docker or singularity and set the appropriate execution settings for your local compute environment.

    • If you are using singularity, please use the nf-core download command to download images first, before running the pipeline. Setting the NXF_SINGULARITY_CACHEDIR or singularity.cacheDir Nextflow options enables you to store and re-use the images from a central location for future pipeline runs.

    • If you are using conda, it is highly recommended to use the NXF_CONDA_CACHEDIR or conda.cacheDir settings to store the environments in a central location for future pipeline runs; see the config sketch below this list.
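
    Both cache locations can also be set in a small custom config file and passed with -c. The snippet below is only an illustrative sketch: the file name and paths are placeholders, and singularity.cacheDir / conda.cacheDir are standard Nextflow settings rather than pipeline-specific parameters.

      // cache.config (hypothetical file name): centralise container/environment caches
      singularity.cacheDir = '/shared/singularity-images'   // images re-used across pipeline runs
      conda.cacheDir       = '/shared/conda-envs'           // only relevant when running with -profile conda

      // then supply it alongside your chosen profile, for example:
      // nextflow run nf-core/rnavar -profile test,singularity -c cache.config --outdir <OUTDIR>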

  4. Start running your own analysis!

    nextflow run nf-core/rnavar -profile <docker/singularity/podman/shifter/charliecloud/conda/institute> --input samplesheet.csv --genome GRCh38
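
    The --input option points to a samplesheet describing your FastQ files. As an illustration only, a paired-end samplesheet might look like the sketch below; the column layout (sample, FastQ paths, strandedness) follows the common nf-core convention, and the pipeline's usage documentation is the authoritative reference for the exact format.

      sample,fastq_1,fastq_2,strandedness
      CONTROL_REP1,control_rep1_R1.fastq.gz,control_rep1_R2.fastq.gz,reverse
      CONTROL_REP1,control_rep1_lane2_R1.fastq.gz,control_rep1_lane2_R2.fastq.gz,reverse
      TREATED_REP1,treated_rep1_R1.fastq.gz,treated_rep1_R2.fastq.gz,reverse

    Rows sharing a sample name are treated as re-sequenced FastQ files of the same sample and are merged, as in step 1 of the pipeline summary.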
    

Documentation

The nf-core/rnavar pipeline comes with documentation about the pipeline usage, parameters and output.

Credits

These scripts were originally written in Nextflow DSL2 for use at the Barntumörbanken, Karolinska Institutet, by Praveen Raj (@praveenraj2018) and Maxime U. Garcia (@maxulysse).

The pipeline is primarily maintained by Praveen Raj (@praveenraj2018) and Maxime U. Garcia (@maxulysse) from Barntumörbanken, Karolinska Institutet.

Many thanks to others who have helped out along the way too, including (but not limited to): @ewels and @drpatelh.

Contributions and Support

If you would like to contribute to this pipeline, please see the contributing guidelines.

For further information or help, don't hesitate to get in touch on the Slack #rnavar channel (you can join with this invite).

Citations

If you use nf-core/rnavar for your analysis, please cite it using the following doi: 10.5281/zenodo.6669637

An extensive list of references for the tools used by the pipeline can be found in the CITATIONS.md file.

You can cite the nf-core publication as follows:

The nf-core framework for community-curated bioinformatics pipelines.

Philip Ewels, Alexander Peltzer, Sven Fillinger, Harshil Patel, Johannes Alneberg, Andreas Wilm, Maxime Ulysse Garcia, Paolo Di Tommaso & Sven Nahnsen.

Nat Biotechnol. 2020 Feb 13. doi: 10.1038/s41587-020-0439-x.

Code Snippets

"""
Rscript --no-save -<<'RCODE'
    gtf = read.table("${gtf}", sep="\t")
    gtf = subset(gtf, V3 == "exon")
    write.table(data.frame(chrom=gtf[,'V1'], start=gtf[,'V4'], end=gtf[,'V5']), "tmp.exome.bed", quote = F, sep="\t", col.names = F, row.names = F)
RCODE

# GTF coordinates are 1-based; subtract 1 from the start column to produce 0-based BED intervals
awk '{print \$1 "\t" (\$2 - 1) "\t" \$3}' tmp.exome.bed > exome.bed
rm tmp.exome.bed

cat <<-END_VERSIONS > versions.yml
"${task.process}":
    Rscript: \$(echo \$(Rscript --version 2>&1) | sed 's/R scripting front-end version //')
END_VERSIONS
"""
"""
check_samplesheet.py \\
    $samplesheet \\
    samplesheet.valid.csv

cat <<-END_VERSIONS > versions.yml
"${task.process}":
    python: \$(python --version | sed 's/Python //g')
END_VERSIONS
"""
"""
cat ${readList.join(' ')} > ${prefix}.merged.fastq.gz

cat <<-END_VERSIONS > versions.yml
"${task.process}":
    cat: \$(echo \$(cat --version 2>&1) | sed 's/^.*coreutils) //; s/ .*\$//')
END_VERSIONS
"""
From line 26 of fastq/main.nf
"""
cat ${read1.join(' ')} > ${prefix}_1.merged.fastq.gz
cat ${read2.join(' ')} > ${prefix}_2.merged.fastq.gz

cat <<-END_VERSIONS > versions.yml
"${task.process}":
    cat: \$(echo \$(cat --version 2>&1) | sed 's/^.*coreutils) //; s/ .*\$//')
END_VERSIONS
"""
From line 40 of fastq/main.nf
"""
mkdir $prefix

vep \\
    -i $vcf \\
    -o ${prefix}.ann.vcf \\
    $args \\
    --assembly $genome \\
    --species $species \\
    --cache \\
    --cache_version $cache_version \\
    --dir_cache $dir_cache \\
    --fork $task.cpus \\
    --vcf \\
    --stats_file ${prefix}.summary.html

rm -rf $prefix

cat <<-END_VERSIONS > versions.yml
"${task.process}":
    ensemblvep: \$( echo \$(vep --help 2>&1) | sed 's/^.*Versions:.*ensembl-vep : //;s/ .*\$//')
END_VERSIONS
"""
"""
[ ! -f  ${prefix}.fastq.gz ] && ln -s $reads ${prefix}.fastq.gz
fastqc $args --threads $task.cpus ${prefix}.fastq.gz

cat <<-END_VERSIONS > versions.yml
"${task.process}":
    fastqc: \$( fastqc --version | sed -e "s/FastQC v//g" )
END_VERSIONS
"""
"""
[ ! -f  ${prefix}_1.fastq.gz ] && ln -s ${reads[0]} ${prefix}_1.fastq.gz
[ ! -f  ${prefix}_2.fastq.gz ] && ln -s ${reads[1]} ${prefix}_2.fastq.gz
fastqc $args --threads $task.cpus ${prefix}_1.fastq.gz ${prefix}_2.fastq.gz

cat <<-END_VERSIONS > versions.yml
"${task.process}":
    fastqc: \$( fastqc --version | sed -e "s/FastQC v//g" )
END_VERSIONS
"""
"""
touch ${prefix}.html
touch ${prefix}.zip

cat <<-END_VERSIONS > versions.yml
"${task.process}":
    fastqc: \$( fastqc --version | sed -e "s/FastQC v//g" )
END_VERSIONS
"""
"""
gatk --java-options "-Xmx${avail_mem}g" ApplyBQSR \\
    --input $input \\
    --output ${prefix}.${input.getExtension()} \\
    --reference $fasta \\
    --bqsr-recal-file $bqsr_table \\
    $interval_command \\
    --tmp-dir . \\
    $args

cat <<-END_VERSIONS > versions.yml
"${task.process}":
    gatk4: \$(echo \$(gatk --version 2>&1) | sed 's/^.*(GATK) v//; s/ .*\$//')
END_VERSIONS
"""
"""
gatk --java-options "-Xmx${avail_mem}g" BaseRecalibrator  \\
    --input $input \\
    --output ${prefix}.table \\
    --reference $fasta \\
    $interval_command \\
    $sites_command \\
    --tmp-dir . \\
    $args

cat <<-END_VERSIONS > versions.yml
"${task.process}":
    gatk4: \$(echo \$(gatk --version 2>&1) | sed 's/^.*(GATK) v//; s/ .*\$//')
END_VERSIONS
"""
"""
gatk --java-options "-Xmx${avail_mem}g" BedToIntervalList \\
    --INPUT $bed \\
    --OUTPUT ${prefix}.interval_list \\
    --SEQUENCE_DICTIONARY $dict \\
    --TMP_DIR . \\
    $args

cat <<-END_VERSIONS > versions.yml
"${task.process}":
    gatk4: \$(echo \$(gatk --version 2>&1) | sed 's/^.*(GATK) v//; s/ .*\$//')
END_VERSIONS
"""
"""
touch ${prefix}.interval_list

cat <<-END_VERSIONS > versions.yml
"${task.process}":
    gatk4: \$(echo \$(gatk --version 2>&1) | sed 's/^.*(GATK) v//; s/ .*\$//')
END_VERSIONS
"""
"""
gatk --java-options "-Xmx${avail_mem}g" CreateSequenceDictionary \\
    --REFERENCE $fasta \\
    --URI $fasta \\
    --TMP_DIR . \\
    $args

cat <<-END_VERSIONS > versions.yml
"${task.process}":
    gatk4: \$(echo \$(gatk --version 2>&1) | sed 's/^.*(GATK) v//; s/ .*\$//')
END_VERSIONS
"""
"""
touch test.dict

cat <<-END_VERSIONS > versions.yml
"${task.process}":
    gatk4: \$(echo \$(gatk --version 2>&1) | sed 's/^.*(GATK) v//; s/ .*\$//')
END_VERSIONS
"""
"""
gatk --java-options "-Xmx${avail_mem}g" HaplotypeCaller \\
    --input $input \\
    --output ${prefix}.vcf.gz \\
    --reference $fasta \\
    $dbsnp_command \\
    $interval_command \\
    --tmp-dir . \\
    $args

cat <<-END_VERSIONS > versions.yml
"${task.process}":
    gatk4: \$(echo \$(gatk --version 2>&1) | sed 's/^.*(GATK) v//; s/ .*\$//')
END_VERSIONS
"""
"""
gatk --java-options "-Xmx${avail_mem}g" IndexFeatureFile \\
    --input $feature_file \\
    --tmp-dir . \\
    $args

cat <<-END_VERSIONS > versions.yml
"${task.process}":
    gatk4: \$(echo \$(gatk --version 2>&1) | sed 's/^.*(GATK) v//; s/ .*\$//')
END_VERSIONS
"""
"""

mkdir ${prefix}_split

gatk --java-options "-Xmx${avail_mem}g" IntervalListTools \\
    --INPUT $intervals \\
    --OUTPUT ${prefix}_split \\
    --TMP_DIR . \\
    $args

python3 <<CODE
import glob, os
# Rename the scattered interval files so they do not overwrite each other or clash by name
intervals = sorted(glob.glob("*_split/*/*.interval_list"))
for i, interval in enumerate(intervals):
    (directory, filename) = os.path.split(interval)
    newName = os.path.join(directory, str(i + 1) + filename)
    os.rename(interval, newName)
CODE

cat <<-END_VERSIONS > versions.yml
"${task.process}":
    gatk4: \$(echo \$(gatk --version 2>&1) | sed 's/^.*(GATK) v//; s/ .*\$//')
END_VERSIONS
"""
"""
mkdir -p ${prefix}_split/temp_0001_of_6
mkdir -p ${prefix}_split/temp_0002_of_6
mkdir -p ${prefix}_split/temp_0003_of_6
mkdir -p ${prefix}_split/temp_0004_of_6
touch ${prefix}_split/temp_0001_of_6/1scattered.interval_list
touch ${prefix}_split/temp_0002_of_6/2scattered.interval_list
touch ${prefix}_split/temp_0003_of_6/3scattered.interval_list
touch ${prefix}_split/temp_0004_of_6/4scattered.interval_list

cat <<-END_VERSIONS > versions.yml
"${task.process}":
    gatk4: \$(echo \$(gatk --version 2>&1) | sed 's/^.*(GATK) v//; s/ .*\$//')
END_VERSIONS
"""
"""
gatk --java-options "-Xmx${avail_mem}g" MergeVcfs \\
    $input_list \\
    --OUTPUT ${prefix}.vcf.gz \\
    $reference_command \\
    --TMP_DIR . \\
    $args

cat <<-END_VERSIONS > versions.yml
"${task.process}":
    gatk4: \$(echo \$(gatk --version 2>&1) | sed 's/^.*(GATK) v//; s/ .*\$//')
END_VERSIONS
"""
"""
gatk --java-options "-Xmx${avail_mem}g" SplitNCigarReads \\
    --input $bam \\
    --output ${prefix}.bam \\
    --reference $fasta \\
    $interval_command \\
    --tmp-dir . \\
    $args

cat <<-END_VERSIONS > versions.yml
"${task.process}":
    gatk4: \$(echo \$(gatk --version 2>&1) | sed 's/^.*(GATK) v//; s/ .*\$//')
END_VERSIONS
"""
"""
gatk --java-options "-Xmx${avail_mem}G" VariantFiltration \\
    --variant $vcf \\
    --output ${prefix}.vcf.gz \\
    --reference $fasta \\
    --tmp-dir . \\
    $args

cat <<-END_VERSIONS > versions.yml
"${task.process}":
    gatk4: \$(echo \$(gatk --version 2>&1) | sed 's/^.*(GATK) v//; s/ .*\$//')
END_VERSIONS
"""
"""
gffread \\
    $gff \\
    $args \\
    -o ${prefix}.gtf
cat <<-END_VERSIONS > versions.yml
"${task.process}":
    gffread: \$(gffread --version 2>&1)
END_VERSIONS
"""
"""
gunzip \\
    -f \\
    $args \\
    $archive

cat <<-END_VERSIONS > versions.yml
"${task.process}":
    gunzip: \$(echo \$(gunzip --version 2>&1) | sed 's/^.*(gzip) //; s/ Copyright.*\$//')
END_VERSIONS
"""
From line 23 of gunzip/main.nf
"""
multiqc -f $args .

cat <<-END_VERSIONS > versions.yml
"${task.process}":
    multiqc: \$( multiqc --version | sed -e "s/multiqc, version //g" )
END_VERSIONS
"""
"""
touch multiqc_data
touch multiqc_plots
touch multiqc_report.html

cat <<-END_VERSIONS > versions.yml
"${task.process}":
    multiqc: \$( multiqc --version | sed -e "s/multiqc, version //g" )
END_VERSIONS
"""
"""
samtools \\
    faidx \\
    $fasta

cat <<-END_VERSIONS > versions.yml
"${task.process}":
    samtools: \$(echo \$(samtools --version 2>&1) | sed 's/^.*samtools //; s/Using.*\$//')
END_VERSIONS
"""
"""
touch ${fasta}.fai
cat <<-END_VERSIONS > versions.yml
"${task.process}":
    samtools: \$(echo \$(samtools --version 2>&1) | sed 's/^.*samtools //; s/Using.*\$//')
END_VERSIONS
"""
From line 34 of faidx/main.nf
"""
samtools \\
    flagstat \\
    --threads ${task.cpus-1} \\
    $bam \\
    > ${bam}.flagstat

cat <<-END_VERSIONS > versions.yml
"${task.process}":
    samtools: \$(echo \$(samtools --version 2>&1) | sed 's/^.*samtools //; s/Using.*\$//')
END_VERSIONS
"""
"""
samtools \\
    idxstats \\
    $bam \\
    > ${bam}.idxstats

cat <<-END_VERSIONS > versions.yml
"${task.process}":
    samtools: \$(echo \$(samtools --version 2>&1) | sed 's/^.*samtools //; s/Using.*\$//')
END_VERSIONS
"""
"""
samtools \\
    index \\
    -@ ${task.cpus-1} \\
    $args \\
    $input

cat <<-END_VERSIONS > versions.yml
"${task.process}":
    samtools: \$(echo \$(samtools --version 2>&1) | sed 's/^.*samtools //; s/Using.*\$//')
END_VERSIONS
"""
"""
touch ${input}.bai
touch ${input}.crai
touch ${input}.csi

cat <<-END_VERSIONS > versions.yml
"${task.process}":
    samtools: \$(echo \$(samtools --version 2>&1) | sed 's/^.*samtools //; s/Using.*\$//')
END_VERSIONS
"""
From line 38 of index/main.nf
"""
samtools \\
    merge \\
    --threads ${task.cpus-1} \\
    $args \\
    ${reference} \\
    ${prefix}.${file_type} \\
    $input_files

cat <<-END_VERSIONS > versions.yml
"${task.process}":
    samtools: \$(echo \$(samtools --version 2>&1) | sed 's/^.*samtools //; s/Using.*\$//')
END_VERSIONS
"""
"""
touch ${prefix}.${file_type}

cat <<-END_VERSIONS > versions.yml
"${task.process}":
    samtools: \$(echo \$(samtools --version 2>&1) | sed 's/^.*samtools //; s/Using.*\$//')
END_VERSIONS
"""
From line 45 of merge/main.nf
"""
samtools sort $args -@ $task.cpus -o ${prefix}.bam -T $prefix $bam
cat <<-END_VERSIONS > versions.yml
"${task.process}":
    samtools: \$(echo \$(samtools --version 2>&1) | sed 's/^.*samtools //; s/Using.*\$//')
END_VERSIONS
"""
"""
touch ${prefix}.bam

cat <<-END_VERSIONS > versions.yml
"${task.process}":
    samtools: \$(echo \$(samtools --version 2>&1) | sed 's/^.*samtools //; s/Using.*\$//')
END_VERSIONS
"""
From line 34 of sort/main.nf
"""
samtools \\
    stats \\
    --threads ${task.cpus-1} \\
    ${reference} \\
    ${input} \\
    > ${input}.stats

cat <<-END_VERSIONS > versions.yml
"${task.process}":
    samtools: \$(echo \$(samtools --version 2>&1) | sed 's/^.*samtools //; s/Using.*\$//')
END_VERSIONS
"""
"""
touch ${input}.stats

cat <<-END_VERSIONS > versions.yml
"${task.process}":
    samtools: \$(echo \$(samtools --version 2>&1) | sed 's/^.*samtools //; s/Using.*\$//')
END_VERSIONS
"""
From line 40 of stats/main.nf
"""
snpEff \\
    -Xmx${avail_mem}g \\
    $db \\
    $args \\
    -csvStats ${prefix}.csv \\
    $cache_command \\
    $vcf \\
    > ${prefix}.ann.vcf

cat <<-END_VERSIONS > versions.yml
"${task.process}":
    snpeff: \$(echo \$(snpEff -version 2>&1) | cut -f 2 -d ' ')
END_VERSIONS
"""
"""
STAR \\
    --genomeDir $index \\
    --readFilesIn $reads  \\
    --runThreadN $task.cpus \\
    --outFileNamePrefix $prefix. \\
    $out_sam_type \\
    $ignore_gtf \\
    $seq_center \\
    $args

$mv_unsorted_bam

if [ -f ${prefix}.Unmapped.out.mate1 ]; then
    mv ${prefix}.Unmapped.out.mate1 ${prefix}.unmapped_1.fastq
    gzip ${prefix}.unmapped_1.fastq
fi
if [ -f ${prefix}.Unmapped.out.mate2 ]; then
    mv ${prefix}.Unmapped.out.mate2 ${prefix}.unmapped_2.fastq
    gzip ${prefix}.unmapped_2.fastq
fi

cat <<-END_VERSIONS > versions.yml
"${task.process}":
    star: \$(STAR --version | sed -e "s/STAR_//g")
END_VERSIONS
"""
"""
mkdir star
STAR \\
    --runMode genomeGenerate \\
    --genomeDir star/ \\
    --genomeFastaFiles $fasta \\
    --sjdbGTFfile $gtf \\
    --runThreadN $task.cpus \\
    $memory \\
    $args

cat <<-END_VERSIONS > versions.yml
"${task.process}":
    star: \$(STAR --version | sed -e "s/STAR_//g")
    samtools: \$(echo \$(samtools --version 2>&1) | sed 's/^.*samtools //; s/Using.*\$//')
    gawk: \$(echo \$(gawk --version 2>&1) | sed 's/^.*GNU Awk //; s/, .*\$//')
END_VERSIONS
"""
"""
samtools faidx $fasta
# STAR recommends genomeSAindexNbases = min(14, log2(genome length)/2 - 1) for small genomes
NUM_BASES=`gawk '{sum = sum + \$2}END{if ((log(sum)/log(2))/2 - 1 > 14) {printf "%.0f", 14} else {printf "%.0f", (log(sum)/log(2))/2 - 1}}' ${fasta}.fai`

mkdir star
STAR \\
    --runMode genomeGenerate \\
    --genomeDir star/ \\
    --genomeFastaFiles $fasta \\
    --sjdbGTFfile $gtf \\
    --runThreadN $task.cpus \\
    --genomeSAindexNbases \$NUM_BASES \\
    $memory \\
    $args

cat <<-END_VERSIONS > versions.yml
"${task.process}":
    star: \$(STAR --version | sed -e "s/STAR_//g")
    samtools: \$(echo \$(samtools --version 2>&1) | sed 's/^.*samtools //; s/Using.*\$//')
    gawk: \$(echo \$(gawk --version 2>&1) | sed 's/^.*GNU Awk //; s/, .*\$//')
END_VERSIONS
"""
"""
bgzip  --threads ${task.cpus} -c $args $input > ${prefix}.gz
tabix $args2 ${prefix}.gz

cat <<-END_VERSIONS > versions.yml
"${task.process}":
    tabix: \$(echo \$(tabix -h 2>&1) | sed 's/^.*Version: //; s/ .*\$//')
END_VERSIONS
"""
"""
touch ${prefix}.gz
touch ${prefix}.gz.tbi

cat <<-END_VERSIONS > versions.yml
"${task.process}":
    tabix: \$(echo \$(tabix -h 2>&1) | sed 's/^.*Version: //; s/ .*\$//')
END_VERSIONS
"""
"""
tabix $args $tab

cat <<-END_VERSIONS > versions.yml
"${task.process}":
    tabix: \$(echo \$(tabix -h 2>&1) | sed 's/^.*Version: //; s/ .*\$//')
END_VERSIONS
"""
"""
touch ${tab}.tbi
cat <<-END_VERSIONS > versions.yml
"${task.process}":
    tabix: \$(echo \$(tabix -h 2>&1) | sed 's/^.*Version: //; s/ .*\$//')
END_VERSIONS
"""
From line 34 of tabix/main.nf