A small-RNA sequencing analysis pipeline

Version: 2.2.1

Introduction

nf-core/smrnaseq is a bioinformatics best-practice analysis pipeline for Small RNA-Seq.

The pipeline is built using Nextflow, a workflow tool to run tasks across multiple compute infrastructures in a very portable manner. It uses Docker/Singularity containers, making installation trivial and results highly reproducible. The Nextflow DSL2 implementation of this pipeline uses one container per process, which makes it much easier to maintain and update software dependencies. Where possible, these processes have been submitted to and installed from nf-core/modules in order to make them available to all nf-core pipelines, and to everyone within the Nextflow community!

On release, automated continuous integration tests run the pipeline on a full-sized dataset on the AWS cloud infrastructure. This ensures that the pipeline runs on AWS, has sensible resource allocation defaults set to run on real-world datasets, and permits the persistent storage of results to benchmark between pipeline releases and other analysis sources. The results obtained from the full-sized test can be viewed on the nf-core website.

Online videos

A short talk about the history, current status and functionality on offer in this pipeline was given by Lorena Pantano (@lpantano) on 9th November 2021 as part of the nf-core/bytesize series.

You can find numerous talks on the nf-core events page on various topics, including writing pipelines/modules in Nextflow DSL2, using nf-core tooling, and running nf-core pipelines, as well as more general content like contributing on GitHub. Please check them out!

Pipeline summary

  1. Raw read QC (FastQC)

  2. Adapter trimming (Trim Galore!)

    1. Insert size calculation

    2. Collapse reads (seqcluster)

  3. Contamination filtering (Bowtie2)

  4. Alignment against miRBase mature miRNA (Bowtie1)

  5. Alignment against miRBase hairpin

    1. Unaligned reads from step 4 (Bowtie1)

    2. Collapsed reads from step 2.2 (Bowtie1)

  6. Post-alignment processing of miRBase hairpin

    1. Basic statistics from step 4 and step 5.1 (SAMtools)

    2. Analysis of miRBase or MirGeneDB hairpin counts (edgeR)

      • TMM normalization and a table of top expressed hairpins

      • MDS plot clustering samples

      • Heatmap of sample similarities

    3. miRNA and isomiR annotation from step 5.1 (mirtop)

  7. Alignment against the host reference genome (Bowtie1)

    1. Post-alignment processing of the host genome alignment (SAMtools)

  8. Discovery of known and novel miRNAs (MiRDeep2)

    1. Mapping against the reference genome with the mapper module

    2. Known and novel miRNA discovery with the mirdeep2 module

  9. miRNA quality control (mirtrace)

  10. Present QC for raw reads, alignment, and expression results (MultiQC)

Usage

Note: If you are new to Nextflow and nf-core, please refer to this page on how to set up Nextflow. Make sure to test your setup with -profile test before running the workflow on actual data.

Now, you can run the pipeline using:

nextflow run nf-core/smrnaseq \
 -profile <docker/singularity/.../institute> \
 --input samplesheet.csv \
 --genome 'GRCh37' \
 --mirtrace_species 'hsa' \
 --protocol 'illumina' \
 --outdir <OUTDIR>

Warning: Please provide pipeline parameters via the CLI or the Nextflow -params-file option. Custom config files, including those provided via the Nextflow -c option, can be used to provide any configuration except for parameters; see the docs.
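For example, the parameters from the run command above can be collected in a YAML file and passed with -params-file instead of individual CLI flags. A minimal sketch (the file name and the genome/species values are illustrative):

```yaml
# params.yaml — pass with: nextflow run nf-core/smrnaseq -profile docker -params-file params.yaml
input: 'samplesheet.csv'       # sample sheet listing sample IDs and FastQ paths
genome: 'GRCh37'
mirtrace_species: 'hsa'
protocol: 'illumina'
outdir: './results'
```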

For more details, please refer to the usage documentation and the parameter documentation.

Pipeline output

To see the results of a test run with a full-sized dataset, refer to the results tab on the nf-core website pipeline page. For more details about the output files and reports, please refer to the output documentation.

Credits

nf-core/smrnaseq was originally written by P. Ewels, C. Wang, R. Hammarén, L. Pantano, A. Peltzer.

We thank the following people for their extensive assistance in the development of this pipeline:

Lorena Pantano ( @lpantano ) from MIT updated the pipeline to Nextflow DSL2.

Contributions and Support

If you would like to contribute to this pipeline, please see the contributing guidelines .

For further information or help, don't hesitate to get in touch on the Slack #smrnaseq channel (you can join with this invite).

Citations

If you use nf-core/smrnaseq for your analysis, please cite it using the following DOI: 10.5281/zenodo.3456879

An extensive list of references for the tools used by the pipeline can be found in the CITATIONS.md file.

You can cite the nf-core publication as follows:

The nf-core framework for community-curated bioinformatics pipelines.

Philip Ewels, Alexander Peltzer, Sven Fillinger, Harshil Patel, Johannes Alneberg, Andreas Wilm, Maxime Ulysse Garcia, Paolo Di Tommaso & Sven Nahnsen.

Nat Biotechnol. 2020 Feb 13. doi: 10.1038/s41587-020-0439-x.

Code Snippets

"""
echo $db_type
awk '/^>/ { x=index(\$6, "transcript_biotype:miRNA") } { if(!x) print }' $contaminants > subset.fa
blat -out=blast8 $mirna subset.fa /dev/stdout | awk 'BEGIN{FS="\t"}{if(\$11 < 1e-5)print \$1;}' | uniq > mirnahit.txt
awk 'BEGIN { while((getline<"mirnahit.txt")>0) l[">"\$1]=1 } /^>/ {x = l[\$1]} {if(!x) print }' subset.fa  > filtered.fa

cat <<-END_VERSIONS > versions.yml
"${task.process}":
    blat: \$(echo \$(blat) | grep Standalone | awk '{ if (match(\$0,/[0-9]*[0-9]/,m)) print m[0] }')
END_VERSIONS
"""
"""
echo $db_type
awk '/^>/ { x=(index(\$6, "transcript_biotype:rRNA") || index(\$6, "transcript_biotype:miRNA")) } { if(!x) print }' $contaminants > subset.fa
blat -out=blast8 $mirna subset.fa /dev/stdout | awk 'BEGIN{FS="\t"}{if(\$11 < 1e-5)print \$1;}' | uniq > mirnahit.txt
awk 'BEGIN { while((getline<"mirnahit.txt")>0) l[">"\$1]=1 } /^>/ {x = l[\$1]} {if(!x) print }' subset.fa  > filtered.fa

cat <<-END_VERSIONS > versions.yml
"${task.process}":
    blat: \$(echo \$(blat) | grep Standalone | awk '{ if (match(\$0,/[0-9]*[0-9]/,m)) print m[0] }')
END_VERSIONS
"""
        """
        echo $db_type
        blat -out=blast8 $mirna $contaminants /dev/stdout | awk 'BEGIN{FS="\t"}{if(\$11 < 1e-5)print \$1;}' | uniq > mirnahit.txt
        awk 'BEGIN { while((getline<"mirnahit.txt")>0) l[">"\$1]=1 } /^>/ {x = l[\$1]} {if(!x) print }' $contaminants  > filtered.fa

cat <<-END_VERSIONS > versions.yml
        "${task.process}":
            blat: \$(echo \$(blat) | grep Standalone | awk '{ if (match(\$0,/[0-9]*[0-9]/,m)) print m[0] }')
        END_VERSIONS
        """
"""
bowtie2-build ${fasta} fasta_bidx --threads ${task.cpus}

cat <<-END_VERSIONS > versions.yml
"${task.process}":
    bowtie2: \$(echo \$(bowtie2 --version 2>&1) | sed 's/^.*bowtie2-align-s version //; s/ .*\$//')
END_VERSIONS
"""
"""
# Remove any special base characters from reference genome FASTA file
sed '/^[^>]/s/[^ATGCatgc]/N/g' $fasta > genome.edited.fa
sed -i 's/ .*//' genome.edited.fa

# Build bowtie index
bowtie-build genome.edited.fa genome --threads ${task.cpus}

cat <<-END_VERSIONS > versions.yml
"${task.process}":
    bowtie: \$(echo \$(bowtie --version 2>&1) | sed 's/^.*bowtie-align-s version //; s/ .*\$//')
END_VERSIONS
"""
"""
INDEX=`find -L ./ -name "*.3.ebwt" | sed 's/.3.ebwt//'`
bowtie2 \\
    --threads ${task.cpus} \\
    --very-sensitive-local \\
    -k 1 \\
    -x \$INDEX \\
    --un ${meta.id}.${contaminant_type}.filter.unmapped.contaminant.fastq \\
    ${reads} \\
    ${args} \\
    -S ${meta.id}.filter.contaminant.sam > ${meta.id}.contaminant_bowtie.log 2>&1

# extracting number of reads from bowtie logs
awk -v type=${contaminant_type} 'BEGIN{tot=0} {if(NR==4 || NR == 5){tot += \$1}} END {print "\\""type"\\": "tot }' ${meta.id}.contaminant_bowtie.log | tr -d , > filtered.${meta.id}_${contaminant_type}.stats

cat <<-END_VERSIONS > versions.yml
"${task.process}":
    bowtie2: \$(echo \$(bowtie2 --version 2>&1) | sed 's/^.*bowtie2-align-s version //; s/ .*\$//' | tr -d '\0')
END_VERSIONS
"""
"""
INDEX=`find -L ./ -name "*.3.ebwt" | sed 's/.3.ebwt//'`
bowtie \\
    -x \$INDEX \\
    -q <(zcat $reads) \\
    -p ${task.cpus} \\
    -t \\
    -k 50 \\
    --best \\
    --strata \\
    -e 99999 \\
    --chunkmbs 2048 \\
    --un ${meta.id}_unmapped.fq -S > ${meta.id}.sam

samtools view -bS ${meta.id}.sam > ${meta.id}.bam

if [ ! -f  "${meta.id}_unmapped.fq" ]
then
    touch ${meta.id}_unmapped.fq
fi
gzip ${meta.id}_unmapped.fq
mkdir unmapped
mv  ${meta.id}_unmapped.fq.gz  unmapped/.

cat <<-END_VERSIONS > versions.yml
"${task.process}":
    bowtie: \$(echo \$(bowtie --version 2>&1) | sed 's/^.*bowtie-align-s version //; s/ .*\$//')
    samtools: \$(echo \$(samtools --version 2>&1) | sed 's/^.*samtools //; s/Using.*\$//')
END_VERSIONS
"""
"""
bowtie-build ${fasta} fasta_bidx --threads ${task.cpus}

cat <<-END_VERSIONS > versions.yml
"${task.process}":
    bowtie: \$(echo \$(bowtie --version 2>&1) | sed 's/^.*bowtie-align-s version //; s/ .*\$//')
END_VERSIONS
"""
"""
collapse_mirtop.r ${mirtop}

cat <<-END_VERSIONS > versions.yml
"${task.process}":
    r-base: \$(echo \$(R --version 2>&1) | sed 's/^.*R version //; s/ .*\$//')
END_VERSIONS
"""
"""
edgeR_miRBase.r $input_files

cat <<-END_VERSIONS > versions.yml
"${task.process}":
    r-base: \$(echo \$(R --version 2>&1) | sed 's/^.*R version //; s/ .*\$//')
    limma: \$(Rscript -e "library(limma); cat(as.character(packageVersion('limma')))")
    edgeR: \$(Rscript -e "library(edgeR); cat(as.character(packageVersion('edgeR')))")
    data.table: \$(Rscript -e "library(data.table); cat(as.character(packageVersion('data.table')))")
    gplots: \$(Rscript -e "library(gplots); cat(as.character(packageVersion('gplots')))")
    methods: \$(Rscript -e "library(methods); cat(as.character(packageVersion('methods')))")
    statmod: \$(Rscript -e "library(statmod); cat(as.character(packageVersion('statmod')))")
END_VERSIONS
"""
"""
readnumber=\$(wc -l ${reads} | awk '{ print \$1/4 }')
cat ./filtered.${meta.id}_*.stats | \\
tr '\n' ', ' | \\
awk -v sample=${meta.id} -v readnumber=\$readnumber '{ print "id: \\"my_pca_section\\"\\nsection_name: \\"Contamination Filtering\\"\\ndescription: \\"This plot shows the amount of reads filtered by contaminant type.\\"\\nplot_type: \\"bargraph\\"\\npconfig:\\n  id: \\"contamination_filter_plot\\"\\n  title: \\"Contamination Plot\\"\\n  ylab: \\"Number of reads\\"\\ndata:\\n    "sample": {"\$0"\\"remaining reads\\": "readnumber"}" }' > ${meta.id}.contamination_mqc.yaml
gzip -c ${reads} > ${meta.id}.filtered.fastq.gz
"""
"""
fasta_formatter -w 0 -i $fasta -o ${fasta}_idx.fa

cat <<-END_VERSIONS > versions.yml
"${task.process}":
    fastx_toolkit:  \$(echo "$VERSION")
END_VERSIONS
"""
"""
mapper.pl \\
    $reads \\
    -e \\
    -h \\
    -i \\
    -j \\
    -m \\
    -p $index_base \\
    -s ${meta.id}_collapsed.fa \\
    -t ${meta.id}_reads_vs_refdb.arf \\
    -o 4

cat <<-END_VERSIONS > versions.yml
"${task.process}":
    mapper: \$(echo "$VERSION")
END_VERSIONS
"""
"""
pigz -f -d -p $task.cpus $reads

cat <<-END_VERSIONS > versions.yml
${task.process}":
    pigz: \$( pigz --version 2>&1 | sed 's/pigz //g' )
END_VERSIONS
"""
"""
miRDeep2.pl  \\
    $reads   \\
    $fasta   \\
    $arf     \\
    $mature  \\
    none     \\
    $hairpin \\
    -d       \\
    -z _${reads.simpleName}

cat <<-END_VERSIONS > versions.yml
${task.process}":
    mirdeep2: \$(echo "$VERSION")
END_VERSIONS
"""
"""
mirtop gff --hairpin $hairpin --gtf $gtf -o mirtop --sps $filter_species ./bams/*
mirtop counts --hairpin $hairpin --gtf $gtf -o mirtop --sps $filter_species --add-extra --gff mirtop/mirtop.gff
mirtop export --format isomir --hairpin $hairpin --gtf $gtf --sps $filter_species -o mirtop mirtop/mirtop.gff
mirtop stats mirtop/mirtop.gff --out mirtop/stats
mv mirtop/stats/mirtop_stats.log mirtop/stats/full_mirtop_stats.log

cat <<-END_VERSIONS > versions.yml
${task.process}":
    mirtop: \$(echo \$(mirtop --version 2>&1) | sed 's/^.*mirtop //')
END_VERSIONS
"""
"""
export mirtracejar=\$(dirname \$(which mirtrace))

${config_lines.join("\n    ")}

java $java_mem -jar \$mirtracejar/mirtrace.jar --mirtrace-wrapper-name mirtrace qc  \\
    --species $params.mirtrace_species \\
    $primer \\
    $protocol \\
    --config mirtrace_config \\
    --write-fasta \\
    --output-dir mirtrace \\
    --force

cat <<-END_VERSIONS > versions.yml
${task.process}":
    mirtrace: \$(echo \$(mirtrace -v 2>&1))
END_VERSIONS
"""
"""
# Uncompress FASTA reference files if necessary
FASTA="$fasta"
if [ \${FASTA: -3} == ".gz" ]; then
    gunzip -f \$FASTA
    FASTA=\${FASTA%%.gz}
fi
sed 's/&gt;/>/g' \$FASTA | sed 's#<br>#\\n#g' | sed 's#</p>##g' | sed 's#<p>##g' > \${FASTA}_html_cleaned.fa
# Replace any non-AUGC characters in miRBase sequences with N
sed '/^[^>]/s/[^AUGCaugc]/N/g' \${FASTA}_html_cleaned.fa > \${FASTA}_parsed.fa

sed -i 's#\s.*##' \${FASTA}_parsed.fa
seqkit grep -r --pattern \".*${filter_species}-.*\" \${FASTA}_parsed.fa > \${FASTA}_sps.fa
seqkit seq --rna2dna \${FASTA}_sps.fa > \${FASTA}_igenome.fa

cat <<-END_VERSIONS > versions.yml
${task.process}":
    seqkit: \$(echo \$(seqkit 2>&1) | sed 's/^.*Version: //; s/ .*\$//')
END_VERSIONS
"""
"""
check_samplesheet.py \\
    $samplesheet \\
    samplesheet.valid.csv

cat <<-END_VERSIONS > versions.yml
"${task.process}":
    python: \$(python --version | sed 's/Python //g')
END_VERSIONS
"""
"""
seqcluster collapse -f $reads -m 1 --min_size 15 -o collapsed
gzip collapsed/*_trimmed.fastq
mkdir final
mv collapsed/*.fastq.gz final/.

cat <<-END_VERSIONS > versions.yml
${task.process}":
    seqcluster: \$(echo \$(seqcluster --version 2>&1) | sed 's/^.*seqcluster //')
END_VERSIONS
"""
"""
cat ${readList.join(' ')} > ${prefix}.merged.fastq.gz

cat <<-END_VERSIONS > versions.yml
"${task.process}":
    cat: \$(echo \$(cat --version 2>&1) | sed 's/^.*coreutils) //; s/ .*\$//')
END_VERSIONS
"""
Nextflow, from line 26 of fastq/main.nf
"""
cat ${read1.join(' ')} > ${prefix}_1.merged.fastq.gz
cat ${read2.join(' ')} > ${prefix}_2.merged.fastq.gz

cat <<-END_VERSIONS > versions.yml
"${task.process}":
    cat: \$(echo \$(cat --version 2>&1) | sed 's/^.*coreutils) //; s/ .*\$//')
END_VERSIONS
"""
Nextflow, from line 40 of fastq/main.nf
"""
touch ${prefix}.merged.fastq.gz

cat <<-END_VERSIONS > versions.yml
"${task.process}":
    cat: \$(echo \$(cat --version 2>&1) | sed 's/^.*coreutils) //; s/ .*\$//')
END_VERSIONS
"""
Nextflow, from line 57 of fastq/main.nf
"""
touch ${prefix}_1.merged.fastq.gz
touch ${prefix}_2.merged.fastq.gz

cat <<-END_VERSIONS > versions.yml
"${task.process}":
    cat: \$(echo \$(cat --version 2>&1) | sed 's/^.*coreutils) //; s/ .*\$//')
END_VERSIONS
"""
Nextflow, from line 68 of fastq/main.nf
"""
[ ! -f  ${prefix}.fastq.gz ] && ln -sf $reads ${prefix}.fastq.gz

fastp \\
    --stdout \\
    --in1 ${prefix}.fastq.gz \\
    --thread $task.cpus \\
    --json ${prefix}.fastp.json \\
    --html ${prefix}.fastp.html \\
    $adapter_list \\
    $fail_fastq \\
    $args \\
    2> ${prefix}.fastp.log \\
| gzip -c > ${prefix}.fastp.fastq.gz

cat <<-END_VERSIONS > versions.yml
"${task.process}":
    fastp: \$(fastp --version 2>&1 | sed -e "s/fastp //g")
END_VERSIONS
"""
"""
[ ! -f  ${prefix}.fastq.gz ] && ln -sf $reads ${prefix}.fastq.gz

fastp \\
    --in1 ${prefix}.fastq.gz \\
    --out1  ${prefix}.fastp.fastq.gz \\
    --thread $task.cpus \\
    --json ${prefix}.fastp.json \\
    --html ${prefix}.fastp.html \\
    $adapter_list \\
    $fail_fastq \\
    $args \\
    2> ${prefix}.fastp.log

cat <<-END_VERSIONS > versions.yml
"${task.process}":
    fastp: \$(fastp --version 2>&1 | sed -e "s/fastp //g")
END_VERSIONS
"""
"""
[ ! -f  ${prefix}_1.fastq.gz ] && ln -sf ${reads[0]} ${prefix}_1.fastq.gz
[ ! -f  ${prefix}_2.fastq.gz ] && ln -sf ${reads[1]} ${prefix}_2.fastq.gz
fastp \\
    --in1 ${prefix}_1.fastq.gz \\
    --in2 ${prefix}_2.fastq.gz \\
    --out1 ${prefix}_1.fastp.fastq.gz \\
    --out2 ${prefix}_2.fastp.fastq.gz \\
    --json ${prefix}.fastp.json \\
    --html ${prefix}.fastp.html \\
    $adapter_list \\
    $fail_fastq \\
    $merge_fastq \\
    --thread $task.cpus \\
    --detect_adapter_for_pe \\
    $args \\
    2> ${prefix}.fastp.log

cat <<-END_VERSIONS > versions.yml
"${task.process}":
    fastp: \$(fastp --version 2>&1 | sed -e "s/fastp //g")
END_VERSIONS
"""
"""
printf "%s %s\\n" $rename_to | while read old_name new_name; do
    [ -f "\${new_name}" ] || ln -s \$old_name \$new_name
done

fastqc \\
    $args \\
    --threads $task.cpus \\
    $renamed_files

cat <<-END_VERSIONS > versions.yml
"${task.process}":
    fastqc: \$( fastqc --version | sed -e "s/FastQC v//g" )
END_VERSIONS
"""
"""
touch ${prefix}.html
touch ${prefix}.zip

cat <<-END_VERSIONS > versions.yml
"${task.process}":
    fastqc: \$( fastqc --version | sed -e "s/FastQC v//g" )
END_VERSIONS
"""
"""
multiqc \\
    --force \\
    $args \\
    $config \\
    $extra_config \\
    .

cat <<-END_VERSIONS > versions.yml
"${task.process}":
    multiqc: \$( multiqc --version | sed -e "s/multiqc, version //g" )
END_VERSIONS
"""
"""
touch multiqc_data
touch multiqc_plots
touch multiqc_report.html

cat <<-END_VERSIONS > versions.yml
"${task.process}":
    multiqc: \$( multiqc --version | sed -e "s/multiqc, version //g" )
END_VERSIONS
"""
"""
samtools \\
    flagstat \\
    --threads ${task.cpus} \\
    $bam \\
    > ${prefix}.flagstat

cat <<-END_VERSIONS > versions.yml
"${task.process}":
    samtools: \$(echo \$(samtools --version 2>&1) | sed 's/^.*samtools //; s/Using.*\$//')
END_VERSIONS
"""
"""
touch ${prefix}.flagstat

cat <<-END_VERSIONS > versions.yml
"${task.process}":
    samtools: \$(echo \$(samtools --version 2>&1) | sed 's/^.*samtools //; s/Using.*\$//')
END_VERSIONS
"""
"""
samtools \\
    idxstats \\
    --threads ${task.cpus-1} \\
    $bam \\
    > ${prefix}.idxstats

cat <<-END_VERSIONS > versions.yml
"${task.process}":
    samtools: \$(echo \$(samtools --version 2>&1) | sed 's/^.*samtools //; s/Using.*\$//')
END_VERSIONS
"""
"""
touch ${prefix}.idxstats

cat <<-END_VERSIONS > versions.yml
"${task.process}":
    samtools: \$(echo \$(samtools --version 2>&1) | sed 's/^.*samtools //; s/Using.*\$//')
END_VERSIONS
"""
"""
samtools \\
    index \\
    -@ ${task.cpus-1} \\
    $args \\
    $input

cat <<-END_VERSIONS > versions.yml
"${task.process}":
    samtools: \$(echo \$(samtools --version 2>&1) | sed 's/^.*samtools //; s/Using.*\$//')
END_VERSIONS
"""
"""
touch ${input}.bai
touch ${input}.crai
touch ${input}.csi

cat <<-END_VERSIONS > versions.yml
"${task.process}":
    samtools: \$(echo \$(samtools --version 2>&1) | sed 's/^.*samtools //; s/Using.*\$//')
END_VERSIONS
"""
Nextflow, from line 38 of index/main.nf
"""
samtools sort \\
    $args \\
    -@ $task.cpus \\
    -o ${prefix}.bam \\
    -T $prefix \\
    $bam

cat <<-END_VERSIONS > versions.yml
"${task.process}":
    samtools: \$(echo \$(samtools --version 2>&1) | sed 's/^.*samtools //; s/Using.*\$//')
END_VERSIONS
"""
"""
touch ${prefix}.bam

cat <<-END_VERSIONS > versions.yml
"${task.process}":
    samtools: \$(echo \$(samtools --version 2>&1) | sed 's/^.*samtools //; s/Using.*\$//')
END_VERSIONS
"""
Nextflow, from line 41 of sort/main.nf
"""
samtools \\
    stats \\
    --threads ${task.cpus} \\
    ${reference} \\
    ${input} \\
    > ${prefix}.stats

cat <<-END_VERSIONS > versions.yml
"${task.process}":
    samtools: \$(echo \$(samtools --version 2>&1) | sed 's/^.*samtools //; s/Using.*\$//')
END_VERSIONS
"""
"""
touch ${prefix}.stats

cat <<-END_VERSIONS > versions.yml
"${task.process}":
    samtools: \$(echo \$(samtools --version 2>&1) | sed 's/^.*samtools //; s/Using.*\$//')
END_VERSIONS
"""
Nextflow, from line 41 of stats/main.nf
URL: https://nf-co.re/smrnaseq