ATAC-seq peak-calling, QC and differential analysis pipeline

Version: 2.0

Introduction

nf-core/atacseq is a bioinformatics analysis pipeline used for ATAC-seq data.

The pipeline is built using Nextflow, a workflow tool to run tasks across multiple compute infrastructures in a very portable manner. It uses Docker/Singularity containers, making installation trivial and results highly reproducible. The Nextflow DSL2 implementation of this pipeline uses one container per process, which makes it much easier to maintain and update software dependencies. Where possible, these processes have been submitted to and installed from nf-core/modules in order to make them available to all nf-core pipelines, and to everyone within the Nextflow community!

On release, automated continuous integration tests run the pipeline on a full-sized dataset on the AWS cloud infrastructure. This ensures that the pipeline runs on AWS, has sensible resource allocation defaults set to run on real-world datasets, and permits the persistent storage of results to benchmark between pipeline releases and other analysis sources. The results obtained from the full-sized test can be viewed on the nf-core website.

Pipeline summary

nf-core/atacseq metro map

  1. Raw read QC (FastQC)

  2. Adapter trimming (Trim Galore!)

  3. Choice of multiple aligners: 1. BWA, 2. Chromap (for paired-end reads, Chromap currently only works up to the mapping steps; see here), 3. Bowtie 2, 4. STAR

  4. Mark duplicates (picard)

  5. Merge alignments from multiple libraries of the same sample (picard)

    1. Re-mark duplicates (picard)

    2. Filtering to remove:

      • reads mapping to mitochondrial DNA (SAMtools)

      • reads mapping to blacklisted regions (SAMtools, BEDTools)

      • reads that are marked as duplicates (SAMtools)

      • reads that are not marked as primary alignments (SAMtools)

      • reads that are unmapped (SAMtools)

      • reads that map to multiple locations (SAMtools)

      • reads containing > 4 mismatches (BAMTools)

      • reads that are soft-clipped (BAMTools)

      • reads that have an insert size > 2 kb (BAMTools; paired-end only)

      • reads that map to different chromosomes (Pysam; paired-end only)

      • reads that aren't in FR orientation (Pysam; paired-end only)

      • reads where only one read of the pair fails the above criteria (Pysam; paired-end only)

    3. Alignment-level QC and estimation of library complexity (picard, Preseq)

    4. Create normalised bigWig files scaled to 1 million mapped reads (BEDTools, bedGraphToBigWig)

    5. Generate gene-body meta-profile from bigWig files (deepTools)

    6. Calculate genome-wide enrichment (optionally relative to control) (deepTools)

    7. Call broad/narrow peaks (MACS2)

    8. Annotate peaks relative to gene features (HOMER)

    9. Create consensus peakset across all samples and create tabular file to aid in the filtering of the data (BEDTools)

    10. Count reads in consensus peaks (featureCounts)

    11. Differential accessibility analysis, PCA and clustering (R, DESeq2)

    12. Generate ATAC-seq specific QC HTML report (ataqv)

  6. Merge filtered alignments across replicates (picard)

    1. Re-mark duplicates (picard)

    2. Remove duplicate reads (SAMtools)

    3. Create normalised bigWig files scaled to 1 million mapped reads (BEDTools, bedGraphToBigWig)

    4. Call broad/narrow peaks (MACS2)

    5. Annotate peaks relative to gene features (HOMER)

    6. Create consensus peakset across all samples and create tabular file to aid in the filtering of the data (BEDTools)

    7. Count reads in consensus peaks relative to merged library-level alignments (featureCounts)

    8. Differential accessibility analysis, PCA and clustering (R, DESeq2)

  7. Create IGV session file containing bigWig tracks, peaks and differential sites for data visualisation (IGV).

  8. Present QC for raw read, alignment, peak-calling and differential accessibility results (ataqv, MultiQC, R)

Usage

Note: If you are new to Nextflow and nf-core, please refer to this page on how to set up Nextflow. Make sure to test your setup with -profile test before running the workflow on actual data.
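For example, a quick check of your setup (assuming Docker is available; substitute whichever container engine you use) could look like:

nextflow run nf-core/atacseq -profile test,docker --outdir <OUTDIR>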

To run on your data, prepare a comma-separated samplesheet with your input data. Please follow the documentation on samplesheets for more details. An example samplesheet for running the pipeline looks as follows:

sample,fastq_1,fastq_2,replicate
CONTROL,AEG588A1_S1_L002_R1_001.fastq.gz,AEG588A1_S1_L002_R2_001.fastq.gz,1
CONTROL,AEG588A1_S1_L003_R1_001.fastq.gz,AEG588A1_S1_L003_R2_001.fastq.gz,2
CONTROL,AEG588A1_S1_L004_R1_001.fastq.gz,AEG588A1_S1_L004_R2_001.fastq.gz,3
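If a library was sequenced single-end, the fastq_2 column can typically be left empty; for example (hypothetical file name):

CONTROL,AEG588A1_S1_L005_R1_001.fastq.gz,,4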

Now, you can run the pipeline using:

nextflow run nf-core/atacseq --input samplesheet.csv --outdir <OUTDIR> --genome GRCh37 --read_length <50|100|150|200> -profile <docker/singularity/podman/shifter/charliecloud/conda/institute>

See usage docs for all of the available options when running the pipeline.

Warning: Please provide pipeline parameters via the CLI or the Nextflow -params-file option. Custom config files, including those provided by the -c Nextflow option, can be used to provide any configuration except for parameters; see docs.
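As a sketch, the same parameters shown in the command above could instead be collected in a YAML file (the name params.yaml below is arbitrary, and the values are example values) and passed with -params-file:

input: 'samplesheet.csv'
outdir: './results'
genome: 'GRCh37'
read_length: 50

nextflow run nf-core/atacseq -profile docker -params-file params.yaml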

For more details and further functionality, please refer to the usage documentation and the parameter documentation.

Pipeline output

To see the results of an example test run with a full-size dataset, refer to the results tab on the nf-core website pipeline page. For more details about the output files and reports, please refer to the output documentation.

Credits

The pipeline was originally written by Harshil Patel (@drpatelh) from Seqera Labs, Spain, and converted to Nextflow DSL2 by Björn Langer (@bjlang) and Jose Espinosa-Carrasco (@JoseEspinosa) from The Comparative Bioinformatics Group at The Centre for Genomic Regulation, Spain, under the umbrella of the BovReg project.

Many thanks to others who have helped out and contributed along the way too, including (but not limited to): @ewels, @apeltzer, @crickbabs, @drewjbeh, @houghtos, @jinmingda, @ktrns, @MaxUlysse, @mashehu, @micans, @pditommaso and @sven1103.

Contributions and Support

If you would like to contribute to this pipeline, please see the contributing guidelines.

For further information or help, don't hesitate to get in touch on the Slack #atacseq channel (you can join with this invite).

Citations

If you use nf-core/atacseq for your analysis, please cite it using the following doi: 10.5281/zenodo.2634132

An extensive list of references for the tools used by the pipeline can be found in the CITATIONS.md file.

You can cite the nf-core publication as follows:

The nf-core framework for community-curated bioinformatics pipelines.

Philip Ewels, Alexander Peltzer, Sven Fillinger, Harshil Patel, Johannes Alneberg, Andreas Wilm, Maxime Ulysse Garcia, Paolo Di Tommaso & Sven Nahnsen.

Nat Biotechnol. 2020 Feb 13. doi: 10.1038/s41587-020-0439-x.

Code Snippets

"""
bampe_rm_orphan.py \\
    $bam \\
    ${prefix}.bam \\
    $args
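# bampe_rm_orphan.py (pipeline helper script) removes orphan reads, i.e. pairs where
# only one mate passed the upstream filters, from the paired-end BAM.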

cat <<-END_VERSIONS > versions.yml
"${task.process}":
    samtools: \$(echo \$(samtools --version 2>&1) | sed 's/^.*samtools //; s/Using.*\$//')
END_VERSIONS
"""
"""
ln -s $bam ${prefix}.bam

cat <<-END_VERSIONS > versions.yml
"${task.process}":
    samtools: \$(echo \$(samtools --version 2>&1) | sed 's/^.*samtools //; s/Using.*\$//')
END_VERSIONS
"""
"""
SCALE_FACTOR=\$(grep '[0-9] mapped (' $flagstat | awk '{print 1000000/\$1}')
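# The scale factor above is 1,000,000 divided by the mapped read count taken from the
# samtools flagstat output, so the resulting coverage track is normalised to 1 million mapped reads.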
echo \$SCALE_FACTOR > ${prefix}.scale_factor.txt

bedtools \\
    genomecov \\
    -ibam $bam \\
    -bg \\
    -scale \$SCALE_FACTOR \\
    $pe \\
    $args \\
> tmp.bg

bedtools sort -i tmp.bg > ${prefix}.bedGraph

cat <<-END_VERSIONS > versions.yml
"${task.process}":
    bedtools: \$(bedtools --version | sed -e "s/bedtools v//g")
END_VERSIONS
"""
"""
deseq2_qc.r \\
    --count_file $counts \\
    --outdir ./ \\
    --outprefix $prefix \\
    --cores $task.cpus \\
    $args
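# deseq2_qc.r runs PCA and sample clustering on the featureCounts matrix; the sed/cat
# steps below prepend MultiQC custom-content headers so the PCA values and
# sample-distance tables are rendered in the MultiQC report.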

sed 's/deseq2_pca/deseq2_pca_${task.index}/g' <$deseq2_pca_header >tmp.txt
sed -i -e 's/DESeq2 /${meta.id} DESeq2 /g' tmp.txt
cat tmp.txt ${prefix}.pca.vals.txt > ${prefix}.pca.vals_mqc.tsv

sed 's/deseq2_clustering/deseq2_clustering_${task.index}/g' <$deseq2_clustering_header >tmp.txt
sed -i -e 's/DESeq2 /${meta.id} DESeq2 /g' tmp.txt
cat tmp.txt ${prefix}.sample.dists.txt > ${prefix}.sample.dists_mqc.tsv

cat <<-END_VERSIONS > versions.yml
"${task.process}":
    r-base: \$(echo \$(R --version 2>&1) | sed 's/^.*R version //; s/ .*\$//')
    bioconductor-deseq2: \$(Rscript -e "library(DESeq2); cat(as.character(packageVersion('DESeq2')))")
END_VERSIONS
"""
"""
READS_IN_PEAKS=\$(intersectBed -a $bam -b $peak $args | awk -F '\t' '{sum += \$NF} END {print sum}')
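# READS_IN_PEAKS sums the read/peak overlaps reported by intersectBed; dividing by the
# mapped read count from samtools flagstat below gives the FRiP (Fraction of Reads in Peaks) score.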
samtools flagstat $bam > ${bam}.flagstat
grep 'mapped (' ${bam}.flagstat | grep -v "primary" | awk -v a="\$READS_IN_PEAKS" -v OFS='\t' '{print "${prefix}", a/\$1}' > ${prefix}.FRiP.txt

cat <<-END_VERSIONS > versions.yml
"${task.process}":
    bedtools: \$(bedtools --version | sed -e "s/bedtools v//g")
    samtools: \$(echo \$(samtools --version 2>&1) | sed 's/^.*samtools //; s/Using.*\$//')
END_VERSIONS
"""
"""
sortBed -i $blacklist -g $sizes | complementBed -i stdin -g $sizes $mito_filter > $file_out
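# The complement of the sorted blacklist over the chromosome sizes gives the regions kept
# for analysis; the optional mito filter argument is expected to drop the mitochondrial contig.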

cat <<-END_VERSIONS > versions.yml
"${task.process}":
    bedtools: \$(bedtools --version | sed -e "s/bedtools v//g")
END_VERSIONS
"""
"""
awk '{print \$1, '0' , \$2}' OFS='\t' $sizes $mito_filter > $file_out
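# Writes one BED interval per chromosome spanning its full length (0 to the size listed in
# the sizes file); the optional mito filter argument is expected to drop the mitochondrial contig.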

cat <<-END_VERSIONS > versions.yml
"${task.process}":
    bedtools: \$(bedtools --version | sed -e "s/bedtools v//g")
END_VERSIONS
"""
"""
get_autosomes.py \\
    $fai \\
    ${fai.baseName}.autosomes.txt

cat <<-END_VERSIONS > versions.yml
"${task.process}":
    python: \$(python --version | sed 's/Python //g')
END_VERSIONS
"""
"""
gtf2bed \\
    $gtf \\
    > ${gtf.baseName}.bed

cat <<-END_VERSIONS > versions.yml
"${task.process}":
    perl: \$(echo \$(perl --version 2>&1) | sed 's/.*v\\(.*\\)) built.*/\\1/')
END_VERSIONS
"""
"""
find * -type l -name "*.bigWig" -exec echo -e ""{}"\\t0,0,178" \\; | { grep "^$bigwig_library_publish_dir" || test \$? = 1; } > mLb_bigwig.igv.txt
find * -type l -name "*Peak" -exec echo -e ""{}"\\t0,0,178" \\; | { grep "^$peak_library_publish_dir" || test \$? = 1; } > mLb_peaks.igv.txt
find * -type l -name "*.bed" -exec echo -e ""{}"\\t0,0,0" \\; | { grep "^$consensus_library_publish_dir" || test \$? = 1; } > mLb_bed.igv.txt
find * -type l -name "*.bigWig" -exec echo -e ""{}"\\t0,0,178" \\; | { grep "^$bigwig_replicate_publish_dir" || test \$? = 1; } > mRp_bigwig.igv.txt
find * -type l -name "*Peak" -exec echo -e ""{}"\\t0,0,178" \\; | { grep "^$peak_replicate_publish_dir" || test \$? = 1; } > mRp_peaks.igv.txt
find * -type l -name "*.bed" -exec echo -e ""{}"\\t0,0,0" \\; | { grep "^$consensus_replicate_publish_dir" || test \$? = 1; } > mRp_bed.igv.txt
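# The six lists above gather the symlinked bigWig, peak and consensus BED files from the
# library-level (mLb) and replicate-level (mRp) publish directories; igv_files_to_session.py
# below turns them into an IGV session XML that references the genome FASTA.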

cat *.txt > igv_files.txt
igv_files_to_session.py igv_session.xml igv_files.txt ../../genome/${fasta.getName()} --path_prefix '../../'

cat <<-END_VERSIONS > versions.yml
"${task.process}":
    python: \$(python --version | sed 's/Python //g')
END_VERSIONS
"""
Nextflow: from line 36 of local/igv.nf
"""
sort -T '.' -k1,1 -k2,2n ${peaks.collect{it.toString()}.sort().join(' ')} \\
    | mergeBed -c $mergecols -o $collapsecols > ${prefix}.txt
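# The sorted, merged peaks above form the consensus set; macs2_merged_expand.py below expands
# them into a per-sample boolean presence/absence table, from which BED and SAF files and an
# intersection plot are derived.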

macs2_merged_expand.py \\
    ${prefix}.txt \\
    ${peaks.collect{it.toString()}.sort().join(',').replaceAll("_peaks.${peak_type}","")} \\
    ${prefix}.boolean.txt \\
    $args \\
    $expandparam

awk -v FS='\t' -v OFS='\t' 'FNR > 1 { print \$1, \$2, \$3, \$4, "0", "+" }' ${prefix}.boolean.txt > ${prefix}.bed

echo -e "GeneID\tChr\tStart\tEnd\tStrand" > ${prefix}.saf
awk -v FS='\t' -v OFS='\t' 'FNR > 1 { print \$4, \$1, \$2, \$3,  "+" }' ${prefix}.boolean.txt >> ${prefix}.saf

plot_peak_intersect.r -i ${prefix}.boolean.intersect.txt -o ${prefix}.boolean.intersect.plot.pdf

cat <<-END_VERSIONS > versions.yml
"${task.process}":
    python: \$(python --version | sed 's/Python //g')
    r-base: \$(echo \$(R --version 2>&1) | sed 's/^.*R version //; s/ .*\$//')
END_VERSIONS
"""
"""
cat $peak | wc -l | awk -v OFS='\t' '{ print "${prefix}", \$1 }' | cat $peak_count_header - > ${prefix}.count_mqc.tsv
cat $frip_score_header $frip > ${prefix}.FRiP_mqc.tsv

cat <<-END_VERSIONS > versions.yml
"${task.process}":
    sed: \$(echo \$(sed --version 2>&1) | sed 's/^.*GNU sed) //; s/ .*\$//')
END_VERSIONS
"""
"""
multiqc \\
    -f \\
    $args \\
    $custom_config \\
    .

cat <<-END_VERSIONS > versions.yml
"${task.process}":
    multiqc: \$( multiqc --version | sed -e "s/multiqc, version //g" )
END_VERSIONS
"""
"""
plot_homer_annotatepeaks.r \\
    -i ${annos.join(',')} \\
    -s ${annos.join(',').replaceAll("${suffix}","")} \\
    -p $prefix \\
    $args

find ./ -type f -name "*summary.txt" -exec cat {} \\; | cat $mqc_header - > ${prefix}.summary_mqc.tsv

cat <<-END_VERSIONS > versions.yml
"${task.process}":
    r-base: \$(echo \$(R --version 2>&1) | sed 's/^.*R version //; s/ .*\$//')
END_VERSIONS
"""
"""
plot_macs2_qc.r \\
    -i ${peaks.join(',')} \\
    -s ${peaks.join(',').replaceAll("_peaks.${peak_type}","")} \\
    $args

cat <<-END_VERSIONS > versions.yml
"${task.process}":
    r-base: \$(echo \$(R --version 2>&1) | sed 's/^.*R version //; s/ .*\$//')
END_VERSIONS
"""
"""
check_samplesheet.py \\
    $samplesheet \\
    $args

cat <<-END_VERSIONS > versions.yml
"${task.process}":
    python: \$(python --version | sed 's/Python //g')
END_VERSIONS
"""
"""
STAR \\
    --genomeDir $index \\
    --readFilesIn $reads  \\
    --runThreadN $task.cpus \\
    --outFileNamePrefix $prefix. \\
    $out_sam_type \\
    $seq_center_tag \\
    $args
$mv_unsorted_bam
if [ -f ${prefix}.Unmapped.out.mate1 ]; then
    mv ${prefix}.Unmapped.out.mate1 ${prefix}.unmapped_1.fastq
    gzip ${prefix}.unmapped_1.fastq
fi
if [ -f ${prefix}.Unmapped.out.mate2 ]; then
    mv ${prefix}.Unmapped.out.mate2 ${prefix}.unmapped_2.fastq
    gzip ${prefix}.unmapped_2.fastq
fi
cat <<-END_VERSIONS > versions.yml
"${task.process}":
    star: \$(STAR --version | sed -e "s/STAR_//g")
END_VERSIONS
"""
"""
mkdir star
STAR \\
    --runMode genomeGenerate \\
    --genomeDir star/ \\
    --genomeFastaFiles $fasta \\
    --sjdbGTFfile $gtf \\
    --runThreadN $task.cpus \\
    $memory \\
    ${args.join(' ')}
cat <<-END_VERSIONS > versions.yml
"${task.process}":
    star: \$(STAR --version | sed -e "s/STAR_//g")
END_VERSIONS
"""
"""
samtools faidx $fasta
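# For small genomes STAR recommends --genomeSAindexNbases = min(14, log2(genome length)/2 - 1);
# the value is computed below from the FASTA index generated above.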
NUM_BASES=`gawk '{sum = sum + \$2}END{if ((log(sum)/log(2))/2 - 1 > 14) {printf "%.0f", 14} else {printf "%.0f", (log(sum)/log(2))/2 - 1}}' ${fasta}.fai`
mkdir star
STAR \\
    --runMode genomeGenerate \\
    --genomeDir star/ \\
    --genomeFastaFiles $fasta \\
    --sjdbGTFfile $gtf \\
    --runThreadN $task.cpus \\
    --genomeSAindexNbases \$NUM_BASES \\
    $memory \\
    ${args.join(' ')}
cat <<-END_VERSIONS > versions.yml
"${task.process}":
    star: \$(STAR --version | sed -e "s/STAR_//g")
END_VERSIONS
"""
"""
cat $bed | awk -v FS='\t' -v OFS='\t' '{ if(\$6=="+") \$3=\$2+1; else \$2=\$3-1; print \$1, \$2, \$3, \$4, \$5, \$6;}' > ${bed.baseName}.tss.bed
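# Each interval is collapsed above to a single-base TSS: the start coordinate is kept for
# '+' strand entries and the end coordinate for '-' strand entries.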

cat <<-END_VERSIONS > versions.yml
"${task.process}":
    sed: \$(echo \$(sed --version 2>&1) | sed 's/^.*GNU sed) //; s/ .*\$//')
END_VERSIONS
"""
"""
ataqv \\
    $args \\
    $mito \\
    $peak \\
    $tss \\
    $excl_regs \\
    $autosom_ref \\
    --metrics-file "${prefix}.ataqv.json" \\
    --threads $task.cpus \\
    --name $prefix \\
    $organism \\
    $bam

cat <<-END_VERSIONS > versions.yml
"${task.process}":
    ataqv: \$( ataqv --version )
END_VERSIONS
"""
"""
mkarv \\
    $args \\
    --concurrency $task.cpus \\
    --force \\
    ./html/ \\
    jsons/*

cat <<-END_VERSIONS > versions.yml
"${task.process}":
    # mkarv: \$( mkarv --version ) # Use this when version string has been fixed
    ataqv: \$( ataqv --version )
END_VERSIONS
"""
"""
INDEX=`find -L ./ -name "*.rev.1.bt2" | sed "s/\\.rev.1.bt2\$//"`
[ -z "\$INDEX" ] && INDEX=`find -L ./ -name "*.rev.1.bt2l" | sed "s/\\.rev.1.bt2l\$//"`
[ -z "\$INDEX" ] && echo "Bowtie2 index files not found" 1>&2 && exit 1
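# The INDEX basename is resolved from the .rev.1.bt2 (or large-index .bt2l) files; the bowtie2
# output below is piped straight into samtools rather than writing an intermediate SAM file.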

bowtie2 \\
    -x \$INDEX \\
    $reads_args \\
    --threads $task.cpus \\
    $unaligned \\
    $args \\
    2> ${prefix}.bowtie2.log \\
    | samtools $samtools_command $args2 --threads $task.cpus -o ${prefix}.${extension} -

if [ -f ${prefix}.unmapped.fastq.1.gz ]; then
    mv ${prefix}.unmapped.fastq.1.gz ${prefix}.unmapped_1.fastq.gz
fi

if [ -f ${prefix}.unmapped.fastq.2.gz ]; then
    mv ${prefix}.unmapped.fastq.2.gz ${prefix}.unmapped_2.fastq.gz
fi

cat <<-END_VERSIONS > versions.yml
"${task.process}":
    bowtie2: \$(echo \$(bowtie2 --version 2>&1) | sed 's/^.*bowtie2-align-s version //; s/ .*\$//')
    samtools: \$(echo \$(samtools --version 2>&1) | sed 's/^.*samtools //; s/Using.*\$//')
    pigz: \$( pigz --version 2>&1 | sed 's/pigz //g' )
END_VERSIONS
"""
"""
touch ${prefix}.${extension}
touch ${prefix}.bowtie2.log
touch ${prefix}.unmapped_1.fastq.gz
touch ${prefix}.unmapped_2.fastq.gz

cat <<-END_VERSIONS > versions.yml
"${task.process}":
    bowtie2: \$(echo \$(bowtie2 --version 2>&1) | sed 's/^.*bowtie2-align-s version //; s/ .*\$//')
    samtools: \$(echo \$(samtools --version 2>&1) | sed 's/^.*samtools //; s/Using.*\$//')
    pigz: \$( pigz --version 2>&1 | sed 's/pigz //g' )
END_VERSIONS
"""
Nextflow: from line 80 of align/main.nf
"""
mkdir bowtie2
bowtie2-build $args --threads $task.cpus $fasta bowtie2/${fasta.baseName}
cat <<-END_VERSIONS > versions.yml
"${task.process}":
    bowtie2: \$(echo \$(bowtie2 --version 2>&1) | sed 's/^.*bowtie2-align-s version //; s/ .*\$//')
END_VERSIONS
"""
"""
mkdir bowtie2
touch bowtie2/${fasta.baseName}.{1..4}.bt2
touch bowtie2/${fasta.baseName}.rev.{1,2}.bt2

cat <<-END_VERSIONS > versions.yml
"${task.process}":
    bowtie2: \$(echo \$(bowtie2 --version 2>&1) | sed 's/^.*bowtie2-align-s version //; s/ .*\$//')
END_VERSIONS
"""
"""
mkdir bwa
bwa \\
    index \\
    $args \\
    -p bwa/${fasta.baseName} \\
    $fasta

cat <<-END_VERSIONS > versions.yml
"${task.process}":
    bwa: \$(echo \$(bwa 2>&1) | sed 's/^.*Version: //; s/Contact:.*\$//')
END_VERSIONS
"""
"""
mkdir bwa

touch bwa/genome.amb
touch bwa/genome.ann
touch bwa/genome.bwt
touch bwa/genome.pac
touch bwa/genome.sa

cat <<-END_VERSIONS > versions.yml
"${task.process}":
    bwa: \$(echo \$(bwa 2>&1) | sed 's/^.*Version: //; s/Contact:.*\$//')
END_VERSIONS
"""
"""
INDEX=`find -L ./ -name "*.amb" | sed 's/\\.amb\$//'`

bwa mem \\
    $args \\
    -t $task.cpus \\
    \$INDEX \\
    $reads \\
    | samtools $samtools_command $args2 --threads $task.cpus -o ${prefix}.bam -

cat <<-END_VERSIONS > versions.yml
"${task.process}":
    bwa: \$(echo \$(bwa 2>&1) | sed 's/^.*Version: //; s/Contact:.*\$//')
    samtools: \$(echo \$(samtools --version 2>&1) | sed 's/^.*samtools //; s/Using.*\$//')
END_VERSIONS
"""
Nextflow: fragments from lines 55, 74 and 93 of chromap/main.nf (single-end and paired-end branches; snippet bodies not rendered)
"""
chromap \\
    -i \\
    $args \\
    -t $task.cpus \\
    -r $fasta \\
    -o ${prefix}.index

cat <<-END_VERSIONS > versions.yml
"${task.process}":
    chromap: \$(echo \$(chromap --version 2>&1))
END_VERSIONS
"""
"""
samtools faidx $fasta
cut -f 1,2 ${fasta}.fai > ${fasta}.sizes

cat <<-END_VERSIONS > versions.yml
"${task.process}":
    getchromsizes: \$(echo \$(samtools --version 2>&1) | sed 's/^.*samtools //; s/Using.*\$//')
END_VERSIONS
"""
"""
touch ${fasta}.fai
touch ${fasta}.sizes

cat <<-END_VERSIONS > versions.yml
"${task.process}":
    getchromsizes: \$(echo \$(samtools --version 2>&1) | sed 's/^.*samtools //; s/Using.*\$//')
END_VERSIONS
"""
"""
computeMatrix \\
    $args \\
    --regionsFileName $bed \\
    --scoreFileName $bigwig \\
    --outFileName ${prefix}.computeMatrix.mat.gz \\
    --outFileNameMatrix ${prefix}.computeMatrix.vals.mat.tab \\
    --numberOfProcessors $task.cpus

cat <<-END_VERSIONS > versions.yml
"${task.process}":
    deeptools: \$(computeMatrix --version | sed -e "s/computeMatrix //g")
END_VERSIONS
"""
"""
plotFingerprint \\
    $args \\
    $extend \\
    --bamfiles ${bams.join(' ')} \\
    --plotFile ${prefix}.plotFingerprint.pdf \\
    --outRawCounts ${prefix}.plotFingerprint.raw.txt \\
    --outQualityMetrics ${prefix}.plotFingerprint.qcmetrics.txt \\
    --numberOfProcessors $task.cpus

cat <<-END_VERSIONS > versions.yml
"${task.process}":
    deeptools: \$(plotFingerprint --version | sed -e "s/plotFingerprint //g")
END_VERSIONS
"""
"""
plotHeatmap \\
    $args \\
    --matrixFile $matrix \\
    --outFileName ${prefix}.plotHeatmap.pdf \\
    --outFileNameMatrix ${prefix}.plotHeatmap.mat.tab

cat <<-END_VERSIONS > versions.yml
"${task.process}":
    deeptools: \$(plotHeatmap --version | sed -e "s/plotHeatmap //g")
END_VERSIONS
"""
"""
plotProfile \\
    $args \\
    --matrixFile $matrix \\
    --outFileName ${prefix}.plotProfile.pdf \\
    --outFileNameData ${prefix}.plotProfile.tab

cat <<-END_VERSIONS > versions.yml
"${task.process}":
    deeptools: \$(plotProfile --version | sed -e "s/plotProfile //g")
END_VERSIONS
"""
"""
printf "%s %s\\n" $rename_to | while read old_name new_name; do
    [ -f "\${new_name}" ] || ln -s \$old_name \$new_name
done

fastqc \\
    $args \\
    --threads $task.cpus \\
    $renamed_files

cat <<-END_VERSIONS > versions.yml
"${task.process}":
    fastqc: \$( fastqc --version | sed -e "s/FastQC v//g" )
END_VERSIONS
"""
"""
touch ${prefix}.html
touch ${prefix}.zip

cat <<-END_VERSIONS > versions.yml
"${task.process}":
    fastqc: \$( fastqc --version | sed -e "s/FastQC v//g" )
END_VERSIONS
"""
"""
gffread \\
    $gff \\
    $args \\
    -o ${prefix}.gtf
cat <<-END_VERSIONS > versions.yml
"${task.process}":
    gffread: \$(gffread --version 2>&1)
END_VERSIONS
"""
"""
# Not calling gunzip itself because it creates files
# with the original group ownership rather than the
# default one for that user / the work directory
gzip \\
    -cd \\
    $args \\
    $archive \\
    > $gunzip

cat <<-END_VERSIONS > versions.yml
"${task.process}":
    gunzip: \$(echo \$(gunzip --version 2>&1) | sed 's/^.*(gzip) //; s/ Copyright.*\$//')
END_VERSIONS
"""
Nextflow: from line 23 of gunzip/main.nf
"""
touch $gunzip
cat <<-END_VERSIONS > versions.yml
"${task.process}":
    gunzip: \$(echo \$(gunzip --version 2>&1) | sed 's/^.*(gzip) //; s/ Copyright.*\$//')
END_VERSIONS
"""
Nextflow: from line 41 of gunzip/main.nf
"""
annotatePeaks.pl \\
    $peak \\
    $fasta \\
    $args \\
    -gtf $gtf \\
    -cpu $task.cpus \\
    > ${prefix}.annotatePeaks.txt

cat <<-END_VERSIONS > versions.yml
"${task.process}":
    homer: $VERSION
END_VERSIONS
"""
"""
unique-kmers.py \\
    -k $kmer_size \\
    -R report.txt \\
    $args \\
    $fasta
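# unique-kmers.py estimates the number of distinct k-mers of the given size in the genome;
# the count extracted into kmers.txt below is typically used downstream as an effective
# genome size estimate (e.g. for MACS2) when one is not supplied.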

grep ^number report.txt | sed 's/^.*:.[[:blank:]]//g' > kmers.txt

cat <<-END_VERSIONS > versions.yml
"${task.process}":
    khmer: \$( unique-kmers.py --version 2>&1 | grep ^khmer | sed 's/^khmer //;s/ .*\$//' )
END_VERSIONS
"""
"""
picard \\
    -Xmx${avail_mem}M \\
    CollectMultipleMetrics \\
    $args \\
    --INPUT $bam \\
    --OUTPUT ${prefix}.CollectMultipleMetrics \\
    $reference

cat <<-END_VERSIONS > versions.yml
"${task.process}":
    picard: \$(picard CollectMultipleMetrics --version 2>&1 | grep -o 'Version.*' | cut -f2- -d:)
END_VERSIONS
"""
"""
touch ${prefix}.CollectMultipleMetrics.alignment_summary_metrics
touch ${prefix}.CollectMultipleMetrics.insert_size_metrics
touch ${prefix}.CollectMultipleMetrics.quality_distribution.pdf
touch ${prefix}.CollectMultipleMetrics.base_distribution_by_cycle_metrics
touch ${prefix}.CollectMultipleMetrics.quality_by_cycle_metrics
touch ${prefix}.CollectMultipleMetrics.read_length_histogram.pdf
touch ${prefix}.CollectMultipleMetrics.base_distribution_by_cycle.pdf
touch ${prefix}.CollectMultipleMetrics.quality_by_cycle.pdf
touch ${prefix}.CollectMultipleMetrics.insert_size_histogram.pdf
touch ${prefix}.CollectMultipleMetrics.quality_distribution_metrics

cat <<-END_VERSIONS > versions.yml
"${task.process}":
    picard: \$(echo \$(picard CollectMultipleMetrics --version 2>&1) | grep -o 'Version:.*' | cut -f2- -d:)
END_VERSIONS
"""
"""
picard \\
    -Xmx${avail_mem}M \\
    MarkDuplicates \\
    $args \\
    --INPUT $bam \\
    --OUTPUT ${prefix}.bam \\
    --REFERENCE_SEQUENCE $fasta \\
    --METRICS_FILE ${prefix}.MarkDuplicates.metrics.txt

cat <<-END_VERSIONS > versions.yml
"${task.process}":
    picard: \$(echo \$(picard MarkDuplicates --version 2>&1) | grep -o 'Version:.*' | cut -f2- -d:)
END_VERSIONS
"""
"""
touch ${prefix}.bam
touch ${prefix}.bam.bai
touch ${prefix}.MarkDuplicates.metrics.txt

cat <<-END_VERSIONS > versions.yml
"${task.process}":
    picard: \$(echo \$(picard MarkDuplicates --version 2>&1) | grep -o 'Version:.*' | cut -f2- -d:)
END_VERSIONS
"""
"""
picard \\
    -Xmx${avail_mem}M \\
    MergeSamFiles \\
    $args \\
    ${'--INPUT '+bam_files.join(' --INPUT ')} \\
    --OUTPUT ${prefix}.bam
cat <<-END_VERSIONS > versions.yml
"${task.process}":
    picard: \$( echo \$(picard MergeSamFiles --version 2>&1) | grep -o 'Version:.*' | cut -f2- -d:)
END_VERSIONS
"""
"""
ln -s ${bam_files[0]} ${prefix}.bam
cat <<-END_VERSIONS > versions.yml
"${task.process}":
    picard: \$( echo \$(picard MergeSamFiles --version 2>&1) | grep -o 'Version:.*' | cut -f2- -d:)
END_VERSIONS
"""
"""
preseq \\
    lc_extrap \\
    $args \\
    $paired_end \\
    -output ${prefix}.lc_extrap.txt \\
    $bam
cp .command.err ${prefix}.command.log

cat <<-END_VERSIONS > versions.yml
"${task.process}":
    preseq: \$(echo \$(preseq 2>&1) | sed 's/^.*Version: //; s/Usage:.*\$//')
END_VERSIONS
"""
"""
samtools \\
    flagstat \\
    --threads ${task.cpus} \\
    $bam \\
    > ${prefix}.flagstat

cat <<-END_VERSIONS > versions.yml
"${task.process}":
    samtools: \$(echo \$(samtools --version 2>&1) | sed 's/^.*samtools //; s/Using.*\$//')
END_VERSIONS
"""
"""
samtools \\
    idxstats \\
    --threads ${task.cpus-1} \\
    $bam \\
    > ${prefix}.idxstats

cat <<-END_VERSIONS > versions.yml
"${task.process}":
    samtools: \$(echo \$(samtools --version 2>&1) | sed 's/^.*samtools //; s/Using.*\$//')
END_VERSIONS
"""
"""
samtools \\
    index \\
    -@ ${task.cpus-1} \\
    $args \\
    $input

cat <<-END_VERSIONS > versions.yml
"${task.process}":
    samtools: \$(echo \$(samtools --version 2>&1) | sed 's/^.*samtools //; s/Using.*\$//')
END_VERSIONS
"""
"""
touch ${input}.bai
touch ${input}.crai
touch ${input}.csi

cat <<-END_VERSIONS > versions.yml
"${task.process}":
    samtools: \$(echo \$(samtools --version 2>&1) | sed 's/^.*samtools //; s/Using.*\$//')
END_VERSIONS
"""
Nextflow: from line 38 of index/main.nf
"""
samtools sort \\
    $args \\
    -@ $task.cpus \\
    -o ${prefix}.bam \\
    -T $prefix \\
    $bam

cat <<-END_VERSIONS > versions.yml
"${task.process}":
    samtools: \$(echo \$(samtools --version 2>&1) | sed 's/^.*samtools //; s/Using.*\$//')
END_VERSIONS
"""
"""
touch ${prefix}.bam

cat <<-END_VERSIONS > versions.yml
"${task.process}":
    samtools: \$(echo \$(samtools --version 2>&1) | sed 's/^.*samtools //; s/Using.*\$//')
END_VERSIONS
"""
Nextflow: from line 41 of sort/main.nf
"""
samtools \\
    stats \\
    --threads ${task.cpus} \\
    ${reference} \\
    ${input} \\
    > ${prefix}.stats

cat <<-END_VERSIONS > versions.yml
"${task.process}":
    samtools: \$(echo \$(samtools --version 2>&1) | sed 's/^.*samtools //; s/Using.*\$//')
END_VERSIONS
"""
"""
touch ${prefix}.stats

cat <<-END_VERSIONS > versions.yml
"${task.process}":
    samtools: \$(echo \$(samtools --version 2>&1) | sed 's/^.*samtools //; s/Using.*\$//')
END_VERSIONS
"""
Nextflow: from line 41 of stats/main.nf
"""
featureCounts \\
    $args \\
    $paired_end \\
    -T $task.cpus \\
    -a $annotation \\
    -s $strandedness \\
    -o ${prefix}.featureCounts.txt \\
    ${bams.join(' ')}

cat <<-END_VERSIONS > versions.yml
"${task.process}":
    subread: \$( echo \$(featureCounts -v 2>&1) | sed -e "s/featureCounts v//g")
END_VERSIONS
"""
"""
[ ! -f  ${prefix}.fastq.gz ] && ln -s $reads ${prefix}.fastq.gz
trim_galore \\
    ${args_list.join(' ')} \\
    --cores $cores \\
    --gzip \\
    ${prefix}.fastq.gz

cat <<-END_VERSIONS > versions.yml
"${task.process}":
    trimgalore: \$(echo \$(trim_galore --version 2>&1) | sed 's/^.*version //; s/Last.*\$//')
    cutadapt: \$(cutadapt --version)
END_VERSIONS
"""
"""
[ ! -f  ${prefix}_1.fastq.gz ] && ln -s ${reads[0]} ${prefix}_1.fastq.gz
[ ! -f  ${prefix}_2.fastq.gz ] && ln -s ${reads[1]} ${prefix}_2.fastq.gz
trim_galore \\
    $args \\
    --cores $cores \\
    --paired \\
    --gzip \\
    ${prefix}_1.fastq.gz \\
    ${prefix}_2.fastq.gz

cat <<-END_VERSIONS > versions.yml
"${task.process}":
    trimgalore: \$(echo \$(trim_galore --version 2>&1) | sed 's/^.*version //; s/Last.*\$//')
    cutadapt: \$(cutadapt --version)
END_VERSIONS
"""
"""
bedGraphToBigWig \\
    $bedgraph \\
    $sizes \\
    ${prefix}.bigWig

cat <<-END_VERSIONS > versions.yml
"${task.process}":
    ucsc: $VERSION
END_VERSIONS
"""
"""
touch ${prefix}.bigWig

cat <<-END_VERSIONS > versions.yml
"${task.process}":
    ucsc: $VERSION
END_VERSIONS
"""
"""
umi_tools \\
    extract \\
    -I $reads \\
    -S ${prefix}.umi_extract.fastq.gz \\
    $args \\
    > ${prefix}.umi_extract.log

cat <<-END_VERSIONS > versions.yml
"${task.process}":
    umitools: \$(umi_tools --version 2>&1 | sed 's/^.*UMI-tools version://; s/ *\$//')
END_VERSIONS
"""
"""
umi_tools \\
    extract \\
    -I ${reads[0]} \\
    --read2-in=${reads[1]} \\
    -S ${prefix}.umi_extract_1.fastq.gz \\
    --read2-out=${prefix}.umi_extract_2.fastq.gz \\
    $args \\
    > ${prefix}.umi_extract.log

cat <<-END_VERSIONS > versions.yml
"${task.process}":
    umitools: \$(umi_tools --version 2>&1 | sed 's/^.*UMI-tools version://; s/ *\$//')
END_VERSIONS
"""
"""
mkdir $prefix

## Ensures --strip-components only applied when top level of tar contents is a directory
## If just files or multiple directories, place all in prefix
if [[ \$(tar -taf ${archive} | grep -o -P "^.*?\\/" | uniq | wc -l) -eq 1 ]]; then
    tar \\
        -C $prefix --strip-components 1 \\
        -xavf \\
        $args \\
        $archive \\
        $args2
else
    tar \\
        -C $prefix \\
        -xavf \\
        $args \\
        $archive \\
        $args2
fi

cat <<-END_VERSIONS > versions.yml
"${task.process}":
    untar: \$(echo \$(tar --version 2>&1) | sed 's/^.*(GNU tar) //; s/ Copyright.*\$//')
END_VERSIONS
"""
Nextflow: from line 25 of untar/main.nf
"""
mkdir $prefix
touch ${prefix}/file.txt

cat <<-END_VERSIONS > versions.yml
"${task.process}":
    untar: \$(echo \$(tar --version 2>&1) | sed 's/^.*(GNU tar) //; s/ Copyright.*\$//')
END_VERSIONS
"""
Nextflow: from line 54 of untar/main.nf
