Pipeline for the identification of extrachromosomal circular DNA (ecDNA) from Circle-seq, WGS, and ATAC-seq data generated from cancer and other eukaryotic cells.


Introduction

nf-core/circdna is a bioinformatics best-practice analysis pipeline for the identification of extrachromosomal circular DNAs (ecDNAs) in eukaryotic cells. The pipeline can process WGS, ATAC-seq, or Circle-seq data generated with short-read sequencing technologies. Depending on the input data and the selected analysis branch, nf-core/circdna identifies various types of ecDNA, from smaller circles, often referred to as eccDNAs or microDNAs, to larger ecDNAs that exhibit amplification. These analyses rely on software tools that are widely recognized in the field of ecDNA and circular DNA research.

The pipeline is built using Nextflow, a workflow tool to run tasks across multiple compute infrastructures in a very portable manner. It uses Docker/Singularity containers, making installation trivial and results highly reproducible. The Nextflow DSL2 implementation of this pipeline uses one container per process, which makes it much easier to maintain and update software dependencies. Where possible, these processes have been submitted to and installed from nf-core/modules in order to make them available to all nf-core pipelines, and to everyone within the Nextflow community!

On release, automated continuous integration tests run the pipeline on a full-sized dataset on the AWS cloud infrastructure. This ensures that the pipeline runs on AWS, has sensible resource allocation defaults set to run on real-world datasets, and permits the persistent storage of results to benchmark between pipeline releases and other analysis sources. The results obtained from the full-sized test can be viewed on the nf-core website.

Pipeline summary

  1. Merge re-sequenced FastQ files ( cat )

  2. Read QC ( FastQC )

  3. Adapter and quality trimming ( Trim Galore! )

  4. Map reads using BWA-MEM ( BWA )

  5. Sort and index alignments ( SAMtools )

  6. Choice of multiple ecDNA identification routes

    1. Circle-Map ReadExtractor -> Circle-Map Realign

    2. Circle-Map ReadExtractor -> Circle-Map Repeats

    3. CIRCexplorer2

    4. Samblaster -> Circle_finder (does not use the filtered BAM file specified with --keep_duplicates false)

    5. Identification of circular amplicons ( AmpliconArchitect )

    6. De novo assembly of ecDNAs ( Unicycler -> Minimap2 )

  7. Present QC for raw reads ( MultiQC )
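For orientation, steps 4 and 5 boil down to a command of roughly the following shape. This is a sketch only; the file names, index prefix, and flags are illustrative and not the pipeline's exact invocation:

```python
def alignment_command(reads, index, sample, threads=4):
    """Compose a BWA-MEM alignment piped into a coordinate sort,
    mirroring what the pipeline's bwa/samtools modules run."""
    return (
        f"bwa mem -t {threads} {index} {' '.join(reads)} "
        f"| samtools sort -@ {threads} -o {sample}.bam -"
    )

# Hypothetical paired-end input and index prefix:
cmd = alignment_command(["s1_R1.fastq.gz", "s1_R2.fastq.gz"], "bwa/genome", "s1")
print(cmd)
# bwa mem -t 4 bwa/genome s1_R1.fastq.gz s1_R2.fastq.gz | samtools sort -@ 4 -o s1.bam -
```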

Functionality Overview

A graphical view of the pipeline and its diverse branches can be seen below.

(Figure: nf-core/circdna metro map)

Usage

Note: If you are new to Nextflow and nf-core, please refer to this page on how to set up Nextflow. Make sure to test your setup with -profile test before running the workflow on actual data.

First, prepare a samplesheet with your input data that looks as follows:

samplesheet.csv :

FASTQ input data:

sample,fastq_1,fastq_2
CONTROL_REP1,AEG588A1_S1_L002_R1_001.fastq.gz,AEG588A1_S1_L002_R2_001.fastq.gz

BAM input data:

sample,bam
CONTROL_REP1,AEG588A1_S1_L002_R1_001.bam

Each row represents either a pair of FASTQ files (paired-end) or a single BAM file generated from paired-end reads.
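A minimal sketch of how such a samplesheet can be checked before launching a run. The helper below is hypothetical and not part of the pipeline, which performs its own validation via check_samplesheet.py:

```python
import csv
import io

def validate_samplesheet(text):
    """Check each row has a sample name plus either a FASTQ pair or a BAM file."""
    rows = list(csv.DictReader(io.StringIO(text)))
    for row in rows:
        assert row.get("sample"), "missing sample name"
        if "bam" in row:  # BAM-style samplesheet
            assert row["bam"].endswith(".bam"), "expected a .bam file"
        else:             # FASTQ-style samplesheet: paired-end reads required
            assert row["fastq_1"].endswith(".fastq.gz"), "missing R1"
            assert row["fastq_2"].endswith(".fastq.gz"), "missing R2"
    return len(rows)

sheet = """sample,fastq_1,fastq_2
CONTROL_REP1,AEG588A1_S1_L002_R1_001.fastq.gz,AEG588A1_S1_L002_R2_001.fastq.gz
"""
print(validate_samplesheet(sheet))  # 1
```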

Now, you can run the pipeline using:

 nextflow run nf-core/circdna --input samplesheet.csv --outdir <OUTDIR> --genome GRCh38 -profile <docker/singularity/podman/shifter/charliecloud/conda/institute> --circle_identifier <CIRCLE_IDENTIFIER>

Available ecDNA identifiers

Please specify the parameter circle_identifier according to the pipeline branch to be used for circular DNA identification. Please note that some branches/software have only been tested with specific NGS data sets.

Identification of putative ecDNA junctions with ATAC-seq or Circle-seq data

  • circle_finder uses Circle_finder

  • circexplorer2 uses CIRCexplorer2

  • circle_map_realign uses Circle-Map Realign

  • circle_map_repeats uses Circle-Map Repeats for the identification of repetitive ecDNA

Identification of amplified ecDNAs with WGS data

ampliconarchitect uses AmpliconArchitect

De novo assembly of ecDNAs with Circle-seq data

unicycler uses Unicycler for de novo assembly of ecDNAs and Minimap2 for accurate mapping of the identified circular sequences.
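Taken together, the --circle_identifier values map onto tool chains roughly as below. This is a summary sketch assembled from this page; consult the parameter documentation for the authoritative list:

```python
# Approximate mapping of --circle_identifier to the tools each branch runs.
BRANCHES = {
    "circle_finder": ["Samblaster", "Circle_finder"],
    "circexplorer2": ["CIRCexplorer2"],
    "circle_map_realign": ["Circle-Map ReadExtractor", "Circle-Map Realign"],
    "circle_map_repeats": ["Circle-Map ReadExtractor", "Circle-Map Repeats"],
    "ampliconarchitect": ["CNVkit", "AmpliconArchitect", "AmpliconClassifier"],
    "unicycler": ["Unicycler", "Minimap2"],
}

def tools_for(identifier):
    """Return the tool chain run for a given --circle_identifier value."""
    return BRANCHES[identifier]

print(tools_for("unicycler"))  # ['Unicycler', 'Minimap2']
```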

Warning: Please provide pipeline parameters via the CLI or the Nextflow -params-file option. Custom config files, including those provided via the -c Nextflow option, can be used for any configuration except for parameters; see the docs.
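Equivalently, parameters can be collected in a YAML file and passed with -params-file. The values below are a hypothetical example; see the parameter documentation for the full list:

```
input: "samplesheet.csv"
outdir: "./results"
genome: "GRCh38"
circle_identifier: "circle_map_realign"
```

and then run: nextflow run nf-core/circdna -profile docker -params-file params.yml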

For more details, please refer to the usage documentation and the parameter documentation .

Pipeline output

To see the results of a test run with a full-sized dataset, refer to the results tab on the nf-core website pipeline page. For more details about the output files and reports, please refer to the output documentation.

Credits

nf-core/circdna was originally written by Daniel Schreyer , University of Glasgow, Institute of Cancer Sciences, Peter Bailey Lab.

We thank the following people for their extensive assistance in the development of this pipeline:

  • Sébastian Guizard: Review and Discussion of Pipeline

  • Alex Peltzer: Code Review

  • Phil Ewels: Help in setting up the pipeline repository and directing the pipeline development

  • nf-core community: Answering all Nextflow and nf-core related questions

  • Peter Bailey: Discussion of Software and Pipeline Architecture

This pipeline has been developed by Daniel Schreyer as part of the PRECODE project. PRECODE received funding from the European Union’s Horizon 2020 Research and Innovation Program under the Marie Skłodowska-Curie grant agreement No 861196.

Contributions and Support

If you would like to contribute to this pipeline, please see the contributing guidelines .

For further information or help, don't hesitate to get in touch on the Slack #circdna channel (you can join with this invite ).

Citations

If you use nf-core/circdna for your analysis, please cite it using the following doi: 10.5281/zenodo.6685250

An extensive list of references for the tools used by the pipeline can be found in the CITATIONS.md file.

You can cite the nf-core publication as follows:

The nf-core framework for community-curated bioinformatics pipelines.

Philip Ewels, Alexander Peltzer, Sven Fillinger, Harshil Patel, Johannes Alneberg, Andreas Wilm, Maxime Ulysse Garcia, Paolo Di Tommaso & Sven Nahnsen.

Nat Biotechnol. 2020 Feb 13. doi: 10.1038/s41587-020-0439-x .

Code Snippets

"""
export AA_DATA_REPO=${params.aa_data_repo}
export MOSEKLM_LICENSE_FILE=${params.mosek_license_dir}
export AA_SRC=${projectDir}/bin
REF=${params.reference_build}

AmpliconArchitect.py $args \\
    --bam $bam --bed $bed --ref \$REF --out "${prefix}"

cat <<-END_VERSIONS > versions.yml
"${task.process}":
    AmpliconArchitect: \$(echo \$(AmpliconArchitect.py --version 2>&1) | sed 's/AmpliconArchitect version //g')
END_VERSIONS
"""
"""
export AA_DATA_REPO=${params.aa_data_repo}
export MOSEKLM_LICENSE_FILE=${params.mosek_license_dir}
export AA_SRC=${projectDir}/bin
REF=${params.reference_build}

touch "${prefix}.logs.txt"
touch "${prefix}.cycles.txt"
touch "${prefix}.graph.txt"
touch "${prefix}.out"
touch "${prefix}_cnseg.txt"
touch "${prefix}.pdf"
touch "${prefix}.png"
touch "${prefix}_summary.txt"

AmpliconArchitect.py --help

cat <<-END_VERSIONS > versions.yml
"${task.process}":
    AmpliconArchitect: \$(echo \$(AmpliconArchitect.py --version 2>&1) | sed 's/AmpliconArchitect version //g')
END_VERSIONS
"""
"""
REF=${params.reference_build}
export AA_DATA_REPO=${params.aa_data_repo}
export AA_SRC=${projectDir}/bin

amplicon_classifier.py \\
    --ref \$REF \\
    $args \\
    --input $input_file \\
    > ampliconclassifier.classifier_stdout.log

cat <<-END_VERSIONS > versions.yml
"${task.process}":
    AmpliconClassifier: \$(echo \$(amplicon_classifier.py --version | sed 's/amplicon_classifier //g' | sed 's/ .*//g'))
END_VERSIONS
"""
"""
export AA_DATA_REPO=${params.aa_data_repo}
export MOSEKLM_LICENSE_FILE=${params.mosek_license_dir}
export AA_SRC=${projectDir}/bin
REF=${params.reference_build}

touch "ampliconclassifier_amplicon_classification_profiles.tsv"
touch "ampliconclassifier_classifier_stdout.log"

amplicon_classifier.py --help

cat <<-END_VERSIONS > versions.yml
"${task.process}":
    AmpliconClassifier: \$(echo \$(amplicon_classifier.py --version | sed 's/amplicon_classifier //g' | sed 's/ .*//g'))
END_VERSIONS
"""
"""
REF=${params.reference_build}
export AA_DATA_REPO=${params.aa_data_repo}
export AA_SRC=${projectDir}/bin

amplicon_similarity.py \\
    --ref \$REF \\
    $args \\
    --input $input

cat <<-END_VERSIONS > versions.yml
"${task.process}":
    AmpliconClassifier: \$(echo \$(amplicon_classifier.py --version | sed 's/amplicon_classifier //g' | sed 's/ .*//g'))
END_VERSIONS
"""
"""
REF=${params.reference_build}
export AA_DATA_REPO=${params.aa_data_repo}
export AA_SRC=${projectDir}/bin

amplicon_similarity.py --help
touch "ampliconclassifier_similarity_scores.tsv"

cat <<-END_VERSIONS > versions.yml
"${task.process}":
    AmpliconClassifier: \$(echo \$(amplicon_classifier.py --version | sed 's/amplicon_classifier //g' | sed 's/ .*//g'))
END_VERSIONS
"""
"""
make_input.sh ./ ampliconclassifier

cat <<-END_VERSIONS > versions.yml
"${task.process}":
    AmpliconClassifier: \$(echo \$(amplicon_classifier.py --version | sed 's/amplicon_classifier //g' | sed 's/ .*//g'))
END_VERSIONS
"""
"""
touch "ampliconclassifier.input"

cat <<-END_VERSIONS > versions.yml
"${task.process}":
    AmpliconClassifier: \$(echo \$(amplicon_classifier.py --version | sed 's/amplicon_classifier //g' | sed 's/ .*//g'))
END_VERSIONS
"""
"""
# Create subdirectories in working directory
mkdir ampliconclassifier_classification_bed_files
mv $bed_files ampliconclassifier_classification_bed_files/

make_results_table.py \\
    $args \\
    --input $input_file \\
    --classification_file $class_file

cat <<-END_VERSIONS > versions.yml
"${task.process}":
    AmpliconClassifier: \$(echo \$(amplicon_classifier.py --version | sed 's/amplicon_classifier //g' | sed 's/ .*//g'))
END_VERSIONS
"""
"""
make_results_table.py --help

touch ampliconclassifier_result_data.json
touch ampliconclassifier_result_table.tsv
touch index.html

cat <<-END_VERSIONS > versions.yml
"${task.process}":
    AmpliconClassifier: \$(echo \$(amplicon_classifier.py --version | sed 's/amplicon_classifier //g' | sed 's/ .*//g'))
END_VERSIONS
"""
"""
export AA_DATA_REPO=${params.aa_data_repo}
export MOSEKLM_LICENSE_FILE=${params.mosek_license_dir}
export AA_SRC=${projectDir}/bin
REF=${params.reference_build}

PrepareAA.py \\
    $args \\
    -s $prefix \\
    -t $task.cpus \\
    --cnv_bed $cns \\
    --sorted_bam $bam \\
    --cngain $cngain \\
    --ref $ref

cat <<-END_VERSIONS > versions.yml
"${task.process}":
    prepareaa: \$(echo \$(PrepareAA.py --version) | sed 's/^.*PrepareAA version //')
END_VERSIONS
"""
"""
export AA_DATA_REPO=${params.aa_data_repo}
export MOSEKLM_LICENSE_FILE=${params.mosek_license_dir}
REF=${params.reference_build}

touch "${prefix}_CNV_SEEDS.bed"
touch "${prefix}.log"
touch "${prefix}.run_metadata.json"
touch "${prefix}.sample_metadata.json"
touch "${prefix}.timing_log.txt"
touch "${prefix}_summary.txt"

PrepareAA.py --help

cat <<-END_VERSIONS > versions.yml
"${task.process}":
    prepareaa: \$(echo \$(PrepareAA.py --version) | sed 's/^.*PrepareAA version //')
END_VERSIONS
"""
"""
export AA_DATA_REPO=${params.aa_data_repo}
export MOSEKLM_LICENSE_FILE=${params.mosek_license_dir}
REF=${params.reference_build}

amplified_intervals.py \\
    $args \\
    --bed $bed \\
    --out ${prefix}_AA_CNV_SEEDS \\
    --bam $bam \\
    --gain $cngain \\
    --ref $ref

cat <<-END_VERSIONS > versions.yml
"${task.process}":
    python: \$(python --version 2>&1 | sed 's/Python //g')
END_VERSIONS
"""
"""
export AA_DATA_REPO=${params.aa_data_repo}
export MOSEKLM_LICENSE_FILE=${params.mosek_license_dir}
REF=${params.reference_build}

touch ${prefix}_AA_CNV_SEEDS.bed

cat <<-END_VERSIONS > versions.yml
"${task.process}":
    python: \$(python --version 2>&1 | sed 's/Python //g')
END_VERSIONS
"""
"""
bedtools bamtobed $args -i $sorted_bam | \
    sed -e 's/\\// /g' | \
    awk '{printf ("%s\t%d\t%d\t%s\t%d\t%d\t%s\t%s\\n",\$1,\$2,\$3,\$4,\$5,\$6,\$7,\$8)}' > '${prefix}.concordant.txt'

cat <<-END_VERSIONS > versions.yml
"${task.process}":
    bedtools: \$(bedtools --version | sed -e "s/bedtools v//g")
END_VERSIONS
"""
"""
bedtools bamtobed $args -i $split_bam | \
    sed -e 's/_2\\/2/ 2/g' | \
    sed -e 's/_1\\/1/ 1/g' |
    awk '{printf "%s\t%d\t%d\t%s\t%d\t%d\t%s\t%s\\n", \$1, \$2, \$3, \$4, \$5, \$6, \$7, \$8}' |
    awk 'BEGIN{FS=OFS="\t"} {gsub("M", " M ", \$8)} 1' | \
    awk 'BEGIN{FS=OFS="\t"} {gsub("S", " S ", \$8)} 1' | \
    awk 'BEGIN{FS=OFS="\t"} {gsub("H", " H ", \$8)} 1' | \
    awk 'BEGIN{FS=OFS=" "} {if ((\$9=="M" && \$NF=="H") || \
    (\$9=="M" && \$NF=="S"))  {printf ("%s\tfirst\\n",\$0)} else if ((\$9=="S" && \$NF=="M") || \
    (\$9=="H" && \$NF=="M")) {printf ("%s\tsecond\\n",\$0)} }' | \
    awk 'BEGIN{FS=OFS="\t"} {gsub(" ", "", \$8)} 1' > '${prefix}.txt'

# Software Version
cat <<-END_VERSIONS > versions.yml
"${task.process}":
    bedtools: \$(bedtools --version | sed -e "s/bedtools v//g")
END_VERSIONS
"""
"""
INDEX=`find -L ./ -name "*.amb" | sed 's/\\.amb\$//'`

bwa mem \\
    $args \\
    -t $task.cpus \\
    \$INDEX \\
    $reads \\
    | samtools $samtools_command $args2 --threads $task.cpus -o ${prefix}.bam -

cat <<-END_VERSIONS > versions.yml
"${task.process}":
    bwa: \$(echo \$(bwa 2>&1) | sed 's/^.*Version: //; s/Contact:.*\$//')
    samtools: \$(echo \$(samtools --version 2>&1) | sed 's/^.*samtools //; s/Using.*\$//')
END_VERSIONS
"""
"""
touch ${prefix}.bam

cat <<-END_VERSIONS > versions.yml
"${task.process}":
    bwa: \$(echo \$(bwa 2>&1) | sed 's/^.*Version: //; s/Contact:.*\$//')
    samtools: \$(echo \$(samtools --version 2>&1) | sed 's/^.*samtools //; s/Using.*\$//')
END_VERSIONS
"""
Nextflow: from line 49 of mem/main.nf
"""
CIRCexplorer2 parse $args $bam -b ${prefix}.temp.bed > ${prefix}_CIRCexplorer2_parse.log
cat ${prefix}.temp.bed | tr "/" "\t" > ${prefix}.circexplorer_circdna.bed
rm ${prefix}.temp.bed

cat <<-END_VERSIONS > versions.yml
"${task.process}":
    CIRCexplorer2: \$(echo \$(CIRCexplorer2 --version))
END_VERSIONS
"""
    """
    #!/usr/bin/env bash

    # Function to output an error if files do not exist
    file_exists () {
        [[ ! -s \$1 ]] && \
        echo "
    ERROR - CIRCLE_FINDER - $prefix
    ===================================
    Error \$1 does not exist or is empty.
    Stopped circle_finder.
    No circular DNA was identified.
    ===================================
    " > ${prefix}.circle_finder_exit_log.txt && exit
    }

    awk '{print \$4}' ${split} | sort -T ./ | uniq -c > ${prefix}.split.id-freq.txt
    #This file "${prefix}.split.id-freq.txt" will be used for collecting split id that have frequency equal to 4.
    awk '\$1=="2" {print \$2}' ${prefix}.split.id-freq.txt > ${prefix}.split.id-freq2.txt
    # awk '\$1=="4" {print \$2}' ${prefix}.split.id-freq.txt > ${prefix}.split.id-freq4.txt

    awk '{print \$4}' ${concordant} | sort -T ./ | uniq -c > ${prefix}.concordant.id-freq.txt
    #The following command will chose (may not be always true) one concordant and 2 split read

    awk '\$1=="3" {print \$2}' ${prefix}.concordant.id-freq.txt > ${prefix}.concordant.id-freq3.txt
    # awk '\$1>3 {print \$2}' ${prefix}.concordant.id-freq.txt > ${prefix}.concordant.id-freqGr3.txt

    file_exists ${prefix}.concordant.id-freq3.txt

    grep -w -Ff ${prefix}.split.id-freq2.txt ${split} > ${prefix}.split_freq2.txt
    # grep -w -Ff ${prefix}.split.id-freq4.txt ${split} > ${prefix}.split_freq4.txt

    file_exists ${prefix}.split_freq2.txt

    #Selecting concordant pairs that were 1) mapped uniquely and 2) mapped on more than one loci (file "freqGr3.txt")
    grep -w -Ff ${prefix}.concordant.id-freq3.txt ${concordant} > ${prefix}.concordant_freq3.txt
#    grep -w -Ff ${prefix}.concordant.id-freqGr3.txt ${concordant} > ${prefix}.concordant_freqGr3.txt

    file_exists ${prefix}.concordant_freq3.txt

    #Step 7: Putting split read with same id in one line
    sed 'N;s/\\n/\\t/' ${prefix}.split_freq2.txt > ${prefix}.split_freq2.oneline.txt

    file_exists ${prefix}.split_freq2.oneline.txt

    #Step 8: Split reads map on same chromosome and map on same strand. Finally extracting id (split read same chromosome, split read same strand), collecting all the split reads that had quality >0
    awk '\$1==\$10 && \$7==\$16 && \$6>0 && \$15>0 {print \$4} ' ${prefix}.split_freq2.oneline.txt > \
        ${prefix}.split_freq2.oneline.S-R-S-CHR-S-ST.ID.txt

    file_exists ${prefix}.split_freq2.oneline.S-R-S-CHR-S-ST.ID.txt

    #Step 9: Based on unique id I am extracting one continuously mapped reads and their partner mapped as split read (3 lines for each id)
    grep -w -Ff "${prefix}.split_freq2.oneline.S-R-S-CHR-S-ST.ID.txt" "${prefix}.concordant_freq3.txt" > \
        "${prefix}.concordant_freq3.2SPLIT-1M.txt"

    file_exists ${prefix}.concordant_freq3.2SPLIT-1M.txt

    #Step 10: Sorting based on read-id followed by length of mapped reads.
    awk 'BEGIN{FS=OFS="\\t"} {gsub("M", " M ", \$8)} 1' ${prefix}.concordant_freq3.2SPLIT-1M.txt | \
        awk 'BEGIN{FS=OFS="\\t"} {gsub("S", " S ", \$8)} 1' | \
        awk 'BEGIN{FS=OFS="\\t"} {gsub("H", " H ", \$8)} 1' | \
        awk 'BEGIN{FS=OFS=" "} {if ((\$9=="M" && \$NF=="H") || \
            (\$9=="M" && \$NF=="S"))  {printf ("%s\\tfirst\\n",\$0)} \
            else if ((\$9=="S" && \$NF=="M") || (\$9=="H" && \$NF=="M")) {printf ("%s\\tsecond\\n",\$0)} \
            else  {printf ("%s\\tconfusing\\n",\$0)}}' | \
        awk 'BEGIN{FS=OFS="\\t"} {gsub(" ", "", \$8)} 1' | \
        awk '{printf ("%s\\t%d\\n",\$0,(\$3-\$2)+1)}' | \
        sort -T ./ -k4,4 -k10,10n | sed 'N;N;s/\\n/\\t/g' | \
        awk '{if (\$5==\$15) {print \$0}  \
            else if ((\$5=="1" && \$15=="2" && \$25=="1") || (\$5=="2" && \$15=="1" && \$25=="2")) \
                {printf ("%s\\t%d\\t%d\\t%s\\t%d\\t%d\\t%s\\t%s\\t%s\\t%d\\t%s\\t%d\\t%d\\t%s\\t%d\\t%d\\t%s\\t%s\\t%s\\t%d\\t%s\\t%d\\t%d\\t%s\\t%d\\t%d\\t%s\\t%s\\t%s\\t%d\\n", \$1,\$2,\$3,\$4,\$5,\$6,\$7,\$8,\$9,\$10,\$21,\$22,\$23,\$24,\$25,\$26,\$27,\$28,\$29,\$30,\$11,\$12,\$13,\$14,\$15,\$16,\$17,\$18,\$19,\$20)} \
            else if ((\$5=="1" && \$15=="2" && \$25=="2") || (\$5=="2" && \$15=="1" && \$25=="1")) \
            {printf ("%s\\t%d\\t%d\\t%s\\t%d\\t%d\\t%s\\t%s\\t%s\\t%d\\t%s\\t%d\\t%d\\t%s\\t%d\\t%d\\t%s\\t%s\\t%s\\t%d\\t%s\\t%d\\t%d\\t%s\\t%d\\t%d\\t%s\\t%s\\t%s\\t%d\\n", \$11,\$12,\$13,\$14,\$15,\$16,\$17,\$18,\$19,\$20,\$21,\$22,\$23,\$24,\$25,\$26,\$27,\$28,\$29,\$30,\$1,\$2,\$3,\$4,\$5,\$6,\$7,\$8,\$9,\$10)} }' \
        > ${prefix}.concordant_freq3.2SPLIT-1M.inoneline.txt

    file_exists ${prefix}.concordant_freq3.2SPLIT-1M.inoneline.txt

    #Step 11: Unique number of microDNA with number of split reads
    awk '\$1==\$11 && \$1==\$21 && \$7==\$17 && length(\$8)<=12 && length(\$18)<=12 && length(\$28)<=12'  ${prefix}.concordant_freq3.2SPLIT-1M.inoneline.txt | \
        awk '(\$7=="+" && \$27=="-") || (\$7=="-" && \$27=="+")' | \
        awk '{if (\$17=="+" && \$19=="second" && \$12<\$2 && \$22>=\$12 && \$23<=\$3) {printf ("%s\\t%d\\t%d\\n",\$1,\$12,\$3)} \
            else if (\$7=="+" && \$9=="second" && \$2<\$12 && \$22>=\$2 && \$23<=\$13) {printf ("%s\\t%d\\t%d\\n",\$1,\$2,\$13)} \
            else if (\$17=="-" && \$19=="second" && \$12<\$2 && \$22>=\$12 && \$23<=\$3) {printf ("%s\\t%d\\t%d\\n",\$1,\$12,\$3)} \
        else if (\$7=="-" && \$9=="second" && \$2<\$12 && \$22>=\$2 && \$23<=\$13) {printf ("%s\\t%d\\t%d\\n",\$1,\$2,\$13)} }' | \
        sort -T ./ | uniq -c | awk '{printf ("%s\\t%d\\t%d\\t%d\\n",\$2,\$3,\$4,\$1)}' > ${prefix}.microDNA-JT.txt
    """
"""
circle_map.py \\
    ReadExtractor -i $qname_bam \\
    -o ${prefix}.circular_read_candidates.bam

cat <<-END_VERSIONS > versions.yml
"${task.process}":
    Circle-Map: \$(echo \$(circle_map.py --help 2>&1 | grep -o "version=[0-9].[0-9].[0-9]" | sed 's/version=//g'))
END_VERSIONS
"""
"""
circle_map.py \\
    Realign \\
    $args \\
    -i $re_bam \\
    -qbam $qname \\
    -sbam $sbam \\
    -fasta $fasta \\
    --threads $task.cpus \\
    -o ${prefix}_circularDNA_coordinates.bed

cat <<-END_VERSIONS > versions.yml
"${task.process}":
    Circle-Map: \$(echo \$(circle_map.py --help 2>&1 | grep -o "version=[0-9].[0-9].[0-9]" | sed 's/version=//g'))
END_VERSIONS
"""
"""
circle_map.py \\
    Repeats \\
    $args \\
    -i $bam \\
    -o ${prefix}_circularDNA_repeats_coordinates.bed

cat <<-END_VERSIONS > versions.yml
"${task.process}":
    Circle-Map: \$(echo \$(circle_map.py --help 2>&1 | grep -o "version=[0-9].[0-9].[0-9]" | sed 's/version=//g'))
END_VERSIONS
"""
"""
cnvkit.py \\
    batch \\
    $bam \\
    $fasta_args \\
    $reference_args \\
    --processes $task.cpus \\
    $args

cat <<-END_VERSIONS > versions.yml
"${task.process}":
    cnvkit: \$(cnvkit.py version | sed -e "s/cnvkit v//g")
END_VERSIONS
"""
"""
touch ${prefix}.bed
touch ${prefix}.cnn
touch ${prefix}.cnr
touch ${prefix}.cns

cat <<-END_VERSIONS > versions.yml
"${task.process}":
    cnvkit: \$(cnvkit.py version | sed -e "s/cnvkit v//g")
END_VERSIONS
"""
"""
cnvkit.py \\
    segment \\
    $cnr \\
    -p $task.cpus \\
    -m "cbs" \\
    -o ${prefix}.cnvkit.segment.cns
cat <<-END_VERSIONS > versions.yml
"${task.process}":
    cnvkit: \$(cnvkit.py version | sed -e "s/cnvkit v//g")
END_VERSIONS
"""
"""
touch ${prefix}.cnvkit.segment.cns

cat <<-END_VERSIONS > versions.yml
"${task.process}":
    cnvkit: \$(cnvkit.py version | sed -e "s/cnvkit v//g")
END_VERSIONS
"""
"""
collect_seeds.py \\
    --sample $prefix \\
    --cns $cns \\
    --cngain $cngain

cat <<-END_VERSIONS > versions.yml
"${task.process}":
    python: \$(python --version | sed 's/Python //g')
END_VERSIONS
"""
"""
export AA_DATA_REPO=${params.aa_data_repo}
export MOSEKLM_LICENSE_FILE=${params.mosek_license_dir}
REF=${params.reference_build}

touch ${prefix}.bed

cat <<-END_VERSIONS > versions.yml
"${task.process}":
    python: \$(python --version | sed 's/Python //g')
END_VERSIONS
"""
"""
zcat $fastq > temp.fastq
if grep -q "circular=true" temp.fastq; then
    cat temp.fastq | grep -A3 "circular=true" | \\
        grep -v "^--" | \\
        gzip --no-name > \\
        ${prefix}.fastq.gz
fi
rm temp.fastq

cat <<-END_VERSIONS > versions.yml
"${task.process}":
    cat: \$(echo \$(cat --version 2>&1) | sed 's/^.*coreutils) //; s/ .*\$//')
END_VERSIONS
"""
"""
multiqc \\
    -f \\
    $args \\
    $custom_config \\
    .
cat <<-END_VERSIONS > versions.yml
"${task.process}":
    multiqc: \$( multiqc --version | sed -e "s/multiqc, version //g" )
END_VERSIONS
"""
"""
touch multiqc_data
touch multiqc_plots
touch multiqc_report.html

cat <<-END_VERSIONS > versions.yml
"${task.process}":
    multiqc: \$( multiqc --version | sed -e "s/multiqc, version //g" )
END_VERSIONS
"""
"""
samtools \\
    view \\
    -h \\
    -@ $task.cpus \\
    $bam |
samblaster \\
    $args \\
    -s ${prefix}.split.sam \\
    > /dev/null

samtools view \\
    -@ $task.cpus \\
    -o ${prefix}.split.bam \\
    -bS ${prefix}.split.sam

rm ${prefix}.split.sam

cat <<-END_VERSIONS > versions.yml
"${task.process}":
    samblaster: \$(echo \$(samblaster --version 2>&1) | sed 's/samblaster: Version //g')
END_VERSIONS
"""
"""
check_samplesheet.py \\
    $samplesheet \\
    samplesheet.valid.csv \\
    $params.input_format

cat <<-END_VERSIONS > versions.yml
"${task.process}":
    python: \$(python --version | sed 's/Python //g')
END_VERSIONS
"""
"""
seqtk \\
    seq \\
    $args \\
    -F "#" \\
    $fasta | \\
    gzip -c > ${prefix}.fastq.gz

cat <<-END_VERSIONS > versions.yml
"${task.process}":
    seqtk: \$(echo \$(seqtk 2>&1) | sed 's/^.*Version: //; s/ .*\$//')
END_VERSIONS
"""
"""
unicycler \\
    --threads $task.cpus \\
    $args \\
    $short_reads \\
    --out ./

mv assembly.fasta ${prefix}.scaffolds.fa
gzip -n ${prefix}.scaffolds.fa
mv assembly.gfa ${prefix}.assembly.gfa
gzip -n ${prefix}.assembly.gfa
mv unicycler.log ${prefix}.unicycler.log

cat <<-END_VERSIONS > versions.yml
"${task.process}":
    unicycler: \$(echo \$(unicycler --version 2>&1) | sed 's/^.*Unicycler v//; s/ .*\$//')
END_VERSIONS
"""
"""
mkdir bwa
bwa \\
    index \\
    $args \\
    -p bwa/${fasta.baseName} \\
    $fasta

cat <<-END_VERSIONS > versions.yml
"${task.process}":
    bwa: \$(echo \$(bwa 2>&1) | sed 's/^.*Version: //; s/Contact:.*\$//')
END_VERSIONS
"""
"""
mkdir bwa

touch bwa/genome.amb
touch bwa/genome.ann
touch bwa/genome.bwt
touch bwa/genome.pac
touch bwa/genome.sa

cat <<-END_VERSIONS > versions.yml
"${task.process}":
    bwa: \$(echo \$(bwa 2>&1) | sed 's/^.*Version: //; s/Contact:.*\$//')
END_VERSIONS
"""
"""
cat ${readList.join(' ')} > ${prefix}.merged.fastq.gz

cat <<-END_VERSIONS > versions.yml
"${task.process}":
    cat: \$(echo \$(cat --version 2>&1) | sed 's/^.*coreutils) //; s/ .*\$//')
END_VERSIONS
"""
Nextflow: from line 26 of fastq/main.nf
"""
cat ${read1.join(' ')} > ${prefix}_1.merged.fastq.gz
cat ${read2.join(' ')} > ${prefix}_2.merged.fastq.gz

cat <<-END_VERSIONS > versions.yml
"${task.process}":
    cat: \$(echo \$(cat --version 2>&1) | sed 's/^.*coreutils) //; s/ .*\$//')
END_VERSIONS
"""
Nextflow: from line 40 of fastq/main.nf
"""
touch ${prefix}.merged.fastq.gz

cat <<-END_VERSIONS > versions.yml
"${task.process}":
    cat: \$(echo \$(cat --version 2>&1) | sed 's/^.*coreutils) //; s/ .*\$//')
END_VERSIONS
"""
Nextflow: from line 57 of fastq/main.nf
"""
touch ${prefix}_1.merged.fastq.gz
touch ${prefix}_2.merged.fastq.gz

cat <<-END_VERSIONS > versions.yml
"${task.process}":
    cat: \$(echo \$(cat --version 2>&1) | sed 's/^.*coreutils) //; s/ .*\$//')
END_VERSIONS
"""
Nextflow: from line 68 of fastq/main.nf
"""
printf "%s %s\\n" $rename_to | while read old_name new_name; do
    [ -f "\${new_name}" ] || ln -s \$old_name \$new_name
done
fastqc $args --threads $task.cpus $renamed_files

cat <<-END_VERSIONS > versions.yml
"${task.process}":
    fastqc: \$( fastqc --version | sed -e "s/FastQC v//g" )
END_VERSIONS
"""
"""
touch ${prefix}.html
touch ${prefix}.zip

cat <<-END_VERSIONS > versions.yml
"${task.process}":
    fastqc: \$( fastqc --version | sed -e "s/FastQC v//g" )
END_VERSIONS
"""
"""
minimap2 \\
    $args \\
    -t $task.cpus \\
    "${reference ?: reads}" \\
    "$reads" \\
    $cigar_paf \\
    $set_cigar_bam \\
    $bam_output


cat <<-END_VERSIONS > versions.yml
"${task.process}":
    minimap2: \$(minimap2 --version 2>&1)
END_VERSIONS
"""
"""
picard \\
    -Xmx${avail_mem}M \\
    MarkDuplicates \\
    $args \\
    --INPUT $bam \\
    --OUTPUT ${prefix}.bam \\
    --REFERENCE_SEQUENCE $fasta \\
    --METRICS_FILE ${prefix}.MarkDuplicates.metrics.txt

cat <<-END_VERSIONS > versions.yml
"${task.process}":
    picard: \$(echo \$(picard MarkDuplicates --version 2>&1) | grep -o 'Version:.*' | cut -f2- -d:)
END_VERSIONS
"""
"""
touch ${prefix}.bam
touch ${prefix}.bam.bai
touch ${prefix}.MarkDuplicates.metrics.txt

cat <<-END_VERSIONS > versions.yml
"${task.process}":
    picard: \$(echo \$(picard MarkDuplicates --version 2>&1) | grep -o 'Version:.*' | cut -f2- -d:)
END_VERSIONS
"""
"""
samtools \\
    faidx \\
    $fasta \\
    $args

cat <<-END_VERSIONS > versions.yml
"${task.process}":
    samtools: \$(echo \$(samtools --version 2>&1) | sed 's/^.*samtools //; s/Using.*\$//')
END_VERSIONS
"""
"""
touch ${fasta}.fai
cat <<-END_VERSIONS > versions.yml
"${task.process}":
    samtools: \$(echo \$(samtools --version 2>&1) | sed 's/^.*samtools //; s/Using.*\$//')
END_VERSIONS
"""
Nextflow: from line 38 of faidx/main.nf
"""
samtools \\
    flagstat \\
    --threads ${task.cpus} \\
    $bam \\
    > ${prefix}.flagstat

cat <<-END_VERSIONS > versions.yml
"${task.process}":
    samtools: \$(echo \$(samtools --version 2>&1) | sed 's/^.*samtools //; s/Using.*\$//')
END_VERSIONS
"""
"""
samtools \\
    idxstats \\
    --threads ${task.cpus-1} \\
    $bam \\
    > ${prefix}.idxstats

cat <<-END_VERSIONS > versions.yml
"${task.process}":
    samtools: \$(echo \$(samtools --version 2>&1) | sed 's/^.*samtools //; s/Using.*\$//')
END_VERSIONS
"""
"""
samtools \\
    index \\
    -@ ${task.cpus-1} \\
    $args \\
    $input

cat <<-END_VERSIONS > versions.yml
"${task.process}":
    samtools: \$(echo \$(samtools --version 2>&1) | sed 's/^.*samtools //; s/Using.*\$//')
END_VERSIONS
"""
"""
touch ${input}.bai
touch ${input}.crai
touch ${input}.csi

cat <<-END_VERSIONS > versions.yml
"${task.process}":
    samtools: \$(echo \$(samtools --version 2>&1) | sed 's/^.*samtools //; s/Using.*\$//')
END_VERSIONS
"""
Nextflow: from line 38 of index/main.nf
"""
samtools sort \\
    $args \\
    -@ $task.cpus \\
    -m ${sort_memory}M \\
    -o ${prefix}.bam \\
    -T $prefix \\
    $bam

cat <<-END_VERSIONS > versions.yml
"${task.process}":
    samtools: \$(echo \$(samtools --version 2>&1) | sed 's/^.*samtools //; s/Using.*\$//')
END_VERSIONS
"""
"""
touch ${prefix}.bam

cat <<-END_VERSIONS > versions.yml
"${task.process}":
    samtools: \$(echo \$(samtools --version 2>&1) | sed 's/^.*samtools //; s/Using.*\$//')
END_VERSIONS
"""
Nextflow: from line 43 of sort/main.nf
"""
samtools \\
    stats \\
    --threads ${task.cpus} \\
    ${reference} \\
    ${input} \\
    > ${prefix}.stats

cat <<-END_VERSIONS > versions.yml
"${task.process}":
    samtools: \$(echo \$(samtools --version 2>&1) | sed 's/^.*samtools //; s/Using.*\$//')
END_VERSIONS
"""
"""
touch ${prefix}.stats

cat <<-END_VERSIONS > versions.yml
"${task.process}":
    samtools: \$(echo \$(samtools --version 2>&1) | sed 's/^.*samtools //; s/Using.*\$//')
END_VERSIONS
"""
Nextflow: from line 41 of stats/main.nf
"""
samtools \\
    view \\
    --threads ${task.cpus-1} \\
    ${reference} \\
    ${readnames} \\
    $args \\
    -o ${prefix}.${file_type} \\
    $input \\
    $args2

cat <<-END_VERSIONS > versions.yml
"${task.process}":
    samtools: \$(echo \$(samtools --version 2>&1) | sed 's/^.*samtools //; s/Using.*\$//')
END_VERSIONS
"""
"""
touch ${prefix}.bam
touch ${prefix}.cram

cat <<-END_VERSIONS > versions.yml
"${task.process}":
    samtools: \$(echo \$(samtools --version 2>&1) | sed 's/^.*samtools //; s/Using.*\$//')
END_VERSIONS
"""
Nextflow: from line 57 of view/main.nf
"""
[ ! -f  ${prefix}.fastq.gz ] && ln -s $reads ${prefix}.fastq.gz
trim_galore \\
    ${args_list.join(' ')} \\
    --cores $cores \\
    --gzip \\
    ${prefix}.fastq.gz

cat <<-END_VERSIONS > versions.yml
"${task.process}":
    trimgalore: \$(echo \$(trim_galore --version 2>&1) | sed 's/^.*version //; s/Last.*\$//')
    cutadapt: \$(cutadapt --version)
END_VERSIONS
"""
"""
[ ! -f  ${prefix}_1.fastq.gz ] && ln -s ${reads[0]} ${prefix}_1.fastq.gz
[ ! -f  ${prefix}_2.fastq.gz ] && ln -s ${reads[1]} ${prefix}_2.fastq.gz
trim_galore \\
    $args \\
    --cores $cores \\
    --paired \\
    --gzip \\
    ${prefix}_1.fastq.gz \\
    ${prefix}_2.fastq.gz

cat <<-END_VERSIONS > versions.yml
"${task.process}":
    trimgalore: \$(echo \$(trim_galore --version 2>&1) | sed 's/^.*version //; s/Last.*\$//')
    cutadapt: \$(cutadapt --version)
END_VERSIONS
"""
URL: https://nf-co.re/circdna
Name: circdna
Version: 1.0.4