MEDIPIPE: (cf)MeDIP-seq Data QC and Analysis Pipeline (v1.1.0)


Introduction

MEDIPIPE is designed for automated end-to-end quality control (QC) and analysis of (cf)MeDIP-seq data. The pipeline takes raw FASTQ files all the way through QC, alignment, methylation quantification and aggregation. It was developed by Yong Zeng and builds on prior work by Wenbin Ye and Eric Zhao.

Features

  • Portability: MEDIPIPE was developed with Snakemake, which automatically deploys the required execution environments. Users can also pin the versions of software to be installed in YAML files (see the sketch after this list). The pipeline runs on stand-alone machines as well as across different cluster engines (e.g., SLURM).

  • Flexibility: MEDIPIPE handles single-end or paired-end reads, with or without spike-in and/or UMI sequences. It can be applied to individual samples, as well as to aggregate multiple samples from large-scale profiling.
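
For example, pinning software versions in a Conda environment YAML looks like the sketch below. The package list is illustrative only, not the pipeline's actual conda_env.yaml:

    name: MEDIPIPE
    channels:
      - conda-forge
      - bioconda
    dependencies:
      # versions pinned for reproducibility (illustrative selection)
      - snakemake-minimal=7.32.4
      - samtools=1.17
      - bwa=0.7.17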

Citation

Yong Zeng and others, MEDIPIPE: an automated and comprehensive pipeline for cfMeDIP-seq data quality control and analysis, Bioinformatics, Volume 39, Issue 7, July 2023, btad423, https://doi.org/10.1093/bioinformatics/btad423

How it works

This schematic diagram shows how the pipeline works: [Schematic diagram]

Installation

  1. Make sure that you have a Conda-based Python 3 distribution (e.g., Miniconda). The Miniconda3-py38_23.3.1-0-Linux-x86_64.sh installer for Linux is preferred to avoid potential conflicts. Installing Mamba is also recommended:

    $ conda install -n base -c conda-forge mamba
    
  2. Clone this pipeline with Git.

    $ cd
    $ git clone https://github.com/pughlab/MEDIPIPE
    
  3. Install the pipeline's core environment

    $ cd MEDIPIPE
    $ conda activate base
    $ mamba env create --file conda_env.yaml
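
    Once the environment is created, you can activate it and confirm that Snakemake is available (a minimal sanity check; MEDIPIPE is the environment name used by the test run below):

    $ conda activate MEDIPIPE
    $ snakemake --version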
    
  4. Test run

    IMPORTANT: EXTRA ENVIRONMENTS WILL BE INSTALLED, SO MAKE SURE YOU STILL HAVE INTERNET ACCESS.

    • Step 1: Prepare the reference, sample FASTQ and aggregation files according to the templates.

    • Step 2: Specify the input configuration file by following the instructions here. An illustrative sketch of such a config appears after this step.

    • NOTE: For the test run, you can simply use the sed command below to fill in the paths for Steps 1 and 2. The test dataset is kept small so that the extra environments install swiftly; as a result, it will fail to fit the sigmoid model in MeDEStrand due to low read depth, but you can still find the other results in ./test/Res.

    $ sed -i 's,/path/to,'"$PWD"',g' ./test/*template.*
    $ conda activate MEDIPIPE
    $ snakemake --snakefile ./workflow/Snakefile \
     --configfile ./test/config_template.yaml \
     --conda-prefix ${CONDA_PREFIX}_extra_env \
     --use-conda --cores 4 -p
    
    • Step 3 (Optional): You can perform a full-scale test run on another toy dataset by following Steps 1 and 2.
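
    For orientation, the configuration file maps the template paths to your own data. The fragment below is a hypothetical sketch; the key names are placeholders, so consult ./test/config_template.yaml for the actual schema:

      ## illustrative keys only; see ./test/config_template.yaml for the real schema
      sample_manifest: /path/to/test/fastq_template.tsv   ## hypothetical key and path
      reference_dir: /path/to/test/reference              ## hypothetical key and path
      paired_end: True
      spike_in: False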
  5. Run on HPCs

    You can also submit this pipeline to clusters with the template ./workflow/sbatch_Snakefile_template.sh. This template targets SLURM; however, it can be adapted to other resource management systems. More details about cluster configuration can be found here.

    ## Test run by SLURM submission
    $ sed -i 's,/path/to,'"$PWD"',g' ./workflow/sbatch_Snakefile_template.sh # replace PATHs for testing
    $ sbatch ./workflow/sbatch_Snakefile_template.sh
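
    If you need to write your own wrapper, a minimal SLURM script of this shape would work. The #SBATCH resource values below are placeholders, not the template's actual settings:

    #!/bin/bash
    #SBATCH -J MEDIPIPE             ## job name (placeholder)
    #SBATCH -t 24:00:00             ## walltime (placeholder)
    #SBATCH -c 4                    ## CPUs (placeholder)
    #SBATCH --mem=16G               ## memory (placeholder)

    ## make conda available in the batch shell, then activate the env
    source $(conda info --base)/etc/profile.d/conda.sh
    conda activate MEDIPIPE

    snakemake --snakefile ./workflow/Snakefile \
        --configfile ./test/config_template.yaml \
        --conda-prefix ${CONDA_PREFIX}_extra_env \
        --use-conda --cores 4 -p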
    

Assets and Troubleshooting

Several scripts are enclosed in the folder assets, allowing you to download/build the reference index and manifest table, to forge the BSgenome package for spike-in controls, and to filter regions for the fragment-profile calculation. Please also see this document for troubleshooting; it will be kept updated with errors reported by users.
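
As background, forging a BSgenome package follows the standard Bioconductor recipe sketched below. The seed-file and package names are placeholders; the pipeline's actual commands live in the assets scripts:

    ## placeholder names; see the assets scripts for the real seed files
    $ Rscript -e 'BSgenome::forgeBSgenomeDataPkg("BSgenome.spikein-seed.txt")'
    $ R CMD build BSgenome.spikein
    $ R CMD INSTALL BSgenome.spikein_1.0.0.tar.gz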

Code Snippets

The following shell fragments are excerpted from the workflow's Snakemake rules.

## aggregate per-sample QC reports (paired-end) with MultiQC
shell:
    "(multiqc -f {input} -o aggregated/QC_pe/) 2> {log}"
## aggregate spike-in QC reports (paired-end) with MultiQC
shell:
    "(multiqc -f {input} -o aggregated_spikein/QC_pe/) 2> {log}"
## aggregate per-sample QC reports (single-end) with MultiQC
shell:
    "(multiqc -f {input} -o aggregated/QC_se/) 2> {log}"
## merge two-line per-sample tables: keep the header from the first file,
## then drop every remaining header (sed '1~2d' deletes the odd lines of the concatenation)
shell:
    "head -n 1 {input[0]} > {output} && "
    "cat {input} | sed '1~2d' >> {output}"
## same header-merge pattern as above for a second set of tables
shell:
    "head -n 1 {input[0]} > {output} && "
    "cat {input} | sed '1~2d' >> {output}"
## render the aggregated QC report
shell:
    "(Rscript --vanilla {params.scr_dir}/workflow/scripts/aggr_qc_report.R "
    "{params.aggr} {params.ispaired} {input} {params.scr_dir} {params.out_dir}) 2> {log}"
## prepend genomic bin coordinates to the two binned matrices
shell:
    "(cp {input.bin1} {output.bin1} && paste {output.bin1} {input.mb1}  >  {output.mb1} && "
    " cp {input.bin5} {output.bin5} && paste {output.bin5} {input.mb5}  >  {output.mb5}) 2> {log}"
## attach bin coordinates to each quantification matrix,
## then bgzip-compress and tabix-index the result as BED
shell:
    "(cp {input.bin}  {output.bin} && "
    "paste {output.bin} {input.cnt}  | bgzip > {output.cnt} && tabix -p bed {output.cnt} && "
    "paste {output.bin} {input.rpkm} | bgzip > {output.rpkm} && tabix -p bed {output.rpkm} && "
    "paste {output.bin} {input.CNV_qsea}   |  bgzip > {output.CNV_qsea} && tabix -p bed {output.CNV_qsea} && "
    "paste {output.bin} {input.beta_qsea}  |  bgzip > {output.beta_qsea} && tabix -p bed {output.beta_qsea} && "
    "paste {output.bin} {input.nrpm_qsea}  |  bgzip > {output.nrpm_qsea} && tabix -p bed {output.nrpm_qsea} && "
    "paste {output.bin} {input.rms_medips} |  bgzip > {output.rms_medips} && tabix -p bed {output.rms_medips} && "
    "paste {output.bin} {input.rms_medestrand} | bgzip > {output.rms_medestrand} && tabix -p bed {output.rms_medestrand} && "
    "paste {output.bin} {input.logitbeta_qsea} | bgzip > {output.logitbeta_qsea} && tabix -p bed {output.logitbeta_qsea})  2> {log}"
## same paste/bgzip/tabix pattern, applied to the count and RPKM matrices only
shell:
    "(cp {input.bin}  {output.bin} && "
    "paste {output.bin} {input.cnt}  | bgzip > {output.cnt} && tabix -p bed {output.cnt} && "
    "paste {output.bin} {input.rpkm} | bgzip > {output.rpkm} && tabix -p bed {output.rpkm})  2> {log}"
## subset each indexed matrix to the regions in {input.bin}
## (tabix -R, keeping the header with -h), then re-compress and re-index
shell:
    "tabix -p bed -R {input.bin} -h {input.cnt} | bgzip > {output.cnt} && tabix -p bed {output.cnt} && "
    "tabix -p bed -R {input.bin} -h {input.rpkm} | bgzip > {output.rpkm} && tabix -p bed {output.rpkm} && "
    "tabix -p bed -R {input.bin} -h {input.CNV_qsea}   |  bgzip > {output.CNV_qsea} && tabix -p bed {output.CNV_qsea} && "
    "tabix -p bed -R {input.bin} -h {input.beta_qsea}  |  bgzip > {output.beta_qsea} && tabix -p bed {output.beta_qsea} && "
    "tabix -p bed -R {input.bin} -h {input.nrpm_qsea}  |  bgzip > {output.nrpm_qsea} && tabix -p bed {output.nrpm_qsea} && "
    "tabix -p bed -R {input.bin} -h {input.rms_medips} |  bgzip > {output.rms_medips} && tabix -p bed {output.rms_medips} && "
    "tabix -p bed -R {input.bin} -h {input.rms_medestrand} | bgzip > {output.rms_medestrand} && tabix -p bed {output.rms_medestrand} && "
    "tabix -p bed -R {input.bin} -h {input.logitbeta_qsea} | bgzip > {output.logitbeta_qsea} && tabix -p bed {output.logitbeta_qsea}"
## placeholder rules that create empty flag files;
## this same pattern is repeated seven times in the workflow
shell:
    'touch {output}'
## per-sample fragment-size profiling
shell:
    "(Rscript --vanilla {params.scr_dir}/workflow/scripts/fragment_profile.R "
    "{wildcards.sample} {input} {params.bsgenome} {params.scr_dir}) 2> {log}"
## align reads with BWA-MEM and convert the output to BAM
shell:
    "(bwa mem -M -t {threads}  {input} | "
    "samtools view -Sb --threads {threads} - > {output}) 2> {log}"
## paired-end: keep properly paired reads (-f 2); drop unmapped, mate-unmapped,
## secondary, QC-fail and supplementary alignments (-F 2828); then remove
## duplicates, index, and collect stats
shell:
    "(samtools view -b -f 2 -F 2828 --threads {threads} {input} | "
    "samtools markdup -@ {threads} -r - {output.bam} && "
    "samtools index -@ {threads} {output.bam} && "
    "samtools stats -@ {threads} {output.bam} > {output.stat})"
## single-end: drop unmapped, secondary, QC-fail and supplementary alignments
## (-F 2820); then remove duplicates, index, and collect stats
shell:
    "(samtools view -b -F 2820 --threads {threads} {input} | "
    "samtools markdup -@ {threads} -r - {output.bam} && "
    "samtools index -@ {threads} {output.bam} && "
    "samtools stats -@ {threads} {output.bam} > {output.stat})"
## collect insert-size metrics with Picard
shell:
    "(java -jar {params.pipeline_env}/share/picard-2.26.6-0/picard.jar "
    "CollectInsertSizeMetrics M=0.05 I={input} O={output.txt} "
    "H={output.hist}) 2> {log}"
## identical Picard insert-size invocation used by a second rule
shell:
    "(java -jar {params.pipeline_env}/share/picard-2.26.6-0/picard.jar "
    "CollectInsertSizeMetrics M=0.05 I={input} O={output.txt} "
    "H={output.hist}) 2> {log}"
## methylation quantification with MEDIPS, MeDEStrand and QSEA
shell:
    "(Rscript --vanilla {params.scr_dir}/workflow/scripts/medips_medestrand_qsea.R "
    "{wildcards.sample} {input} {params.bsgenome} {params.ispaired} {params.window_size} "
    "{params.scr_dir}/workflow/dependencies/MeDEStrand) 2> {log}"
## MEDIPS quantification for the spike-in controls
shell:
    "(Rscript --vanilla {params.scr_dir}/workflow/scripts/medips_4spikein.R "
    "{wildcards.sample} {input} {params.ispaired} {params.window_size} "
    "{params.bsgenome} {params.scr_dir}/assets/Spike-in_genomes/{params.bsgenome_pkg}) 2> {log}"
## concatenate the inputs into a single output file
shell:
    "cat {input} > {output}"
## concatenate R1 and R2 FASTQ files separately (paired-end)
shell:
    "cat {input.R1} > {output[0]} && "
    "cat {input.R2} > {output[1]} "

"""
## Obsoleted !!
###########################################################
### extract UMI barcode and add it to FASTQ headers, p.r.n.
### paired-end unzipped FASTQ only with ConsensusCruncher
###########################################################
rule extract_barcode:
    input:
        get_renamed_fastq
    output:
        temp("barcoded_fq/{sample}_R1.fastq"),
        temp("barcoded_fq/{sample}_R2.fastq"),
        "barcoded_fq/{sample}_R1.fastq.gz",
        "barcoded_fq/{sample}_R2.fastq.gz"
    conda:
        "extra_env/ConsensusCruncher.yaml"
    params:
        src = pipe_dir + "/workflow/dependencies/ConsensusCruncher/ConsensusCruncher/extract_barcodes.py",
        blist = umi_list,                 ## barcode list
        outfile = "barcoded_fq/{sample}"
    shell:
        ## unzip gz files
        "gunzip {input[0]} -c > {params.outfile}_R1.fastq && "
        "gunzip {input[1]} -c > {params.outfile}_R2.fastq && "

        ## extract barcodes
        #"python  {params.src} --bpattern NNT "
        "python  {params.src} --blist {params.blist} "
        "--read1  {input[0]}  --read2   {input[1]} "
        "--outfile  {params.outfile} && "

        ## gzip
        "gzip  {params.outfile}_barcode_R*.fastq  && "

        ## rename to consistent names for the following steps
        "mv {params.outfile}_barcode_R1.fastq.gz  {params.outfile}_R1.fastq.gz && "
        "mv {params.outfile}_barcode_R2.fastq.gz  {params.outfile}_R2.fastq.gz"
"""
## extract UMIs from both reads with umi_tools (paired-end)
shell:
    "umi_tools extract --extract-method=regex --stdin={input[0]} --read2-in={input[1]} "
    "--bc-pattern={params.bcp} --bc-pattern2={params.bcp} "
    "--stdout={output[0]} --read2-out={output[1]} --log={output[2]}"
## extract UMIs with umi_tools (single-end)
shell:
    "umi_tools extract --extract-method=regex "
    "--stdin={input[0]} --bc-pattern={params.bcp} "
    "--stdout={output[0]}  --log={output[1]}"
## adapter/quality trimming with Trim Galore (single-end)
shell:
    "(trim_galore -q 20 --stringency 3 --length 20 "
    "--cores {threads} -o {params.path} {input}) 2> {log}"
## adapter/quality trimming with Trim Galore (paired-end)
shell:
    "(trim_galore -q 20 --stringency 3 --length 20 "
    "--cores {threads} --paired -o {params.path} {input}) 2> {log}"
## run FastQC on each single-end FASTQ
run:
    for fq in input:
        shell("fastqc {} -t 8 --outdir fastqc_se/".format(fq))
## run FastQC on each paired-end FASTQ
run:
    for fq in input:
        shell("fastqc {} -t 8 --outdir fastqc_pe/".format(fq))

URL: https://github.com/yzeng-lol/MEDIPIPE
Name: medipipe
Version: 1
Copyright: Public Domain
License: MIT License