
Author

Thomas Vannier (@metavannier), https://centuri-livingsystems.org/t-vannier/

About

This workflow performs an RNA-seq analysis from the sequencing output data to the differential expression analyses.

You need to install Singularity on your computer. This workflow also works in a Slurm environment.

Each Snakemake rule calls a specific conda environment, so you can easily change or add tools for each step if necessary.

The analysis proceeds in three steps:

  • clean.smk: The quality of the raw reads is assessed with the FastQC v0.11.9 toolkit. Adapters and low-quality reads are trimmed with Trimmomatic v0.39.

  • count.smk: HISAT2 v2.2.1 maps the trimmed reads to the reference genome. The expression of each gene is quantified with featureCounts from the Subread v2.0.1 package.

  • differential_exp.smk: Lowly expressed genes are removed from further analysis. The raw counts are normalized and used for differential expression testing with DESeq2 v1.28.0.

Usage

Step 1: Install workflow

You can use this workflow by downloading and extracting the latest release. If you intend to modify and further extend this workflow, or want to work under version control, fork this repository.

We would be pleased if you used this workflow and participated in its improvement. If you use it in a paper, don't forget to give credit to the author by citing the URL of this repository and, if available, its DOI (see above).

Step 2: Configure workflow

Configure the workflow according to your needs by editing the following files and directories:

  • 00_RawData must contain the single-end or paired-end FASTQ files of each run to analyse.

  • 01_Reference must contain the FASTA file and the GFF/GTF annotation of your reference genome for the mapping step.

  • sample.tsv, coldata.tsv and condition.tsv indicate the samples, runs, conditions, etc. for the analysis (see the coldata.tsv sketch after this list).

  • config.yaml sets the parameters to use.
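
As a rough illustration of the expected layout, here is a minimal coldata.tsv sketch. The sample names and values are hypothetical; what matters, as deseq2.R (shown below) reads it, is that the first column holds the sample names and that tab-separated condition and type columns are present:

sample      condition  type
WT_rep1     untreated  paired-end
WT_rep2     untreated  paired-end
MUT_rep1    treated    paired-end
MUT_rep2    treated    paired-end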

Step 3: Execute workflow

  • You need Singularity v3.5.3 installed on your computer or cluster.

  • Load Snakemake from a Docker container and run the workflow from the repository root with the following commands:

singularity run docker://snakemake/snakemake:v6.3.0

  • Then execute the workflow locally via

snakemake --use-conda --use-singularity --cores 10
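
On a Slurm cluster, the same run can be dispatched through Snakemake's generic --cluster option. The sketch below is illustrative only; the job count and sbatch arguments are not prescribed by this repository:

snakemake --use-conda --use-singularity --jobs 50 --cluster "sbatch --cpus-per-task={threads}"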

Step 4: Investigate results

After successful execution, you can create a self-contained interactive HTML report with all results via:

snakemake --report report.html

Code Snippets

From 03_Script/deseq2.R:
library(DESeq2)
library(readr)

parallel <- FALSE
if (snakemake@threads > 1) {
    library("BiocParallel")
    # setup parallelization
    register(MulticoreParam(snakemake@threads))
    parallel <- TRUE
}

# Loading the parameters
project=snakemake@params[["project"]]
samples=snakemake@params[["samples"]]
ref_level=snakemake@params[["ref_level"]]
normalized_counts_file=snakemake@output[["normalized_counts_file"]]

# Rename column name of the count matrix as coldata
# colData and countData must have the same sample order
cts <- as.matrix(read.table(snakemake@input[["cts"]], header=T, row.names = 1))
coldata_read <- read.delim(snakemake@input[["coldata"]], header=TRUE, comment.char="#", quote="")
colnames(cts) <- coldata_read[,1]

coldata <- coldata_read[,-1]
rownames(coldata) <- coldata_read[,1]
coldata$condition <- factor(coldata_read$condition)
coldata$type <- factor(coldata_read$type)

rmproj_list = as.list(strsplit(snakemake@params[["rmproj_list"]], ",")[[1]])

if(length(rmproj_list)!=0){
  for (i in 1:length(rmproj_list)) {
      name <- rmproj_list[[i]]
      coldata <- coldata[-match((name), table = rownames(coldata)), ]
  }
}

# Check that sample names match in both files
if (all(colnames(cts) %in% rownames(coldata)) & all(colnames(cts) == rownames(coldata))){
  # Create the DESeq2 object
  dds <- DESeqDataSetFromMatrix(countData = cts,
                                colData = coldata,
                                design = ~ condition)
} else {
  stop("Sample names do not match between the count matrix and the coldata file")
}

# Remove uninformative columns (to do when filter not already done with the CPM threshold)
#dds <- dds[ rowSums(counts(dds)) > 10, ]

# Specifying the reference level
dds$condition <- relevel(dds$condition, ref = ref_level)

# DESeq : Normalization and preprocessing (counts divided by sample-specific size factors
# determined by median ratio of gene counts relative to geometric mean per gene)
dds <- DESeq(dds, parallel=parallel)
# To save the object in a file for later use
saveRDS(dds, file=snakemake@output[["rds"]])

# Already done in the DESeq function
dds <- estimateSizeFactors(dds)
print(sizeFactors(dds))
# Save the normalized data matrix
normalized_counts <- counts(dds, normalized=TRUE)
write.table(normalized_counts, file=normalized_counts_file, sep="\t", quote=F, col.names=NA)
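
The script above only fits the model and saves the dds object. As a hedged sketch of how the saved RDS could be queried afterwards with DESeq2's results() function (the file path and contrast levels are hypothetical; in the workflow this extraction is handled by the downstream report scripts):

dds <- readRDS("05_Output/dds.rds")  # hypothetical path
res <- results(dds, contrast = c("condition", "treated", "untreated"))  # hypothetical levels
res <- res[order(res$padj), ]  # sort by adjusted p-value
write.table(as.data.frame(res), "deseq2_results.tsv", sep = "\t", quote = FALSE, col.names = NA)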

From clean.smk (FastQC on the raw reads):
shell:
  "fastqc --outdir 05_Output/01_fastqc/ {input}"

From clean.smk (Trimmomatic trimming):
shell:
  """
  sample=({input.sample})
  sample_trimmed=({output.sample_trimmed})
  sample_untrimmed=({output.sample_untrimmed})
  len=${{#sample[@]}}
  # FASTQ files are listed as R1,R2 pairs per sample, so step through them two at a time
  for (( i=0; i<$len; i=i+2 ))
  do trimmomatic PE -threads 4 ${{sample[$i]}} ${{sample[$i+1]}} ${{sample_trimmed[$i]}} ${{sample_untrimmed[$i]}} ${{sample_trimmed[$i+1]}} ${{sample_untrimmed[$i+1]}} LEADING:20 TRAILING:15 SLIDINGWINDOW:4:15 MINLEN:36
  done
  """

From clean.smk (FastQC on the trimmed reads):
shell:
  "fastqc --outdir 05_Output/03_fastqc/ {input}"

From clean.smk (MultiQC report):
shell: 
  """
  multiqc -n {output.trim_multi_html} {input.trim_qc} --force #run multiqc
  rm -rf {params.multiqc_output_trim} #clean-up
  """

From count.smk (HISAT2 index building):
shell:
  """
  input={input.ref}
  output={params.index}
  # decompress a gzipped reference and drop the .gz suffix
  if [[ ${{input}} == *.gz ]]; then
    gunzip ${{input}}
    input="${{input%.*}}"
  fi
  hisat2-build ${{input}} ${{output}}
  """

From count.smk (HISAT2 mapping and BAM sorting):
shell:
  """
  index={params.index}
  sample_trimmed=({input.sample_trimmed})
  len=${{#sample_trimmed[@]}}
  reads=({params.reads})
  sam=({params.sam})
  bam=({output.bam})
  flag=0
  if [ ${{reads}} == 'paired' ]; then
    # paired-end: consume the trimmed FASTQ list two files (R1/R2) at a time
    for (( i=0; i<$len; i=i+2 ))
      do hisat2 -p 12 -x ${{index}} -1 ${{sample_trimmed[$i]}} -2 ${{sample_trimmed[$i+1]}} -S ${{sam[$flag]}}
        samtools sort ${{sam[$flag]}} > ${{bam[$flag]}}
        rm ${{sam[$flag]}}
        flag=$((${{flag}}+1))
    done
  elif [ ${{reads}} == 'unpaired' ];then
    for (( i=0; i<$len; i++ ))
      do hisat2 -p 12 -x ${{index}} -U ${{sample_trimmed[$i]}} -S ${{sam[$i]}}
        samtools sort ${{sam[$i]}} > ${{bam[$i]}}
        rm ${{sam[$i]}}
    done
  else
    echo "Your fastq files have to be paired or unpaired. Please check the config.yaml" >&2
    exit 1
  fi
  """

From count.smk (featureCounts):
shell:
  """
  reads=({params.reads})
  geneid=({params.geneid})
  annotation={input.annotation}
  bam=({input.bam})
  countmatrices=({output.countmatrices})
  len=${{#bam[@]}}
  if [ ${{reads}} == 'paired' ]; then
    # -p counts fragments (read pairs) instead of individual reads
    for (( i=0; i<$len; i++ ))
      do featureCounts -T 12 -p -t exon -g ${{geneid}} -a ${{annotation}} -o ${{countmatrices[$i]}} ${{bam[$i]}}
    done
  elif [ ${{reads}} == 'unpaired' ];then
    for (( i=0; i<$len; i++ ))
      do featureCounts -T 12 -t exon -g ${{geneid}} -a ${{annotation}} -o ${{countmatrices[$i]}} ${{bam[$i]}}
    done
  fi
  """

CPM filtering (script directive):
script:
  SCRIPTDIR + "cpm_filtering.R"
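
cpm_filtering.R itself is not reproduced on this page. A minimal sketch of a CPM-based filter, assuming edgeR is available and using an illustrative cutoff of 1 CPM in at least two samples (not necessarily the workflow's actual threshold):

library(edgeR)
cts <- as.matrix(read.table("counts_matrix.tsv", header = TRUE, row.names = 1))  # hypothetical path
keep <- rowSums(cpm(cts) > 1) >= 2  # illustrative cutoff
cts <- cts[keep, ]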

From differential_exp.smk (DESeq2 script directive):
script:
  "../03_Script/deseq2.R"

From differential_exp.smk (differential expression report compilation):
script:
  SCRIPTDIR + "diffexp_reports_compilation.R"

URL: https://github.com/centuri-engineering/differential-expression_workflow
Name: differential-expression_workflow
Version: v2.0.0
License: BSD 3-Clause "New" or "Revised" License
