RNAseq Differential Expression Analysis Workflow with STAR and DESeq2


This workflow performs a differential expression analysis using STAR and DESeq2. It runs a wide range of quality control steps and bundles their results into a QC report via MultiQC. Reported results include:

  • PCA plot of all samples

  • differentially up- and down-regulated genes for each analysis (i.e. treatment vs control)

  • MA plot for each analysis

Usage

Step 1: Configure workflow

You will likely run these analyses on the HPCC (currently Elzar), but you will want access to the results locally. Our strategy is to have the analysis repository on the local machine and on the HPCC, in the same file path location (relative to the home directory).

We will start by creating the local repository (on your computer):

  • Click on 'Use this template', which will copy the content of this repository to a new remote repository on GitHub. Find a good name for your analysis and use it as the name for the repository;

  • Copy the git link from this new repository, move to the parent folder for your RNA-seq analysis (i.e. where you want to save your new analyses/results) and create a local repository on your computer using git clone (see the sketch after this list);

  • Move into the newly created directory (repository);

  • Configure the workflow by editing the file config.yaml;

    • Choose mouse or human for the STAR Index;

    • Modify the design formula for DESeq2;

  • Configure the samples.txt and units.txt files with project-specific annotations and file paths;

    • Hint: Check for rogue leading/trailing spaces in your file paths/sample names etc;
  • Git push changes to master.
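
A minimal sketch of these local steps; the parent folder, GitHub account and repository name are placeholders for your own:

cd ~/analyses                                  # hypothetical parent folder
git clone git@github.com:<user>/<my-analysis>.git
cd <my-analysis>
# edit config.yaml, samples.txt and units.txt, then record the changes:
git add config.yaml samples.txt units.txt
git commit -m "configure analysis"
git push origin master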

We will now set up the analysis on the HPCC:

  • Copy the git link again and create a repository on the cluster (same path, see above) using git clone;

  • Move into the newly created directory (repository);

  • Create a reads directory and copy over the reads (fastq.gz files) from the sequencing facility, as sketched below.
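
A sketch of the matching HPCC setup; the cluster alias and the facility's delivery path are placeholders:

ssh elzar
cd ~/analyses                                  # same path relative to $HOME as locally
git clone git@github.com:<user>/<my-analysis>.git
cd <my-analysis>
mkdir reads
cp /path/to/facility/delivery/*.fastq.gz reads/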

To run the analysis, activate your conda snakemake environment:

 conda activate snakemake
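
If no such environment exists yet, one way to create it first (a sketch assuming the conda-forge and bioconda channels are reachable):

conda create -n snakemake -c conda-forge -c bioconda snakemake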

Step 2: Execute workflow

Test your configuration by performing a dry-run via

snakemake --use-conda -n

Execute the workflow locally via

snakemake --use-conda --cores $N

using $N cores, or run it on the UGE cluster environment via

snakemake --use-conda --profile uge
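
Snakemake resolves the uge profile from its profile directory, typically ~/.config/snakemake/uge/. As an illustration only (the real Elzar profile and its qsub options will differ), such a profile's config.yaml maps each job's resources to a cluster submission command:

cat ~/.config/snakemake/uge/config.yaml
cluster: "qsub -cwd -V -pe threads {threads}"
jobs: 100
use-conda: true
latency-wait: 60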

Step 3: Investigate results

After successful execution, you can create a self-contained interactive HTML report via:

snakemake --report report.html

  • to look at the results, use secure copy scp to transfer the report.html and the quality control report qc/multiqc_report.html from the HPCC to your local computer (this is where storing them in the same location relative to the home directory comes in handy; see the sketch after this list);

  • explore the results;

  • forward to your collaborators.
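
A sketch of the transfer, run from your local machine and reusing the placeholder paths from above:

scp elzar:~/analyses/<my-analysis>/report.html \
    elzar:~/analyses/<my-analysis>/qc/multiqc_report.html \
    ~/analyses/<my-analysis>/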

This workflow was kindly provided by the Meyer lab.

Code Snippets

shell:
    """
    STAR \
        {params.extra} \
        --runThreadN {threads} \
        --runMode alignReads \
        --genomeDir {params.index} \
        --readFilesIn {input.fq1} {input.fq2} {params.readcmd} \
        --outReadsUnmapped Fastq \
        --outSAMtype BAM SortedByCoordinate \
        --outFileNamePrefix star/{wildcards.sample}-{wildcards.unit}/
    """
script:
    "../scripts/count-matrix.py"
script:
    "../scripts/deseq2-init.R"
script:
    "../scripts/plot-pca.R"
script:
    "../scripts/deseq2.R"
script:
    "../scripts/gtf2bed.py"
From line 13 of rules/qc.smk
shell:
    """
    junction_annotation.py {params.extra} \
        -i {input.bam} \
        -r {input.bed} \
        -o {params.prefix} \
    > {log[0]} 2>&1
    """
From line 30 of rules/qc.smk
shell:
    """
    junction_saturation.py {params.extra} \
        -i {input.bam} \
        -r {input.bed} \
        -o {params.prefix} \
    > {log} 2>&1
    """
From line 53 of rules/qc.smk
shell:
    "bam_stat.py -i {input} > {output} 2> {log}"
From line 72 of rules/qc.smk
shell:
    "infer_experiment.py -r {input.bed} -i {input.bam} > {output} 2> {log}"
From line 86 of rules/qc.smk
shell:
    """
    inner_distance.py \
        -r {input.bed} \
        -i {input.bam} \
        -o {params.prefix} > {log} 2>&1
    """
From line 102 of rules/qc.smk
shell:
    "read_distribution.py -r {input.bed} -i {input.bam} > {output} 2> {log}"
From line 121 of rules/qc.smk
shell:
    "read_duplication.py -i {input} -o {params.prefix} > {log} 2>&1"
From line 136 of rules/qc.smk
shell:
    "read_GC.py -i {input} -o {params.prefix} > {log} 2>&1"
From line 151 of rules/qc.smk
shell:
    """
    multiqc \
        --force \
        --export \
        --outdir qc \
        --filename multiqc_report.html \
        logs/rseqc/rseqc_junction_annotation qc/rseqc star > {log}
        #{input} > {log}
    """
shell:
    """
    cutadapt \
        {params.adapters} \
        {params.others} \
        -o {output.fastq1} \
        -p {output.fastq2} \
        -j {threads} \
        {input} \
    > {output.qc}
    """
shell:
    """
    cutadapt \
        {params.adapters} \
        {params.others} \
        -o {output.fastq} \
        -j {threads} \
        {input} \
    > {output.qc}
    """
import pandas as pd

def get_column(strandedness):
    # Map the library strandedness to the matching count column of STAR's
    # ReadsPerGene.out.tab.
    if pd.isnull(strandedness) or strandedness == "none":
        return 1  # 2nd column: non-stranded protocol
    elif strandedness == "yes":
        return 2  # 3rd column: forward-stranded protocol
    elif strandedness == "reverse":
        return 3  # 4th column: reverse-stranded, usually for Illumina TruSeq
    else:
        raise ValueError(("'strandedness' column should be empty or have the "
                          "value 'none', 'yes' or 'reverse', instead has the "
                          "value {}").format(repr(strandedness)))

# skiprows=4 drops STAR's summary rows (N_unmapped, N_multimapping,
# N_noFeature, N_ambiguous) before the per-gene counts
counts = [pd.read_table(f, index_col=0, usecols=[0, get_column(strandedness)],
          header=None, skiprows=4)
          for f, strandedness in zip(snakemake.input, snakemake.params.strand)]

for t, sample in zip(counts, snakemake.params.samples):
    t.columns = [sample]

matrix = pd.concat(counts, axis=1)
matrix.index.name = "gene"
# collapse technical replicates: units sharing a sample name are summed
if len(set(matrix.columns)) != len(matrix.columns):
    matrix = matrix.groupby(matrix.columns, axis=1).sum()
matrix.to_csv(snakemake.output[0], sep="\t")
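
For reference, STAR's ReadsPerGene.out.tab begins with four summary rows (dropped by skiprows=4), followed by one row per gene holding unstranded, forward-stranded and reverse-stranded counts; these are the columns get_column() selects. Illustrative contents (the counts here are made up):

head -6 star/sampleA-1/ReadsPerGene.out.tab
N_unmapped          1000    1000    1000
N_multimapping      2000    2000    2000
N_noFeature          300    4000     350
N_ambiguous          150     200     180
ENSMUSG00000051951    90       2      88
ENSMUSG00000025900    12       0      12
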
log <- file(snakemake@log[[1]], open="wt")
sink(log)
sink(log, type="message")

library("tidyverse")
library("AnnotationHub")
library("DESeq2")

parallel <- FALSE
if (snakemake@threads > 1) {
    library("BiocParallel")
    # setup parallelization
    register(MulticoreParam(snakemake@threads))
    parallel <- TRUE
}

message("Reading counts")
cts <- read.table(snakemake@input[["counts"]], header=TRUE, row.names="gene",
                  check.names=FALSE)
message("Reading sample file")
coldata <- read.table(snakemake@params[["samples"]], header=TRUE,
                      row.names="sample", check.names=FALSE, sep="\t")
message("Getting experimental design")
design <- as.formula(snakemake@params[["design"]])

# colData and countData must have the same sample order
if (nrow(coldata) != ncol(cts)) {
    stop("Number of samples in sample sheet and number of samples in counts ",
         "matrix is not the same")
}
cts <- cts[, match(rownames(coldata), colnames(cts))]
# ensure condition is a factor so a control level can be set as reference
coldata$condition <- as.factor(coldata$condition)
if (any(c("control", "Control", "CONTROL") %in% levels(coldata$condition))) {
    if ("control" %in% levels(coldata$condition)) {
        coldata$condition <- relevel(coldata$condition, "control")
    } else if ("Control" %in% levels(coldata$condition)) {
        coldata$condition <- relevel(coldata$condition, "Control")
    } else {
        coldata$condition <- relevel(coldata$condition, "CONTROL")
    }
}
dds <- DESeqDataSetFromMatrix(countData=cts,
                              colData=coldata,
                              design=design)

# remove uninformative columns
dds <- dds[rowSums(counts(dds)) > 1,]

# normalization and preprocessing
dds <- DESeq(dds, parallel=parallel)

# Remove version suffix from Ensembl gene IDs
rownames(dds) <- gsub("\\.\\d*", "", rownames(dds))

# Annotate by gene names
hub <- AnnotationHub()
# query(hub,  c("GTF", "Ensembl", "Mus musculus")) "AH7799"
# query(hub,  c("GTF", "Ensembl", "Homo sapiens")) "AH69461"
if (snakemake@params[["annotationhub"]] == "mouse") {
    hubid <- "AH7799"
} else if(snakemake@params[["annotationhub"]] == "human") {
    hubid <- "AH69461"
} else {
    stop("No annotation hub specified for organism:",
         snakemake@params[["annotationhub"]])
}
anno <- hub[[hubid]]
genemap <- tibble(gene_id=anno$gene_id,
                  symbol=anno$gene_name) %>%
    distinct()

featureData <- tibble(gene_id=rownames(dds)) %>%
    left_join(genemap, by="gene_id") %>%
    mutate(symbol=case_when(is.na(symbol) ~ gene_id,
                            TRUE ~ symbol)) %>%
    select(symbol)
mcols(dds) <- DataFrame(mcols(dds), featureData)

saveRDS(dds, file=snakemake@output[[1]])
log <- file(snakemake@log[[1]], open="wt")
sink(log)
sink(log, type="message")

#################
## libraries ####
#################
library("DESeq2")
library("tidyverse")

#################
## functions ####
#################
genes_de <- function(deset, thrP=0.05, thrLog2FC=log2(1.5),
                     direction=c('up', 'down', 'any')) {
    tmp <- deset %>%
        as.data.frame %>%
        rownames_to_column(var="gene_id")
    if (direction == "up") {
        tmp <- tmp %>%
            dplyr::filter(padj < thrP,
                          log2FoldChange > thrLog2FC)
    } else if (direction == "down") {
        # down-regulated: fold change below the negative threshold
        tmp <- tmp %>%
            dplyr::filter(padj < thrP,
                          log2FoldChange < -thrLog2FC)
    } else if (direction == "any") {
        tmp <- tmp %>%
            dplyr::filter(padj < thrP)
    } else {
        stop(direction, " is not a valid option for direction")
    }
    tmp
}

save_up_down <- function(res, setup) {
    up  <- genes_de(res, direction="up")
    down  <- genes_de(res, direction="down")
    write_csv(up, snakemake@output[['up']])
    write_csv(down, snakemake@output[['down']])
    return(list(up=up$gene_id, down=down$gene_id))
}

############
## data ####
############
parallel <- FALSE
if (snakemake@threads > 1) {
    library("BiocParallel")
    # setup parallelization
    register(MulticoreParam(snakemake@threads))
    parallel <- TRUE
}

dds <- readRDS(snakemake@input[[1]])
directory <- "deseq2"

################
## analysis ####
################

## 1. Model fit ####
# Generate named coefficients need for apeglm lfcShrink
elements <- snakemake@params[["contrast"]]
comparison <- paste(elements[1], "vs", elements[2], sep="_")
coef <- paste("condition", elements[1], "vs", elements[2], sep="_")

# Relevel for reference to second element in contrasts
dds$condition <- relevel(dds$condition, elements[2])
dds <- nbinomWaldTest(dds)

## 2. Process results ####
# Extract coefficient specific results
res <- results(dds, name=coef, parallel=parallel)

# shrink fold changes for lowly expressed genes
res <- lfcShrink(dds, coef=coef, res=res, type='apeglm')

# add gene names to results object
res$gene_name <- mcols(dds)$symbol

# sort by p-value
res <- res[order(res$padj),]

## 3. Summarise results ####
## a) All genes for all groups ####
res_format <- res %>%
    as.data.frame() %>%
    rownames_to_column(var="gene_id") %>%
    as_tibble() %>%
    rename_at(vars(-gene_id, -gene_name), ~ paste0(., "_", comparison))

# normalised expression values
rld <- rlog(dds, blind = FALSE)
deg_genes <- assay(rld)

combined <- deg_genes %>%
    as.data.frame() %>%
    rownames_to_column(var="gene_id") %>%
    as_tibble() %>%
    right_join(res_format, by="gene_id") %>%
    dplyr::select(gene_id, gene_name, everything())

write_csv(combined, snakemake@output[["table"]])

## b) Up/Down genes ####
genes_up_down <- save_up_down(res=res, setup=coef)

## 4. Visualise results ####
svg(snakemake@output[["ma_plot"]])
plotMA(res, ylim=c(-2,2))
dev.off()

pdf(snakemake@output[["ma_pdf"]])
plotMA(res, ylim=c(-2,2))
dev.off()
import gffutils

db = gffutils.create_db(snakemake.input[0],
                        dbfn=snakemake.output.db,
                        force=True,
                        keep_order=True,
                        merge_strategy='merge',
                        sort_attribute_values=True,
                        disable_infer_genes=True,
                        disable_infer_transcripts=True)

with open(snakemake.output.bed, 'w') as outfileobj:
    for tx in db.features_of_type('transcript', order_by='start'):
        bed = [s.strip() for s in db.bed12(tx).split('\t')]
        bed[3] = tx.id
        outfileobj.write('{}\n'.format('\t'.join(bed)))
log <- file(snakemake@log[[1]], open="wt")
sink(log)
sink(log, type="message")

library("DESeq2")
library("ggplot2")
library("cowplot")
library("limma")
library("ggrepel")

dds <- readRDS(snakemake@input[[1]])
pca_color <- snakemake@params[['color']]
pca_fill <- snakemake@params[['fill']]


if (all(c(pca_color, pca_fill) != "")) {
    intgroup <- c(pca_color, pca_fill)
} else if (pca_color != "") {
    intgroup <- pca_color
} else if (pca_fill != "") {
    intgroup <- pca_fill
} else {
    stop("At least one of fill or color have to be specified")
}

# obtain normalized counts
counts <- rlog(dds, blind=FALSE)
#assay(counts) <- limma::removeBatchEffect(assay(counts), counts$individual)
pcaData <- plotPCA(counts, intgroup = intgroup, returnData = TRUE)
percentVar <- round(100 * attr(pcaData, "percentVar"))


if (all(c(pca_color, pca_fill) != "")) {
    color_sym <- sym(pca_color)
    fill_sym <- sym(pca_fill)
    p <- ggplot(pcaData, aes(x = PC1, y = PC2, color = !!color_sym,
                             fill = !!fill_sym))
    p <- p + geom_point(pch=22, size=3, stroke=2)
    p <- p + geom_text(aes(label=name), check_overlap=TRUE)
} else {
    intgroup_sym <- sym(intgroup)
    p <- ggplot(pcaData, aes(x = PC1, y = PC2, color = !!intgroup_sym))
    p <- p + geom_point() +
        scale_color_brewer(type="qual", palette="Dark2")
    p <- p + geom_text(aes(label=name), check_overlap=TRUE)
}

p <- p +
    labs(x=paste0("PC1: ", percentVar[1], "% variance"),
         y=paste0("PC2: ", percentVar[2], "% variance")) +
    coord_fixed() +
    theme_cowplot() +
    theme(legend.position = "bottom",
          legend.justification = 0)
ggsave(plot=p, height=4.5, width=7.5, filename = "results/pca.pdf")
ggsave(plot=p, height=4.5, width=7.5, filename = "results/pca.svg")

URL: https://github.com/cshlwyang/RNAseq
License: MIT License