Pipeline for differential gene expression (DGE) and differential transcript usage (DTU) analysis using long reads


This project is deprecated. Please see our newer wf-transcriptomes workflow, which includes differential expression functionality.


This pipeline uses snakemake, minimap2, salmon, edgeR, DEXSeq and stageR to automate simple differential gene expression and differential transcript usage workflows on long-read data.

If you have paired samples (e.g. treated and untreated samples from the same individuals), use the paired_dge_dtu branch; see the example checkout after this paragraph.
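
For example, after cloning the repository (see Installation below), the branch can be selected with the standard git command:

git checkout paired_dge_dtu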

Getting Started

Input

The input files and parameters are specified in config.yml (an example sketch follows the list below):

  • transcriptome - the input transcriptome.

  • annotation - the input annotation in GFF format.

  • control_samples - a dictionary with control sample names and paths to the FASTQ files.

  • treated_samples - a dictionary with treated sample names and paths to the FASTQ files.
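
For illustration, a minimal config.yml sketch is shown below. The paths and sample names are hypothetical placeholders, and the threshold values are illustrative rather than shipped defaults; the threshold keys are the ones consumed by the de_params rule shown under Code Snippets.

# Hypothetical example; adjust paths, sample names and thresholds to your data.
transcriptome: "ref/transcriptome.fa"
annotation: "ref/annotation.gff"
control_samples:
    control_rep1: "fastq/control_rep1.fastq"
    control_rep2: "fastq/control_rep2.fastq"
treated_samples:
    treated_rep1: "fastq/treated_rep1.fastq"
    treated_rep2: "fastq/treated_rep2.fastq"
min_samps_gene_expr: 3
min_samps_feature_expr: 1
min_gene_expr: 10
min_feature_expr: 3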

Output

  • alignments/*.bam - unsorted transcriptome alignments (input to salmon).

  • alignments_sorted/*.bam - sorted and indexed transcriptome alignments.

  • counts - per-sample counts generated by salmon.

  • merged/all_counts.tsv - the transcript count table including all samples.

  • merged/all_counts_filtered.tsv - the transcript count table including all samples after filtering.

  • merged/all_gene_counts.tsv - the gene count table including all samples.

  • de_analysis/coldata.tsv - the condition table used to build the model matrix.

  • de_analysis/de_params.tsv - analysis parameters generated from config.yml.

  • de_analysis/results_dge.tsv and de_analysis/results_dge.pdf - results of the edgeR differential gene expression analysis.

  • de_analysis/results_dtu_gene.tsv, de_analysis/results_dtu_transcript.tsv and de_analysis/results_dtu.pdf - results of the differential transcript usage analysis by DEXSeq.

  • de_analysis/results_dtu_stageR.tsv - results of the stageR analysis of the DEXSeq output.

  • de_analysis/dtu_plots.pdf - DTU results plots based on the stageR results and filtered counts.
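
The stageR table above can be inspected directly in R. A minimal sketch, assuming the column layout used by the bundled plotting script (gene and transcript identifiers followed by stage-wise gene- and transcript-level p-values):

# Load the stageR screening results written by the pipeline:
res <- read.csv("de_analysis/results_dtu_stageR.tsv", sep = "\t")
names(res) <- c("gene_id", "transcript_id", "p_gene", "p_transcript")

# Genes passing the gene-level screen at the 5% significance level:
sig_genes <- unique(res$gene_id[res$p_gene < 0.05])
length(sig_genes)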

Dependencies

Software dependencies are managed through conda: when the pipeline is invoked with --use-conda (see Usage below), the required tools are installed automatically.

Layout

  • README.md

  • Snakefile - master snakefile

  • config.yml - YAML configuration file

  • snakelib/ - collection of snakefiles included by the master snakefile

  • lib/ - Python files included by the analysis scripts and snakefiles

  • scripts/ - analysis scripts

  • data/ - input data needed by the pipeline - use with caution to avoid a bloated repository

  • results/ - pipeline results to be committed - use with caution to avoid a bloated repository

Installation

Clone the repository:

git clone https://github.com/nanoporetech/pipeline-transcriptome-de.git

Usage

Edit config.yml to set the input datasets and parameters, then issue:

snakemake --use-conda -j <num_cores> all
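
For example, a dry run can be used to preview the jobs before committing compute; the core count below is arbitrary:

# Dry run: list the jobs snakemake would execute, without running them.
snakemake --use-conda -n all

# Full run on 8 cores; conda environments are created automatically on first use.
snakemake --use-conda -j 8 all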

Help

Licence and Copyright

(c) 2018 Oxford Nanopore Technologies Ltd.

This Source Code Form is subject to the terms of the Mozilla Public License, v. 2.0. If a copy of the MPL was not distributed with this file, You can obtain one at http://mozilla.org/MPL/2.0/.

References and Supporting Information

This pipeline is largely based on the approach described in the following paper:

  • Love MI, Soneson C and Patro R. Swimming downstream: statistical analysis of differential transcript usage following Salmon quantification. F1000Research 2018, 7:952 (doi: 10.12688/f1000research.15398.3)

See also the post announcing the tool on the Oxford Nanopore Technologies community site.

Code Snippets

suppressMessages(library(dplyr))
suppressMessages(library(ggplot2))
suppressMessages(library(tidyr))

# Set up sample data frame:
coldata <- read.csv("de_analysis/coldata.tsv", row.names="sample", sep="\t")
coldata$sample_id <- rownames(coldata)
# read.csv in R >= 4.0 returns character columns, so convert to factor first,
# then reverse the levels so that "untreated" becomes the reference level:
coldata$condition <- factor(coldata$condition)
coldata$condition <- factor(coldata$condition, levels=rev(levels(coldata$condition)))
coldata$type <- NULL
coldata$patient <- NULL

# Read stageR results:
stageR <- read.csv("de_analysis/results_dtu_stageR.tsv", sep="\t")
names(stageR) <- c("gene_id", "transcript_id", "p_gene", "p_transcript")

# Read filtered counts:
counts <- read.csv("merged/all_counts_filtered.tsv", sep="\t")
names(counts)[2] <- "transcript_id"

# Join counts and stageR results:
df <- counts %>% left_join(stageR, by = c("gene_id", "transcript_id"))
df <- df[order(df$p_gene),]

# Sample (count) columns:
scols <- setdiff(names(df), c("gene_id", "transcript_id", "p_gene", "p_transcript"))

# Normalise counts by library size:
for(sc in scols){
    df[sc] <- df[sc] / sum(df[sc])
}

# Melt data frame into long format:
tdf <- df %>% gather(key='sample', value='norm_count', -gene_id, -transcript_id, -p_gene, -p_transcript)

# Add sample group column:
sampleToGroup <- function(x){
    return(coldata[x,]$condition)
}

tdf$group <- sampleToGroup(tdf$sample)

# Filter for significant genes:
sig_level <- 0.05
genes <- as.character(tdf[which(tdf$p_gene < sig_level),]$gene_id)
genes <- unique(genes)

# One faceted bar plot of normalised transcript counts per significant gene:
pdf("de_analysis/dtu_plots.pdf")

for(gene in genes){
    gdf <- tdf[which(tdf$gene_id==gene),]
    p_gene <- unique(gdf$p_gene)
    p <- ggplot(gdf, aes(x=transcript_id, y=norm_count)) + geom_bar(stat="identity", aes(fill=sample), position="dodge")
    p <- p + facet_wrap(~ group) + coord_flip()
    p <- p + ggtitle(paste(gene, " : p_value=", p_gene, sep=""))
    print(p)
}

# Close the PDF device so the plots are flushed to disk:
dev.off()
shell:"""
conda list > {output.ver} 
"""
shell:"""
    minimap2 -t {threads} {params.opts} -I 1000G -d {output.index} {input.genome}
"""
shell:"""
minimap2 -t {threads} -ax map-ont -p {params.psec} -N {params.msec} {params.opts} {input.index} {input.fastq}\
| samtools view -Sb > {output.bam};
samtools sort -@ {threads} {output.bam} -o {output.sbam};
samtools index {output.sbam};
"""
shell: """
    salmon quant --noErrorModel -p {threads} -t {input.trs} -l {params.libtype} -a {input.bam} -o {params.tsv_dir}
"""
shell:"""
{SNAKEDIR}/scripts/merge_count_tsvs.py -z -o {output.tsv} {input.count_tsvs}
"""
run:
    # Build the condition table (coldata) used by the DE analysis; pandas (pd)
    # and OrderedDict are imported at the top of the Snakefile.
    samples, conditions, types = [], [], []
    for sample in control_samples.keys():
        samples.append(sample)
        conditions.append("untreated")
        types.append("single-read")
    for sample in treated_samples.keys():
        samples.append(sample)
        conditions.append("treated")
        types.append("single-read")

    df = pd.DataFrame(OrderedDict([('sample', samples), ('condition', conditions), ('type', types)]))
    df.to_csv(output.coldata, sep="\t", index=False)
run:
    # Collect the filtering parameters from config.yml into a one-row table:
    d = OrderedDict()
    d["Annotation"] = [config["annotation"]]
    d["min_samps_gene_expr"] = [config["min_samps_gene_expr"]]
    d["min_samps_feature_expr"] = [config["min_samps_feature_expr"]]
    d["min_gene_expr"] = [config["min_gene_expr"]]
    d["min_feature_expr"] = [config["min_feature_expr"]]
    df = pd.DataFrame(d)
    df.to_csv(output.de_params, sep="\t", index=False)
From line 112 of master/Snakefile
shell:"""
{SNAKEDIR}/scripts/de_analysis.R
"""
From line 137 of master/Snakefile
shell: """
{SNAKEDIR}/scripts/plot_dtu_results.R
"""
From line 148 of master/Snakefile
run:
    print("--------------------------------------------")
    for sfile in input:
        generate_help(sfile)
    print("--------------------------------------------")
shell: "rm -r {input.wdir}"
shell: "rm -fr {input.res}/*"
run:
    print("Pipeline name: ", params.name)
    print("Pipeline working directory: ", params.wdir)
    print("Pipeline results directory: ", params.res)
    print("Pipeline repository: ", params.repo)