Pipeline for processing spatially resolved gene counts together with spatial coordinates, image data and, optionally, single-cell RNA-seq data, designed for 10x Genomics Visium and single-cell transcriptomics.


Introduction

nf-core/spatialtranscriptomics is a bioinformatics analysis pipeline for Spatial Transcriptomics. It can process and analyse 10x spatial data either directly from raw data (by running Space Ranger) or from data already processed by Space Ranger. The pipeline currently consists of the following steps:

  1. Raw data processing with Space Ranger (optional)

  2. Quality controls and filtering

  3. Normalisation

  4. Dimensionality reduction and clustering

  5. Differential gene expression testing
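In this development version, the quality control/normalisation, clustering and spatial differential expression steps are implemented as parameterised Quarto reports: each process renders a notebook with quarto render and passes its inputs and settings via -P key:value flags, as the code snippets further down show. A stripped-down sketch of that pattern (the notebook name and parameter values here are placeholders; the -P names are taken from the clustering snippet below):

# Sketch only: "step_report.qmd" is a placeholder notebook name and the
# parameter values are arbitrary; see the code snippets below for the real calls.
quarto render step_report.qmd \
    --output step_report.html \
    -P fileNameST:st_adata_norm.h5ad \
    -P resolution:1.0 \
    -P saveFileST:st_adata_processed.h5ad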

The pipeline is built using Nextflow, a workflow tool to run tasks across multiple compute infrastructures in a very portable manner. It uses Docker/Singularity containers, making installation trivial and results highly reproducible. The Nextflow DSL2 implementation of this pipeline uses one container per process, which makes it much easier to maintain and update software dependencies. Where possible, these processes have been submitted to and installed from nf-core/modules in order to make them available to all nf-core pipelines, and to everyone within the Nextflow community!

On release, automated continuous integration tests run the pipeline on a full-sized dataset on the AWS cloud infrastructure. This ensures that the pipeline runs on AWS, has sensible resource allocation defaults set to run on real-world datasets, and permits the persistent storage of results to benchmark between pipeline releases and other analysis sources. The results obtained from the full-sized test can be viewed on the nf-core website.

Usage

Note
If you are new to Nextflow and nf-core, please refer to this page on how to set up Nextflow. Make sure to test your setup with -profile test before running the workflow on actual data.
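For example, a minimal test invocation might combine the test profile with a container profile (a sketch assuming Docker is available; substitute singularity or another supported profile as needed):

nextflow run nf-core/spatialtranscriptomics \
 -profile test,docker \
 --outdir <OUTDIR>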

You can run the pipeline using:

nextflow run nf-core/spatialtranscriptomics \
 -profile <docker/singularity/.../institute> \
 --input samplesheet.csv \
 --outdir <OUTDIR>

Warning
Please provide pipeline parameters via the CLI or the Nextflow -params-file option. Custom config files, including those provided by the Nextflow -c option, can be used to provide any configuration except for parameters; see the docs.
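As an illustrative sketch of that split, pipeline parameters such as --input and --outdir can be placed in a YAML file passed with -params-file, while non-parameter configuration such as resource overrides goes into a custom config passed with -c. The process name below is a hypothetical placeholder, not necessarily one used by this pipeline:

# Pipeline parameters (the equivalent of --input/--outdir on the CLI) in a -params-file:
cat > params.yaml <<'EOF'
input: "samplesheet.csv"
outdir: "./results"
EOF

# Non-parameter configuration, e.g. resource overrides, in a -c config file;
# 'SPACERANGER_COUNT' is a hypothetical process name used only for illustration.
cat > custom.config <<'EOF'
process {
    withName: 'SPACERANGER_COUNT' {
        cpus   = 8
        memory = '64.GB'
    }
}
EOF

nextflow run nf-core/spatialtranscriptomics \
 -profile docker \
 -params-file params.yaml \
 -c custom.config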

For more details and further functionality, please refer to the usage documentation and the parameter documentation.

Pipeline output

To see the results of an example test run with a full-sized dataset, refer to the results tab on the nf-core website pipeline page. For more details about the output files and reports, please refer to the output documentation.
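As a purely illustrative sketch, the downstream steps write per-sample reports and files with names like the ones below (taken from the code snippets further down); the directory layout under --outdir is an assumption and may differ:

# Hypothetical listing; the file names come from the snippets on this page, but
# the directory layout under --outdir is an assumption.
ls <OUTDIR>/<sample_id>/
#   st_qc_and_normalisation.html    QC, filtering and normalisation report
#   st_adata_norm.h5ad              normalised AnnData object
#   st_clustering.html              dimensionality reduction and clustering report
#   st_adata_processed.h5ad         clustered AnnData object
#   st_spatial_de.html              spatial differential expression report
#   st_gde.csv, st_spatial_de.csv   differential expression tables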

Credits

nf-core/spatialtranscriptomics was originally developed by the Jackson Laboratory [1], up to the 0.1.0 tag. It was further developed in a collaboration between the National Bioinformatics Infrastructure Sweden and the National Genomics Infrastructure within SciLifeLab; it is currently developed and maintained by Erik Fasterius and Christophe Avenel.

Many thanks to others who have helped out along the way too, especially Gregor Sturm!

[1] Supported by grants from the US National Institutes of Health, U24CA224067 and U54AG075941. Original authors: Dr. Sergii Domanskyi, Prof. Jeffrey Chuang and Dr. Anuj Srivastava.

Contributions and Support

If you would like to contribute to this pipeline, please see the contributing guidelines.

For further information or help, don't hesitate to get in touch on the Slack #spatialtranscriptomics channel (you can join with this invite).

Citations

An extensive list of references for the tools used by the pipeline can be found in the CITATIONS.md file.

You can cite the nf-core publication as follows:

The nf-core framework for community-curated bioinformatics pipelines.

Philip Ewels, Alexander Peltzer, Sven Fillinger, Harshil Patel, Johannes Alneberg, Andreas Wilm, Maxime Ulysse Garcia, Paolo Di Tommaso & Sven Nahnsen.

Nat Biotechnol. 2020 Feb 13. doi: 10.1038/s41587-020-0439-x.

Code Snippets

Lines 36-48:
"""
quarto render ${report} \
    --output "st_clustering.html" \
    -P fileNameST:${st_adata_norm} \
    -P resolution:${params.st_cluster_resolution} \
    -P saveFileST:st_adata_processed.h5ad

cat <<-END_VERSIONS > versions.yml
"${task.process}":
    quarto: \$(quarto -v)
    scanpy: \$(python -c "import scanpy; print(scanpy.__version__)")
END_VERSIONS
"""
Lines 37-56:
"""
quarto render ${report} \
    --output st_qc_and_normalisation.html \
    -P rawAdata:${st_raw} \
    -P pltFigSize:${params.st_preprocess_fig_size} \
    -P minCounts:${params.st_preprocess_min_counts} \
    -P minGenes:${params.st_preprocess_min_genes} \
    -P minCells:${params.st_preprocess_min_cells} \
    -P histplotQCmaxTotalCounts:${params.st_preprocess_hist_qc_max_total_counts} \
    -P histplotQCminGeneCounts:${params.st_preprocess_hist_qc_min_gene_counts} \
    -P histplotQCbins:${params.st_preprocess_hist_qc_bins} \
    -P nameDataPlain:st_adata_plain.h5ad \
    -P nameDataNorm:st_adata_norm.h5ad

cat <<-END_VERSIONS > versions.yml
"${task.process}":
    quarto: \$(quarto -v)
    scanpy: \$(python -c "import scanpy; print(scanpy.__version__)")
END_VERSIONS
"""
Lines 25-34:
"""
read_st_data.py \\
    --SRCountDir "${meta.id}" \\
    --outAnnData st_adata_raw.h5ad

cat <<-END_VERSIONS > versions.yml
"${task.process}":
    scanpy: \$(python -c "import scanpy; print(scanpy.__version__)")
END_VERSIONS
"""
Lines 35-50:
"""
quarto render ${report} \
    --output "st_spatial_de.html" \
    -P fileNameST:${st_adata_norm} \
    -P numberOfColumns:${params.st_spatial_de_ncols} \
    -P saveDEFileName:st_gde.csv \
    -P saveSpatialDEFileName:st_spatial_de.csv

cat <<-END_VERSIONS > versions.yml
"${task.process}":
    quarto: \$(quarto -v)
    leidenalg: \$(python -c "import leidenalg; print(leidenalg.version)")
    scanpy: \$(python -c "import scanpy; print(scanpy.__version__)")
    SpatialDE: \$(python -c "from importlib.metadata import version; print(version('SpatialDE'))")
END_VERSIONS
"""
Lines 28-42:
"""
printf "%s %s\\n" $rename_to | while read old_name new_name; do
    [ -f "\${new_name}" ] || ln -s \$old_name \$new_name
done

fastqc \\
    $args \\
    --threads $task.cpus \\
    $renamed_files

cat <<-END_VERSIONS > versions.yml
"${task.process}":
    fastqc: \$( fastqc --version | sed -e "s/FastQC v//g" )
END_VERSIONS
"""
Lines 46-54:
"""
touch ${prefix}.html
touch ${prefix}.zip

cat <<-END_VERSIONS > versions.yml
"${task.process}":
    fastqc: \$( fastqc --version | sed -e "s/FastQC v//g" )
END_VERSIONS
"""
Lines 28-40:
"""
multiqc \\
    --force \\
    $args \\
    $config \\
    $extra_config \\
    .

cat <<-END_VERSIONS > versions.yml
"${task.process}":
    multiqc: \$( multiqc --version | sed -e "s/multiqc, version //g" )
END_VERSIONS
"""
Lines 43-52:
"""
touch multiqc_data
touch multiqc_plots
touch multiqc_report.html

cat <<-END_VERSIONS > versions.yml
"${task.process}":
    multiqc: \$( multiqc --version | sed -e "s/multiqc, version //g" )
END_VERSIONS
"""
Lines 18-27:
"""
check_samplesheet.py \\
    $samplesheet \\
    samplesheet.valid.csv

cat <<-END_VERSIONS > versions.yml
"${task.process}":
    python: \$(python --version | sed 's/Python //g')
END_VERSIONS
"""
Lines 21-36:
"""  
#!/bin/bash

mitoUrl="ftp://ftp.broadinstitute.org/distribution/metabolic/papers/Pagliarini/MitoCarta2.0/${sample_info.species}.MitoCarta2.0.txt"

fname=${outdir}/`basename "\${mitoUrl}"`
echo saving to: \$fname

[ ! -d ${outdir} ] && mkdir ${outdir}

if [ ! -f \$fname ]
then
    wget --quiet \${mitoUrl} --output-document=\$fname
fi
echo "completed" > "output.out" && outpath=`pwd`/output.out
"""
Nextflow, lines 21-36 of local/tasks.nf
"""
#!/bin/bash

dname=${outdir}/${sample_id}

[ ! -d \${dname} ] && mkdir \${dname}

python $projectDir/bin/script_read_st_data.py ${sample_info.st_data_dir} \${dname}/st_adata_raw.h5ad raw_feature_bc_matrix.h5
python $projectDir/bin/script_read_sc_data.py ${sample_info.sc_data_dir} \${dname}/sc_adata_raw.h5ad
echo "completed" > "output.out" && outpath=`pwd`/output.out
"""
Nextflow, lines 58-68 of local/tasks.nf
"""
#!/bin/bash

dname=${outdir}/${sample_id}

Rscript $projectDir/bin/calculateSumFactors.R \${dname}/ st_adata_counts_in_tissue
Rscript $projectDir/bin/calculateSumFactors.R \${dname}/ sc_adata_counts
echo "completed" > "output.out" && outpath=`pwd`/output.out
"""
Nextflow, lines 89-97 of local/tasks.nf
"""
#!/bin/bash

dname=${outdir}/${sample_id}

mitoFile=${outdir}/${sample_info.species}.MitoCarta2.0.txt

python $projectDir/bin/stPreprocess.py \${dname}/ st_adata_counts_in_tissue st_adata_raw.h5ad \$mitoFile
echo "completed" > "output.out" && outpath=`pwd`/output.out
"""
Nextflow, lines 121-130 of local/tasks.nf
"""
#!/bin/bash

dname=${outdir}/${sample_id}

mitoFile=${outdir}/${sample_info.species}.MitoCarta2.0.txt

python $projectDir/bin/scPreprocess.py \${dname}/ sc_adata_counts sc_adata_raw.h5ad \$mitoFile
echo "completed" > "output.out" && outpath=`pwd`/output.out
"""
Nextflow, lines 154-163 of local/tasks.nf
"""
#!/bin/bash

sample_id=${sample_id_gr}

dname=${outdir}/\${sample_id}

Rscript $projectDir/bin/characterization_STdeconvolve.R \${dname}/ ${sample_info.st_data_dir}
echo "completed" > "output.out" && outpath=`pwd`/output.out
"""
Nextflow, lines 196-205 of local/tasks.nf
"""
#!/bin/bash

sample_id=${sample_id_gr}

dname=${outdir}/\${sample_id}

Rscript $projectDir/bin/characterization_SPOTlight.R \${dname}/ ${sample_info.st_data_dir}
echo "completed" > "output.out" && outpath=`pwd`/output.out
"""
Nextflow, lines 228-237 of local/tasks.nf
"""
#!/bin/bash

sample_id=${sample_id_gr}

dname=${outdir}/\${sample_id}

Rscript $projectDir/bin/characterization_BayesSpace.R \${dname}/
echo "completed" > "output.out" && outpath=`pwd`/output.out
"""
Nextflow, lines 260-269 of local/tasks.nf
"""
#!/bin/bash

sample_id=${sample_id_gr}

dname=${outdir}/\${sample_id}

python $projectDir/bin/stSpatialDE.py \${dname}/ st_adata_norm.h5ad
echo "completed" > "output.out" && outpath=`pwd`/output.out
"""
Nextflow, lines 292-301 of local/tasks.nf
"""
#!/bin/bash

sample_id=${sample_id_gr}

dname=${outdir}/\${sample_id}

python $projectDir/bin/stClusteringWorkflow.py \${dname}/
echo "completed" > "output.out" && outpath=`pwd`/output.out
"""
Nextflow, lines 338-347 of local/tasks.nf
"""
#!/bin/bash

sample_id=${sample_id_gr}

dname=${outdir}/\${sample_id}

echo \${dname}/  
echo "completed" > "output.out" && outpath=`pwd`/output.out
"""
Nextflow, lines 370-379 of local/tasks.nf
Lines 23-31:
"""
[ ! -f  ${prefix}.fastq.gz ] && ln -s $reads ${prefix}.fastq.gz
fastqc $args --threads $task.cpus ${prefix}.fastq.gz

cat <<-END_VERSIONS > versions.yml
"${task.process}":
    fastqc: \$( fastqc --version | sed -e "s/FastQC v//g" )
END_VERSIONS
"""
Lines 33-42:
"""
[ ! -f  ${prefix}_1.fastq.gz ] && ln -s ${reads[0]} ${prefix}_1.fastq.gz
[ ! -f  ${prefix}_2.fastq.gz ] && ln -s ${reads[1]} ${prefix}_2.fastq.gz
fastqc $args --threads $task.cpus ${prefix}_1.fastq.gz ${prefix}_2.fastq.gz

cat <<-END_VERSIONS > versions.yml
"${task.process}":
    fastqc: \$( fastqc --version | sed -e "s/FastQC v//g" )
END_VERSIONS
"""
Lines 20-27:
"""
multiqc -f $args .

cat <<-END_VERSIONS > versions.yml
"${task.process}":
    multiqc: \$( multiqc --version | sed -e "s/multiqc, version //g" )
END_VERSIONS
"""

URL: https://nf-co.re/spatialtranscriptomics
Name: spatialtranscriptomics
Version: dev