Pipeline to fetch metadata and raw FastQ files from public and private databases


Introduction

nf-core/fetchngs is a bioinformatics pipeline to fetch metadata and raw FastQ files from both public and private databases. At present, the pipeline supports SRA / ENA / DDBJ / GEO / Synapse ids (see usage docs).

Usage

Note: If you are new to Nextflow and nf-core, please refer to this page on how to set up Nextflow. Make sure to test your setup with -profile test before running the workflow on actual data.

First, prepare a samplesheet with your input data that looks as follows:

ids.csv :

SRR9984183
SRR13191702
ERR1160846
ERR1109373
DRR028935
DRR026872

Each line represents a database id. Please see the next section for the supported id types.

Now, you can run the pipeline using:

nextflow run nf-core/fetchngs \
 -profile <docker/singularity/.../institute> \
 --input ids.csv \
 --outdir <OUTDIR>

Warning: Please provide pipeline parameters via the CLI or the Nextflow -params-file option. Custom config files, including those provided via the -c Nextflow option, can be used to provide any configuration except for parameters; see docs.
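For example, the required parameters above could equally be collected in a YAML file and passed with -params-file (the file name and values here are illustrative):

nextflow run nf-core/fetchngs -profile docker -params-file params.yaml

params.yaml:

input: 'ids.csv'
outdir: './results'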

For more details, please refer to the usage documentation and the parameter documentation.

Supported ids

Via a single file of ids, provided one per line (see the example input file above), the pipeline performs the following steps:

SRA / ENA / DDBJ / GEO ids

  1. Resolve database ids back to appropriate experiment-level ids compatible with the ENA API

  2. Fetch extensive id metadata via the ENA API (see the sketch after this list)

  3. Download FastQ files:

    • If direct download links are available from the ENA API, fetch them in parallel via curl and perform an md5sum check

    • Otherwise use sra-tools to download .sra files and convert them to FastQ

  4. Collate id metadata and paths to FastQ files in a single samplesheet
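As a rough sketch of step 2, the same kind of run-level metadata and download links can be requested from the ENA Portal API with a single curl call (the accession and field list are illustrative, not the pipeline's exact query):

curl -s 'https://www.ebi.ac.uk/ena/portal/api/filereport?accession=SRR9984183&result=read_run&fields=run_accession,fastq_ftp,fastq_md5&format=tsv'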

Synapse ids

  1. Resolve Synapse directory ids to the ids of their corresponding FastQ files via the synapse list command.

  2. Retrieve FastQ file metadata, including file names, md5sums, etags, annotations and other data provenance, via the synapse show command.

  3. Download FastQ files in parallel via synapse get (see the sketch after this list).

  4. Collate paths to FastQ files in a single samplesheet.
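Chained together for a single directory id, the three Synapse client calls look roughly like this (the syn ids are placeholders):

# list the FastQ file ids under the directory id
synapse list syn12345678

# dump metadata (file names, md5sums, etags, annotations) for one file id
synapse show syn24681012

# download the file itself
synapse get syn24681012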

Pipeline output

The columns in the output samplesheet can be tailored to be accepted out-of-the-box by selected nf-core pipelines (see the usage docs for the pipelines currently supported).

To see the results of a test run with a full-size dataset, refer to the results tab on the nf-core website pipeline page. For more details about the output files and reports, please refer to the output documentation.

Credits

nf-core/fetchngs was originally written by Harshil Patel (@drpatelh) from Seqera Labs, Spain and Jose Espinosa-Carrasco (@JoseEspinosa) from The Comparative Bioinformatics Group at The Centre for Genomic Regulation, Spain. Support for download of sequencing reads without FTP links via sra-tools was added by Moritz E. Beber (@Midnighter) from Unseen Bio ApS, Denmark. The Synapse workflow was added by Daisy Han (@daisyhan97) and Bruno Grande (@BrunoGrandePhD) from Sage Bionetworks, Seattle.

Contributions and Support

If you would like to contribute to this pipeline, please see the contributing guidelines.

For further information or help, don't hesitate to get in touch on the Slack #fetchngs channel (you can join with this invite).

Citations

If you use nf-core/fetchngs for your analysis, please cite it using the following DOI: 10.5281/zenodo.5070524

An extensive list of references for the tools used by the pipeline can be found in the CITATIONS.md file.

You can cite the nf-core publication as follows:

The nf-core framework for community-curated bioinformatics pipelines.

Philip Ewels, Alexander Peltzer, Sven Fillinger, Harshil Patel, Johannes Alneberg, Andreas Wilm, Maxime Ulysse Garcia, Paolo Di Tommaso & Sven Nahnsen.

Nat Biotechnol. 2020 Feb 13. doi: 10.1038/s41587-020-0439-x.

Code Snippets
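The following shell blocks are taken from the pipeline's Nextflow process scripts; placeholders such as $args, $csv, $id, $meta and $task are interpolated by Nextflow from the enclosing process before execution.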

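Build the MultiQC custom config from the id mappings CSV with the bundled multiqc_mappings_config.py script: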
"""
multiqc_mappings_config.py \\
    $csv \\
    multiqc_config.yml

cat <<-END_VERSIONS > versions.yml
"${task.process}":
    python: \$(python --version | sed 's/Python //g')
END_VERSIONS
"""
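Fetch a single-end FastQ file from its ENA download link with curl and verify it against the md5sum from the metadata: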
"""
curl \\
    $args \\
    -L ${fastq[0]} \\
    -o ${meta.id}.fastq.gz

echo "${meta.md5_1}  ${meta.id}.fastq.gz" > ${meta.id}.fastq.gz.md5
md5sum -c ${meta.id}.fastq.gz.md5

cat <<-END_VERSIONS > versions.yml
"${task.process}":
    curl: \$(echo \$(curl --version | head -n 1 | sed 's/^curl //; s/ .*\$//'))
END_VERSIONS
"""
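The paired-end variant fetches and checks each mate in turn: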
"""
curl \\
    $args \\
    -L ${fastq[0]} \\
    -o ${meta.id}_1.fastq.gz

echo "${meta.md5_1}  ${meta.id}_1.fastq.gz" > ${meta.id}_1.fastq.gz.md5
md5sum -c ${meta.id}_1.fastq.gz.md5

curl \\
    $args \\
    -L ${fastq[1]} \\
    -o ${meta.id}_2.fastq.gz

echo "${meta.md5_2}  ${meta.id}_2.fastq.gz" > ${meta.id}_2.fastq.gz.md5
md5sum -c ${meta.id}_2.fastq.gz.md5

cat <<-END_VERSIONS > versions.yml
"${task.process}":
    curl: \$(echo \$(curl --version | head -n 1 | sed 's/^curl //; s/ .*\$//'))
END_VERSIONS
"""
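Resolve one database id to run-level metadata with the bundled sra_ids_to_runinfo.py script: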
"""
echo $id > id.txt
sra_ids_to_runinfo.py \\
    id.txt \\
    ${id}.runinfo.tsv \\
    $metadata_fields

cat <<-END_VERSIONS > versions.yml
"${task.process}":
    python: \$(python --version | sed 's/Python //g')
END_VERSIONS
"""
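Concatenate the per-id samplesheets and id mappings, keeping the header from the first file only: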
"""
head -n 1 `ls ./samplesheets/* | head -n 1` > samplesheet.csv
for fileid in `ls ./samplesheets/*`; do
    awk 'NR>1' \$fileid >> samplesheet.csv
done

head -n 1 `ls ./mappings/* | head -n 1` > id_mappings.csv
for fileid in `ls ./mappings/*`; do
    awk 'NR>1' \$fileid >> id_mappings.csv
done

cat <<-END_VERSIONS > versions.yml
"${task.process}":
    sed: \$(echo \$(sed --version 2>&1) | sed 's/^.*GNU sed) //; s/ .*\$//')
END_VERSIONS
"""
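Turn the run info TSV into a table of FTP download links with sra_runinfo_to_ftp.py: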
"""
sra_runinfo_to_ftp.py \\
    ${runinfo.join(',')} \\
    ${runinfo.toString().tokenize(".")[0]}.runinfo_ftp.tsv

cat <<-END_VERSIONS > versions.yml
"${task.process}":
    python: \$(python --version | sed 's/Python //g')
END_VERSIONS
"""
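Download a file with synapse get and write out its expected md5sum for provenance: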
"""
synapse \\
    -c $config \\
    get \\
    $args \\
    $meta.id

echo "${meta.md5} \t ${meta.name}" > ${meta.id}.md5

cat <<-END_VERSIONS > versions.yml
"${task.process}":
    synapse: \$(synapse --version | sed -e "s/Synapse Client //g")
END_VERSIONS
"""
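List the file ids under a Synapse directory id with synapse list: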
"""
synapse \\
    -c $config \\
    list \\
    $args \\
    $id \\
    $args2 \\
    > ${id}.list.txt

cat <<-END_VERSIONS > versions.yml
"${task.process}":
    synapse: \$(synapse --version | sed -e "s/Synapse Client //g")
END_VERSIONS
"""
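Concatenate the per-id Synapse samplesheets, keeping the header from the first file only: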
"""
head -n 1 `ls ./samplesheets/* | head -n 1` > samplesheet.csv
for fileid in `ls ./samplesheets/*`; do
    awk 'NR>1' \$fileid >> samplesheet.csv
done

cat <<-END_VERSIONS > versions.yml
"${task.process}":
    sed: \$(echo \$(sed --version 2>&1) | sed 's/^.*GNU sed) //; s/ .*\$//')
END_VERSIONS
"""
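Capture the metadata for a Synapse id with synapse show: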
"""
synapse \\
    -c $config \\
    show \\
    $args \\
    $id \\
    $args2 \\
    > ${id}.metadata.txt

cat <<-END_VERSIONS > versions.yml
"${task.process}":
    synapse: \$(synapse --version | sed -e "s/Synapse Client //g")
END_VERSIONS
"""
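When no direct download link exists, convert the prefetched .sra archive to FastQ with fasterq-dump and compress the output with pigz: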
"""
export NCBI_SETTINGS="\$PWD/${ncbi_settings}"

fasterq-dump \\
    $args \\
    --threads $task.cpus \\
    --outfile $outfile \\
    ${key_file} \\
    ${sra.name}

pigz \\
    $args2 \\
    --no-name \\
    --processes $task.cpus \\
    *.fastq

cat <<-END_VERSIONS > versions.yml
"${task.process}":
    sratools: \$(fasterq-dump --version 2>&1 | grep -Eo '[0-9.]+')
    pigz: \$( pigz --version 2>&1 | sed 's/pigz //g' )
END_VERSIONS
"""


URL: https://nf-co.re/fetchngs
Name: fetchngs
Version: 1.10.0
