Analysis pipeline for the identification of viral integration events in genomes using a chimeric read approach.


Introduction

nf-core/viralintegration is a bioinformatics best-practice analysis pipeline for the identification of viral integration events in genomes using a chimeric read approach. It was initially based on the CTAT-VirusIntegrationFinder.

The pipeline is built using Nextflow, a workflow tool to run tasks across multiple compute infrastructures in a very portable manner. It uses Docker/Singularity containers, making installation trivial and results highly reproducible. The Nextflow DSL2 implementation of this pipeline uses one container per process, which makes it much easier to maintain and update software dependencies. Where possible, these processes have been submitted to and installed from nf-core/modules in order to make them available to all nf-core pipelines, and to everyone within the Nextflow community!
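
As a minimal sketch of that pattern (the process name, container tag, and command here are illustrative, not taken from this pipeline), each DSL2 process pins its own container alongside its inputs, outputs, and script, so updating a tool means changing a single tag:

process EXAMPLE_QC {
    container 'quay.io/biocontainers/fastqc:0.11.9--0'

    input:
    tuple val(meta), path(reads)

    output:
    path "*.html", emit: html

    script:
    """
    fastqc ${reads}
    """
}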

On release, automated continuous integration tests run the pipeline on a full-sized dataset on the AWS cloud infrastructure. This ensures that the pipeline runs on AWS, has sensible resource allocation defaults set to run on real-world datasets, and permits the persistent storage of results to benchmark between pipeline releases and other analysis sources. The results obtained from the full-sized test can be viewed on the nf-core website.

Pipeline summary

  1. Input Check

    • Input paths to sample FASTQ files in samplesheet.csv

    • Check that sample meets requirements (samplesheet_check)

  2. Read QC (FastQC)

  3. Align reads to human genome

    • Generate index and perform alignment (STAR)

  4. Quality trimming for unaligned reads

    • Quality and adapter trimming (Trimmomatic)

    • Remove polyAs from reads (PolyAStripper)

  5. Identify chimeric reads

    • Combine human and virus FASTAs (cat_fasta)

    • Generate index and perform alignment to combined human + viral reference (STAR)

    • Sort and index alignments (SAMtools)

    • Determine potential insertion site candidates and write an abridged output file (insertion_site_candidates, abridged_TSV)

  6. Virus Report outputs:

    • Viral read counts as a TSV table and PNG plot

    • Preliminary genome-wide abundance plot

    • BAM and BAI files for reads detected at potential viral insertion sites

    • Web-based interactive genome viewer for virus infection evidence (VirusDetect.igvjs.html)

  7. Verify chimeric reads

    • Create chimeric FASTA and GTF extracts (extract_chimeric_genomic_targets)

    • Generate index and perform alignment to verify chimeric reads (STAR)

    • Sort and index validated alignments (SAMtools)

    • Remove duplicate alignments (remove_duplicates)

    • Generate evidence counts for chimeric reads (chimeric_contig_evidence_analyzer)

  8. Summary Report outputs:

    • Refined genome-wide abundance plot (PNG)

    • Insertion site candidates in tab-delimited format with gene annotations (vif.refined.wRefGeneAnnots.tsv)

    • Web-based interactive genome viewer for virus insertion sites (vif.html)

  9. Present quality checking and visualization for raw reads, adapter trimming, and STAR alignments (MultiQC)

Usage

Note: If you are new to Nextflow and nf-core, please refer to this page on how to set up Nextflow. Make sure to test your setup with -profile test before running the workflow on actual data.
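
A quick functional test with Docker, for example (the --outdir value is arbitrary):

nextflow run nf-core/viralintegration -profile test,docker --outdir test_results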

First, prepare a samplesheet with your input data that looks as follows:

samplesheet.csv :

sample,fastq_1,fastq_2
CONTROL_REP1,AEG588A1_S1_L002_R1_001.fastq.gz,AEG588A1_S1_L002_R2_001.fastq.gz

Each row represents a FASTQ file (single-end) or a pair of FASTQ files (paired-end).
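
For single-end data, leave the fastq_2 column empty, following the usual nf-core samplesheet convention (the sample and file names below are hypothetical):

sample,fastq_1,fastq_2
TREATMENT_REP1,AEG588A4_S4_L003_R1_001.fastq.gz,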

Now, you can run the pipeline using:

nextflow run nf-core/viralintegration \
 -profile <docker/singularity/.../institute> \
 --input samplesheet.csv \
 --outdir <OUTDIR>

Warning: Please provide pipeline parameters via the CLI or the Nextflow -params-file option. Custom config files, including those provided by the -c Nextflow option, can be used to provide any configuration except for parameters; see the docs.
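
For example, parameters can be supplied in a YAML file instead of on the CLI (--input and --outdir are the pipeline's own parameters; the file name params.yaml is arbitrary):

params.yaml:

input: 'samplesheet.csv'
outdir: 'results'

nextflow run nf-core/viralintegration -profile docker -params-file params.yaml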

For more details and further functionality, please refer to the usage documentation and the parameter documentation.

A fuller example, specifying a reference genome and a container profile:

nextflow run nf-core/viralintegration --input samplesheet.csv --outdir <OUTDIR> --genome GRCh37 -profile <docker/singularity/podman/shifter/charliecloud/conda/institute>

Pipeline output

To see the results of an example test run with a full-sized dataset, refer to the results tab on the nf-core website pipeline page. For more details about the output files and reports, please refer to the output documentation.

Credits

nf-core/viralintegration was originally written by Alyssa Briggs (@alyssa-ab) and Edmund Miller (@Emiller88) from The Functional Genomics Laboratory at The University of Texas at Dallas.

We thank the following people for their extensive assistance in the development of this pipeline:

Contributions and Support

If you would like to contribute to this pipeline, please see the contributing guidelines .

For further information or help, don't hesitate to get in touch on the Slack #viralintegration channel (you can join with this invite).

Citations

If you use nf-core/viralintegration for your analysis, please cite it using the following DOI: 10.5281/zenodo.7783480

An extensive list of references for the tools used by the pipeline can be found in the CITATIONS.md file.

You can cite the nf-core publication as follows:

The nf-core framework for community-curated bioinformatics pipelines.

Philip Ewels, Alexander Peltzer, Sven Fillinger, Harshil Patel, Johannes Alneberg, Andreas Wilm, Maxime Ulysse Garcia, Paolo Di Tommaso & Sven Nahnsen.

Nat Biotechnol. 2020 Feb 13. doi: 10.1038/s41587-020-0439-x.

Code Snippets


Insertion site candidate filtering and abridged TSV output (source lines 22-37):
"""
#!/usr/bin/env python

import pandas as pd
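# note: ${params.min_reads}, $full_tsv and ${prefix} below are interpolated by
# Nextflow before this script runs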
min_reads = ${params.min_reads}

# write abridged tsv
df = pd.read_csv("$full_tsv", sep="\\t")
df.drop('readnames', axis=1).to_csv("${prefix}.full.abridged.tsv", sep="\\t", index=False)

#df = df[ (df.hits <= max_hits) & (df.total >= min_reads)]
df = df[ df.total >= min_reads ]

df.to_csv("${prefix}.filtered.tsv", sep="\\t", index=False)
df.drop('readnames', axis=1).to_csv("${prefix}.filtered.abridged.tsv", sep="\\t", index=False)
"""

Concatenation of the host and viral reference FASTAs, cat_fasta (source lines 19-26):
"""
cat $ref_genome_fasta $virus_db_fasta > ${ref_genome_fasta.simpleName}_plus_viraldb.fasta

cat <<-END_VERSIONS > versions.yml
"${task.process}":
    cat: \$(echo \$(cat --version 2>&1) | sed 's/^.*coreutils) //; s/ .*\$//')
END_VERSIONS
"""

Chimeric contig evidence analysis (source lines 34-52):
"""
chimeric_contig_evidence_analyzer.py \
    --patch_db_bam ${bam} \
    --patch_db_gtf ${gtf} \
    --output_prefix ${prefix}

samtools index ${prefix}.evidence.bam

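# write a flag file recording whether any candidate insertions were validated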
if (( ${num_insertions} > 1 )); then
    echo "true" > ${insertions_validated_file}
else
    echo "false" > ${insertions_validated_file}
fi

cat <<-END_VERSIONS > versions.yml
"${task.process}":
    python: \$(python --version | sed 's/Python //g')
END_VERSIONS
"""

Extraction of chimeric genomic targets (source lines 26-41):
"""
extract_chimeric_genomic_targets.py \\
    --fasta ${ref_genome_fasta} \\
    --patch_db_fasta ${viral_fasta} \\
    --output_prefix ${prefix} \\
    --chim_events ${insertion_site_candidates_abridged} \\
    ${pad_region_length}


if [ -s ${prefix}.fasta ]; then echo "true" > has_results; else echo "false" > has_results; fi

cat <<-END_VERSIONS > versions.yml
"${task.process}":
    python: \$(python --version | sed 's/Python //g')
END_VERSIONS
"""

Insertion site candidate identification (source lines 30-79):
"""
pre_filter_non_human_virus_chimeric_alignments.py  \\
    --chimJ ${chimeric_junction} \\
    --viral_db_fasta ${viral_fasta} \\
    --output human_virus_chimJ.tsv

chimJ_to_virus_insertion_candidate_sites.py \\
    --chimJ human_virus_chimJ.tsv \\
    --viral_db_fasta ${viral_fasta} \\
    --max_multi_read_alignments ${params.max_hits} \\
    --output_prefix ${prefix}.tmp \\
    ${remove_duplicates}

# extract the chimeric read alignments:
extract_prelim_chimeric_genome_read_alignments.py \\
    --star_bam ${bam} \\
    --vif_full_tsv ${prefix}.tmp.full.tsv \\
    --output_bam ${prefix}.genome_chimeric_evidence.bam

# organize insertion candidates by virus chimeric breakends
greedily_assign_multimapping_reads_among_insertions.py \\
    --init_full_tsv ${prefix}.tmp.full.tsv \\
    --include_readnames \\
    > ${prefix}.tmp.full.virus_breakend_grouped.tsv

# add evidence read stats
incorporate_read_alignment_stats.py \\
    --supp_reads_bam ${prefix}.genome_chimeric_evidence.bam \\
    --vif_full_tsv ${prefix}.tmp.full.virus_breakend_grouped.tsv \\
    --output ${prefix}.full.w_read_stats.tsv

# add seq entropy around breakpoints
incorporate_breakpoint_entropy_n_splice_info.py \\
    --vif_tsv  ${prefix}.full.w_read_stats.tsv \\
    --ref_genome_fasta ${ref_genome_fasta} \\
    --viral_genome_fasta ${viral_fasta} \\
    --output ${prefix}.full.w_brkpt_seq_entropy.tsv

# identify additional primary targets to pursue (set is_primary='Maybe')
revise_primary_target_list_via_brkpt_homologies.py \\
    --vif_tsv ${prefix}.full.w_brkpt_seq_entropy.tsv \\
    > ${prefix}.full.tsv

rm ${prefix}.tmp.full.tsv

cat <<-END_VERSIONS > versions.yml
"${task.process}":
    python: \$(python --version | sed 's/Python //g')
END_VERSIONS
"""

PolyA stripping of unaligned reads (source lines 20-32):
"""
fastq_polyA_stripper.py \\
    --out_prefix ${meta.id} \\
    --left_fq ${reads[0]} \\
    --right_fq ${reads[1]}

pigz *.polyA-trimmed.fastq

cat <<-END_VERSIONS > versions.yml
"${task.process}":
    python: \$(python --version | sed 's/Python //g')
END_VERSIONS
"""

Duplicate alignment removal (source lines 21-33):
"""
bam_mark_duplicates.py \
    -i ${input_bam} \
    -o ${prefix}.dedup.bam \
    -r

samtools index ${prefix}.dedup.bam

cat <<-END_VERSIONS > versions.yml
"${task.process}":
    python: \$(python --version | sed 's/Python //g')
END_VERSIONS
"""

Samplesheet check (source lines 21-30):
"""
check_samplesheet.py \\
    $samplesheet \\
    samplesheet.valid.csv

cat <<-END_VERSIONS > versions.yml
"${task.process}":
    python: \$(python --version | sed 's/Python //g')
END_VERSIONS
"""

STAR alignment (source lines 43-69):
"""
STAR \\
    --genomeDir $index \\
    --readFilesIn $reads  \\
    --runThreadN $task.cpus \\
    --outFileNamePrefix $prefix. \\
    --genomeFastaFiles $fasta \\
    $out_sam_type \\
    $seq_center \\
    $args

$mv_unsorted_bam

if [ -f ${prefix}.Unmapped.out.mate1 ]; then
    mv ${prefix}.Unmapped.out.mate1 ${prefix}.unmapped_1.fastq
    gzip ${prefix}.unmapped_1.fastq
fi
if [ -f ${prefix}.Unmapped.out.mate2 ]; then
    mv ${prefix}.Unmapped.out.mate2 ${prefix}.unmapped_2.fastq
    gzip ${prefix}.unmapped_2.fastq
fi

cat <<-END_VERSIONS > versions.yml
"${task.process}":
    star: \$(STAR --version | sed -e "s/STAR_//g")
END_VERSIONS
"""

Summary report generation (source lines 44-111):
"""
refine_VIF_output.R \\
    --prelim_counts ${init_counts} \\
    --vif_counts ${vif_counts} \\
    --output ${prefix}.prelim.refined.tsv

examine_flanking_uniq_kmer_composition.py \\
    --vif_tsv ${prefix}.prelim.refined.tsv \\
    --min_frac_uniq ${min_flank_frac_uniq} \\
    --output ${prefix}.refined.tsv

distill_to_primary_target_list_via_brkpt_homologies.py \\
    --vif_tsv ${prefix}.refined.tsv \\
    > ${prefix}.refined.distilled.tsv

make_VIF_genome_abundance_plot.R \\
    --vif_report ${prefix}.refined.tsv \\
    --title "Genome Wide Abundance" \\
    --output_png ${prefix}.genome_plot.png

find_closest.py \\
    -i ${prefix}.refined.tsv \\
    -o ${prefix}.refined.wRefGeneAnnots.tsv \\
    --gtf ${gtf}

if [[ -e ${prefix}.refined.wRefGeneAnnots.tsv ]]; then

    create_insertion_site_inspector_js.py \\
        --VIF_summary_tsv ${prefix}.refined.wRefGeneAnnots.tsv \\
        --json_outfile ${prefix}.igv.json

    # make bed for igvjs
    region_gtf_to_bed.py \\
        ${chim_targets_gtf} \\
        > ${prefix}.bed

    # prep for making the report
    # HACK Let's not hard code this in the future
    /usr/local/src/CTAT-VirusIntegrationFinder/util/bamsifter/bamsifter \\
        -c ${max_coverage} \\
        -o ${prefix}.reads.bam \\
        ${alignment_bam}

    # IGV reports expects to find, __PREFIX__.fa, __PREFIX__.bed, __PREFIX__.reads.bam
    ln -sf ${chim_targets_fasta} ${prefix}.fa

    make_VIF_igvjs_html.py \\
        --html_template $igvjs_VIF \\
        --fusions_json ${prefix}.igv.json \\
        --input_file_prefix ${prefix} \\
        --html_output ${prefix}.html

    # generate the final report
    add_to_html.py \\
        --html ${prefix}.html \\
        --out ${prefix}.html \\
        --image ${prefix}.genome_plot.png \\
        --image ${genome_abundance_plot} \\
        --image ${read_counts_image} \\
        --image ${read_counts_log_image}

fi

cat <<-END_VERSIONS > versions.yml
"${task.process}":
    python: \$(python --version | sed 's/Python //g')
END_VERSIONS
"""

Virus report generation (source lines 34-105):
"""
make_VIF_genome_abundance_plot.R \\
    --vif_report ${insertion_site_candidates} \\
    --title "Preliminary Genome Wide Abundance" \\
    --output_png ${prefix}.init.genome_plot.png

# restrict bam to only viruses of interest
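# copy the Nextflow-interpolated BAM path into a shell variable so it can be reassigned below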
bam=${bam}
samtools faidx ${viral_fasta}
awk '{printf("%s\t0\t%s\\n",\$1,\$2);}' ${viral_fasta}.fai  > viruses.bed
samtools view -b -L viruses.bed \$bam -o ${prefix}.igvjs.bam
samtools sort -o ${prefix}.sorted.igvjs.bam -T $prefix ${prefix}.igvjs.bam
bam="${prefix}.sorted.igvjs.bam"
samtools index \$bam

# clean up the bam, restrict to proper pairs and non-supplemental alignments
restrict_bam_to_proper_aligns.py \$bam \${bam}.clean.bam
mv \${bam}.clean.bam \${bam}
mv \${bam}.clean.bam.bai \${bam}.bai


if [ "${remove_duplicates}" == "true" ]; then
    bam_mark_duplicates.py -i ${bam} -o dups.removed.bam -r
    mv dups.removed.bam  ${bam}
    samtools index ${bam}
fi

# generates read_counts_summary and images
plot_top_virus_coverage.R \\
    --vif_report ${insertion_site_candidates}  \\
    --virus_fai ${viral_fasta}.fai \\
    --bam \${bam} \\
    --output_prefix ${prefix}

if [[ -s "${prefix}.virus_read_counts_summary.tsv" ]] ; then
    # make bed for igvjs
    create_igvjs_virus_bed.py \\
        --summary ${prefix}.virus_read_counts_summary.tsv \\
        --output_prefix ${prefix} \\
        ${num_top_viruses}

    create_insertion_site_inspector_js.py \\
        --VIF_summary_tsv ${prefix}.igvjs.table.tsv \\
        --json_outfile ${prefix}.igvjs.json

    # prep for making the report
    # HACK Let's not hard code this in the future
    /usr/local/src/CTAT-VirusIntegrationFinder/util/bamsifter/bamsifter \\
        -c 100 \\
        -o ${prefix}.igvjs.reads.bam \\
        \${bam}

    # IGV reports expects to find, __PREFIX__.fa, __PREFIX__.bed, __PREFIX__.reads.bam
    #ln -sf ${viral_fasta} ${prefix}.virus.fa
    create_igvjs_virus_fa.py \\
        ${prefix}.igvjs.bed \\
        ${viral_fasta}  \\
        ${prefix}.igvjs.fa

    # generate the html
    make_VIF_igvjs_html.py \\
            --html_template $igvjs_VIF \\
            --fusions_json ${prefix}.igvjs.json \\
            --input_file_prefix ${prefix}.igvjs \\
            --html_output ${prefix}.igvjs.html
fi

cat <<-END_VERSIONS > versions.yml
"${task.process}":
    python: \$(python --version | sed 's/Python //g')
END_VERSIONS
"""

FastQC (source lines 28-38):
"""
printf "%s %s\\n" $rename_to | while read old_name new_name; do
    [ -f "\${new_name}" ] || ln -s \$old_name \$new_name
done
fastqc $args --threads $task.cpus $renamed_files

cat <<-END_VERSIONS > versions.yml
"${task.process}":
    fastqc: \$( fastqc --version | sed -e "s/FastQC v//g" )
END_VERSIONS
"""

FastQC stub (source lines 42-50):
"""
touch ${prefix}.html
touch ${prefix}.zip

cat <<-END_VERSIONS > versions.yml
"${task.process}":
    fastqc: \$( fastqc --version | sed -e "s/FastQC v//g" )
END_VERSIONS
"""

MultiQC (source lines 28-40):
"""
multiqc \\
    --force \\
    $args \\
    $config \\
    $extra_config \\
    .

cat <<-END_VERSIONS > versions.yml
"${task.process}":
    multiqc: \$( multiqc --version | sed -e "s/multiqc, version //g" )
END_VERSIONS
"""

MultiQC stub (source lines 43-52):
"""
touch multiqc_data
touch multiqc_plots
touch multiqc_report.html

cat <<-END_VERSIONS > versions.yml
"${task.process}":
    multiqc: \$( multiqc --version | sed -e "s/multiqc, version //g" )
END_VERSIONS
"""

SAMtools index (lines 24-35 of index/main.nf):
"""
samtools \\
    index \\
    -@ ${task.cpus-1} \\
    $args \\
    $input

cat <<-END_VERSIONS > versions.yml
"${task.process}":
    samtools: \$(echo \$(samtools --version 2>&1) | sed 's/^.*samtools //; s/Using.*\$//')
END_VERSIONS
"""

SAMtools index stub (lines 38-47 of index/main.nf):
"""
touch ${input}.bai
touch ${input}.crai
touch ${input}.csi

cat <<-END_VERSIONS > versions.yml
"${task.process}":
    samtools: \$(echo \$(samtools --version 2>&1) | sed 's/^.*samtools //; s/Using.*\$//')
END_VERSIONS
"""

SAMtools sort (lines 25-31 of sort/main.nf):
"""
samtools sort $args -@ $task.cpus -o ${prefix}.bam -T $prefix $bam
cat <<-END_VERSIONS > versions.yml
"${task.process}":
    samtools: \$(echo \$(samtools --version 2>&1) | sed 's/^.*samtools //; s/Using.*\$//')
END_VERSIONS
"""

SAMtools sort stub (lines 35-42 of sort/main.nf):
"""
touch ${prefix}.bam

cat <<-END_VERSIONS > versions.yml
"${task.process}":
    samtools: \$(echo \$(samtools --version 2>&1) | sed 's/^.*samtools //; s/Using.*\$//')
END_VERSIONS
"""

STAR alignment (source lines 45-68):
"""
STAR \\
    --genomeDir $index \\
    --readFilesIn $reads  \\
    --runThreadN $task.cpus \\
    --outFileNamePrefix $prefix. \\
    $out_sam_type \\
    $ignore_gtf \\
    $seq_center \\
    $args
$mv_unsorted_bam
if [ -f ${prefix}.Unmapped.out.mate1 ]; then
    mv ${prefix}.Unmapped.out.mate1 ${prefix}.unmapped_1.fastq
    gzip ${prefix}.unmapped_1.fastq
fi
if [ -f ${prefix}.Unmapped.out.mate2 ]; then
    mv ${prefix}.Unmapped.out.mate2 ${prefix}.unmapped_2.fastq
    gzip ${prefix}.unmapped_2.fastq
fi
cat <<-END_VERSIONS > versions.yml
"${task.process}":
    star: \$(STAR --version | sed -e "s/STAR_//g")
END_VERSIONS
"""

STAR genome index generation (source lines 27-43):
"""
mkdir star
STAR \\
    --runMode genomeGenerate \\
    --genomeDir star/ \\
    --genomeFastaFiles $fasta \\
    --sjdbGTFfile $gtf \\
    --runThreadN $task.cpus \\
    $memory \\
    $args
cat <<-END_VERSIONS > versions.yml
"${task.process}":
    star: \$(STAR --version | sed -e "s/STAR_//g")
    samtools: \$(echo \$(samtools --version 2>&1) | sed 's/^.*samtools //; s/Using.*\$//')
    gawk: \$(echo \$(gawk --version 2>&1) | sed 's/^.*GNU Awk //; s/, .*\$//')
END_VERSIONS
"""

STAR genome index generation with computed genomeSAindexNbases (source lines 45-64):
"""
samtools faidx $fasta
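# per the STAR manual, genomeSAindexNbases = min(14, log2(genome length)/2 - 1) for small genomes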
NUM_BASES=`gawk '{sum = sum + \$2}END{if ((log(sum)/log(2))/2 - 1 > 14) {printf "%.0f", 14} else {printf "%.0f", (log(sum)/log(2))/2 - 1}}' ${fasta}.fai`
mkdir star
STAR \\
    --runMode genomeGenerate \\
    --genomeDir star/ \\
    --genomeFastaFiles $fasta \\
    --sjdbGTFfile $gtf \\
    --runThreadN $task.cpus \\
    --genomeSAindexNbases \$NUM_BASES \\
    $memory \\
    $args
cat <<-END_VERSIONS > versions.yml
"${task.process}":
    star: \$(STAR --version | sed -e "s/STAR_//g")
    samtools: \$(echo \$(samtools --version 2>&1) | sed 's/^.*samtools //; s/Using.*\$//')
    gawk: \$(echo \$(gawk --version 2>&1) | sed 's/^.*GNU Awk //; s/, .*\$//')
END_VERSIONS
"""

Trimmomatic quality and adapter trimming (source lines 32-47):
"""
trimmomatic \\
    $trimmed \\
    -threads $task.cpus \\
    -trimlog ${prefix}.trimmomatic.log \\
    $reads \\
    $output \\
    $qual_trim \\
    $args \\
    2> ${prefix}.trim_out.log

cat <<-END_VERSIONS > versions.yml
"${task.process}":
    trimmomatic: \$(trimmomatic -version)
END_VERSIONS
"""