Precision HLA typing from next-generation sequencing data


Introduction

nf-core/hlatyping is a bioinformatics best-practice analysis pipeline for precision HLA typing from next-generation sequencing data. The pipeline performs next-generation sequencing-based Human Leukocyte Antigen (HLA) typing using OptiType, an HLA genotyping algorithm based on integer linear programming. Reads from whole exome/genome/transcriptome sequencing data are mapped against a reference of known MHC class I alleles. To produce accurate 4-digit HLA genotyping predictions, all major and minor HLA-I loci are considered simultaneously to find an allele combination that maximizes the number of explained reads.
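As a rough illustration of the idea (a simplification, not OptiType's exact model), the integer linear program can be written with binary variables x_a (allele a is selected) and y_r (read r is explained):

    \max \sum_r y_r
    \text{s.t.} \quad y_r \le \sum_{a \in A(r)} x_a \quad \forall r, \qquad
    \sum_{a \in \ell} x_a \le 2 \quad \text{for each HLA-I locus } \ell, \qquad
    x_a, y_r \in \{0,1\}

where A(r) is the set of alleles that read r maps to, so a read counts as explained only if at least one allele it maps to is selected, and at most two alleles may be chosen per locus.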

The pipeline is built using Nextflow, a workflow tool to run tasks across multiple compute infrastructures in a very portable manner. It uses Docker/Singularity containers making installation trivial and results highly reproducible. The Nextflow DSL2 implementation of this pipeline uses one container per process which makes it much easier to maintain and update software dependencies. Where possible, these processes have been submitted to and installed from nf-core/modules in order to make them available to all nf-core pipelines, and to everyone within the Nextflow community!

On release, automated continuous integration tests run the pipeline on a full-sized dataset on the AWS cloud infrastructure. This ensures that the pipeline runs on AWS, has sensible resource allocation defaults set to run on real-world datasets, and permits the persistent storage of results to benchmark between pipeline releases and other analysis sources. The results obtained from the full-sized test can be viewed on the nf-core website.

Pipeline summary

  1. Read QC (FastQC)

  2. Generate reference indices (yara)

  3. Map reads to reference (yara)

  4. Run HLA typing (OptiType)

  5. Present QC for raw reads (MultiQC)
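
Conceptually, these five steps reduce to the following commands. This is only a simplified sketch with placeholder file names (sample_*.fastq.gz, hla_reference.fasta, mapped*.bam); the pipeline itself wires the actual inputs, outputs and per-process options, as shown in the code snippets further down.

    # 1. Read QC
    fastqc sample_1.fastq.gz sample_2.fastq.gz
    # 2. Build the yara index of the HLA reference
    yara_indexer hla_reference.fasta -o hla_reference.fasta
    # 3. Map the reads against the reference
    yara_mapper -t 4 -f bam hla_reference.fasta sample_1.fastq.gz sample_2.fastq.gz > mapped.bam
    # (for paired-end data the pipeline splits mapped.bam into per-mate BAMs before typing)
    # 4. Run OptiType on the mapped reads (--dna or --rna, depending on the data)
    OptiTypePipeline.py -i mapped_1.bam mapped_2.bam -c config.ini --dna --outdir results
    # 5. Aggregate all QC reports
    multiqc .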

Quick Start

  1. Install Nextflow (>=21.10.3)

  2. Install any of Docker, Singularity (you can follow this tutorial), Podman, Shifter or Charliecloud for full pipeline reproducibility (you can use Conda both to install Nextflow itself and also to manage software within pipelines; please only use it within pipelines as a last resort; see docs).

  3. Download the pipeline and test it on a minimal dataset with a single command:

    nextflow run nf-core/hlatyping -profile test,YOURPROFILE --outdir <OUTDIR>
    

    Note that some form of configuration will be needed so that Nextflow knows how to fetch the required software. This is usually done in the form of a config profile (YOURPROFILE in the example command above). You can chain multiple config profiles in a comma-separated string.

    • The pipeline comes with config profiles called docker, singularity, podman, shifter, charliecloud and conda which instruct the pipeline to use the named tool for software management. For example, -profile test,docker.

    • Please check nf-core/configs to see if a custom config file to run nf-core pipelines already exists for your Institute. If so, you can simply use -profile <institute> in your command. This will enable either docker or singularity and set the appropriate execution settings for your local compute environment.

    • If you are using singularity, please use the nf-core download command to download images first, before running the pipeline. Setting the NXF_SINGULARITY_CACHEDIR or singularity.cacheDir Nextflow options enables you to store and re-use the images from a central location for future pipeline runs.

    • If you are using conda, it is highly recommended to use the NXF_CONDA_CACHEDIR or conda.cacheDir settings to store the environments in a central location for future pipeline runs.
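
    For example, a shared cache can be set up with environment variables before launching the pipeline (the paths below are placeholders, adjust them to your system):

        export NXF_SINGULARITY_CACHEDIR=/shared/containers   # re-used Singularity images
        export NXF_CONDA_CACHEDIR=/shared/conda-envs          # re-used Conda environments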

  4. Start running your own analysis!

    nextflow run nf-core/hlatyping --input samplesheet.csv --outdir <OUTDIR> -profile <docker/singularity/podman/shifter/charliecloud/conda/institute>
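
    A minimal samplesheet could look like the example below. The column names here are only illustrative, based on the fact that the pipeline tracks a per-sample seq_type (dna or rna, see the OptiType snippet further down); the authoritative format is described in the usage documentation and parameter docs.

        sample,fastq_1,fastq_2,seq_type
        SAMPLE_1,/path/to/SAMPLE_1_R1.fastq.gz,/path/to/SAMPLE_1_R2.fastq.gz,dna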
    

Documentation

The nf-core/hlatyping pipeline comes with documentation about the pipeline usage, parameters and output.

Credits

nf-core/hlatyping was originally written by Christopher Mohr from Medical Data Integration Center and Quantitative Biology Center, Alexander Peltzer from Boehringer Ingelheim, and Sven Fillinger from Quantitative Biology Center.

Contributions and Support

If you would like to contribute to this pipeline, please see the contributing guidelines.

For further information or help, don't hesitate to get in touch on the Slack #hlatyping channel (you can join with this invite).

Citations

If you use nf-core/hlatyping for your analysis, please cite it using the following doi: 10.5281/zenodo.1401039

An extensive list of references for the tools used by the pipeline can be found in the CITATIONS.md file.

You can cite the nf-core publication as follows:

The nf-core framework for community-curated bioinformatics pipelines.

Philip Ewels, Alexander Peltzer, Sven Fillinger, Harshil Patel, Johannes Alneberg, Andreas Wilm, Maxime Ulysse Garcia, Paolo Di Tommaso & Sven Nahnsen.

Nat Biotechnol. 2020 Feb 13. doi: 10.1038/s41587-020-0439-x.

Code Snippets

"""
# Inspect up to the first 1000 alignments: if any record carries the "read paired" flag (-f 1),
# the input BAM is treated as paired-end, otherwise as single-end.
if [ \$({ samtools view -H $input -@$task.cpus ; samtools view $input -@$task.cpus | head -n1000; } | samtools view -c -f 1  -@$task.cpus ) -gt 0 ]; then
    echo false > is_singleend.txt
else
    echo true > is_singleend.txt
fi

cat <<-END_VERSIONS > versions.yml
"${task.process}":
    samtools: \$(echo \$(samtools --version 2>&1) | sed 's/^.*samtools //; s/Using.*\$//')
END_VERSIONS
"""
"""
check_samplesheet.py \\
    $samplesheet \\
    samplesheet.valid.csv

cat <<-END_VERSIONS > versions.yml
"${task.process}":
    python: \$(python --version | sed 's/Python //g')
END_VERSIONS
"""
"""
[ ! -f  ${prefix}.fastq.gz ] && ln -s $reads ${prefix}.fastq.gz
fastqc $args --threads $task.cpus ${prefix}.fastq.gz

cat <<-END_VERSIONS > versions.yml
"${task.process}":
    fastqc: \$( fastqc --version | sed -e "s/FastQC v//g" )
END_VERSIONS
"""
"""
[ ! -f  ${prefix}_1.fastq.gz ] && ln -s ${reads[0]} ${prefix}_1.fastq.gz
[ ! -f  ${prefix}_2.fastq.gz ] && ln -s ${reads[1]} ${prefix}_2.fastq.gz
fastqc $args --threads $task.cpus ${prefix}_1.fastq.gz ${prefix}_2.fastq.gz

cat <<-END_VERSIONS > versions.yml
"${task.process}":
    fastqc: \$( fastqc --version | sed -e "s/FastQC v//g" )
END_VERSIONS
"""
"""
touch ${prefix}.html
touch ${prefix}.zip

cat <<-END_VERSIONS > versions.yml
"${task.process}":
    fastqc: \$( fastqc --version | sed -e "s/FastQC v//g" )
END_VERSIONS
"""
"""
gunzip \\
    -f \\
    $args \\
    $archive

cat <<-END_VERSIONS > versions.yml
"${task.process}":
    gunzip: \$(echo \$(gunzip --version 2>&1) | sed 's/^.*(gzip) //; s/ Copyright.*\$//')
END_VERSIONS
"""
Nextflow, from line 23 of gunzip/main.nf
"""
touch $gunzip
cat <<-END_VERSIONS > versions.yml
"${task.process}":
    gunzip: \$(echo \$(gunzip --version 2>&1) | sed 's/^.*(gzip) //; s/ Copyright.*\$//')
END_VERSIONS
"""
Nextflow, from line 37 of gunzip/main.nf
"""
multiqc \\
    --force \\
    $args \\
    $config \\
    $extra_config \\
    .

cat <<-END_VERSIONS > versions.yml
"${task.process}":
    multiqc: \$( multiqc --version | sed -e "s/multiqc, version //g" )
END_VERSIONS
"""
"""
touch multiqc_data
touch multiqc_plots
touch multiqc_report.html

cat <<-END_VERSIONS > versions.yml
"${task.process}":
    multiqc: \$( multiqc --version | sed -e "s/multiqc, version //g" )
END_VERSIONS
"""
"""
# Create a config for OptiType on a per sample basis with task.ext.args2

#Doing it old school now
echo "[mapping]" > config.ini
echo "razers3=razers3" >> config.ini
echo "threads=$task.cpus" >> config.ini
echo "[ilp]" >> config.ini
echo "$args2" >> config.ini
echo "threads=1" >> config.ini
echo "[behavior]" >> config.ini
echo "deletebam=true" >> config.ini
echo "unpaired_weight=0" >> config.ini
echo "use_discordant=false" >> config.ini

# Run the actual OptiType typing with args
OptiTypePipeline.py -i ${bam} -c config.ini --${meta.seq_type} $args --prefix $prefix --outdir $prefix

#Couldn't find a nicer way of doing this
cat <<-END_VERSIONS > versions.yml
"${task.process}":
    optitype: \$(cat \$(which OptiTypePipeline.py) | grep -e "Version:" | sed -e "s/Version: //g")
END_VERSIONS
"""
"""
# Group alignments by read name with collate, then stream them into samtools fastq,
# which writes mate 1/2 reads to -1/-2, reads with other flags to -0 and singletons to -s.
samtools collate \\
    $args \\
    --threads $task.cpus \\
    -O \\
    $input \\
    . |
samtools fastq \\
    $args2 \\
    --threads $task.cpus \\
    -1 ${prefix}_1.fq.gz \\
    -2 ${prefix}_2.fq.gz \\
    -0 ${prefix}_other.fq.gz \\
    -s ${prefix}_singleton.fq.gz

cat <<-END_VERSIONS > versions.yml
"${task.process}":
    samtools: \$(echo \$(samtools --version 2>&1) | sed 's/^.*samtools //; s/Using.*\$//')
END_VERSIONS
"""
"""
samtools \\
    view \\
    --threads ${task.cpus-1} \\
    ${reference} \\
    ${readnames} \\
    $args \\
    -o ${prefix}.${file_type} \\
    $input \\
    $args2

cat <<-END_VERSIONS > versions.yml
"${task.process}":
    samtools: \$(echo \$(samtools --version 2>&1) | sed 's/^.*samtools //; s/Using.*\$//')
END_VERSIONS
"""
"""
touch ${prefix}.bam
touch ${prefix}.cram

cat <<-END_VERSIONS > versions.yml
"${task.process}":
    samtools: \$(echo \$(samtools --version 2>&1) | sed 's/^.*samtools //; s/Using.*\$//')
END_VERSIONS
"""
Nextflow, from line 57 of view/main.nf
"""
yara_indexer \\
    $fasta \\
    -o ${fasta}

cat <<-END_VERSIONS > versions.yml
"${task.process}":
    yara: \$(echo \$(yara_indexer --version 2>&1) | sed 's/^.*yara_indexer version: //; s/ .*\$//')
END_VERSIONS
"""
"""
# Map the reads with yara, keep only mapped reads (-F4), then sort and index the result
yara_mapper \\
    $args \\
    -t $task.cpus \\
    -f bam \\
    ${index_prefix} \\
    $reads | samtools view -@ $task.cpus -hb -F4 | samtools sort -@ $task.cpus > ${prefix}.mapped.bam

samtools index -@ $task.cpus ${prefix}.mapped.bam

cat <<-END_VERSIONS > versions.yml
"${task.process}":
    yara: \$(echo \$(yara_mapper --version 2>&1) | sed 's/^.*yara_mapper version: //; s/ .*\$//')
    samtools: \$(echo \$(samtools --version 2>&1) | sed 's/^.*samtools //; s/Using.*\$//')
END_VERSIONS
"""
"""
# Map both mates with yara into a single BAM
yara_mapper \\
    $args \\
    -t $task.cpus \\
    -f bam \\
    ${index_prefix} \\
    ${reads[0]} \\
    ${reads[1]} > output.bam

# Drop unmapped reads (-F 4) and split the mapped reads by mate:
# -f 0x40 selects first-in-pair, -f 0x80 second-in-pair
samtools view -@ $task.cpus -hF 4 -f 0x40 -b output.bam | samtools sort -@ $task.cpus > ${prefix}_1.mapped.bam
samtools view -@ $task.cpus -hF 4 -f 0x80 -b output.bam | samtools sort -@ $task.cpus > ${prefix}_2.mapped.bam

samtools index -@ $task.cpus ${prefix}_1.mapped.bam
samtools index -@ $task.cpus ${prefix}_2.mapped.bam

cat <<-END_VERSIONS > versions.yml
"${task.process}":
    yara: \$(echo \$(yara_mapper --version 2>&1) | sed 's/^.*yara_mapper version: //; s/ .*\$//')
    samtools: \$(echo \$(samtools --version 2>&1) | sed 's/^.*samtools //; s/Using.*\$//')
END_VERSIONS
"""



URL: https://nf-co.re/hlatyping
Name: hlatyping
Version: 2.0.0