TemplateGen: Flexible BidsApp for Custom Template Generation


TemplateGen is a snakebids-based BidsApp that flexibly combines images across modalities to create custom template spaces. It can output both participant-specific transforms into the custom space and cohort-wide template outputs in a format compatible with TemplateFlow.

This workflow uses greedy instead of ANTs for the sake of efficiency. In practice, the registrations also appear more accurate than those from ants_build_template, though this is likely due to parameter selection rather than an inherent limitation of ANTs. A single pairwise registration is roughly 20-30x faster than with ANTs, making template generation for hundreds of subjects a job that can be completed in around a day with modest resources (32 cores); the 4-core, 16 GB greedy registration jobs each take under 30 minutes.

Installation

TemplateGen has a few installation options suitable for different environments:

Singularity

...

Docker

...

pip

Pip installations are the most flexible and offer the best opportunity for parallelization on cluster environments, but they are also the most involved, as third-party requirements will not be installed automatically. See below for different strategies for dealing with this.

First, be sure that python 3.7 or greater is installed on your system:

python --version

Then start by creating a virtualenv:

python -m venv .venv
source .venv/bin/activate

And install via pip:

pip install templategen

Alternatively, you can get a safe, user-wide installation using pipx (pipx installation instructions can be found here):

pipx install templategen

Third-party requirement options for pip installation

Singularity

With Singularity installed on your system, TemplateGen can take care of the third-party installations by itself. The singularity executable must be available on your $PATH (try singularity --version on the command line), and your computer must have an active internet connection so that the Singularity containers can be downloaded.

This option works well for clusters and compute environments, which typically have singularity already installed.
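As a quick sanity check (a minimal sketch, assuming you invoke snakemake directly as shown in the Usage section below; the dry run itself does not pull containers, which are downloaded when the real run starts):

singularity --version
snakemake --use-singularity -n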

Manual Installation

The required software can also be manually installed. TemplateGen depends on ANTs v2.3.4 and itk-SNAP v4.0. In particular, the following commands must be available on your $PATH (a quick check is sketched after the list):

  • From ANTs:

    • AverageImages

    • MultiplyImages

    • AverageAffineTransformNoRigid

    • antsApplyTransforms

    • ResampleImageBySpacing

  • From itk-SNAP:

    • greedy

    • c3d_affine_tool
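As a rough check (a sketch only; the executable names match the bullets above), you can verify that each command is available on your $PATH:

for cmd in AverageImages MultiplyImages AverageAffineTransformNoRigid antsApplyTransforms ResampleImageBySpacing greedy c3d_affine_tool; do
    command -v "$cmd" > /dev/null || echo "missing: $cmd"
done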

Usage

Running with a cluster profile (e.g. cc-slurm)

This option will create the largest number of jobs (num_iters * num_subjects * num_cohorts), but will maximize parallelization on a cluster. For example, 4 iterations over 100 subjects in 5 cohorts is on the order of 2000 registration jobs.

snakemake --profile cc-slurm

Running with a cluster profile, but grouping registration jobs together in chunks of 8

This will reduce the number of jobs by a factor of 8, grouping eight 4-core registration jobs to fill a 32-core node.

HOWEVER: if you are using the --group-components option to group the registrations in chunks of 8, you MUST run each iteration separately, using the --config run_iter=# option, where # is the iteration to run. This seems to be a bug/limitation of the group-components option when a job has recursively-defined rules (open issue as of Oct 3, 2020: https://github.com/snakemake/snakemake/issues/656).

snakemake --config run_iter=2 --profile cc-slurm --group-components reg=8 composite=100

Note: composite=100 is for composing warps to another reference space (e.g. MNI), combining 100 subjects in a single job (each composition is relatively quick and disk-bound).
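For example, with four iterations configured, the iterations could be submitted one after another (a sketch; each snakemake invocation is assumed to wait for its cluster jobs to finish before returning):

for iter in 1 2 3 4; do snakemake --config run_iter=$iter --profile cc-slurm --group-components reg=8 composite=100; done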

Running as a single job

This is the most frugal with resources (32 cores max), but is simple in that only a single job is spawned. This is the method to use if you are running on a single machine. Four iterations of building a single 1mm template with 100 subjects takes under 24 hours on a 32-core system.

regularSubmit -j Fat snakemake --use-singularity -j32
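Note that regularSubmit here appears to be a cluster job-submission wrapper; on a standalone machine the same run can be launched directly (a sketch, assuming Singularity is available):

snakemake --use-singularity -j32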

Running each cohort in parallel with single jobs and the --nolock flag

This runs each cohort separately with single-node jobs. This is a happy medium for running on a cluster while not needing hundreds (or thousands) of short jobs.

WARNING: make sure your cohorts are mutually exclusive when using this method, as it runs snakemake in parallel on the same directory with the --nolock option. If cohorts are not mutually exclusive, you can still use this method, but only after all the pre-processing (e.g. T2/T1 registration) is completed.

for cohort in young middle1 middle2 old1 old2; do regularSubmit -j Fat snakemake --use-singularity -j32 --nolock --config run_cohort=$cohort; done

Code Snippets

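# average the input images into a single output image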
shell:
    'AverageImages {params.dim} {output} {params.use_n4} {input} &> {log}'
shell:
    'AverageImages {params.dim} {output} {params.use_n4} {input} &> {log}'
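# scale the input image by the gradient step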
shell:
    'MultiplyImages {params.dim} {input} {params.gradient_step} {output} &> {log}' 
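# average the affine transforms, excluding rigid components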
shell:
    'AverageAffineTransformNoRigid {params.dim} {output} {input} &> {log}'
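# apply the inverted affine to the inverse warp field (treated as a vector image)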
shell:
    'antsApplyTransforms {params.dim} -e vector -i {input.invwarp} -o {output} -t [{input.affine},1] -r {input.ref} --verbose 1 &> {log}'
    shell:
        'antsApplyTransforms {params.dim} --float 1 --verbose 1 -i {input.template} -o {output.template} -t [{input.affine},1] '
        ' -t {input.invwarp} -t {input.invwarp} -t {input.invwarp} -t {input.invwarp} -r {input.template} &> {log}' #apply warp 4 times

rule get_final_xfm:
    input:
        template=expand(
            rules.apply_template_update.output['template'],
            iteration=config['num_iters'],
            channel=channels[0],
        ),
        affine=expand(
            rules.reg_to_template.output['affine_xfm_ras'],
            iteration=config['num_iters'],
            allow_missing=True,
        ),
        warp=expand(
            rules.reg_to_template.output['warp'],
            iteration=config['num_iters'],
            allow_missing=True,
        ),
    output:
        bids(
            output_dir/"template",
            datatype="xfm",
            mode="image",
            from_="individual",
            to="{template}",
            suffix="xfm.nii.gz",
            **inputs.subj_wildcards,
        )

    container: config['singularity']['itksnap']
    shell: 'greedy -d 3 -rf {input.template} '
          ' -r {input.warp} {input.affine} '
          ' -rc {output}'
shell: 'greedy -d 3 -rf {input.template} '
      ' -r {input.affine},-1 {input.warp}'
      ' -rc {output}'
run:
    for src, dest in zip(input, output):
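# compose the subject-to-cohort and cohort-to-std transforms into a single subject-to-std warp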
shell: 'greedy -d 3 -rf {input.ref_std} '
      ' -r {input.cohort2std_warp} {input.cohort2std_affine_xfm_ras} '
      '  {input.subj2cohort_warp} {input.subj2cohort_affine_xfm_ras} '
      ' -rc {output.subj2std_warp}'
shell: 'greedy -d 3 -rf {input.ref_subj} -r '
      ' {input.subj2cohort_affine_xfm_ras},-1 '
      ' {input.subj2cohort_invwarp}'
      ' {input.cohort2std_affine_xfm_ras},-1 '
      ' {input.cohort2std_invwarp} '
      ' -rc {output.subj2std_invwarp}'
shell:
    "AverageImages {params.dim} {output} {params.use_n4} {input} &> {log}"
shell: 'cp -v {input} {output} &> {log}'
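# resample the image to the specified voxel spacing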
shell:
    "ResampleImageBySpacing {params.dim} {input} {output} {params.vox_dims}"
shell: "mv {input} {output}"


Created: 1yr ago
Updated: 1yr ago
Maintainers: public
URL: https://github.com/pvandyken/templategen
Name: templategen
Version: 1
Copyright: Public Domain
License: MIT License
