Snakemake workflow: Zona Incerta Diffusion Parcellation


This is a Snakemake workflow for performing connectivity-based segmentation of the zona incerta using probabilistic tractography. It requires pre-processed DWI and bedpostx data (e.g. from prepdwi), and makes use of transforms from ANTs buildtemplate on the SNSX32 dataset. Source: https://github.com/akhanf/zona-diffparc (MIT License).

Authors

  • Ali Khan (@akhanf)

Usage

Step 1: Install Snakemake

Install Snakemake using conda:

conda create -c bioconda -c conda-forge -n snakemake snakemake

For installation details, see the instructions in the Snakemake documentation.
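Once the environment is created, activate it and confirm Snakemake is on your PATH:

conda activate snakemake
snakemake --version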

Step 2: Configure workflow

Configure the workflow according to your needs by editing config.yaml and participants.tsv.
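As a rough sketch, the configuration might look like the following. Apart from probtrack/nsamples, which the tractography rule reads as config[probtrack][nsamples], the key names here are illustrative, so check the repository's config.yaml for the actual ones:

participants_tsv: participants.tsv   # illustrative key name
max_k: 4                             # hypothetical upper bound on the number of clusters
probtrack:
  nsamples: 5000                     # read by the probtrackx2_gpu rule

participants.tsv lists the subjects to process, e.g. (column name illustrative):

participant_id
sub-001
sub-002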

Step 3: Dry-run

Test your configuration by performing a dry-run via

snakemake -np
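You can also render the workflow DAG to inspect which jobs would run (requires graphviz for the dot command):

snakemake --dag | dot -Tsvg > dag.svg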

Step 4: Execution on Graham (Compute Canada)

There are a few different ways to execute the workflow:

  1. Execute the workflow locally using an interactive job

  2. Execute the workflow using the cc-slurm profile

Interactive Job

Execute the workflow locally using an interactive job:

salloc --time=3:00:00 --gres=gpu:t4:1 --cpus-per-task=8 --ntasks=1 --mem=32000 --account=YOUR_CC_ACCT srun snakemake --use-singularity --cores 8 --resources gpus=1 mem_mb=32000

Use the cc-slurm profile

The cc-slurm profile sets up default options for running on Compute Canada systems. More info is in the README at https://github.com/khanlab/cc-slurm

If you haven't used it before, deploy the cc-slurm profile using:

cookiecutter gh:khanlab/cc-slurm -o ~/.config/snakemake -f

Note: you must have cookiecutter installed (e.g. pip install cookiecutter).

Then to execute the workflow for all subjects, submitting a job for each rule group, use:

snakemake --profile cc-slurm

Export to Dropbox

To export files to Dropbox, use:

snakemake -s export_dropbox.smk

See the Snakemake documentation for further details.

Step 5: Investigate results

After successful execution, you can create a self-contained interactive HTML report with all results via:

snakemake --report report.html

This report can, e.g., be forwarded to your collaborators.

Advanced

The following recipe provides established best practices for running and extending this workflow in a reproducible way; a command sketch follows the list.

  1. Fork the repo to a personal or lab account.

  2. Clone the fork to the desired working directory for the concrete project/run on your machine.

  3. Create a new branch (the project-branch) within the clone and switch to it. The branch will contain any project-specific modifications (e.g. to configuration, but also to code).

  4. Modify the config and any necessary sheets (and possibly the workflow) as needed.

  5. Commit any changes and push the project-branch to your fork on github.

  6. Run the analysis.

  7. Optional: Merge back any valuable and generalizable changes to the upstream repo via a pull request. This would be greatly appreciated.

  8. Optional: Push results (plots/tables) to the remote branch on your fork.

  9. Optional: Create a self-contained workflow archive for publication along with the paper (snakemake --archive).

  10. Optional: Delete the local clone/workdir to free space.
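A minimal command sketch of steps 1-6, assuming a hypothetical fork under YOUR_GITHUB_USER:

git clone https://github.com/YOUR_GITHUB_USER/zona-diffparc my-zona-project
cd my-zona-project
git checkout -b my-project-branch   # the project-branch
# edit config.yaml, participants.tsv, and the workflow as needed
git commit -am "project-specific configuration"
git push -u origin my-project-branch
snakemake --profile cc-slurm        # run the analysis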

Testing

No test cases yet.

Code Snippets

scripts/save_connmap_template_npz.py:
import nibabel as nib
import numpy as np

# load the seed mask and find the voxels inside it
mask_nib = nib.load(snakemake.input.mask)
mask_vol = mask_nib.get_fdata()
mask_indices = mask_vol > 0

nvoxels = np.count_nonzero(mask_indices)
ntargets = len(snakemake.params.connmap_3d)

# build the (voxels x targets) connectivity matrix, one column per target map
conn = np.zeros((nvoxels, ntargets))
for i, conn_file in enumerate(snakemake.params.connmap_3d):
    vol = nib.load(conn_file).get_fdata()
    conn[:, i] = vol[mask_indices]

# save the matrix along with the mask and affine so labels can be mapped back to voxels
np.savez(snakemake.output.connmap_npz, conn=conn, mask=mask_vol, affine=mask_nib.affine)
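To sanity-check the result interactively, the saved arrays can be loaded back; the filename below is illustrative:

import numpy as np

data = np.load('connmap.npz')   # illustrative path; use the actual output file
print(data['conn'].shape)       # (nvoxels, ntargets)
print(data['mask'].shape)       # 3D mask, for mapping rows back to voxels
print(data['affine'])           # 4x4 affine of the mask image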
scripts/spectral_clustering.py:
import numpy as np
import nibabel as nib
from sklearn.cluster import SpectralClustering

# Define a function for saving niftis
def save_label_nii(labels, mask, affine, out_nifti):
    labels_vol = np.zeros(mask.shape)
    labels_vol[mask > 0] = labels + 1  # add 1 so label 0 is distinct from background
    labels_nib = nib.Nifti1Image(labels_vol, affine)
    nib.save(labels_nib, out_nifti)

data = np.load(snakemake.input.connmap_group_npz)
cluster_range = range(2, snakemake.params.max_k + 1)
out_nii_list = snakemake.output

conn_group = data['conn_group']
mask = data['mask']
affine = data['affine']

# Concat subjects: (subjects, voxels, targets) -> (voxels, targets*subjects)
conn_group_m = np.moveaxis(conn_group, 0, 2)
conn_concat = conn_group_m.reshape([conn_group_m.shape[0], conn_group_m.shape[1] * conn_group_m.shape[2]])

# Run spectral clustering for each k and save the output nifti
for i, k in enumerate(cluster_range):
    clustering = SpectralClustering(n_clusters=k, assign_labels='discretize',
                                    random_state=0, affinity='cosine').fit(conn_concat)
    print(f'i={i}, k={k}, saving {out_nii_list[i]}')
    save_label_nii(clustering.labels_, mask, affine, out_nii_list[i])
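A quick way to verify one of the resulting parcellations, with an illustrative output filename:

import nibabel as nib
import numpy as np

labels = nib.load('cluster_k2.nii.gz').get_fdata()   # illustrative name
print(np.unique(labels))   # expect 0 (background) plus cluster labels 1..k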
From line 45 of master/Snakefile:
shell:
    'fslmaths {input.lh} -max {input.rh} {output} &> {log}'
From line 53 of master/Snakefile:
shell: 'cp -v {input} {output} &> {log}'
From line 67 of master/Snakefile:
shell:
    'antsApplyTransforms -d 3 --interpolation NearestNeighbor -i {input.seed} -o {output} -r {input.ref} -t [{input.affine},1] -t {input.invwarp} &> {log}'
From line 84 of master/Snakefile:
shell:
    'fslmaths {input.dwi} -bin {output.mask} && '
    'mri_convert {output.mask} -vs {params.seed_resolution} {params.seed_resolution} {params.seed_resolution} {output.mask_res} -rt nearest && '
    'reg_resample -flo {input.targets} -res {output.targets_res} -ref {output.mask_res} -NN 0 &> {log}'
From line 98 of master/Snakefile:
shell:
    'reg_resample -flo {input.seed} -res {output.seed_res} -ref {input.mask_res} -NN 0 &> {log}'
shell:
    'mkdir -p {output} && parallel  --jobs {threads} fslmaths {input.targets} -thr {{1}} -uthr {{1}} -bin {{2}} &> {log} ::: {params.target_nums} :::+ {params.target_seg}'
From line 116 of master/Snakefile
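This rule relies on GNU parallel's linked input sources: arguments after ::: and :::+ are paired positionally, so each threshold value {1} is matched with its corresponding output path {2}. A standalone illustration:

parallel echo {1} {2} ::: 1 2 3 :::+ a.nii b.nii c.nii
# prints "1 a.nii", "2 b.nii", "3 c.nii" (one pair per job)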
run:
    # write one target segmentation path per line
    with open(output.target_txt, 'w') as f:
        for s in params.target_seg:
            f.write(f'{s}\n')
From line 128 of master/Snakefile
shell:
    'mkdir -p {output.probtrack_dir} && probtrackx2_gpu --samples={params.bedpost_merged} --mask={input.mask} --seed={input.seed_res} '
    '--targetmasks={input.target_txt} --seedref={input.seed_res} --nsamples={config[probtrack][nsamples]} '
    '--dir={output.probtrack_dir} {params.probtrack_opts} -V 2 &> {log}'
From line 153 of master/Snakefile
shell:
    'mkdir -p {output} && ITK_GLOBAL_DEFAULT_NUMBER_OF_THREADS=2 parallel  --jobs {threads} antsApplyTransforms -d 3 --interpolation Linear -i {{1}} -o {{2}}  -r {input.ref} -t {input.warp} -t {input.affine} &> {log} :::  {params.in_connmap_3d} :::+ {params.out_connmap_3d}' 
From line 177 of master/Snakefile
script: 'scripts/save_connmap_template_npz.py'
From line 191 of master/Snakefile
From line 199 of master/Snakefile:
run:
    import numpy as np

    # load the first file to get the matrix shape
    data = np.load(input['connmap_npz'][0])
    affine = data['affine']
    mask = data['mask']
    conn_shape = data['conn'].shape
    nsubjects = len(input['connmap_npz'])
    conn_group = np.zeros([nsubjects, conn_shape[0], conn_shape[1]])

    # stack each subject's connectivity matrix
    for i, npz in enumerate(input['connmap_npz']):
        data = np.load(npz)
        conn_group[i, :, :] = data['conn']

    # save conn_group, mask and affine
    np.savez(output['connmap_group_npz'], conn_group=conn_group, mask=mask, affine=affine)
script: 'scripts/spectral_clustering.py'
From line 224 of master/Snakefile
