Snakemake workflow for generating native-space HCP-MMP segmentations (requires a completed Freesurfer run)
Snakemake workflow for generating the HCP-MMP 180-region parcellation in subject-native volumetric space.
Inputs:
- participants.tsv with target subject IDs
- For each target subject:
  - Freesurfer processed data
NEW: If you have Freesurfer data in sub-{subject}.tar files (for example, to keep file counts low on Compute Canada), you can use the input_tars branch and only need to modify the in_freesurfer_tar config variable (a minimal sketch follows below).
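A rough sketch of that change is below; the tar directory path is purely illustrative, and the exact key and wildcard syntax should be checked against config/config.yml on the input_tars branch:

```bash
# Point in_freesurfer_tar at your tarred Freesurfer outputs. The path is a
# placeholder, and the key is assumed to already exist in config/config.yml
# on the input_tars branch.
sed -i 's|^in_freesurfer_tar:.*|in_freesurfer_tar: /path/to/freesurfer_tars/sub-{subject}.tar|' config/config.yml
```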
Singularity containers required:
- Freesurfer (for mri_convert, mris_convert, mri_info)
- Connectome Workbench
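Because --use-singularity pulls these images on first use, you can optionally pre-pull them. The image URIs below are placeholders only, not the images pinned by this workflow; substitute whatever the config and rules actually reference:

```bash
# Optional pre-pull so the first --use-singularity run does not stall on
# downloads. Both URIs are illustrative placeholders; use the images that
# this workflow's config or rules actually specify.
singularity pull docker://freesurfer/freesurfer:7.4.1
singularity pull docker://brainlife/connectome_workbench:latest
```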
Authors
- Ali Khan @akhanf
Usage
If you use this workflow in a paper, don't forget to give credit to the authors by citing the URL of this (original) repository and, if available, its DOI (see above).
Step 1: Obtain a copy of this workflow
- Create a new GitHub repository using this workflow as a template.
- Clone the newly created repository to your local system, into the place where you want to perform the data analysis.
Step 2: Configure workflow
Configure the workflow according to your needs by editing the files in the config/ folder. Adjust config.yml to configure the workflow execution, and participants.tsv to specify your subjects.
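As a concrete sketch, participants.tsv lists one subject per row. Assuming the standard BIDS-style participant_id column (check the example file shipped in config/ for the exact header), a minimal file could be written like this; the subject IDs are placeholders:

```bash
# Minimal participants.tsv sketch: one header line, then one subject ID per row.
# IDs are placeholders; the expected column name is defined by the example file
# in config/.
cat > config/participants.tsv <<'EOF'
participant_id
sub-001
sub-002
EOF
```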
Step 3: Install Snakemake
Install Snakemake using conda:
conda create -c bioconda -c conda-forge -n snakemake snakemake
For installation details, see the instructions in the Snakemake documentation.
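If the plain conda solve is slow, the Snakemake documentation also suggests installing through mamba; a minimal sketch:

```bash
# Alternative install via mamba for a faster dependency solve;
# this creates the same "snakemake" environment as the conda command above.
conda install -n base -c conda-forge mamba
mamba create -c conda-forge -c bioconda -n snakemake snakemake
```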
Step 4: Execute workflow
Activate the conda environment:
conda activate snakemake
Test your configuration by performing a dry-run via
snakemake --use-singularity -n
Execute the workflow locally via
snakemake --use-singularity --cores $N
using $N cores, or run it in a cluster environment via
snakemake --use-singularity --cluster qsub --jobs 100
or
snakemake --use-singularity --drmaa --jobs 100
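If your cluster runs SLURM and you are not using a profile, an illustrative submission command is shown below; the time and memory values are placeholders, so adjust them to your cluster and to what the rules actually need:

```bash
# Illustrative SLURM submission without a profile; time/memory values are
# placeholders, and {threads} is filled in per rule by Snakemake.
snakemake --use-singularity --jobs 100 \
    --cluster "sbatch --time=1:00:00 --mem=16G --cpus-per-task={threads}"
```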
If you are using Compute Canada, you can use the cc-slurm profile, which submits jobs and takes care of requesting the correct resources per job (including GPUs). Once it is set up with cookiecutter, run:
snakemake --profile cc-slurm
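The one-time profile setup (done before running the command above) looks roughly like the following, assuming the profile lives in the khanlab/cc-slurm repository; check that project's README for the exact command and prompts:

```bash
# One-time cc-slurm profile setup (repository assumed to be khanlab/cc-slurm;
# verify against that project's documentation).
pip install cookiecutter
cookiecutter gh:khanlab/cc-slurm -o ~/.config/snakemake
```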
Or, with neuroglia-helpers, you can get an 8-core, 32GB node and run locally there. First, get a node (default: 8 cores, 32GB, 3-hour limit):
regularInteractive
Then, run:
snakemake --use-singularity --cores 8 --resources mem=32000
See the Snakemake documentation for further details.
Step 5: Investigate results
After successful execution, you can create a self-contained interactive HTML report with all results via:
snakemake --report report.html
This report can, e.g., be forwarded to your collaborators. An example (using some trivial test data) can be seen here.
Step 6: Commit changes
Whenever you change something, don't forget to commit the changes back to your GitHub copy of the repository:
git commit -a
git push
Step 7: Obtain updates from upstream
Whenever you want to synchronize your workflow copy with new developments from upstream, do the following.
- Once, register the upstream repository in your local copy:
  git remote add -f upstream git@github.com:snakemake-workflows/{{cookiecutter.repo_name}}.git
  or
  git remote add -f upstream https://github.com/snakemake-workflows/{{cookiecutter.repo_name}}.git
  if you have not set up SSH keys.
- Update the upstream version: git fetch upstream
- Create a diff with the current version: git diff HEAD upstream/master workflow > upstream-changes.diff
- Investigate the changes: vim upstream-changes.diff
- Apply the modified diff via: git apply upstream-changes.diff
- Carefully check whether you need to update the config files: git diff HEAD upstream/master config. If so, do it manually, and only where necessary, since you would otherwise likely overwrite your settings and samples.
Step 8: Contribute back
In case you have also changed or added steps, please consider contributing them back to the original repository:
- Fork the original repo to a personal or lab account.
- Clone the fork to your local system, to a different place than where you ran your analysis.
- Copy the modified files from your analysis to the clone of your fork, e.g., cp -r workflow path/to/fork. Make sure to not accidentally copy config file contents or sample sheets. Instead, manually update the example config files if necessary.
- Commit and push your changes to your fork.
- Create a pull request against the original repository.
Testing
TODO: create some test datasets
Code Snippets
shell: 'FS_LICENSE={params.license} mris_convert {input} {output} &> {log}'
shell: 'FS_LICENSE={params.license} mri_convert {input} {output} &> {log}'
shell: 'FS_LICENSE={params.license} mri_info {input.t1} --tkr2scanner > {output.tkr2scanner} 2> {log}'
shell: 'wb_command -surface-apply-affine {input.surf} {input.tkr2scanner} {output.surf} &> {log}'
shell: 'wb_command -surface-average {output.midthickness} -surf {input.white} -surf {input.pial} &> {log}'
shell: 'wb_command -surface-resample {input.surf} {input.current_sphere} {input.new_sphere} {params.method} {output.surf} &> {log}'
shell: 'wb_command -label-resample {input.label} {input.current_sphere} {input.new_sphere} {params.method} {output.label} -area-surfs {input.current_surf} {input.new_surf} &> {log}'
shell: 'wb_command -label-to-volume-mapping {input.label} {input.surf} {input.vol_ref} {output.label_vol} -ribbon-constrained {input.white_surf} {input.pial_surf} -greedy &> {log}'
shell: 'wb_command -label-to-volume-mapping {input.label} {input.surf} {input.vol_ref} {output.label_vol} -nearest-vertex {params.nearest_vertex} &> {log}'