mir-seek 🔬
An awesome microRNA-sequencing pipeline
Overview
Welcome to mir-seek! Before getting started, we highly recommend reading through mir-seek's documentation.
The `./mir-seek` pipeline is composed of several inter-related sub commands to set up and run the pipeline across different systems. Each of the available sub commands performs a different function (a quick usage sketch follows this list):

- `mir-seek run`: Run the mir-seek pipeline with your input files.
- `mir-seek unlock`: Unlock a previous run's output directory.
- `mir-seek install`: Download reference files locally.
- `mir-seek cache`: Cache software containers locally.
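As a quick orientation, each sub command prints its own usage and options; a typical first session might look like the sketch below (only `-h` calls are shown, since the exact options vary by sub command):

```bash
# Top-level usage: lists all available sub commands
./mir-seek -h

# Per-subcommand usage, in the order they are typically needed
./mir-seek install -h   # download reference files locally
./mir-seek cache -h     # cache software containers locally
./mir-seek run -h       # run the pipeline with your input files
./mir-seek unlock -h    # unlock a previous run's output directory
```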
mir-seek is a comprehensive microRNA-sequencing pipeline. It relies on technologies like Singularity [1] to maintain the highest level of reproducibility. The pipeline consists of a series of data processing and quality-control steps orchestrated by Snakemake [2], a flexible and scalable workflow management system, to submit jobs to a cluster.
The pipeline is compatible with data generated from Illumina short-read sequencing technologies. As input, it accepts a set of single-end FastQ files and can be run locally on a compute instance or on-premises on a cluster. A user can define the method or mode of execution. The pipeline can submit jobs to a cluster using a job scheduler like SLURM (support for more schedulers is coming soon!). This hybrid approach ensures the pipeline is accessible to all users.
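As a sketch of what selecting an execution mode might look like: the `--mode` values below mirror other OpenOmics pipelines and are an assumption here, so confirm the actual flag names with `./mir-seek run -h`:

```bash
# Assumed interface: '...' stands for the required input/output arguments

# Execute every job on the current machine
./mir-seek run ... --mode local

# Submit each pipeline step as a SLURM job
./mir-seek run ... --mode slurm
```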
Before getting started, we highly recommend reading through the usage section of each available sub command.
For more information about issues or troubleshooting a problem, please check out our FAQ prior to opening an issue on GitHub.
Dependencies
Requires:
- `singularity>=3.5`
- `snakemake>=6.0`
At the moment, the pipeline uses a mixture of environment modules and docker images; however, this will be changing soon! In the very near future, the pipeline will only use docker images. With that being said, snakemake and singularity must be installed on the target system. Snakemake orchestrates the execution of each step in the pipeline. To guarantee the highest level of reproducibility, each step of the pipeline relies on versioned images from DockerHub. Snakemake uses singularity to pull these images onto the local filesystem prior to job execution, and as such, snakemake and singularity will be the only two dependencies in the future.
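Both dependencies expose standard version flags, so a quick sanity check before running the pipeline could look like this:

```bash
# Verify the two hard dependencies are on $PATH and new enough;
# on HPC systems they are often provided as environment modules
singularity --version   # want >= 3.5
snakemake --version     # want >= 6.0
```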
Installation
Please clone this repository to your local filesystem using the following command:
```bash
# Clone Repository from Github
git clone https://github.com/OpenOmics/mir-seek.git
# Change your working directory
cd mir-seek/
# Add dependencies to $PATH
# Biowulf users should run
module load snakemake singularity
# Get usage information
./mir-seek -h
```
Contribute
This site is a living document, created for and by members like you. mir-seek is maintained by the members of OpenOmics and is improved by continuous feedback! We encourage you to contribute new content and make improvements to existing content via pull request to our GitHub repository.
Cite
If you use this software, please cite it as below:
BibTeX entry coming soon!
Citation coming soon!
References
1. Kurtzer GM, Sochat V, Bauer MW (2017). "Singularity: Scientific containers for mobility of compute." PLoS ONE 12(5): e0177459.
2. Köster J, Rahmann S (2018). "Snakemake-a scalable bioinformatics workflow engine." Bioinformatics 34(20): 3600.
Code Snippets
```
shell: """
# Sets up a temporary directory for
# intermediate files; miRDeep2
# output directories rely on
# timestamps, so this helps avoid
# collisions due to multiple runs
# of the same sample, needed for
# the bowtie/1.X log files
if [ ! -d "{params.tmpdir}" ]; then mkdir -p "{params.tmpdir}"; fi
tmp=$(mktemp -d -p "{params.tmpdir}")
cd "${{tmp}}"

# Aligns reads to the reference
# genome with Bowtie/1.X, allows
# one mismatch in the alignment
mapper.pl \\
    {input.reads_fa} \\
    -c \\
    -j \\
    -l {params.min_len} \\
    -m \\
    -n \\
    -q \\
    -p {params.bw_index} \\
    -s {output.collapsed} \\
    -t {output.arf} \\
    -v \\
    -o {threads} \\
> {output.map_log} 2>&1

# Extract mapper statistics
paste \\
    <(echo -e "sample\\n{params.sample}") \\
    <(grep -A1 --color=never '^#desc' \\
        {output.map_log} \\
        | tr ' ' '\\t' \\
        | cut -f2- ) \\
> {output.map_tsv}

# Rename bowtie/1.X log file
mv "${{tmp}}/bowtie.log" "{output.new_log}"
"""
```
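The statistics-extraction step above leans on bash process substitution to paste two streams side by side; here is a standalone toy version of the same trick (the sample name, log file, and values are invented):

```bash
# Toy reproduction of the paste/process-substitution pattern:
# column 1 comes from an echoed header+value, column 2 from a
# filtered log file, producing a multi-column TSV on stdout
printf '#desc\ttotal\tmapped\nstats\t100\t87\n' > map.log
paste \
    <(echo -e "sample\nWT_rep1") \
    <(grep -A1 '^#desc' map.log | cut -f2-)
# Output:
# sample    total   mapped
# WT_rep1   100     87
```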
```
shell: """
# Create table from per-sample
# mirdeep2 mapper files
i=0
for f in {input.map_tsvs}; do
    if [ "$i" -eq 0 ]; then
        # Add header to output file
        head -1 "${{f}}" \\
        > {output.map_tsv}
    fi
    awk 'NR=="2" {{print}}' "${{f}}" \\
    >> {output.map_tsv}
    i=$((i + 1))
done
"""
```
```
shell: """
fastqc \\
    -t {threads} \\
    -o {params.outdir} \\
    {input.fq}
"""
```
```
shell: """
fastqc \\
    -t {threads} \\
    -o {params.outdir} \\
    {input.fq}
"""
```
```
shell: """
multiqc \\
    --ignore '*/.singularity/*' \\
    -f \\
    -c {params.conf} \\
    --interactive \\
    --outdir {params.outdir} \\
    {params.wdir}
"""
```
```
shell: """
# Sets up a temporary directory for
# intermediate files; miRDeep2
# output directories rely on
# timestamps, so this helps avoid
# collisions due to multiple runs
# of the same sample
if [ ! -d "{params.tmpdir}" ]; then mkdir -p "{params.tmpdir}"; fi
tmp=$(mktemp -d -p "{params.tmpdir}")
cd "${{tmp}}"

# Run miRDeep2 to detect known
# and novel miRNA expression
miRDeep2.pl \\
    {input.collapsed} \\
    {params.fasta} \\
    {input.arf} \\
    {params.mature} \\
    none \\
    {params.hairpin} \\
    -t {params.species} \\
    -P \\
    -v \\
2> {log.report}

# Link expression results from
# miRDeep2 timestamp directory
exp=$(
    find "${{tmp}}/expression_analyses/" \\
        -type f \\
        -iname "miRNA_expressed.csv" \\
        -print \\
        -quit
)
ln -sf "${{exp}}" {output}
"""
```
```
shell: """
# Removes comment character from
# header and calculates average
# mature miRNA expression
head -1 {input.mirna} \\
    | sed '1 s/^#//g' \\
    | cut -f1,2 \\
> {output.avg_exp}

# Cut on prefix of miRBase identifier
# to get mature miRNA identifiers for
# aggregation/averaging. These identifiers
# are more compatible with downstream tools.
tail -n+2 {input.mirna} \\
    | cut -f1,2 \\
    | awk -F '\\t' -v OFS='\\t' '{{split($1,a,"MIMA"); print a[1], $NF}}' \\
    | awk -F '\\t' -v OFS='\\t' '{{seen[$1]+=$2; count[$1]++}} END {{for (x in seen) print x, seen[x]/count[x]}}' \\
>> {output.avg_exp}
"""
```
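The two-stage awk pipeline above first strips everything from the `MIMA...` accession onward from each identifier, then averages expression per remaining identifier. A standalone toy run behaves like this (identifiers and counts are invented, and the Snakemake `{{ }}` brace-escaping is removed for plain bash):

```bash
# Toy input: mature miRNA IDs fused with MIMAT accessions, plus counts
printf 'hsa-miR-21-5p_MIMAT0000076\t10\nhsa-miR-21-5p_MIMAT0000076\t20\nhsa-let-7a-5p_MIMAT0000062\t6\n' \
    | awk -F '\t' -v OFS='\t' '{split($1,a,"MIMA"); print a[1], $NF}' \
    | awk -F '\t' -v OFS='\t' '{seen[$1]+=$2; count[$1]++} END {for (x in seen) print x, seen[x]/count[x]}'
# Output (order of lines may vary):
# hsa-miR-21-5p_    15
# hsa-let-7a-5p_    6
```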
```
shell: """
# Create counts matrix of mature miRNAs
{params.script} \\
    --input {input.counts} \\
    --output {output.matrix} \\
    --join-on miRNA \\
    --extract read_count \\
    --clean-suffix '_mature_miRNA_expression.tsv' \\
    --nan-values 0.0
"""
```
```
shell: """
fastp \\
    --thread {threads} \\
    --in1 {input.raw_fq} \\
    --out1 {output.trim_fq} \\
    --json {output.json_report} \\
    --html {output.html_report} \\
    --adapter_fasta {params.adapters} \\
    -l {params.min_len} \\
    --max_len1 {params.max_len}
"""
```
```
shell: """
# Convert FastQ to FASTA format
seqkit fq2fa --threads {threads} \\
    {input.trim_fq} \\
    -o {output.raw_fa}

# Clean sequence identifiers
# to replace spaces, tabs, and
# asterisks with underscores
sed '/^>/ s/\\s/_/g' {output.raw_fa} \\
    | sed '/^>/ s/\\t/_/g' \\
    | sed '/^>/ s/*/_/g' \\
> {output.clean_fa}
"""
```
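The sed chain above only rewrites header lines (those starting with `>`) and leaves sequence lines untouched; a standalone toy run on an invented FASTA header shows the effect:

```bash
# Toy FASTA: the header contains a space and an asterisk,
# the sequence line is left as-is
printf '>read1 len=22*\nACTGACTGACTG\n' \
    | sed '/^>/ s/\s/_/g' \
    | sed '/^>/ s/*/_/g'
# Output:
# >read1_len=22_
# ACTGACTGACTG
```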
Support
- Future updates