A Snakemake-based modular workflow that facilitates RNA-Seq analyses with a special focus on splicing


About

A Snakemake-based modular workflow that facilitates RNA-Seq analyses with a special focus on the exploration of differential splicing behaviours.

Table of contents

SnakeSplice Modules

The parent workflow is a wrapper workflow that includes the following sub-workflows (called modules):

  1. Module1: Quality Control, Preprocessing and Alignment

  2. Module2: Gene Fusion Detection

  3. Module3: Transcript Quantification & Expression Analysis

  4. Module4: Splice Pattern Analysis

SnakeSplice Workflow Diagram

Software Requirements

  • Conda: Conda Webpage

  • Snakemake: Snakemake Webpage

  • Required for PEP support:

    1. peppy is required and can be installed via Conda: conda install -c conda-forge peppy

    2. eido is required and can be installed via Conda: conda install -c conda-forge eido

Usage

Input Data

The input data for this workflow is provided via a sample sheet (default location: input_data/input_samples.csv). The structure of the sample sheet is defined by the PEP schema file (pep/pep_schema_config.yaml).

General structure of the sample sheet

The sample sheet is a tabular file, which consists of the following columns:

Column | Description | Required
sample_name | Name/ID of the sample | YES
sample_directory | Path to the directory where the sample data (FASTQ files) are located. This information is only used if the FASTQ files are needed. | YES (depending on tool selection)
read1 | Name of the FASTQ file with the read-1 sequences | YES (depending on tool selection)
read2 | Name of the FASTQ file with the read-2 sequences | YES (depending on tool selection)
control | true or false (if true, the sample is treated as a control sample) | YES
condition | Name of the condition (e.g. treatment group) | YES
protocol | Name of the protocol (e.g. RNAseq-PolyA). This information is not yet used. | NO
stranded | no if the library is unstranded, yes if it is stranded, reverse if it is reverse-stranded | YES
adaptors_file | Path to the file that contains the adaptors for the sample | YES (depending on tool selection)
additional_comment | Additional comment for the sample | NO

Note: Currently, the entries in the columns protocol and additional_comment are not used.
Note: The entries "read1", "read2" and "adaptors_file" are marked as mandatory because they are needed for the alignment workflow. However, if the samples have already been aligned, these columns can either be filled with dummy data (make sure the referenced files exist!), or the PEP file (path: pep/pep_schema_config.yaml) can be adjusted to make these columns optional.
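
For orientation, a minimal sample sheet could look like the following sketch (all sample names, paths and file names are made-up placeholders; the authoritative column definition is the PEP schema in pep/pep_schema_config.yaml):

    sample_name,sample_directory,read1,read2,control,condition,protocol,stranded,adaptors_file,additional_comment
    control_01,/data/fastq/control_01,control_01_R1.fastq.gz,control_01_R2.fastq.gz,true,healthy,RNAseq-PolyA,reverse,adapters/TruSeq3-PE.fa,
    patient_01,/data/fastq/patient_01,patient_01_R1.fastq.gz,patient_01_R2.fastq.gz,false,treated,RNAseq-PolyA,reverse,adapters/TruSeq3-PE.fa,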

Starting with FASTQ files

SnakeSplice supports executing the workflow starting from FASTQ files.
In this case, the sample sheet has to be filled with the information about the FASTQ files (see above).
Note: The FASTQ files of a sample have to be located in the directory specified in the column sample_directory of the sample sheet.

Starting with BAM files

SnakeSplice also supports executing the workflow starting from BAM files.
In this case, the location of the BAM files and further information about them have to be specified in the respective configuration files (path: config_files/config_moduleX.yaml).

Reference files

Some tools require reference files, which have to be provided by the user.
The locations of these reference files have to be specified in the respective configuration files (path: config_files/config_moduleX.yaml).

Our recommendation: Use the same reference files for all samples, since the reference files are not adjusted to individual samples.

Reference genome and gene annotation file

We recommend using an analysis set reference genome. Its advantages over other common forms of reference genomes are described here.
Such a reference genome can be downloaded from the UCSC Genome Browser.

Configurations

The workflow settings can be adjusted via the configuration files in the directory config_files. This folder contains a config_main.yaml file, which holds the general settings for the workflow. Additionally, every sub-workflow/module has its own config_module{X}_{module_name}.yaml file, which lists the settings for the respective sub-workflow.

Main Configuration File - config_files/config_main.yaml

This configuration file holds the general settings for this master workflow. It consists of 2 parts:

  1. Module switches - module_swiches :
    Here, the user can switch on/off the sub-workflows/modules that should be executed. Note: Submodule 1 has to be run first and on its own, as its output is used as input for the other submodules. Afterwards, the other submodules can be run in (almost) any order.

  2. Module output directory names - module_output_dir_names :
    Every submodule saves its output in a separate sub-directory of the main output directory output.
    The names of these sub-directories can be adjusted here.
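
For illustration, the overall shape of config_files/config_main.yaml is sketched below; the module keys and directory names are examples and may differ from the shipped file:

    module_swiches:
      module1_qc_preprocessing_alignment: True
      module2_gene_fusion_detection: False
      module3_transcript_quantification_expression_analysis: False
      module4_splice_pattern_analysis: False

    module_output_dir_names:
      module1_qc_preprocessing_alignment: "output_module1_qc_preprocessing_alignment"
      module2_gene_fusion_detection: "output_module2_gene_fusion_detection"
      module3_transcript_quantification_expression_analysis: "output_module3_transcript_quantification_expression_analysis"
      module4_splice_pattern_analysis: "output_module4_splice_pattern_analysis"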

Specific Module Configuration Files

Every submodule has its own configuration file, which holds the settings for that submodule. The configuration files are located in the directory config_files and follow the naming scheme config_module{X}_{module_name}.yaml, where X is the number of the submodule and module_name is its name. The configuration files are structured in the following way (a schematic example follows the list):

  1. switch variables - switch_variables : Here, the user can switch on/off the different steps of the submodule.

  2. output directories - output_directories : Here, the user can adjust the names of the output directories per tool.

  3. bam files attributes - bam_files_attributes : Some tools require additional information about the BAM files, which are not provided in the sample sheet. This information can be specified here.

  4. tool-specific settings - tool_specific_settings : Here, the user can adjust the settings for the different tools, which are used in the submodule.
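
As a schematic (not verbatim) example, a module configuration file roughly follows the layout below; the step, directory, attribute and tool names are purely illustrative:

    switch_variables:
      run_alignment: True
      run_quality_control: True

    output_directories:
      alignment_output_dir: "alignment"
      quality_control_output_dir: "qc"

    bam_files_attributes:
      bam_files_dir: "output/module1_qc_preprocessing_alignment/alignment"
      paired: True

    tool_specific_settings:
      star:
        extra_options: "--outSAMtype BAM SortedByCoordinate"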

Configure the execution of SnakeSplice

Since SnakeSplice is based on Snakemake, its execution can be configured via the command line or via a profile configuration file.

Command line

The user can configure the execution of SnakeSplice via the command line.
Details regarding the configuration of Snakemake via the command line can be found here .

Predefined configuration profiles

A profile configuration file can be used to summarize all desired settings for the snakemake execution. SnakeSplice comes with two predefined profile configuration files, which can be found in the directory config_files/profiles .

  1. profile_config_local.yaml : A predefined profile configuration file for the execution on a local machine.

  2. profile_config_cluster.yaml : A predefined profile configuration file for the execution on a cluster (using SLURM).

The setting options used in this cluster profile are listed and explained below.
Note: See the cluster execution instructions further below for how to execute Snakemake using this profile-settings file.

Command line argument | Default entry | Description
--use-conda | True | Enables the use of conda environments (and Snakemake wrappers)
--keep-going | True | Go on with independent jobs if one job fails
--latency-wait | 60 | Wait the given number of seconds for an output file of a job that is not present after the job finished
--rerun-incomplete | True | Rerun all jobs where the output is incomplete
--printshellcmds | True | Print out the shell commands that will be executed
--jobs | 50 | Maximal number of jobs/rules to run in parallel
--default-resources | [cpus=1, mem_mb=2048, time_min=60] | Default resources for each job (can be overwritten in the rule definition)
--resources | [cpus=100, mem_mb=500000] | Resource constraints for the whole workflow
--cluster | "sbatch -t {resources.time_min} --mem={resources.mem_mb} -c {resources.cpus} -o logs_slurm/{rule}.%j.out -e logs_slurm/{rule}.%j.out --mail-type=FAIL [email protected]" | Cluster submission command (here: SLURM)
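
Expressed as a Snakemake profile, the options above map onto YAML keys without the leading dashes. A sketch of such a profile file is shown below (it mirrors the defaults from the table; the shipped profile may differ in detail, and the SLURM mail options are omitted because the address is a placeholder):

    use-conda: True
    keep-going: True
    latency-wait: 60
    rerun-incomplete: True
    printshellcmds: True
    jobs: 50
    default-resources: [cpus=1, mem_mb=2048, time_min=60]
    resources: [cpus=100, mem_mb=500000]
    cluster: "sbatch -t {resources.time_min} --mem={resources.mem_mb} -c {resources.cpus} -o logs_slurm/{rule}.%j.out -e logs_slurm/{rule}.%j.out"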

Execution

Steps for simple execution of SnakeSplice

  1. Activate Conda-Snakemake environment
    conda activate snakemake

  2. Execute the workflow (adjust the number of cores passed via --cores as needed)
    snakemake -s Snakefile --cores 4 --use-conda

  3. Alternatively, run the workflow in the background
    rm nohup.out && nohup snakemake -s Snakefile --cores 4 --use-conda &

Visualization & Dry Runs

  • Visualize DAG of jobs
    snakemake --dag | dot -Tsvg > dag.svg

  • Dry run -> get an overview of the jobs that would be executed, without generating any real output
    snakemake -n -r --cores 4

Cluster: Execute Snakemake workflow on a HPC cluster

  1. Adjust settings in profile-settings file (e.g. here in profiles/profile_cluster/config.yaml ).

  2. Execute workflow
    mkdir -p logs_slurm && rm nohup.out || true && nohup snakemake --profile profiles/profile_cluster &

Monitor execution stats on a HPC cluster with SLURM

sacct -a --format=JobID,User,Group,Start,End,State,AllocNodes,NodeList,ReqMem,MaxVMSize,AllocCPUS,ReqCPUS,CPUTime,Elapsed,MaxRSS,ExitCode -j <job-ID>
Explanation:

  • -a : Show jobs for all users

  • --format=JobID... : Format output

Kill cluster jobs

killall -TERM snakemake

Node stats on SLURM cluster

sinfo -o "%n %e %m %a %c %C"

Report creation

After a successful execution of SnakeSplice, a self-contained HTML report can be generated with the following command: snakemake --report report.html

Code Snippets

script:
    "../../scripts/create_report_html_files.R"
shell:
    "python {params.script} "
    "--input_gtf_file {input} "
    "--output_gtf_file {output} "
    "--log_file {log}"
SnakeMake From line 63 of rules/alfa.smk
shell:
    "sort -k1,1 -k4,4n -k5,5nr {input.gtf_file} > {output.sorted_gtf_file} 2> {log}"
SnakeMake From line 80 of rules/alfa.smk
shell:
    "alfa -a {input.gtf_file} -g {params.alfa_genome_index_name} -o {output.output_index_dir} -p {threads} 2> {log}"
SnakeMake From line 106 of rules/alfa.smk
shell:
    "alfa -g {params.alfa_genome_index_name} "
    "--bam {input.input_bam_file} {wildcards.sample_id} "
    "-o {output[0]} "
    "--processors {threads} "
    "2> {log}"
SnakeMake From line 148 of rules/alfa.smk
script:
    "../scripts/alfa_summary_analysis.py"
SnakeMake From line 194 of rules/alfa.smk
wrapper:
    "v1.21.4/bio/bamtools/stats"
wrapper:
    "v1.21.4/bio/bamtools/stats"
shell:
    "tail -n 13 {input} | head -n 12 | cut -f 1 > {output} 2> {log}"
run:
    import os
    import pandas as pd

    # Open all TSV-files of input, transpose them and merge them into one dataframe
    # Keep filename as row name
    df_list = []
    for file in input:
        current_df = pd.read_csv(file, sep=":\s*", engine="python", header=None, index_col=0)
        current_df = current_df.transpose()
        current_df["BAM-file"] = os.path.basename(file)
        df_list.append(current_df)

    # Finalize column order and output merged dataframe
    output_df = pd.concat(df_list, axis=0)
    cols = output_df.columns.tolist()
    final_cols = cols[-1:] + cols[:-1]
    output_df = output_df[final_cols]
    output_df.to_csv(output[0], sep="\t", index=False)
script:
    "../../../scripts/create_report_html_files.R"
shell:
    "mkdir -p {params.output_dir} && "
    "wget -O {output.output_file}.gz {params.ensembl_all_cdna_fasta_file_link};"
    "cd {params.output_dir};"
    "gunzip {params.gz_file}"
shell:
    "mkdir -p {params.output_dir} && "
    "wget -O {output.output_file}.gz {params.ensembl_gtf_file_link} && "
    "cd {params.output_dir};"
    "gunzip {params.gz_file}"
shell:
    "check_strandedness "
    "--transcripts {input.transcripts_file} "
    "--gtf {input.annotation_file} "
    "--reads_1 {input.r1} --reads_2 {input.r2} > {output[0]} 2> {log};"
run:
    # Iterate over all input files and read lines 9-13 and put them into a pandas table
    # Then save table into CSV-file
    import pandas as pd
    col0_key = "sample"
    col1_key = "Fraction of reads failed to determine strandedness"
    col2_key = "Fraction of reads explained by FR"
    col3_key = "Fraction of reads explained by RF"
    col4_key = "Summary"
    col5_key = "Conclusion"
    col6_key = "stranded-value"
    table = pd.DataFrame(columns=[col0_key, col1_key, col2_key, col3_key, col4_key, col5_key])

    for sample_file in input:
        sample_name = sample_file.split("/")[-1].split(".")[0]
        with open(sample_file, "r") as f:
            lines = f.readlines()

            # Add row to table
            row = pd.DataFrame([[sample_name, lines[-5].split(":")[1].strip(), lines[-4].split(":")[1].strip(),
                                 lines[-3].split(":")[1].strip(), lines[-2].strip(),
                                 lines[-1].strip()]],
                columns=[col0_key, col1_key, col2_key, col3_key, col4_key, col5_key])
            table = pd.concat([table, row], ignore_index=True)

    # Add column with annotations for the sample configuration file
    table[col6_key] = "ERROR: strandedness could not be determined"
    # "no" for unstranded data
    table.loc[table[col5_key].str.contains("unstranded"), col6_key] = "no"
    # If the conclusion contains "RF/fr-firststrand" then set the value to "reverse", otherwise to "yes"
    table.loc[table[col5_key].str.contains("RF/fr-firststrand"), col6_key] = "reverse"
    table.loc[table[col5_key].str.contains("FR/fr-secondstrand"), col6_key] = "yes"

    table.to_csv(output[0], index=False)
script:
    "../../../scripts/create_report_html_files.R"
shell:
    "rm -rf ./stranded_test_*; "
    "rm -rf ./kallisto_index; "
shell:
    "multiBamSummary bins --minMappingQuality {params.min_map_quality} {params.region} "
    "--verbose --numberOfProcessors {threads} --bamfiles {input.bam_file_paths} "
    "--outFileName {output.compressed_numpy_array} --outRawCounts {output.raw_read_counts_file} "
    "2> {log}"
shell:
    "plotPCA -in {input} -o {output[0]} --plotTitle \"{params.plot_title}\" {params.extras} 2> {log}"
wrapper:
    "0.79.0/bio/fastqc"
wrapper:
    "0.79.0/bio/fastqc"
shell:
    "wget {params.minikraken2_v1_db_link};"
    "tar -xvzf {params.download_file};"
    "mv {params.download_file} {output.db_dir};"
shell:
    "kraken2 --use-names --threads {threads} --db {params.kraken2_db} "
    "--report {output.kraken2_report} "
    "--paired {input.r1} {input.r2} "
    "> {output.kraken2_kmer_mapping} 2> {log}"
wrapper:
    "v0.86.0/bio/multiqc"
shell:
    "mkdir -p {params.olego_dir};"
    "cd {params.olego_dir};"
    "git clone {params.olego_url};"
    "cd olego;"
    "make"
shell:
    "{params.olego_installation_dir}/olegoindex {input} 2> {log}"
shell:
    "{params.olego_installation_dir}/olego -v -t {threads} -r {params.r} -M {params.M} -o {output}  "
    "{params.ref_index} {input[0]} 2> {log}"
shell:
    "perl {params.olego_installation_dir}/mergePEsam.pl -v {input[0]} {input[1]} {output} 2> {log}"
wrapper:
    "v1.14.0/bio/samtools/view"
wrapper:
    "v1.14.0/bio/samtools/sort"
wrapper:
    "v1.14.0/bio/samtools/index"
shell:
    "perl {params.olego_installation_dir}/sam2bed.pl -v --use-RNA-strand {input} {output} 2> {log}"
shell:
    "perl {params.olego_installation_dir}/bed2junc.pl {input} {output} 2> {log}"
wrapper:
    "0.80.2/bio/star/index"
wrapper:
    "v1.21.0/bio/star/align"
shell:
    "mv {input} {output}"
wrapper:
    "v1.21.0/bio/samtools/sort"
wrapper:
    "v1.14.0/bio/samtools/index"
wrapper:
    "v1.12.2/bio/trimmomatic/pe"
import os

if __name__ == '__main__':

    # input
    input_bam_files = snakemake.input.bam_files
    # output
    output_dir = snakemake.output[0]
    # log file
    log_file = snakemake.log[0]
    # params
    alfa_genome_index_name = snakemake.params.alfa_genome_index_name
    nr_processes = snakemake.threads

    # Creates a string of bam-files and their respective labels.
    #     Format: BAM_FILE1 LABEL1 [BAM_FILE2 LABEL2 …]
    bam_files_and_labels = " ".join(["{0} {1}".format(bam_file, bam_file.replace(".sorted.bam", "").split("/")[-1]) for
                                     bam_file in input_bam_files])

    command = "alfa -g {alfa_genome_index_name} "\
              "--bam {bam_files_with_labels} "\
              "-o {output_dir} "\
              "--processors {threads} "\
              "2> {log}".format(alfa_genome_index_name=alfa_genome_index_name,
                                bam_files_with_labels=bam_files_and_labels,
                                output_dir=output_dir,
                                threads=nr_processes,
                                log=log_file)

    os.system(command)  # execute command
wrapper:
    "v1.12.0/bio/arriba"
run:
    import pandas as pd
    import os

    # Concat all input fusion files, and add a column with the sample_id as first column
    df = pd.concat([pd.read_csv(f, sep="\t").assign(sample_id=os.path.basename(f).split(".")[0])
                    for f in input.fusions])
    # Place sample_id as first column
    cols = df.columns.tolist()
    cols = cols[-1:] + cols[:-1]
    df = df[cols]

    # Save the summary table
    df.to_csv(output.summary, sep="\t", index=False)
script:
    "../../../scripts/create_report_html_files.R"
SnakeMake From line 110 of rules/arriba.smk
script:
    "../scripts/deseq2/gene_set_enrichment_analysis.R"
script:
    "../../../scripts/create_report_html_files.R"
script:
    "../scripts/deseq2/gene_set_enrichment_analysis.R"
script:
    "../../../scripts/create_report_html_files.R"
shell:
    "samtools view -h {input} | "
    "awk 'BEGIN {{OFS=\"\t\"}} {{"
        "split($6,C,/[0-9]*/); split($6,L,/[SMDIN]/); "
            "if (C[2]==\"S\") {{$10=substr($10,L[1]+1); $11=substr($11,L[1]+1)}}; "
            "if (C[length(C)]==\"S\") {{L1=length($10)-L[length(L)-1]; $10=substr($10,1,L1); $11=substr($11,1,L1); }}; "
        "gsub(/[0-9]*S/,\"\",$6); print}}' - >{output}"
shell:
    'cufflinks '
    '--num-threads {threads}'
    ' --library-type {params.library_type}'
    ' --GTF-guide {params.gtf_file}'                            # use reference transcript annotation to guide assembly, but also includes novel transcripts
    ' --frag-bias-correct {params.ref_seq} '                    # use bias correction - reference fasta required 
    ' --min-isoform-fraction {params.min_isoform_fraction}'     # suppress transcripts below this abundance level (compared with major isoform of the gene)
    ' --min-frags-per-transfrag {params.min_frags_per_transfrag}'   # assembled transfrags supported by fewer than this many aligned RNA-Seq fragments are ignored
    ' --output-dir {params.output_dir} '
    '{params.extra_options}'                                    # additional options
    '{input}'
run:
    with open(output.all_transcriptome_assemblies_file, 'w') as out:
        print(*input, sep="\n", file=out)
shell:
    'cuffmerge -o {params.output_dir}'
    ' -g {params.gtf_file} '
    ' -s {params.ref_seq} '
    ' -p {threads} '
    '{input}'
shell:
    'cuffdiff '
    '--num-threads {threads} '
    '--output-dir {params.output_dir} '
    '--labels {params.labels} '
    '--frag-bias-correct {params.ref_seq} '
    '{params.extra_options} '
    '{input.merged_cufflinks_transcriptomes_gtf} '
    '{params.ctrl_replicates} '
script:
    "../../../scripts/create_report_html_files.R"
script:
    "../scripts/cummerbund_script.R"
script:
    "../scripts/deseq2/gene_expression_analysis_with_deseq2.R"
script:
    "../scripts/deseq2/gene_expression_analysis_with_deseq2.R"
run:
    import pandas as pd

    df = pd.read_csv(input.deseq2_results, sep=",")
    # filter for significant results (adjusted p-value < 0.10)
    df = df[df["padj"] < 0.10]
    # rename first column
    df = df.rename(columns={df.columns[0]: "subject"})

    # write to file
    df = df.sort_values(by=["padj"])
    df.to_csv(output[0], sep=",", index=False)
script:
    "../../../scripts/create_report_html_files.R"
script:
    "../../../scripts/create_report_html_files.R"
script:
    "../scripts/deseq2/create_deseq_dataset_object.R"
script:
    "../scripts/deseq2/explore_deseq_dataset.R"
script:
    "../scripts/deseq2/explore_deseq_dataset.R"
wrapper:
    "v1.19.2/bio/kallisto/index"
wrapper:
    "v1.19.2/bio/kallisto/quant"
run:
    import os
    import pandas as pd

    all_controls_table = pep.sample_table[pep.sample_table["sample_name"].isin(params.control_samples)]
    all_condition_table = pep.sample_table[pep.sample_table["sample_name"].isin(params.condition_samples)]
    # Create a table with all samples
    all_samples_table = pd.concat([all_controls_table, all_condition_table])

    # Add the kallisto abundance.h5 files to the table
    results_dir_path = os.path.dirname(input.quant_dirs[0])
    abundance_file_path = os.path.join(results_dir_path, "quant_results_{sample_id}", "abundance.h5")
    all_samples_table["kallisto_results_file"] = \
        all_samples_table.apply(lambda row: abundance_file_path.replace("{sample_id}", row["sample_name"]), axis=1)

    # Save output table
shell:
    "python {params.script} "
    "--input_gtf_file {input} "
    "--output_gtf_file {output} "
    "--log_file {log}"
shell:
    "htseq-count "
    "-n {threads} "
    "--format {params.format} "
    "--order {params.order} "
    "--stranded {params.stranded} "
    "--additional-attr {params.add_attribute} "
    "{input.bam_file} {input.gtf_file} >{output.count_file} 2>{log}"
script:
    "../scripts/outrider/merge_htseq_count_files.R"
script:
    "../scripts/outrider/outrider_create_analysis_object.R"
script:
    "../scripts/outrider/outrider_explore_results.R"
script:
    "../../../scripts/create_report_html_files.R"
wrapper:
    "v1.19.2/bio/salmon/decoys"
wrapper:
    "v1.19.2/bio/salmon/index"
wrapper:
    "v1.19.2/bio/salmon/quant"
run:
    import os
    import pandas as pd

    all_controls_table = pep.sample_table[pep.sample_table["sample_name"].isin(params.control_samples)]
    all_condition_table = pep.sample_table[pep.sample_table["sample_name"].isin(params.condition_samples)]
    # Create a table with all samples
    all_samples_table = pd.concat([all_controls_table, all_condition_table])

    # Add the quant.sf files to the table
    results_dir_path = os.path.dirname(os.path.dirname(input.quant_files[0]))
    quant_file_path = os.path.join(results_dir_path, "quant_results_{sample_id}", "quant.sf")
    all_samples_table["salmon_results_file"] = \
        all_samples_table.apply(lambda row: quant_file_path.replace("{sample_id}", row["sample_name"]), axis=1)

    # Save output table
    all_samples_table.to_csv(output.annotation_table, sep="\t", index=False)
SnakeMake From line 132 of rules/salmon.smk
library("cummeRbund")


global_statistics_and_qc <- function(cuff_obj, output_dir) {
  # ---- Global Statistics and Quality Control -----------

  # Dispersion explained:
  # https://genomebiology.biomedcentral.com/articles/10.1186/s13059-014-0550-8
  # https://www.biostars.org/p/167688/
  # https://support.bioconductor.org/p/75260/

  # ------------- 1. Dispersion ----------------
  # -> visualizes the estimated overdispersion for each sample
  # uses cufflinks emitted data (mean counts, variance, & dispersion)
  # -> http://cole-trapnell-lab.github.io/cufflinks/cuffdiff/
  genes.disp<-dispersionPlot(genes(cuff_obj))
  pdf(file=file.path(output_dir, "cummerbund_figures/dispersion_genes.pdf"))
  plot(genes.disp)				# Plot is displayed
  dev.off()

  cuff_obj.disp<-dispersionPlot(cuff_obj)
  pdf(file=file.path(output_dir, "dispersion_cuff.pdf"))
  plot(cuff_obj.disp)				# Plot is displayed
  dev.off()

  # ------ 2. Distributions of FPKM scores across samples ----------
  # 2.1.) csDensity plots
  dens<-csDensity(genes(cuff_obj))
  pdf(file=file.path(output_dir, "fpkm_density_genes.pdf"))
  plot(dens)
  dev.off()

  dens<-csDensity(isoforms(cuff_obj))
  pdf(file=file.path(output_dir, "fpkm_density_isoforms.pdf"))
  plot(dens)
  dev.off()

  # 2.2.) Boxplots
  b<-csBoxplot(genes(cuff_obj))
  pdf(file=file.path(output_dir, "fpkm_density_boxplot_genes.pdf"))
  plot(b)
  dev.off()

  b<-csBoxplot(isoforms(cuff_obj))
  pdf(file=file.path(output_dir, "fpkm_density_boxplot_isoforms.pdf"))
  plot(b)
  dev.off()

  # 2.3.) Matrix of pairwise scatterplots
  s<-csScatterMatrix(genes(cuff_obj))
  pdf(file=file.path(output_dir, "fpkm_density_matrix_genes.pdf"))
  plot(s)
  dev.off()

  s<-csScatterMatrix(isoforms(cuff_obj))
  pdf(file=file.path(output_dir, "fpkm_density_matrix_isoforms.pdf"))
  plot(s)
  dev.off()

  # 2.4.) Volcano plots -> Explore relationship between fold-change and significance
  v<-csVolcanoMatrix(genes(cuff_obj))
  pdf(file=file.path(output_dir, "cs_volcano_matrix_genes.pdf"))
  plot(v)
  dev.off()

  v<-csVolcanoMatrix(isoforms(cuff_obj))
  pdf(file=file.path(output_dir, "cs_volcano_matrix_isoforms.pdf"))
  plot(v)
  dev.off()
}


analyse_differential_expression <- function(cuff_obj, output_dir) {
  # ------ Differential expression -------------
  # 1.) genes
  # all
  gene.diff <- diffData(genes(cuff_obj))
  write.csv(gene.diff, file.path(output_dir, "gene_diff.csv"), row.names=F)

  # only significant -> with gene names
  sig_gene_ids <- getSig(cuff_obj,level="genes",alpha=0.05)
  if (NROW(sig_gene_ids) > 0) {
    sigFeatures <- getFeatures(cuff_obj,sig_gene_ids,level="genes")
    sigData <- diffData(sigFeatures)
    sigData <- subset(sigData, (significant == 'yes'))
    names <- featureNames(sigFeatures)
    sigOutput <- merge(names, sigData, by.x="tracking_id", by.y="gene_id")

    # Patch the merged table to have the original name for the ID column.
    # This is always the first column for the examples we've seen.
    colnames(sigOutput)[1] <- "gene_id"
    write.table(sigOutput, file.path(output_dir, "gene_diff_only_significant.tsv"), sep='\t', row.names = F,
                col.names = T, quote = F)
  } else {
    sink(file.path(output_dir, "gene_diff_only_significant.tsv"))
    cat("No significantly differently expressed genes detected")
    sink()
  }

  # 2.) isoforms
  # all
  isoform.diff <- diffData(isoforms(cuff_obj))
  write.csv(isoform.diff, file.path(output_dir, "isoform_diff.csv"), row.names=F)

  # only significant -> with gene names
  sig_isoforms_ids <- getSig(cuff_obj, level="isoforms", alpha=0.05)
  if (NROW(sig_isoforms_ids) > 0) {
    sigFeatures <- getFeatures(cuff_obj, sig_isoforms_ids, level="isoforms")
    sigData <- diffData(sigFeatures)
    sigData <- subset(sigData, (significant == 'yes'))
    names <- featureNames(sigFeatures)
    sigOutput <- merge(names, sigData, by.x="tracking_id", by.y="isoform_id")

    # Patch the merged table to have the original name for the ID column.
    # This is always the first column for the examples we've seen.
    colnames(sigOutput)[1] <- "gene_id"
    write.table(sigOutput, file.path(output_dir, "isoform_diff_only_significant.tsv"), sep='\t', row.names = F,
                col.names = T, quote = F)
  } else {
    sink(file.path(output_dir, "isoform_diff_only_significant.tsv"))
    cat("No significantly differently expressed isoforms detected")
    sink()
  }
}


create_count_matrices <- function(cuff_obj, output_dir) {
  " Create count matrices for genes and isoforms "

  # ------------- CSV files ----------------
  # access feature lvl data
  gene.features <- annotation(genes(cuff_obj))
  write.csv(gene.features, file.path(output_dir, "gene_features.csv"), row.names = FALSE)

  gene.fpkm <- fpkm(genes(cuff_obj))
  write.csv(gene.fpkm, file.path(output_dir, "gene_fpkm.csv"), row.names = FALSE)

  # raw and normalized (on sequencing depth?) fragment counts
  gene.counts <- count(genes(cuff_obj))
  write.csv(gene.counts, file.path(output_dir, "gene_counts.csv"), row.names = FALSE)

  # -- isoforms
  isoform.features <- annotation(isoforms(cuff_obj))
  write.csv(isoform.features, file.path(output_dir, "isoform_features.csv"), row.names = FALSE)

  isoform.fpkm <- fpkm(isoforms(cuff_obj))
  write.csv(isoform.fpkm, file.path(output_dir, "isoform_fpkm.csv"), row.names = FALSE)

  isoform.counts <- count(isoforms(cuff_obj))
  write.csv(isoform.counts, file.path(output_dir, "isoform_counts.csv"), row.names = FALSE)


  # ----------- create PDFs -----------
  # FPKM matrices
  gene.fpkm.matrix<-fpkmMatrix(genes(cuff_obj))
  pdf(file=file.path(output_dir, "fpkm_matrix_genes.pdf"))
  plot(gene.fpkm.matrix)
  dev.off()

  isoform.fpkm.matrix<-fpkmMatrix(isoforms(cuff_obj))
  pdf(file=file.path(output_dir, "fpkm_matrix_isoforms.pdf"))
  plot(isoform.fpkm.matrix)
  dev.off()

  # Count matrices
  gene.count.matrix<-countMatrix(genes(cuff_obj))
  pdf(file=file.path(output_dir, "count_matrix_genes.pdf"))
  plot(gene.count.matrix)
  dev.off()

  isoforms.count.matrix<-countMatrix(isoforms(cuff_obj))
  pdf(file=file.path(output_dir, "count_matrix_isoforms.pdf"))
  plot(isoforms.count.matrix)
  dev.off()
}


single_gene_analysis <- function(cuff_obj, gene_of_interest_id, genome_build, output_dir) {
  " Detailed analysis of a single gene of interest"
  # IV) ---- Single gene -----------
  myGeneId <- gene_of_interest_id
  myGene<-getGene(cuff_obj,myGeneId)
  header <- paste("Single Gene", myGeneId, ":", sep=" ")
  capture.output(myGene, file=file.path(output_dir, "analysis_output.txt"), append = TRUE)

  output_file_genes <- file.path(output_dir, paste(myGeneId, "_gene_fpkm.csv", sep=""))
  write.csv(fpkm(myGene), output_file_genes, row.names = FALSE)

  output_file_isoforms <- file.path(output_dir, paste(myGeneId, "_isoforms_fpkm.csv", sep=""))
  write.csv(fpkm(isoforms(myGene)), output_file_isoforms, row.names = FALSE)

  # Plots
  gl <- expressionPlot(myGene)
  output_file_1 <- file.path(output_dir, paste("expressionPlot_singleGene_", myGeneId, ".pdf", sep=""))
  pdf(file=output_file_1)
  plot(gl)
  dev.off()

  # Expression plot of all isoforms of a single gene with FPKMs exposed
  gl.iso.rep <- expressionPlot(isoforms(myGene))
  output_file_2 <- file.path(output_dir, paste("expressionPlot_isoforms_singleGene_", myGeneId, ".pdf", sep=""))
  pdf(file=output_file_2)
  plot(gl.iso.rep)
  dev.off()

  # Expression plot of all CDS for a single gene with FPKMS exposed
  gl.cds.rep<-expressionPlot(CDS(myGene))
  output_file_3 <- file.path(output_dir, paste("expressionPlot_cds_singleGene_", myGeneId, ".pdf", sep=""))
  pdf(file=output_file_3)
  plot(gl.cds.rep)
  dev.off()

  # Detailed feature graph
  trackList<-list()
  myStart<-min(features(myGene)$start)
  myEnd<-max(features(myGene)$end)
  myChr<-unique(features(myGene)$seqnames)
  genome<-genome_build
  ideoTrack <- IdeogramTrack(genome = genome, chromosome = myChr)
  trackList<-c(trackList,ideoTrack)   # appending ideoTrack -> chromosome
  axtrack<-GenomeAxisTrack()
  trackList<-c(trackList,axtrack)     # appending axtrack -> genome-Axis
  genetrack<-makeGeneRegionTrack(myGene)

  trackList<-c(trackList,genetrack)   # appending genetrack -> the mapping results
  biomTrack<-BiomartGeneRegionTrack(genome=genome,chromosome=as.character(myChr), start=myStart,end=myEnd,name="ENSEMBL",showId=T)
  trackList<-c(trackList,biomTrack)   # Biomart transcripts
  conservation <- UcscTrack(genome = genome, chromosome = myChr, track = "Conservation", table = "multiz100way",from = myStart-2000, to = myEnd+2000, trackType = "DataTrack",start = "start", end = "end", data = "score",type = "hist", window = "auto", col.histogram = "darkblue",fill.histogram = "darkblue", ylim = c(-3.7, 4),name = "Conservation")
  trackList<-c(trackList,conservation)    # conservation
  # Plot detailed graph: Chromosome on top...
  pdf(file.path(output_dir, "detailed_track.pdf"))
  plotTracks(trackList,from=myStart-2000,to=myEnd+2000)
  dev.off()
}

# ----------- Main -----------
main <- function() {
  # Inputs
  cufflinks_merged_transcriptome_assemblies_gtf <- snakemake@input[["merged_cufflinks_transcriptome_assemblies_gtf"]]

  # Params
  cufflinks_output_files_dir <- snakemake@params[["cufflinks_output_files_dir"]]
  genome_build <- snakemake@params[["original_genome_build"]]
  chosen_genes_of_interest <- snakemake@params[["chosen_genes_of_interest"]]

  # Output directory
  cummerbund_output_dir <- snakemake@output[["cummerbund_output_dir"]]
  cummerbund_summary_results <- snakemake@output[["cummerbund_summary_results"]]

  # read results -> create SQLite db
  # Rebuild is important: Create always new database
  cuff_obj <- readCufflinks(dir=cufflinks_output_files_dir, gtfFile=cufflinks_merged_transcriptome_assemblies_gtf,
                            genome=genome_build, rebuild=T)
  capture.output(cuff_obj, file=file.path(cummerbund_output_dir, cummerbund_summary_results), append = FALSE)

  # Global statistics and qc
  global_statistics_and_qc(cuff_obj, cummerbund_output_dir)

  # Count matrices
  create_count_matrices(cuff_obj, cummerbund_output_dir)

  # Single gene analysis
  for (gene_of_interest in chosen_genes_of_interest) {
      single_gene_analysis(cuff_obj, gene_of_interest, genome_build, cummerbund_output_dir)
  }
}

# Execute main
main()
library("BiocParallel")

# Load data
library("tximeta")	# Import transcript quantification data from Salmon
library("tximport")	# Import transcript-level quantification data from Kaleidoscope, Sailfish, Salmon, Kallisto, RSEM, featureCounts, and HTSeq
library("rhdf5")
library("SummarizedExperiment")

library("magrittr")	# Pipe operator
library("DESeq2")		# Differential gene expression analysis

# Plotting libraries
library("pheatmap")
library("RColorBrewer")
library("ggplot2")

# Mixed
library("PoiClaClu")
library("glmpca")
library("apeglm")
library("genefilter")
library("AnnotationDbi")
library("org.Hs.eg.db")


# ---------------- Loading read/fragment quantification data from Salmon output ----------------
load_in_salmon_generated_counts <- function(annotation_table_file) {
	# ----------------- 1. Load annotation -----------------
	# Columns: 1. names, 2. files, 3. condition, 4. additional information
	annotation_table <- read.csv(file=annotation_table_file, sep="\t")

	annotation_data <- data.frame(
									names=annotation_table[,"sample_name"],
									files=file.path(annotation_table[,"salmon_results_file"]),
									condition=annotation_table[,"condition"],
									add_info=annotation_table[,"additional_comment"]
									)
	# Replace None in condition column with "Control"
	annotation_data$condition[annotation_data$condition=="None"] <- "Control"
	annotation_data$condition[annotation_data$condition==""] <- "Control"

	# ----------------- 2. Load into Bioconductor experiment objects -----------------
	# Summarized experiment: Imports quantifications & metadata from all samples -> Each row is a transcript
	se <- tximeta(annotation_data)

	# Summarize transcript-level quantifications to the gene level -> reduces row number: Each row is a gene
	# Includes 3 matrices:
	# 1. counts: Estimated fragment counts per gene & sample
	# 2. abundance: Estimated transcript abundance in TPM
	# 3. length: Effective Length of each gene (including biases as well as transcript usage)
	gse <- summarizeToGene(se)

	# ----------------- 3. Load experiments into DESeq2 object -----------------
	# SummarizedExperiment
	# assayNames(gse)   		# Get all assays -> counts, abundance, length, ...
	# head(assay(gse), 3)     	# Get count results for first 3 genes
	# colSums(assay(gse))     	# Compute sums of mapped fragments
	# rowRanges(gse)          	# Print rowRanges: Ranges of individual genes
	# seqinfo(rowRanges(gse))   # Metadata of sequences (chromosomes in our case)

	gse$condition <- as.factor(gse$condition)
	gse$add_info <- as.factor(gse$add_info)

	# Use relevel to make sure untreated is listed first
	gse$condition %<>% relevel("Control")   # Concise way of saying: gse$condition <- relevel(gse$condition, "Control")

	# Construct DESeqDataSet from gse
	if (gse$add_info %>% unique %>% length >1) {
		# Add info column with more than 1 unique value
		print("More than 1 unique value in add_info column")
		# TODO Need to make sure to avoid:
		# the model matrix is not full rank, so the model cannot be fit as specified.
		#   One or more variables or interaction terms in the design formula are linear
		#   combinations of the others and must be removed.
		# dds <- DESeqDataSet(gse, design = ~condition + add_info)
		print("However, simple DESeq2 analysis will be performed without add_info column")
		dds <- DESeqDataSet(gse, design = ~condition)

	} else {
		print("Only 1 unique value in add_info column")
		dds <- DESeqDataSet(gse, design = ~condition)
	}

	return(dds)
}


# ---------------- Loading read/fragment quantification data from RSEM output ----------------
load_in_rsem_generated_counts <- function(annotation_table_file) {
	# ----------------- 1. Load annotation -----------------
	annotation_table <- read.csv(file=annotation_table_file, sep="\t")
	files <- file.path(annotation_table[,"rsem_results_file"])
	# For sample.genes.results: txIn= FALSE & txOut= FALSE
	# For sample.isoforms.results: txIn= TRUE & txOut= TRUE
	# Check: https://bioconductor.org/packages/devel/bioc/vignettes/tximport/inst/doc/tximport.html
	txi.rsem <- tximport(files, type = "rsem", txIn = FALSE, txOut = FALSE)

	annotation_data <- data.frame(condition=factor(annotation_table[,"condition"]),
									add_info=factor(annotation_table[,"additional_comment"])
						)
	rownames(annotation_data) <- annotation_table[,"sample_name"]

	# Construct DESeqDataSet from tximport
	if (annotation_data$add_info %>% unique %>% length >1) {
		# Add info column with more than 1 unique value
		# dds <- DESeqDataSetFromTximport(txi.rsem, annotation_data, ~condition + add_info)
		dds <- DESeqDataSetFromTximport(txi.rsem, annotation_data, ~condition)
	} else {
		dds <- DESeqDataSetFromTximport(txi.rsem, annotation_data, ~condition)
	}
	return(dds)
}


load_in_kallisto_generated_counts <- function(annotation_table_file) {
	# ----------------- 1. Load annotation -----------------
	annotation_table <- read.csv(file=annotation_table_file, sep="\t")

	files <- file.path(annotation_table[,"kallisto_results_file"])
	txi.kallisto <- tximport(files, type = "kallisto", txOut = TRUE)

	annotation_data <- data.frame(condition=factor(annotation_table[,"condition"]),
									add_info=factor(annotation_table[,"additional_comment"])
						)
	rownames(annotation_data) <- annotation_table[,"sample_name"]

	# Construct DESeqDataSet from tximport
	if (annotation_data$add_info %>% unique %>% length >1) {
		# Add info column with more than 1 unique value
		# dds <- DESeqDataSetFromTximport(txi.kallisto, annotation_data, ~condition + add_info)
		dds <- DESeqDataSetFromTximport(txi.kallisto, annotation_data, ~condition)
	} else {
		dds <- DESeqDataSetFromTximport(txi.kallisto, annotation_data, ~condition)
	}
	return(dds)
}


# ----------------- Main function -----------------
main_function <- function(){
	threads <- snakemake@threads[[1]]
	register(MulticoreParam(workers=threads))

	# Snakemake variables
	annotation_table_file <- snakemake@input[["annotation_table_file"]]
	output_file <- snakemake@output[["deseq_dataset_r_obj"]]
	count_algorithm <- snakemake@params[["count_algorithm"]]

	# Load annotation table & Salmon data into a DESeq2 object
	if (count_algorithm == "salmon") {
		dds <- load_in_salmon_generated_counts(annotation_table_file)
	} else if (count_algorithm == "kallisto") {
		dds <- load_in_kallisto_generated_counts(annotation_table_file)
	} else if (count_algorithm == "rsem") {
		dds <- load_in_rsem_generated_counts(annotation_table_file)
	} else {
		stop("Count algorithm not supported!")
	}

	# Remove rows that have no or nearly no information about the amount of gene expression
	print(paste(c("Number of rows before filtering out counts with values <1", nrow(dds))))
	keep <- rowSums(counts(dds)) > 1 # Counts have to be greater than 1
	dds <- dds[keep,]
	print(paste(c("Number of rows after filtering out counts with values <1", nrow(dds))))

    # Save deseq dataset object
    saveRDS(dds, output_file)
}


# ----------------- Run main function -----------------
main_function()
library("BiocParallel")

# Load data
library("tximeta")	# Import transcript quantification data from Salmon
library("tximport")	# Import transcript-level quantification data from Kaleidoscope, Sailfish, Salmon, Kallisto, RSEM, featureCounts, and HTSeq
library("rhdf5")
library("SummarizedExperiment")

library("magrittr")	# Pipe operator
library("DESeq2")		# Differential gene expression analysis

# Plotting libraries
library("pheatmap")
library("RColorBrewer")
library("ggplot2")

# Mixed
library("PoiClaClu")
library("glmpca")
library("apeglm")
library("genefilter")
library("AnnotationDbi")
library("org.Hs.eg.db")



# ---------------- DESeq2 explorative analysis ----------------
run_deseq2_explorative_analysis <- function(dds, output_files) {

	# ----------- 4.2 Variance stabilizing transformation and the rlog -------------
	# Problem: PCA depends mostly on points with highest variance
	# -> For gene-counts: Genes with high expression values, and therefore high variance
	# are the ones the PCA is mostly depending on
	# Solution: Apply stabilizing transformation to variance
	# -> transform data, so it becomes more homoskedastic (expected amount of variance the same across different means)
	# 1. Variance stabilizing transformation: VST-function -> fast for large datasets (> 30n)
	# 2. Regularized-logarithm transformation or rlog -> Works well on small datasets (< 30n)

	# The transformed values are no longer counts, and are stored in the assay slot.
	# 1. VST
	# transformed_dds <- vst(dds, blind = FALSE)
	# head(assay(transformed_dds), 3)

	# 2. rlog
	transformed_dds <- rlog(dds, blind=FALSE)

	# ----------- A. Sample distances -------------
	# ----------- A.1 Euclidian distances -------------
	# Sample distances -> Assess overall similarity between samples
	# dist: takes samples as rows and genes as columns -> we need to transpose
	sampleDists <- dist(t(assay(transformed_dds)))

	# Heatmap of sample-to-sample distances using the transformed values
	# Uses euclidian distance between samples
	sampleDistMatrix <- as.matrix(sampleDists)
	rownames(sampleDistMatrix) <- paste(transformed_dds$names, transformed_dds$condition, sep = " - " )
	colnames(sampleDistMatrix) <- paste(transformed_dds$names, transformed_dds$condition, sep = " - " )
	colors <- colorRampPalette( rev(brewer.pal(9, "Blues")) )(255)
	jpeg(output_files[1], width=800, height=800)
	pheatmap(sampleDistMatrix,
			 clustering_distance_rows = sampleDists,
			 clustering_distance_cols = sampleDists,
			 col = colors,
			 main = "Heatmap of sample-to-sample distances (Euclidian) after normalization")
	dev.off()


	# ----------- A.2 Poisson distances -------------
	# Use Poisson distance
	# -> takes the inherent variance structure of counts into consideration
	# The PoissonDistance function takes the original count matrix (not normalized) with samples as rows instead of
	# columns -> so we need to transpose the counts in dds.
	poisd <- PoissonDistance(t(counts(dds)))

	# heatmap
	samplePoisDistMatrix <- as.matrix(poisd$dd)
	rownames(samplePoisDistMatrix) <- paste(transformed_dds$names, transformed_dds$condition, sep=" - ")
	colnames(samplePoisDistMatrix) <- paste(transformed_dds$names, transformed_dds$condition, sep=" - ")
	jpeg(output_files[2], width=800, height=800)
	pheatmap(samplePoisDistMatrix,
			 clustering_distance_rows = poisd$dd,
			 clustering_distance_cols = poisd$dd,
			 col = colors,
			 main = "Heatmap of sample-to-sample distances (Poisson) without normalization")
	dev.off()

	# ------------ 4.4 PCA plot -------------------

	# ----------------- 4.4.1 Custom PCA plot --------------
	# Build own plot with ggplot -> to distinguish subgroups more clearly
	# Each unique combination of treatment and cell-line has unique color
	# Use function that is provided with DeSeq2
	pcaData <- plotPCA(transformed_dds, intgroup = c("condition", "add_info"), returnData=TRUE)
	percentVar <- round(100 * attr(pcaData, "percentVar"))

	print("Creating custom PCA plot")
	jpeg(output_files[3], width=800, height=800)
	customPCAPlot <- ggplot(pcaData, aes(x=PC1, y=PC2, color=condition, shape=add_info, label=name)) +
		geom_point(size =3) +
		geom_text(check_overlap=TRUE, hjust=0, vjust=1) +
		xlab(paste0("PC1: ", percentVar[1], "% variance")) +
		ylab(paste0("PC2: ", percentVar[2], "% variance")) +
		coord_fixed() +
		ggtitle("PCA on transformed (rlog) data with subgroups (see shapes)")
	print(customPCAPlot)
	dev.off()

	# ----------------- 4.4.2 Generalized PCA plot --------------
	# Generalized PCA: Operates on raw counts, avoiding pitfalls of normalization
	print("Creating generalized PCA plot")
	gpca <- glmpca(counts(dds), L=2)
	gpca.dat <- gpca$factors
	gpca.dat$condition <- dds$condition
	gpca.dat$add_info <- dds$add_info

	jpeg(output_files[4], width=800, height=800)
	generalizedPCAPlot <- ggplot(gpca.dat, aes(x=dim1, y=dim2, color=condition, shape=add_info,
											   label=rownames(gpca.dat))) +
	  	geom_point(size=2) +
		geom_text(check_overlap=TRUE, hjust=0.5,vjust=1) +
		coord_fixed() +
		ggtitle("glmpca - Generalized PCA of samples")
	print(generalizedPCAPlot)
	dev.off()
}


# ----------------- Main function -----------------
main_function <- function(){
	threads <- snakemake@threads[[1]]
	register(MulticoreParam(workers=threads))

	# Snakemake variables
	deseq_dataset_obj <- snakemake@input[["deseq_dataset_r_obj"]]
	output_file_paths <- snakemake@params[["output_file_paths"]]

    # Load deseq dataset object
    dds <- readRDS(deseq_dataset_obj)

	# Run explorative analysis
	run_deseq2_explorative_analysis(dds, output_file_paths)
}


# ----------------- Run main function -----------------
main_function()
library("BiocParallel")

# Load data
library("tximeta")	# Import transcript quantification data from Salmon
library("tximport")	# Import transcript-level quantification data from Kaleidoscope, Sailfish, Salmon, Kallisto, RSEM, featureCounts, and HTSeq
library("rhdf5")
library("SummarizedExperiment")

library("magrittr")	# Pipe operator
library("DESeq2")		# Differential gene expression analysis

# Plotting libraries
library("pheatmap")
library("RColorBrewer")
library("ggplot2")

# Mixed
library("PoiClaClu")
library("glmpca")
library("apeglm")
library("ashr")
library("genefilter")
library("AnnotationDbi")
library("org.Hs.eg.db")

library("ReportingTools")	# For creating HTML reports



# ---------------- Helper functions ----------------
savely_create_deseq2_object <- function(dds) {
	### Function to create a DESeq2 object -> Handles errors that can appear due to parallelization
	### Input: dds object (DESeq dataset object)
	### Output: dds object (DESeq2 object)

	# ------------- 5. Run the differential expression analysis ---------------
	# The respective steps of this function are printed out
	# 1. Estimation of size factors: Controlling for differences
	# in the sequencing depth of the samples
	# 2. Estimation of dispersion values for each gene & fitting a generalized
	# linear model
	print("Creating DESeq2 object")

	# Try to create the DESeq2 object with results in Parallel
	create_obj_parallelized <- function(){
		print("Creating DESeq2 object in parallel")
		dds <- DESeq2::DESeq(dds, parallel=TRUE)
		return(list("dds"=dds, "run_parallel"=TRUE))
	}
	# Try to create the DESeq2 object with results in Serial
	create_obj_not_parallelized <- function(error){
		print("Error in parallelized DESeq2 object creation"); print(error)
		print("Creating DESeq2 object not in parallel")
		dds <- DESeq2::DESeq(dds, parallel=FALSE)
		return(list("dds"=dds, "run_parallel"=FALSE))
	}

	result_list <- tryCatch(create_obj_parallelized(), error=create_obj_not_parallelized)
	print("DESeq2 object created!")
	return(result_list)
}

rename_rownames_with_ensembl_id_matching <- function(dds_object, input_algorithm) {
  	"
	Extracts Ensembl-Gene-IDs from rownames of SummarizedExperiment object and
	renames rownames with Ensembl-Gene-IDs.
	"
  print("Gene annotations")
  if (input_algorithm == "salmon") {
    # Ensembl-Transcript-IDs at first place
    gene_ids_in_rows <- substr(rownames(dds_object), 1, 15)
  }
  else if (input_algorithm == "kallisto") {
    # Ensembl-Transcript-IDs at second place (delimeter: "|")
    gene_ids_in_rows <- sapply(rownames(dds_object), function(x) strsplit(x, '\\|')[[1]], USE.NAMES=FALSE)[2,]
    gene_ids_in_rows <- sapply(gene_ids_in_rows, function(x) substr(x, 1, 15), USE.NAMES=FALSE)
  }
  else {
    stop("Unknown algorithm used for quantification")
  }

  # Set new rownames
  rownames(dds_object) <- gene_ids_in_rows
  return(dds_object)
}


add_gene_symbol_and_entrez_id_to_results <- function(result_object, with_entrez_id=FALSE) {
	"
	Adds gene symbols and entrez-IDs to results object.
	"
	gene_ids_in_rows <- rownames(result_object)

	# Add gene symbols
	# Something breaks here when setting a new column name
	result_object$symbol <- AnnotationDbi::mapIds(org.Hs.eg.db::org.Hs.eg.db,
												 keys=gene_ids_in_rows,
												 column="SYMBOL",
												 keytype="ENSEMBL",
												 multiVals="first")
	if (with_entrez_id) {
		# Add ENTREZ-ID
		result_object$entrez <- AnnotationDbi::mapIds(org.Hs.eg.db::org.Hs.eg.db,
													 keys=gene_ids_in_rows,
													 column="ENTREZID",
													 keytype="ENSEMBL",
													 multiVals="first")
	}

	return(result_object)
}


# ---------------- DESeq2 analysis ----------------
explore_deseq2_results <- function(dds, false_discovery_rate, output_file_paths, run_parallel=FALSE,
                                   used_algorithm) {
	# Results: Metadata
	# 1. baseMean: Average/Mean of the normalized count values divided by size factors, taken over ALL samples
	# 2. log2FoldChange: Effect size estimate. Change of gene's expression
	# 3. lfcSE: Standard Error estimate for log2FoldChange
	# 4. Wald statistic results
	# 5. Wald test p-value ->  p value indicates the probability that a fold change as strong as the observed one, or even stronger, would be seen under the situation described by the null hypothesis.
	# 6. BH adjusted p-value
	print("Creating DESeq2 results object")
	results_obj <- results(dds, alpha=false_discovery_rate, parallel=run_parallel)
	capture.output(summary(results_obj), file=output_file_paths[1])


	# ------------------ 6. Plotting results --------------------
	# Contrast usage
	# TODO: Failed... -> Remove
	# print("Plotting results")
	# chosen_contrast <- tail(resultsNames(results_obj), n=1)	     # get the last contrast: Comparison of states
	# print("resultsNames(results_obj)"); print(resultsNames(results_obj))
	# print("chosen_contrast"); print(chosen_contrast)

	# ------------ 6.1 MA plot without shrinking --------------
	# - M: minus <=> ratio of log-values -> log-Fold-change on Y-axis
	# - A: average -> Mean of normalized counts on X-axis

	# res.noshr <- results(dds, contrast=chosen_contrast, parallel=run_parallel)
	res.no_shrink <- results(dds, parallel=run_parallel)
	jpeg(output_file_paths[2], width=800, height=800)
	DESeq2::plotMA(res.no_shrink, ylim = c(-5, 5), main="MA plot without shrinkage")
	dev.off()

	# ------------ 6.2 MA plot with apeGLM shrinking --------------
	# apeglm method for shrinking coefficients
	# -> which is good for shrinking the noisy LFC estimates while
	# giving low bias LFC estimates for true large differences

	# TODO: apeglm requires coefficients. However, resultsNames(results_obj) does not return any coefficients...
	# res <- lfcShrink(dds, coef=chosen_contrast, type="apeglm", parallel=run_parallel)      # Pass contrast and shrink results
	# Use ashr as shrinkage method
	res <- DESeq2::lfcShrink(dds, res=res.no_shrink, type="ashr", parallel=run_parallel)
	jpeg(output_file_paths[3], width=800, height=800)
	DESeq2::plotMA(res, ylim = c(-5, 5), main="MA plot with ashr shrinkage")
	dev.off()

	# ------------ 6.3 Plot distribution of p-values in histogram --------------
	jpeg(output_file_paths[4], width=800, height=800)
	hist(res$pvalue[res$baseMean > 1], breaks = 0:20/20, col = "grey50", border = "white",
		main="Histogram of distribution of p-values (non-adjusted)", xlab="p-value", ylab="Frequency")
	dev.off()

	jpeg(output_file_paths[5], width=800, height=800)
	hist(res$padj[res$baseMean > 1], breaks = 0:20/20, col = "grey50", border = "white",
		main="Histogram of distribution of p-values (adjusted)", xlab="p-value", ylab="Frequency")
	dev.off()

	# ------------- 6.4 Gene clustering -----------------
	print("Plotting results: Gene clustering")
	# Gene clustering -> Heatmap of divergence of gene's expression in comparison to average over all samples
	# Transform count results to reduce noise for low expression genes
	transformed_dds <- DESeq2::rlog(dds, blind=FALSE)

	# Get top Genes -> with most variance in VSD-values/rlog-transformed counts
	topVarGenes <- head(order(genefilter::rowVars(SummarizedExperiment::assay(transformed_dds)), decreasing=TRUE), 20)
	mat  <- SummarizedExperiment::assay(transformed_dds)[ topVarGenes, ]
	mat  <- mat - rowMeans(mat)   # difference to mean expression
	# Transform row names to gene symbols
	rownames(mat) <- add_gene_symbol_and_entrez_id_to_results(mat)$symbol
	# Additional annotations
	anno <- as.data.frame(SummarizedExperiment::colData(transformed_dds)[, c("condition", "add_info")])
	# Create plot
	jpeg(output_file_paths[6], width=800, height=800)
	pheatmap::pheatmap(mat, annotation_col=anno,
			 main="Divergence in gene expression in comparison to average over all samples")
	dev.off()

	# ---------- 7. Gene annotations --------------
	res <- add_gene_symbol_and_entrez_id_to_results(res)
	resOrdered <- res[order(res$pvalue),]					# Sort results by p-value

	# Exporting results
	resOrderedDF <- as.data.frame(resOrdered)
	write.csv(resOrderedDF, file=output_file_paths[7])
}



# ----------------- Main function -----------------
main_function <- function(){
	threads <- snakemake@threads[[1]]
	register(MulticoreParam(workers=threads))

	# Snakemake variables
	deseq_dataset_obj <- snakemake@input[["deseq_dataset_r_obj"]]
	output_file_paths <- snakemake@params[["output_file_paths"]]
	# For gene-ID matching: Used in rename_rownames_with_ensembl_id_matching()
	used_algorithm <- snakemake@params[["used_algorithm"]]
	# Adjusted p-value threshold
	false_discovery_rate <- 0.05

    # Load deseq dataset object
    dds <- readRDS(deseq_dataset_obj)

	# Create DESeq2 results object
	print("Creating DESeq2 results object")
	result_list <- savely_create_deseq2_object(dds)
	deseq2_obj <- result_list$dds
	run_parallel <- result_list$run_parallel

	# Rename rows
	deseq2_obj <- rename_rownames_with_ensembl_id_matching(deseq2_obj, used_algorithm)

	# Run statistical analysis
	print("Running statistical analysis")
	explore_deseq2_results(deseq2_obj, false_discovery_rate, output_file_paths, run_parallel=run_parallel,
	                       used_algorithm=used_algorithm)
}


# ----------------- Run main function -----------------
main_function()
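
For downstream inspection, the exported results table can be reloaded without rerunning DESeq2. A minimal sketch, assuming the p-value-ordered CSV written above by explore_deseq2_results() (the file name here is only a placeholder):

# Minimal sketch: reload the exported DESeq2 results and keep significant genes
# Assumption: "deseq2_results.csv" stands in for output_file_paths[7] used above
res_df <- read.csv("deseq2_results.csv", row.names=1)

# Keep genes below the FDR threshold used above and with a notable effect size
significant <- subset(res_df, !is.na(padj) & padj < 0.05 & abs(log2FoldChange) > 1)
significant <- significant[order(significant$padj), ]
head(significant[, c("symbol", "baseMean", "log2FoldChange", "padj")])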
library("DESeq2")
library("AnnotationDbi")
library("org.Hs.eg.db")
library("clusterProfiler")
library("ggplot2")
library("enrichplot")


create_gsea_plots <- function(gsea_obj, dotplot_file_path, gsea_plot_file_path_1, gsea_plot_file_path_2, method) {
  # Create plots for GSEA results

  # -------- Plottings ---------
  # 1. Dotplot
  jpeg(dotplot_file_path,width=800, height=800)
  print(clusterProfiler::dotplot(gsea_obj, showCategory=30)
          + ggplot2::ggtitle(paste0("DotPlot for GSE-analysis (top 30 results) with method: ", method)))
  dev.off()

  # 2. GSEA-Plot for top 10 results
  jpeg(gsea_plot_file_path_1, width=800, height=800)
  print(enrichplot::gseaplot2(gsea_obj, geneSetID = 1:5, pvalue_table=FALSE,
                             title = paste0("GSEA-Plot for top 1-5 results with method: ", method)))
  dev.off()

  jpeg(gsea_plot_file_path_2, width=800, height=800)
  print(enrichplot::gseaplot2(gsea_obj, geneSetID = 6:10, pvalue_table=FALSE,
                             title = paste0("GSEA-Plot for top 6-10 results with method: ", method)))
  dev.off()

  # for (count in c(1:10)) {
  #   jpeg(paste0(plot_output_dir, "/gsea_plot_", method, "_", count, ".jpg"), width=800, height=800)
  #   print(clusterProfiler::gseaplot(gsea_obj, geneSetID=1, pvalue_table=TRUE))
  #   dev.off()
  # }
}

explore_gsea_go <- function(ordered_gene_list, summary_file_path, gsea_obj_file_path) {
  # Gene Set Enrichment Analysis (GO)
  # params:
  #   ordered_gene_list: Gene list ordered by the DESeq2 stat value
  #   summary_file_path: Output CSV file for the ordered GSEA summary
  #   gsea_obj_file_path: Output RDS file for the GSEA object

  # ----- 1. GO Enrichment --------
  # GO comprises three orthogonal ontologies, i.e. molecular function (MF),
  # biological process (BP), and cellular component (CC)
  go_gsea <- clusterProfiler::gseGO(ordered_gene_list,
               ont = "BP",
               keyType = "ENSEMBL",
               OrgDb = "org.Hs.eg.db",
               verbose = TRUE)

  df_go_gsea <- as.data.frame(go_gsea)
  df_go_gsea <- df_go_gsea[order(df_go_gsea$p.adjust),]
  write.csv(df_go_gsea, file=summary_file_path)

  # Save GSEA object
  saveRDS(go_gsea, file=gsea_obj_file_path)

  return(go_gsea)
}


explore_gsea_kegg <- function(ordered_gene_list, summary_file_path, gsea_obj_file_path) {
  # KEGG pathway enrichment analysis
  # params:
  #   ordered_gene_list: Ordered (i.e. deseq2 stat) list of genes

  names(ordered_gene_list) <- mapIds(org.Hs.eg.db, keys=names(ordered_gene_list), column="ENTREZID",
                                                      keytype="ENSEMBL", multiVals="first")
  # res$symbol <- mapIds(org.Hs.eg.db, keys=row.names(res), column="SYMBOL", keytype="ENSEMBL", multiVals="first")
  # res$entrez <- mapIds(org.Hs.eg.db, keys=row.names(res), column="ENTREZID", keytype="ENSEMBL", multiVals="first")
  # res$name <- mapIds(org.Hs.eg.db, keys=row.names(res), column="GENENAME", keytype="ENSEMBL", multiVals="first")

  # ----- 1. KEGG pathway enrichment --------
  kegg_gsea <- clusterProfiler::gseKEGG(geneList=ordered_gene_list,
                                        organism='hsa',
                                        verbose=TRUE)

  df_kegg_gsea <- as.data.frame(kegg_gsea)
  df_kegg_gsea <- df_kegg_gsea[order(df_kegg_gsea$p.adjust),]
  write.csv(df_kegg_gsea, file=summary_file_path)

  # Save GSEA object
  saveRDS(kegg_gsea, file=gsea_obj_file_path)
  return(kegg_gsea)
}


explore_gsea_wp <- function(ordered_gene_list,
                            summary_file_path, gsea_obj_file_path) {
  # WikiPathway
  # params:
  #   ordered_gene_list: Ordered (i.e. deseq2 stat) list of genes

  names(ordered_gene_list) <- mapIds(org.Hs.eg.db, keys=names(ordered_gene_list), column="ENTREZID",
                                                      keytype="ENSEMBL", multiVals="first")

  # ----- 1. WikiPathways enrichment --------
  wp_gsea <- clusterProfiler::gseWP(
    geneList=ordered_gene_list,
    organism="Homo sapiens",
    verbose=TRUE)

  df_wp_gsea <- as.data.frame(wp_gsea)
  df_wp_gsea <- df_wp_gsea[order(df_wp_gsea$p.adjust),]
  write.csv(df_wp_gsea, file=summary_file_path)

  # Save GSEA object
  saveRDS(wp_gsea, file=gsea_obj_file_path)
  return(wp_gsea)
}


main <- function() {
  # Input
  input_dseq_dataset_obj <- snakemake@input$deseq_dataset_obj

  # Outputs
  gsea_go_obj_file_path <- snakemake@output$gsea_go_obj_file_path
  gsea_go_summary_file_path <- snakemake@output$gsea_go_summary_file_path
  gsea_kegg_obj_file_path <- snakemake@output$gsea_kegg_obj_file_path
  gsea_kegg_summary_file_path <- snakemake@output$gsea_kegg_summary_file_path
  gsea_wp_obj_file_path <- snakemake@output$gsea_wp_obj_file_path
  gsea_wp_summary_file_path <- snakemake@output$gsea_wp_summary_file_path

  # DotPlots
  dotplot_gsea_go_file_path <- snakemake@output$dotplot_gsea_go_file_path
  dotplot_gsea_kegg_file_path <- snakemake@output$dotplot_gsea_kegg_file_path
  dotplot_gsea_wp_file_path <- snakemake@output$dotplot_gsea_wp_file_path
  # GSEA Plots
  gsea_go_top10_plot_file_path_1 <- snakemake@output$gsea_go_top10_plot_file_path_1
  gsea_kegg_top10_plot_file_path_1 <- snakemake@output$gsea_kegg_top10_plot_file_path_1
  gsea_wp_top10_plot_file_path_1 <- snakemake@output$gsea_wp_top10_plot_file_path_1
  gsea_go_top10_plot_file_path_2 <- snakemake@output$gsea_go_top10_plot_file_path_2
  gsea_kegg_top10_plot_file_path_2 <- snakemake@output$gsea_kegg_top10_plot_file_path_2
  gsea_wp_top10_plot_file_path_2 <- snakemake@output$gsea_wp_top10_plot_file_path_2

  # Params
  input_algorithm <- snakemake@params$input_algorithm

  # Load DataSet
  dds <- readRDS(input_dseq_dataset_obj)
  # Create DESeq2 object
  dds <- DESeq(dds)
  res <- DESeq2::results(dds)

  # Filtering
  res <- na.omit(res)
  res <- res[res$baseMean >50,] # Filter out genes with low expression

  # Order output -> We choose stat, which takes log-Fold as well as SE into account
  # Alternative: lfc * -log10(P-value)
  # order descending so use minus sign
  res <- res[order(-res$stat),]

  # --------- Create input gene list ---------------
  # Extract stat values
  gene_list <- res$stat
  # Add rownames
  if (input_algorithm == "salmon") {
    # Ensembl-Transcript-IDs at first place
    names(gene_list) <- substr(rownames(res), 1, 15)
  }
  else if (input_algorithm == "kallisto") {
    # Ensembl-Transcript-IDs at second place (delimiter: "|")
    gene_ids_in_rows <- sapply(rownames(res), function(x) strsplit(x, '\\|')[[1]], USE.NAMES=FALSE)[2,]
    gene_ids_in_rows <- sapply(gene_ids_in_rows, function(x) substr(x, 1, 15), USE.NAMES=FALSE)
    names(gene_list) <- gene_ids_in_rows
  }
  else {
    stop("Unknown algorithm used for quantification")
  }

  # =========== Run  GSEA ===========
  # ----- 1. GO Enrichment --------
  go_gsea_obj <- explore_gsea_go(gene_list, gsea_go_summary_file_path, gsea_go_obj_file_path)
  create_gsea_plots(go_gsea_obj, dotplot_gsea_go_file_path,
                    gsea_go_top10_plot_file_path_1, gsea_go_top10_plot_file_path_2, "go")

  # ----- 2. KEGG Enrichment --------
  kegg_gsea_obj <- explore_gsea_kegg(gene_list, gsea_kegg_summary_file_path, gsea_kegg_obj_file_path)
  create_gsea_plots(kegg_gsea_obj, dotplot_gsea_kegg_file_path,
                    gsea_kegg_top10_plot_file_path_1, gsea_kegg_top10_plot_file_path_2, "kegg")

  # ----- 3. WikiPathway Enrichment --------
  wp_gsea_obj <- explore_gsea_wp(gene_list, gsea_wp_summary_file_path, gsea_wp_obj_file_path)
  create_gsea_plots(wp_gsea_obj, dotplot_gsea_wp_file_path,
                    gsea_wp_top10_plot_file_path_1, gsea_wp_top10_plot_file_path_2, "wp")
}

# Run main function
main()
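
The saved GSEA objects can be reloaded later for further inspection without repeating the enrichment. A minimal sketch, assuming the RDS file written by explore_gsea_go() above (the file name is only a placeholder):

# Minimal sketch: reload a saved GSEA object and inspect the top enriched terms
# Assumption: "gsea_go.rds" stands in for gsea_go_obj_file_path used above
go_gsea <- readRDS("gsea_go.rds")

# The result behaves like a data frame; sort by adjusted p-value as above
df_go <- as.data.frame(go_gsea)
head(df_go[order(df_go$p.adjust), c("ID", "Description", "NES", "p.adjust")], 10)

# Individual gene sets can be re-plotted, e.g. the top-ranked set
# print(enrichplot::gseaplot2(go_gsea, geneSetID=1))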
summarize_sample_counts <- function(input_count_files, sample_names) {
	# Loop through files and collect them in a list
	cov <- list()
	for (i in 1:length(input_count_files)) {
		# Load files
		count_file <- input_count_files[i]
		sample_name <- sample_names[i]

		# Import into our list and set col names
		cov[[i]] <- read.table(count_file, sep="\t", header=FALSE, stringsAsFactors=FALSE)
		colnames(cov[[i]]) <- c("ENSEMBL_GeneID", "GeneSymbol", sample_name)
	}

	## construct one data frame from list of data.frames using reduce function
	# Reduce: Takes function and vector, then applies function to first two elements of vector, then result of that to third element, etc.
	df <- Reduce(function(x,y) merge(x = x, y = y, by =c("ENSEMBL_GeneID", "GeneSymbol")), cov)

	return(df)
}


main_function <- function() {
	# Input count files (generated by htseq-count)
	input_count_files <- snakemake@input[["sample_count_files"]]
	# Sample names
	sample_names <- snakemake@params[["sample_names"]]
	# Output file
	output_total_counts_table <- snakemake@output[["total_counts_table"]]

	# Additional count file
	# additional_count_file <- snakemake@params[["additional_count_file"]]

	# 1. Collect all counts
	total_counts <- summarize_sample_counts(input_count_files , sample_names)

	# 2. Add additional counts -> This is merged with inner joint!
	# if (addtional_count_file != NULL && additional_count_file != "NA" && additional_count_file != "") {
	# 	additional_counts <- read.table(additional_count_file, sep="\t", header=TRUE, stringsAsFactors=FALSE)
	# 	print(paste(c("First row lenght:"), length(additional_counts[1,])))
	# 	print(paste(c("Second row length:"), length(additional_counts[2,])))
	# 	# Remove suffixes of form "[1-9]?_[1-9]? from first column
	# 	additional_counts$geneID <- gsub(".[0-9]+_[0-9]+$", "", additional_counts$geneID)
	# 	print(paste(c("First row lenght:"), length(additional_counts[1,])))
	# 	print(paste(c("Second row length:"), length(additional_counts[2,])))
	# 	total_counts <- merge(x = total_counts, y = additional_counts, by.x="ENSEMBL_GeneID", by.y="geneID")
	# 	print(paste(c("First row lenght:"), length(total_counts[1,])))
	# 	print(paste(c("Second row length:"), length(total_counts[2,])))
	# }

	## 3. write to file
	write.table(total_counts, output_total_counts_table, sep="\t", quote=FALSE, row.names=FALSE)
	sanity_check <- read.table(output_total_counts_table, sep="\t", header=TRUE, stringsAsFactors=FALSE)
	print(paste(c("First row lenght:"), length(sanity_check[1,])))
	print(paste(c("Second row length:"), length(sanity_check[2,])))
}

# Run main
main_function()
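
Each per-sample input file is expected to be a tab-separated, header-less table with three columns (Ensembl gene ID, gene symbol, count); summarize_sample_counts() merges these tables on the first two columns. A minimal, self-contained sketch with dummy data (file contents and sample names are illustrative only):

# Minimal sketch: merge two dummy three-column count tables with summarize_sample_counts()
file_a <- tempfile(fileext=".tsv"); file_b <- tempfile(fileext=".tsv")
writeLines(c("ENSG00000000003\tTSPAN6\t10", "ENSG00000000005\tTNMD\t0"), file_a)
writeLines(c("ENSG00000000003\tTSPAN6\t7",  "ENSG00000000005\tTNMD\t2"), file_b)

merged <- summarize_sample_counts(c(file_a, file_b), c("sampleA", "sampleB"))
print(merged)
#    ENSEMBL_GeneID GeneSymbol sampleA sampleB
# 1 ENSG00000000003     TSPAN6      10       7
# 2 ENSG00000000005       TNMD       0       2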
library("OUTRIDER")
library("annotables")
library("data.table")
library("ggplot2")
library("ggpubr")
library("plotly")

library("BiocParallel")   # For parallelization needed
# BPPARAM = MulticoreParam(snakemake@threads)


create_outrider_data_set <- function(ctsFile_path) {
    ### Create OUTRIDER data set
    # Input: ctsFile_path: File path leading to count file
    # Output: OUTRIDER data set

    ctsTable <- read.table(ctsFile_path, sep="\t", header=FALSE, stringsAsFactors=FALSE)

    countDataMatrix <- as.matrix(ctsTable[-1,-1])   # Extract counts & Ignore first column (geneID) and first row (sample names)
    mode(countDataMatrix) <- "integer"              # Convert to integer
    rownames(countDataMatrix) <- ctsTable[-1,1]     # Set rownames to geneIDs
    colnames(countDataMatrix) <- ctsTable[1,-1]     # Set colnames to sample names

    # Create OutriderDataSet
    ods <- OutriderDataSet(countData=countDataMatrix)
    print("Done creating OutriderDataSet")

    return(ods)
}


filter_outrider_data_set <- function(ods) {
    ### Filter OUTRIDER data set: Remove genes with low counts
    # Input: ods: OUTRIDER data set
    # Output: Filtered OUTRIDER data set

    # --------- 3. Filter out non expressed genes --------
    # filter out non expressed genes
    # minCounts: If TRUE, only genes with 0 counts in all samples are filtered out.
    # ALTERNATIVE: If one provides also GTF-annotation, then based on FPKM values filtering is applied
    ods <- filterExpression(ods, minCounts=TRUE, filterGenes=FALSE)
    print("Done filtering out non expressed genes")

    # -------- 3.1 Plotting of the filtered data ---------
    # TODO: Might not be applicable since we do not use FPKM values for filtering

    # Plot FPKM distribution across all sample/gene pairs
    # png_file_path <- file.path(plot_output_dir, paste("fpkm_distribution_across_all_samples_and_genes.png"))
    # png(png_file_path)
    # # TODO: This might not work, since we are not working with FPKM values
    # plotFPKM(ods) + theme(legend.position = 'bottom')
    # dev.off()

    # Apply filter
    ods <- ods[mcols(ods)[['passedFilter']]]
    print("Done applying filter")

    # -------- 3.2 Plotting of potential co-variation ---------
    # TODO: Might not be applicable since we do not use FPKM values
    # -> Requires also a sample annotation file

    # # Make heatmap figure bigger
    # options(repr.plot.width=6, repr.plot.height=5)
    # png_file_path = file.path(plot_output_dir, paste("covariation_heatmap.png"))
    # png(png_file_path)
    #
    # # use normalize=FALSE since the data is not yet corrected
    # # use columns from the annotation to add labels to the heatmap
    # plotCountCorHeatmap(ods, colGroups=c("adaptors_file"), rowGroups="condition", normalize=FALSE)
    # dev.off()

    return(ods)
}


# TODO: clean up naming of output files
# TODO: Do not give output dir, but directly the output files?!
main_function <- function() {
    # Counts file
    ctsFile_path <- snakemake@input[["counts_file"]]

    # With estimated size factors
    outrider_obj_with_estimated_size_factors_txt <- snakemake@output[["outrider_obj_with_estimated_size_factors_txt"]]
    outrider_obj_with_estimated_size_factors_rds <- snakemake@output[["outrider_obj_with_estimated_size_factors_rds"]]

    # Final Outrider object
    output_final_outrider_obj_file <- snakemake@output[["outrider_object_file"]]

    # ------- 1. Create OutriderDataSet ---------
    ods <- create_outrider_data_set(ctsFile_path)

    # ------- 2. Filter out non expressed genes ---------
    ods <- filter_outrider_data_set(ods)

    # -------- 3. Run full Outrider pipeline ------------
    # run full OUTRIDER pipeline (control, fit model, calculate P-values)

    # Note: OUTRIDER can crash when the sample groups are strongly unbalanced!
    #     -> Size factors, which account for sequencing depth,
    #     differ strongly between sample groups -> controlling for confounders may then fail
    #     - first remote error: "L-BFGS-B needs finite values of 'fn'" -> filter values?!
    #     - https://github.com/gagneurlab/OUTRIDER/issues/25

    # --------- For debugging purposes: Saves estimated size factors ---------
    print("Investigate estimated size factors")
    ods <- OUTRIDER::estimateSizeFactors(ods)
    saveRDS(ods, outrider_obj_with_estimated_size_factors_rds)
    sink(outrider_obj_with_estimated_size_factors_txt)
    print(OUTRIDER::sizeFactors(ods))
    sink()
    # ------------------------------------------

    print("Start running full OUTRIDER pipeline")
    ods <- OUTRIDER(ods, BPPARAM=SerialParam())
    # Save the OutriderDataSet -> can be used for further analysis via R: readRDS(file)
    saveRDS(ods, output_final_outrider_obj_file)
}

main_function()
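
The saved OutriderDataSet can be reloaded for quick sanity checks before the exploration script below is run. A minimal sketch, assuming the RDS file written above (the file name is only a placeholder):

# Minimal sketch: reload the fitted OutriderDataSet and take a first look at the results
# Assumptions: "outrider_object.rds" stands in for the saved outrider_object_file; library("OUTRIDER") is loaded
ods <- readRDS("outrider_object.rds")

# Size factors per sample (also written to the debugging text file above)
print(OUTRIDER::sizeFactors(ods))

# Significant outliers (default: padj < 0.05), as used in the exploration script below
res <- results(ods)
head(res[order(res$padjust), ])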
library("OUTRIDER")
library("annotables")
library("data.table")
library("ggplot2")
library("ggpubr")
library("plotly")

library("BiocParallel")   # For parallelization needed
# BPPARAM = MulticoreParam(snakemake@threads)


explore_outrider_results <- function(ods, output_dir,
                                     sample_ids, genes_of_interest,
                                     p005_file, p010_file,
                                     ab_genes_per_sample_file, ab_samples_per_gene_file) {
    ### Extract OUTRIDER results
    # Input: ods: OUTRIDER data set
    #        sample_ids: Sample IDs
    #        genes_of_interest: Genes of interest
    #        plot_output_dir: Output directory for plots
    #        outrider_obj_file: File path to save OUTRIDER object
    #        p005_file: File path to save p-value 5% results
    #        p010_file: File path to save p-value 10% results
    #        ab_genes_per_sample_file: File path to save aberrant genes per sample
    #        ab_samples_per_gene_file: File path to save aberrant samples per gene
    # Output: OUTRIDER object, p005 results, p010 results, aberrant genes per sample, aberrant samples per gene

    # -------- 5. Output Results ------------
    # -------- 5.1 significant results ----------
    # get results (default only significant, padj < 0.05)
    res <- results(ods)
    res <- res[order(res$padjust),]
    write.table(res, file=p005_file, sep="\t", quote=FALSE, row.names=FALSE)

    # get results (default only significant, padj < 0.10)
    res <- results(ods, padjCutoff=0.1)
    res <- res[order(res$padjust),]
    write.table(res, file=p010_file, sep="\t", quote=FALSE, row.names=FALSE)

    # -------- 5.2 Aberrant expression ----------
    # number of aberrant genes per sample
    nr_aberrant_genes_per_sample <- sort(aberrant(ods, by="sample"))
    # Use sink to include sample-names in output file
    sink(ab_genes_per_sample_file); print(nr_aberrant_genes_per_sample); sink()

    # number of aberrant samples per gene
    nr_aberrant_samples_per_gene <- sort(aberrant(ods, by="gene"))
    sink(ab_samples_per_gene_file); print(nr_aberrant_samples_per_gene); sink()


    # -------- 5.3 Volcano Plots for p-values ----------
    print("Given sample IDs:")
    print(sample_ids)

    # Convert sample IDs
    # 1. "-" are converted into "."
    # 2. Samples which start with numbers get the prefix: "X"
    for (i in (1:length(sample_ids))) {
        sample_ids[i] <- gsub("-", ".", sample_ids[i])
        # if ( grepl("^[1-9].*$", sample_ids[i]) ) {
        #     sample_ids[i] <- paste0("X", sample_ids[i])  # Add X to sample IDs starting with a number -> Comply with Outrider conversion
        # }
    }

    tryCatch(
        expr = {
            for (sample_id in sample_ids) {
                html_file_path <- file.path(output_dir, paste(sample_id, "_pvalues_volcano.html", sep=""))
                png_file_path <- file.path(output_dir, paste(sample_id, "_pvalues_volcano.png", sep=""))

                # A. Create interactive Plotly plot
                interactive_plot <- plotVolcano(ods, sample_id, basePlot=FALSE)
                htmlwidgets::saveWidget(as_widget(interactive_plot), html_file_path)

                # B. Create static plot
                png(png_file_path)
                print(plotVolcano(ods, sample_id, basePlot=TRUE))
                dev.off()
            }
        },
        error = function(e) {
            print("Error in creating volcano plots")
            print(e)
        },
        finally = {
            print("Yes, Volcano plots are done!")
        }
    )

    # -------- 5.4 Gene level plots ----------
    # 5.4.1 Expression Rank
    tryCatch(
        expr = {
            for (gene_name in genes_of_interest) {
                html_file_path <- file.path(output_dir, paste(gene_name, "_expressionRank.html", sep=""))
                jpeg_file_path <- file.path(output_dir, paste(gene_name, "_expressionRank.jpg", sep=""))

                # A. Create interactive Plotly plot
                interactive_plot <- plotExpressionRank(ods, gene_name, basePlot=FALSE)
                htmlwidgets::saveWidget(as_widget(interactive_plot), html_file_path)

                # B. Create static plot
                jpeg(file=jpeg_file_path)
                print(plotExpressionRank(ods, gene_name, basePlot=TRUE))
                dev.off()
            }
        },
        error = function(e) {
            print("Error in plotExpressionRank")
            print(e)
        },
        finally = {
            print("Yes, Expression Rank plots are done!")
        }
    )

    # 5.4.2 Quantile-Quantile-Plots (Q-Q plots)
    tryCatch(
        expr = {
            for (gene_name in genes_of_interest) {
                jpeg_file_path <- file.path(output_dir, paste(gene_name, "_qqPlot.jpg", sep=""))

                # B. Create static plot
                jpeg(file=jpeg_file_path)
                print(plotQQ(ods, gene_name))
                dev.off()
            }
        },
        error = function(e) {
            print("Error in plotQQ")
            print(e)
        },
        finally = {
            print("Yes, Q-Q plots are done!")
        }
    )

    # 5.4.3 Observed versus expected Expression
    tryCatch(
        expr = {
            for (gene_name in genes_of_interest) {
                html_file_path <- file.path(output_dir, paste(gene_name, "_expectedVsObservedExpression.html", sep=""))
                jpeg_file_path <- file.path(output_dir, paste(gene_name, "_expectedVsObservedExpression.jpg", sep=""))

                # A. Create interactive Plotly plot
                interactive_plot <- plotExpectedVsObservedCounts(ods, gene_name, basePlot=FALSE)
                htmlwidgets::saveWidget(as_widget(interactive_plot), html_file_path)

                # B. Create static plot
                jpeg(file=jpeg_file_path)
                print(plotExpectedVsObservedCounts(ods, gene_name, basePlot=TRUE))
                dev.off()
            }
        },
        error = function(e) {
            print("Error in plotExpectedVsObservedCounts")
            print(e)
        },
        finally = {
            print("Yes, Observed versus expected Expression plots are done!")
        }
    )
}


# TODO: clean up naming of output files
# TODO: Do not give output dir, but directly the output files?!
main_function <- function() {
  # Input
  input_final_outrider_obj_file <- snakemake@input[["outrider_object_file"]]

  # Output directory -> For saving plots
  output_dir <- snakemake@output[[1]]

  # Outputs
  significant_results_p005_output_file <- snakemake@output[["significant_results_p005_file"]]
  significant_results_p010_output_file <- snakemake@output[["significant_results_p010_file"]]
  nr_aberrant_genes_per_sample_output_file <- snakemake@output[["nr_aberrant_genes_per_sample"]]
  nr_aberrant_samples_per_gene_output_file <- snakemake@output[["nr_aberrant_samples_per_gene"]]

  # Params
  sample_ids <- snakemake@params[["sample_ids"]]
  genes_of_interest <- snakemake@params[["genes_of_interest"]]

  # ------- 1. Create output directory ---------
  dir.create(file.path(output_dir), recursive=TRUE, showWarnings=FALSE)     # Create plot-directory

  ods <- readRDS(input_final_outrider_obj_file)

  # -------- 5. Output Results ------------
  explore_outrider_results(ods, output_dir,
                           sample_ids, genes_of_interest,
                           significant_results_p005_output_file, significant_results_p010_output_file,
                           nr_aberrant_genes_per_sample_output_file, nr_aberrant_samples_per_gene_output_file)
}

main_function()
shell:
    "python {params.script} "
    "--input_gtf_file {input} "
    "--output_gtf_file {output} "
    "--log_file {log}"
shell:
    "python {params.script} -f {output.subread_compatible_flat_gtf_file} "
    "{input.gtf_file} {output.gff_file} 2>{log}"
SnakeMake From line 122 of rules/dexseq.smk
shell:
    "featureCounts -f "             # -f Option to count reads overlapping features
    "-O "                           # -O Option to count reads overlapping to multiple exons
    "-J "                           # -J: Count number of reads supporting each exon-exon junction -> Creates a separate file
    #"--fracOverlap 0.2 "            # --fracOverlap FLOAT: Minimum fraction of a read that must overlap a feature to be assigned to that feature
    "-s {params.stranded} "         # -s Strandedness: 0 (unstranded), 1 (stranded) and 2 (reversely stranded).
    "{params.paired} "              # -p If specified, fragments (or templates) will be counted instead of reads. 
    "-T {threads} "                 # Specify number of threads
    "-F {params.format} "           # Specify format of the provided annotation file
    "-a {input.subread_compatible_flat_gtf_file} "          # Name of annotation file
    "-o {output.subread_exon_counting_bin_file} "           # Output file including read counts
    "{input.bam_file} "
    "2> {log}"
script:
    "../scripts/dexseq/merge_feature_count_files.py"
SnakeMake From line 184 of rules/dexseq.smk
script:
    "../scripts/dexseq/dexseq_data_analysis.R"
SnakeMake From line 221 of rules/dexseq.smk
script:
    "../scripts/dexseq/extract_significant_results.py"
SnakeMake From line 244 of rules/dexseq.smk
script:
    "../../../scripts/create_report_html_files.R"
SnakeMake From line 275 of rules/dexseq.smk
script:
    "../scripts/dexseq/dexseq_create_html_summary_reports.R"
SnakeMake From line 306 of rules/dexseq.smk
run:
    annotation_table = pep.sample_table.copy()
    # Add the bam file paths to the annotation table in column "bamFile"
    annotation_table["bamFile"] = input.bam_file_paths
    annotation_table["sampleID"] = annotation_table["sample_name"]
    annotation_table["pairedEnd"] = "TRUE"

    # Drop all columns except "sampleID", "bamFile" and "pairedEnd"
    annotation_table = annotation_table[["sampleID", "bamFile", "pairedEnd"]]

    # Save output table
    annotation_table.to_csv(output.annotation_table, sep="\t", index=False)
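
The rule above writes a tab-separated annotation table with the columns sampleID, bamFile and pairedEnd, which the FRASER scripts below read back in via fread(). A minimal sketch of a sanity check on that file (the path is only a placeholder):

# Minimal sketch: verify the FRASER annotation table produced by the rule above
# Assumption: "annotation_table.tsv" stands in for output.annotation_table
library("data.table")
anno <- fread("annotation_table.tsv", header=TRUE, sep="\t", stringsAsFactors=FALSE)
stopifnot(all(c("sampleID", "bamFile", "pairedEnd") %in% colnames(anno)))
stopifnot(all(file.exists(anno$bamFile)))   # All referenced BAM files should exist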
script:
    "../scripts/fraser/fraser_dataset_exploration.R"
SnakeMake From line 102 of rules/fraser.smk
script:
    "../scripts/fraser/create_fraser_analysis_plots.R"
SnakeMake From line 145 of rules/fraser.smk
script:
    "../../../scripts/create_report_html_files.R"
SnakeMake From line 180 of rules/fraser.smk
wrapper:
    "v1.18.3/bio/samtools/sort"
shell:
    "mkdir -p {output.ref_dir};"
    "ln -s {input.gtf_file} {output.ref_dir}/transcripts.gtf;"
    "ln -s {input.fasta_file} {output.ref_dir}/genome.fa;"
    "IRFinder -m BuildRefProcess -t {threads} "
    "-r {output.ref_dir} "
    "-b {params.bed_file_consensus_excludable} "
    "-R {params.bed_file_non_polya} "
    "2> {log};"
shell:
    "IRFinder -m BAM "
    "-r {input.reference_dir} "
    "-d {output.output_dir} "
    "-t {threads} "
    "{input.bam_file} 2> {log}"
run:
    with open(output[0], "w") as f:
        for path in input:
            f.write(path + "\n")
run:
    with open(output[0], "w") as f:
        assert(len(params.samples) == len(params.conditions))
        f.write("SampleNames\tCondition\n")
        for i in range(len(params.samples)):
            f.write(params.samples[i] + "\t" + params.conditions[i] + "\n")
script:
    "../scripts/irfinder/run_glm_analysis.R"
shell:
    "regtools junctions extract "
    "-a {params.a} "
    "-m {params.m} "
    "-M {params.M} "
    "-s {params.strandedness} "
    "-o {output} "
    "{input.sorted_bam} 2>{log}"
run:
    with open(output[0], "w") as f:        # Creates file / Rewrites the given file
        for file in input.bam_files:
            f.write(file + "\n")
shell:
    "mkdir -p {output.output_dir}; "
    "python {params.leafcutter_clustering_script_path} "
    "-j {input.juncfile_list} "    # Juncfile list
    "-m {params.m} "    # m split reads needed so that a cluster is considered
    "-l {params.l} "    # max intron length
    "-o {params.o} "    # output prefix
    "-r {output.output_dir} "    # output dir
    "2>{log} && "
    "gunzip --keep --force {output.count_file} 2>{log} && "
    "python {params.bed_file_create_script} "
    "--leafcutter_intron_file {output.unzipped_count_file} --bed_file {output.bed_file} 2>{log}"
shell:
    "{params.leafcutter_installation_dir}/leafviz/gtf2leafcutter.pl -o {output.output_dir}/{params.output_prefix} "
    "{input.reference_genome_annotation_file} 2>{log}"
script:
    "../scripts/leafcutter/create_leafcutter_group_files.py"
run:
    # Extract significant clusters (column: p.adjust)
    # p < 0.1
    import pandas as pd
    df = pd.read_csv(input[0], sep="\t")
    df = df[df["p.adjust"] < 0.1]
    df = df.sort_values(by="p.adjust")
    df.to_csv(output[0], sep="\t", index=False)
script:
    "../../../scripts/create_report_html_files.R"
shell:
    "{params.leafcutter_installation_dir}/leafviz/prepare_results.R "
    "-m {input.group_file} "
    "-f {params.fdr} "
    "{input.count_file} "
    "{input.sample_analysis_cluster_significance_file} {input.sample_analysis_effect_sizes_file} "
    "{params.annotation_files_prefix} "
    "-o {output}"
shell:
    "R -e 'devtools::install_github(\"davidaknowles/leafcutter/leafcutter\")'; "
    "{params.leafcutter_installation_dir}/scripts/leafcutterMD.R --num_threads {threads} "
    "--output_prefix {output.output_dir}/{params.output_prefix} {input} 2>{log}"
script:
    "../scripts/leafcutter/analyze_leafcutterMD_output.py"
script:
    "../../../scripts/create_report_html_files.R"
shell:
    "regtools junctions extract "
    "-a {params.a} "
    "-m {params.m} "
    "-M {params.M} "
    "-s {params.strandedness} "
    "-o {output} "
    "{input.bam_file} 2>{log}"
script:
    "../scripts/private_junction_detection/extract_actual_junctions.py"
shell:
    "sort-bed {input.in_file} >{output.output_file} 2>{log}"
shell:
    "gtf2bed <{input.gtf_file} | grep -w gene | sort-bed - >{output.bed_file} 2>{log};"
    "python3 {input.chrom_transform_script} --input_gtf_file {output.bed_file} "
    "--output_gtf_file {output.bed_file_transformed} --log_file {log}"
shell:
    "bedtools closest {params.extra} -a {input.junc_file} -b {input.annotation_file} > {output} 2>{log}"
script:
    "../scripts/private_junction_detection/junction_collector.py"
script:
    "../scripts/private_junction_detection/filter_junctions.py"
run:
    # Extract the first 250 entries (plus header) from each file and write them to the output
    for input_file, output_file in zip(input, output):
        with open(input_file, "r") as in_file:
            with open(output_file, "w") as out_file:
                for i, line in enumerate(in_file):
                    if i < 251:     # header + 250 entries
                        out_file.write(line)
                    else:
                        break
script:
    "../scripts/private_junction_detection/insert_gene_symbol_and_gene_id.R"
script:
    "../../../scripts/create_report_html_files.R"
script:
    "../scripts/private_junction_detection/filter_junctions.py"
run:
    import pandas as pd

    for input_file, output_file in zip(input, output):
        df = pd.read_csv(input_file, sep="\t")
        # filter for significant results (p-value < 0.05 and FDR < 0.10)
        df = df[(df["PValue"] < 0.05) & (df["FDR"] < 0.10)]
        df = df.sort_values(by=["PValue"])

        # Extract top x results
        df = df.head(params.top_x_results)

        # write to file
        df.to_csv(output_file, sep="\t",index=False)
script:
    "../../../scripts/create_report_html_files.R"
SnakeMake From line 286 of rules/rmats.smk
script:
    "../scripts/summary/create_splice_results_summary_file.py"
script:
    "../../../scripts/create_report_html_files.R"
library(BiocParallel)   # For parallelization needed
library(DEXSeq)         # For differential expression analysis

library(dplyr)


# -------------------- Main function --------------------
main_function <- function() {
  # ----------------- 1. Load snakemake variables -----------------
  # inputs
  input_dxr_object_file_list <- snakemake@input[["dexseq_results_object_file_list"]]

  # outputs
  output_html_report_file_array <- snakemake@output[["result_html_summary_report_file_list"]]

  # params
  summary_report_fdr <- snakemake@params[["summary_report_fdr"]]
  threads <- snakemake@threads

  # ----------------- 2. Prepare analysis -----------------
  # ------ 2.1 Set number of threads --------
  BPPARAM <- MulticoreParam(threads)


  # ----------------- 3. Run analysis -----------------
  # ------ 3.1 Create html summary report for condition group --------
  for (i in c(1:length(input_dxr_object_file_list))) {
    # ------ 3.1 Load DEXSeq results object --------
    current_dexseq_result_obj <- readRDS(file=input_dxr_object_file_list[i])

    # Output directory & file
    current_output_html_report_file <- output_html_report_file_array[i]
    current_output_html_report_dir <- dirname(current_output_html_report_file)

    # HTML Summary with linkouts
    tryCatch(
      expr = {
        print("Creating HTML report")
        print("1. Create dir")
        dir.create(current_output_html_report_dir, showWarnings = FALSE)

        print("2. Create HTML report")
        DEXSeqHTML(current_dexseq_result_obj, FDR=summary_report_fdr, path=current_output_html_report_dir,
                   file=basename(current_output_html_report_file), BPPARAM=BPPARAM)
      },
      error = function(e) {
        print("Error in DEXSeqHTML")
        print(e)
        print("Saving Error instead of HTML report")
        sink(file=current_output_html_report_file); print(e); sink()
      },
      finally = {
          print("Finished DEXSeqHTML")
      }
    )
  }
}

# -------------------- Run main function --------------------
main_function()
library("BiocParallel")   # For parallelization needed
library("DEXSeq")         # For differential expression analysis

library("dplyr")
source(snakemake@params[["load_subreadOutput_script"]])      # Source script to import Subread output


runDexseqAnalysis <- function(countFile, input_flattened_gtf_file, sampleTable,
                              dexseq_results_object_file, output_csv_file, BPPARAM) {
    # ------ A. Load count data into table -------
    # The (default) design formula ~ sample + exon + condition:exon models the interaction between condition and exon.
    # Using this formula, we test for differences in exon usage caused by changes in the "condition" variable.
    dxd <- DEXSeqDataSetFromFeatureCounts(
        countFile,
        flattenedfile = input_flattened_gtf_file,
        sampleData = sampleTable
    )

#     dxd = DEXSeqDataSetFromHTSeq(
#        countFile,
#        sampleData=sampleTable,
#        design= ~ sample + exon + condition:exon,
#        flattenedfile=input_flattened_gtf_file
#     )

    # ------ B. Normalization -------
    dxd <- estimateSizeFactors(dxd)      # Normalization -> Uses same method as DESeq2

    # -------- 4.3 Dispersion estimation ---------
    # To test for differential exon usage, we need to estimate the variability of the data.
    # This is necessary to be able to distinguish technical and biological variation (noise) from real
    # effects on exon usage due to the different conditions.
    dxd <- estimateDispersions(dxd, BPPARAM=BPPARAM)

    # --------- C. Testing for differential exon usage ----------------
    # For each gene, DEXSeq fits a generalized linear model with the formula
    # ~sample + exon + condition:exon
    # and compares it to the smaller model (the null model)
    # ~ sample + exon.

    # exon: Factor with 2 levels: this and others
    # Explanation for linear models in R: For every coefficient to add into formula: use symbol "+"
    # Interactions are separated by colon -> condition:exon -> interpreted as multiplication term

    # Testing: The deviances of both fits are compared using a χ2-distribution, providing a p value.
    # Based on this p-value, we can decide whether the null model is sufficient to explain the data,
    # or whether it may be rejected in favour of the alternative model, which contains an interaction
    # coefficient for condition:exon. The latter means that the fraction of the gene's reads that fall
    # onto the exon under test differs significantly between the experimental conditions.
    dxd <- testForDEU(dxd, BPPARAM=BPPARAM)

    # --------- D. Compute exon fold change ----------------
    # Compute exon fold change numbers with formula:
    # count ~ condition + exon + condition:exon
    dxd <- estimateExonFoldChanges(dxd, fitExpToVar="condition", BPPARAM=BPPARAM)

    # ------- E. Results ------------
    # Summarize results, save R-object and write result summary to file
    dxr1 <- DEXSeqResults(dxd)

    print("Save DEXSeq results object in R-file")
    saveRDS(dxr1, file=dexseq_results_object_file)

    print("Save summary of results in CSV-file")
    write.csv(dxr1, file=output_csv_file, row.names=TRUE)
}

# -------------------- Main function --------------------
main <- function(){
    # ----------------- 1. Load snakemake variables -----------------
    # inputs
    input_flattened_gtf_file <- snakemake@input[["flattened_gtf_file"]]
    input_exon_counting_bin_file <- snakemake@input[["exon_counting_bin_file"]]

    # params
    input_sample_ids <- snakemake@params[["sample_ids"]]
    input_sample_conditions <- snakemake@params[["sample_conditions"]]
    threads <- snakemake@threads

    # outputs
    dexseq_results_object_file <- snakemake@output[["dexseq_results_object_file"]]
    output_csv_file <- snakemake@output[["result_summary_csv_file"]]

    # ----------------- 2. Prepare analysis -----------------
    # ------ 2.1 Set number of threads --------
    BPPARAM <- MulticoreParam(threads)
    # register(MulticoreParam(threads))

    # ------ 2.2 Load annotation data into table -------
    # Table: One row for each library (each sample)
    # Columns: For all relevant information -> covariates
    # If only one covariate is used, it has to be named "condition"!
    overallSampleTable <- data.frame(
        row.names = input_sample_ids,
        condition = input_sample_conditions
       )
    print("Sample table:")
    print(overallSampleTable)

    # --------------- 2.3 Extract only needed columns from Subread output ----------------
    # Extract only needed columns from Subread output
    # counts_file <- fread(input_exon_counting_bin_file, header=TRUE, sep="\t", stringsAsFactors=FALSE)
    counts_file <- read.table(input_exon_counting_bin_file, header=TRUE, check.names=FALSE, sep="\t", stringsAsFactors=FALSE)
    keep_cols <- c("Geneid", "Chr", "Start", "End", "Strand", "Length", input_sample_ids)
    # Extract only needed columns from Subread output and ensure order of columns
    subset_counts <- counts_file[names(counts_file) %in% keep_cols][keep_cols]
    # Write selected columns to temporary file
    tmp_subset_counts_file <- tempfile(fileext = ".tsv")
    write.table(subset_counts, file=tmp_subset_counts_file, sep="\t", quote=FALSE, row.names=FALSE)


    # ----------------- 3. Run analysis -----------------
    # Run dexseq analysis
    print("Run analysis")
    runDexseqAnalysis(tmp_subset_counts_file, input_flattened_gtf_file, overallSampleTable,
                  dexseq_results_object_file, output_csv_file, BPPARAM)
}

# Run main function
main()
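
The saved DEXSeqResults object can be reloaded to filter exon bins without repeating the fit. A minimal sketch, assuming the RDS file written by runDexseqAnalysis() above (the file name is only a placeholder):

# Minimal sketch: reload the DEXSeq results object and keep significant exon bins
# Assumption: "dexseq_results.rds" stands in for dexseq_results_object_file
dxr <- readRDS("dexseq_results.rds")

dxr_df <- as.data.frame(dxr)
significant <- subset(dxr_df, !is.na(padj) & padj < 0.05)
significant <- significant[order(significant$padj), ]
head(significant[, c("groupID", "featureID", "exonBaseMean", "padj")])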
import pandas as pd


def clean_summary_csv_file(input, output):
    """
    Removes the R vector syntax (c("...", "...")) from the summary file and replaces the comma separators with semicolons

    :param input:       Original summary file from DEXSeq (path)
    :param output:      Output file (path)
    :return:
    """
    with open(input, 'r') as f:
        filedata = f.read()

    # e.g.: c("ENST000000494424", "ENST00000373020") -> ENST000000494424;ENST00000373020
    # remove whitespaces that are included in c(...)
    filedata = filedata.replace('c(', '')
    filedata = filedata.replace(')', '')
    filedata = filedata.replace('", "', '";"')
    filedata = filedata.replace('"', '')
    filedata = filedata.replace(",\n", "")
    filedata = filedata.replace(", \n", "")
    filedata = filedata.replace(",  \n", "")

    with open(output, 'w') as file:
        file.write(filedata)


def extract_significant_results(input_file, output_file, p_value_cutoff=0.05, top_x_select=1000):
    """
    Extracts the significant results from the DEXSeq output file
    :param input_file:
    :param output_file:
    :param p_value_cutoff:  adjusted p-value (padj) cutoff
    :param top_x_select:    maximum number of top results to keep
    :return:
    """
    # Read in the results
    results = pd.read_csv(input_file, low_memory=False)

    # Remove count data (drop columns that start with "countData.")
    results = results.loc[:, ~results.columns.str.startswith('countData.')]

    # Remove entries, where the p-value is not significant
    # 1. Remove NA values
    results = results[results['padj'].notna()]      # "NA" values are not significant
    # 2. Remove entries with p-value > 0.05
    results = results[results['padj'] < p_value_cutoff]    # p-value > 0.05 are not significant

    # Sort the results by p-value
    results = results.sort_values(by=['padj'])

    # Select the top x results
    results = results.head(top_x_select)

    # Write the results to a file
    results.to_csv(output_file, index=False)


def integrate_gene_names(result_file, gene_mapping_file, output_file):
    """
    Replace first two columns (gene/group IDs) with gene names

    :param result_file:
    :param gene_mapping_file:
    :param output_file:
    :return:
    """
    # Read in the results
    results = pd.read_csv(result_file, low_memory=False)
    results['ensembl_gene_id'] = results['groupID'].str.split('+').str[0]

    # Get gene names from Ensembl IDs
    ensembl_mappings_df = pd.read_csv(gene_mapping_file, low_memory=False)

    # Merge the two dataframes
    results = pd.merge(results, ensembl_mappings_df, on='ensembl_gene_id')
    sorted_results = results.sort_values(by=['padj'])   # Sort the results by p-value -> ascending order

    # Replace the first two columns with the last two columns
    columns = sorted_results.columns.tolist()[-2:] + sorted_results.columns.tolist()[2:-2]
    sorted_results = sorted_results[columns]

    # Save the results to a file
    sorted_results.to_csv(output_file, index=False)


if __name__ == "__main__":
    # Summary file from DEXSeq & Gene mapping file
    snakemake_summary_file = snakemake.input.summary_csv_file
    snakemake_gene_mapping_file = snakemake.params.gene_mapping_file
    top_x_results = snakemake.params.top_x_results

    # Output file
    snakemake_output_file = snakemake.output.filtered_results_file

    # Clean the summary file
    clean_summary_csv_file(snakemake_summary_file, snakemake_output_file)
    # Extract the significant results
    extract_significant_results(snakemake_output_file, snakemake_output_file, top_x_select=top_x_results)
    # Integrate gene names
    integrate_gene_names(snakemake_output_file, snakemake_gene_mapping_file, snakemake_output_file)
import pandas as pd


if __name__ == '__main__':

    count_files = snakemake.input["count_files"]
    output_total_count_file = snakemake.output["total_counts_file"]

    # Iterate over all count files and merge them into one dataframe
    total_counts_df = pd.DataFrame()
    for i, count_file in enumerate(count_files):
        # Current sample ID
        current_sample_id = count_file.split("/")[-1].split(".")[0]
        # Read count file & add sample ID as column
        current_count_df = pd.read_csv(count_file, sep="\t", low_memory=False, skiprows=1)
        current_count_df = current_count_df.set_axis([*current_count_df.columns[:-1], current_sample_id], axis=1)

        if i == 0:  # First count file
            total_counts_df = current_count_df
        else:    # All other count files
            # Add only last column to the total count dataframe
            total_counts_df = total_counts_df.merge(current_count_df,  how="outer",
                                                    on=["Geneid", "Chr", "Start", "End", "Strand", "Length"])

    # Write total count dataframe to file
    total_counts_df.to_csv(output_total_count_file, sep="\t", index=False)
library("FRASER")


main_function <- function() {
  input_fraser_analysis_set_object_file <- snakemake@input[["fraser_analysis_set_object_file"]]

  # Output: Differential splicing analysis - Plots
  output_summary_table_file <- toString(snakemake@output[["csv_summary_table_file"]])
  plot_aberrant_events_per_sample_file <- toString(snakemake@output["plot_aberrant_events_per_sample_file"][1])
  plot_qq_plot_file <- toString(snakemake@output["plot_qq_plot_file"][1])

  # 1. Create FRASER object
  dir_name <- dirname(dirname(input_fraser_analysis_set_object_file))
  file_name <- basename(input_fraser_analysis_set_object_file)
  fds <- FRASER::loadFraserDataSet(dir=dir_name, name=file_name)
  print("FRASER: FRASER dataset object loaded")


  # 2. Collect results and save them in a data frame
  res <- as.data.table(results(fds))
  resOrdered <- res[order(res$pValue),]					# Sort results by p-value
  # Exporting results
  resOrderedDF <- as.data.frame(resOrdered)
  write.csv(resOrderedDF, file=output_summary_table_file)


  # 3. Create Plots
  # 3.1 Plot the number of aberrant events per sample
  tryCatch(
    expr = {
      # Plot number of aberrant events per sample based on the given cutoff values
      print("Plotting number of aberrant events per sample")
      print(plot_aberrant_events_per_sample_file)
      png(filename=plot_aberrant_events_per_sample_file, width=800, height=800)
      print(FRASER::plotAberrantPerSample(fds))
      dev.off()
    },
    error = function(e) {
      print("Error in creating aberrant events per sample plot")
      print(e)
    }
  )

  # 3.2 Plot the qq-plot
  tryCatch(
    expr = {
      # Global qq-plot (on gene level since aggregate=TRUE)
      print("Plotting qq-plot")
      print(plot_qq_plot_file)
      jpeg(filename=plot_qq_plot_file, width=800, height=800)
      print(FRASER::plotQQ(fds, aggregate=TRUE, global=TRUE))
      dev.off()
    },
    error = function(e) {
        print("Error in creating qq-plot")
        print(e)
    }
  )
}

main_function()
library("FRASER")

library("TxDb.Hsapiens.UCSC.hg19.knownGene")
library("org.Hs.eg.db")

# Requirements: 1. Sample annotation
# 2. Two count matrices are needed: one containing counts for the splice junctions, i.e. the
#    split read counts, and one containing the splice site counts, i.e. the counts of non-split
#    reads overlapping the splice sites present in the splice junctions.


set_up_fraser_dataset_object <- function(sample_annotation_file_path) {
  #' Function to set up a FRASER object
  #'
  #' @param sample_annotation_file_path     Path to sample annotation file
  #' @param output_dir_path     Path to output directory
  #'
  #' @return FRASER object

  # Load annotation file
  annotationTable <- fread(sample_annotation_file_path, header=TRUE, sep="\t", stringsAsFactors=FALSE)
  annotationTable$bamFile <- file.path(annotationTable$bamFile)   # Required for FRASER

  # --------------- Creating a FRASER object ----------------
  # create FRASER object
  settings <- FraserDataSet(colData=annotationTable, name="Fraser Dataset")

  # Via count reads
  fds <- countRNAData(settings)

  # Via raw counts
  # junctionCts <- fread(additional_junction_counts_file, header=TRUE, sep="\t", stringsAsFactors=FALSE)
  # spliceSiteCts <- fread(additional_splice_site_counts_file, header=TRUE, sep="\t", stringsAsFactors=FALSE)
  # fds <- FraserDataSet(colData=annotationTable, junctions=junctionCts, spliceSites=spliceSiteCts, workingDir="FRASER_output")

  return(fds)
}


run_filtering <- function(fraser_object,
                          plot_filter_expression_file, plot_cor_psi5_heatmap_file,
                          plot_cor_psi3_heatmap_file, plot_cor_theta_heatmap_file) {
  #' Function to run filtering
  #'
  #' @param fraser_object     FRASER object
  #' @param output_dir_path     Path to output directory
  #'
  #' @return FRASER object


  # --------------- Filtering ----------------
  # Compute main splicing metric -> The PSI-value
  fds <- calculatePSIValues(fraser_object)
  # Run filters on junctions: At least one sample has 20 reads, and at least 5% of the samples have at least 1 read
  # Filter=FALSE, since we first plot and subsequently apply subsetting
  fds <- filterExpressionAndVariability(fds,
                                        minExpressionInOneSample=20,
                                        minDeltaPsi=0.0,  # Only junctions with a PSI-value difference of at least x% between two samples are considered
                                        filter=FALSE       # If TRUE, a subsetted fds containing only the introns that passed all filters is returned.
                                        )

  # Plot filtering results
  jpeg(plot_filter_expression_file, width=800, height=800)
  print(plotFilterExpression(fds, bins=100))
  dev.off()

  # Finally apply filter results
  fds_filtered <- fds[mcols(fds, type="j")[,"passed"],]

  # ---------------- Heatmaps of correlations ----------------
  # 1. Correlation of PSI5
  tryCatch(
    expr = {
      # Heatmap of the sample correlation
      jpeg(plot_cor_psi5_heatmap_file, width=800, height=800)
      plotCountCorHeatmap(fds_filtered, type="psi5", logit=TRUE, normalized=FALSE)
      dev.off()
    },
    error = function(e) {
        print("Error in creating Heatmap of the sample correlation")
        print(e)
    }
  )
  # tryCatch(
  #   expr = {
  #     # Heatmap of the intron/sample expression
  #     jpeg(plot_cor_psi5_top100_heatmap_file, width=800, height=800)
  #     plotCountCorHeatmap(fds_filtered, type="psi5", logit=TRUE, normalized=FALSE,
  #                     plotType="junctionSample", topJ=100, minDeltaPsi = 0.01)
  #     dev.off()
  #   },
  #   error = function(e) {
  #       print("Error in creating Heatmap of the intron/sample expression")
  #       print(e)
  #   }
  # )

  # 2. Correlation of PSI3
  tryCatch(
      expr = {
      # Heatmap of the sample correlation
      jpeg(plot_cor_psi3_heatmap_file, width=800, height=800)
      plotCountCorHeatmap(fds_filtered, type="psi3", logit=TRUE, normalized=FALSE)
      dev.off()
      },
      error = function(e) {
          print("Error in creating Heatmap of the sample correlation")
          print(e)
      }
  )
  # tryCatch(
  #   expr = {
  #     # Heatmap of the intron/sample expression
  #     jpeg(plot_cor_psi3_top100_heatmap_file, width=800, height=800)
  #     plotCountCorHeatmap(fds_filtered, type="psi3", logit=TRUE, normalized=FALSE,
  #                     plotType="junctionSample", topJ=100, minDeltaPsi = 0.01)
  #     dev.off()
  #   },
  #   error = function(e) {
  #       print("Error in creating Heatmap of the intron/sample expression")
  #       print(e)
  #   }
  # )

  # 3. Correlation of Theta
  tryCatch(
      expr = {
      # Heatmap of the sample correlation
      jpeg(plot_cor_theta_heatmap_file, width=800, height=800)
      plotCountCorHeatmap(fds_filtered, type="theta", logit=TRUE, normalized=FALSE)
      dev.off()
      },
      error = function(e) {
          print("Error in creating Heatmap of the sample correlation")
          print(e)
      }
  )
  # tryCatch(
  #   expr = {
  #     # Heatmap of the intron/sample expression
  #     jpeg(plot_cor_theta_top100_heatmap_file, width=800, height=800)
  #     plotCountCorHeatmap(fds_filtered, type="theta", logit=TRUE, normalized=FALSE,
  #                     plotType="junctionSample", topJ=100, minDeltaPsi = 0.01)
  #     dev.off()
  #   },
  #   error = function(e) {
  #       print("Error in creating Heatmap of the intron/sample expression")
  #       print(e)
  #   }
  # )

  return(fds_filtered)
}


detect_dif_splice <- function(fraser_object, output_fraser_analysis_set_object_file,
                              plot_normalized_cor_psi5_heatmap_file,
                              plot_normalized_cor_psi3_heatmap_file,
                              plot_normalized_cor_theta_heatmap_file) {
  #' Function to detect differential splicing
  #'
  #' @param fraser_object                              FRASER object
  #' @param output_fraser_analysis_set_object_file     Output path for the saved FRASER dataset
  #' @param plot_normalized_cor_psi5_heatmap_file      Output path for the normalized psi5 correlation heatmap
  #' @param plot_normalized_cor_psi3_heatmap_file      Output path for the normalized psi3 correlation heatmap
  #' @param plot_normalized_cor_theta_heatmap_file     Output path for the normalized theta correlation heatmap
  #'
  #' @return FRASER object


  # ----------------- Detection of differential splicing -----------------
  # 1. Fitting the splicing model:
  # Normalizing data and correct for confounding effects by using a denoising autoencoder
  # This is computationally heavy on real-sized datasets and can take a while

  # q: The encoding dimension to be used during the fitting procedure. Can be fitted with optimHyperParams
  # see: https://rdrr.io/bioc/FRASER/man/optimHyperParams.html
  fds <- FRASER(fraser_object, q=c(psi5=3, psi3=5, theta=2))

  # Plot 1: PSI5
  tryCatch(
    expr = {
      # Check results in heatmap
      jpeg(plot_normalized_cor_psi5_heatmap_file, width=800, height=800)
      plotCountCorHeatmap(fds, type="psi5", normalized=TRUE, logit=TRUE)
      dev.off()
    },
    error = function(e) {
        print("Error in creating Heatmap of the sample correlation")
        print(e)
    }
  )

  # Plot 2: PSI3
  tryCatch(
      expr = {
      # Check results in heatmap
      jpeg(plot_normalized_cor_psi3_heatmap_file, width=800, height=800)
      plotCountCorHeatmap(fds, type="psi3", normalized=TRUE, logit=TRUE)
      dev.off()
      },
      error = function(e) {
          print("Error in creating Heatmap of the sample correlation")
          print(e)
      }
  )

  # Plot 3: Theta
  tryCatch(
      expr = {
      # Check results in heatmap
      jpeg(plot_normalized_cor_theta_heatmap_file, width=800, height=800)
      plotCountCorHeatmap(fds, type="theta", normalized=TRUE, logit=TRUE)
      dev.off()
      },
      error = function(e) {
          print("Error in creating Heatmap of the sample correlation")
          print(e)
      }
  )


  # 2. Differential splicing analysis
  # 2.1 annotate introns with the HGNC symbols of the corresponding gene
  txdb <- TxDb.Hsapiens.UCSC.hg19.knownGene
  orgDb <- org.Hs.eg.db
  fds <- annotateRangesWithTxDb(fds, txdb=txdb, orgDb=orgDb)

  # 2.2 save the fitted FRASER dataset; results can later be retrieved with the
  #     default and recommended cutoffs (padj <= 0.05 and |deltaPsi| >= 0.3)
  print("Saving FraserAnalysisDataSetTest results")
  # Saves RDS-files into the savedObjects folder
  saveFraserDataSet(fds, dir=dirname(dirname(output_fraser_analysis_set_object_file)),
                    name=basename(output_fraser_analysis_set_object_file))


  # ----------------- Finding splicing candidates in patients -----------------
  # -> Plotting the results
  # tryCatch(
  #   expr = {
      # -------- Sample specific plots --------
      # jpeg(file.path(output_dir_path, "psi5_volcano_plot_sample1.jpg"), width=800, height=800)
      # plotVolcano(fds, type="psi5", annotationTable$sampleID[1])
      # dev.off()

      # jpeg(file.path(output_dir_path, "psi5_expression_sample1.jpg"), width=800, height=800)
      # plotExpression(fds, type="psi5", result=sampleRes[1])
      # dev.off()

      # jpeg(file.path(output_dir_path, "expected_vs_observed_psi_sample1.jpg"), width=800, height=800)
      # plotExpectedVsObservedPsi(fds, result=sampleRes[1])
      # dev.off()
  #   },
  #   error = function(e) {
  #       print("Error in creating plots")
  #       print(e)
  #   }
  # )

  return(fds)
  }


main_function <- function() {
  in_sample_annotation_file <- snakemake@input[["sample_annotation_file"]]

  # Output: Plot files - After filtering, no normalization
  plot_filter_expression_file <- snakemake@output[["plot_filter_expression_file"]]
  plot_cor_psi5_heatmap_file <- snakemake@output[["plot_cor_psi5_heatmap_file"]]
  plot_cor_psi3_heatmap_file <- snakemake@output[["plot_cor_psi3_heatmap_file"]]
  plot_cor_theta_heatmap_file <- snakemake@output[["plot_cor_theta_heatmap_file"]]

  # TODO: Set plotType to "sampleCorrelation"; however, these plots are not helpful and can be ignored...
  # plot_cor_psi5_top100_heatmap_file <- snakemake@output[["plot_cor_psi5_top100_heatmap_file"]]
  # plot_cor_psi3_top100_heatmap_file <- snakemake@output[["plot_cor_psi3_top100_heatmap_file"]]
  # plot_cor_theta_top100_heatmap_file <- snakemake@output[["plot_cor_theta_top100_heatmap_file"]]

  # Output: Plot files - After filtering, normalization
  plot_normalized_cor_psi5_heatmap_file <- snakemake@output[["plot_normalized_cor_psi5_heatmap_file"]]
  plot_normalized_cor_psi3_heatmap_file <- snakemake@output[["plot_normalized_cor_psi3_heatmap_file"]]
  plot_normalized_cor_theta_heatmap_file <- snakemake@output[["plot_normalized_cor_theta_heatmap_file"]]

  # Output: Differential splicing analysis
  output_fraser_dataset_object_file <- snakemake@output[["fraser_data_set_object_file"]]


  # TODO: Integrate additional count files from external resources -> Failed...
  # additional_junction_counts_file <- snakemake@params[["additional_junction_counts_file"]]
  # additional_splice_site_counts_file <- snakemake@params[["additional_splice_site_counts_file"]]

  threads <- snakemake@threads
  register(MulticoreParam(workers=threads))

  # 1. Create FRASER object
  fraser_obj <- set_up_fraser_dataset_object(in_sample_annotation_file)
  print("FRASER: FRASER dataset object created")

  # 2. Run filtering
  filtered_fraser_obj <- run_filtering(fraser_obj,
                                       plot_filter_expression_file,
                                       plot_cor_psi5_heatmap_file,
                                       plot_cor_psi3_heatmap_file,
                                       plot_cor_theta_heatmap_file)
  print("FRASER: Filtering done")

  # 3. Detect differential splicing
  detect_dif_splice(filtered_fraser_obj, output_fraser_dataset_object_file,
                    plot_normalized_cor_psi5_heatmap_file,
                    plot_normalized_cor_psi3_heatmap_file,
                    plot_normalized_cor_theta_heatmap_file
                    )
  print("FRASER: Differential splicing analysis done")
}

main_function()
library(DESeq2)
source(snakemake@input[["deseq2_constructor_script"]])  				# Load IRFinder-related function

results = read.table(snakemake@input[["irfinder_results_file_paths_collection"]])
paths = as.vector(results$V1)                                            # File names must be saved in a vector
experiment = read.table(snakemake@input[["sample_condition_mapping_file"]], header=T)
experiment$Condition=factor(experiment$Condition,levels=c(snakemake@wildcards[["condition"]], "None"))    # Set the current condition as the first (baseline) level in the analysis
rownames(experiment)=NULL                                                # Force removing rownames

# WARNING: make sure the rownames of `experiment` are set to NULL.
# WARNING: users MUST check that the order of files in `paths` matches the order of samples in `experiment` before continuing

metaList=DESeqDataSetFromIRFinder(filePaths=paths, designMatrix=experiment, designFormula=~1)
# The above line generates a meta list containing four slots
# First slot is a DESeq2 Object that can be directly passed to DESeq2 analysis.
# Second slot is a matrix for trimmed means of intron depth
# Third slot is a matrix for correcting splicing depth flanking introns
# Fourth slot is a matrix for maximum splicing reads at either ends of introns
# We build a "null" regression model on the intercept only.
# A "real" model can be assigned either here directly, or in the downstream step. See below

dds = metaList$DESeq2Object                       # Extract DESeq2 Object with normalization factors ready
print("Check design of matrix")
colData(dds)                                      # Check design of matrix


# Please note that the sample size has been doubled and one additional column "IRFinder" has been added.
# This is because IRFinder considers that each sample has two sets of counts: one for reads inside the intronic region
# and one for reads at the splice site, indicated by "IR" and "Splice" respectively.
# "IRFinder" is considered as an additional variable in the GLM model.
# Please also be aware that size factors have been set to 1 for all samples. Re-estimation of size factors is NOT recommended and would bias the result.
# More details at the end of the instruction.

design(dds) = ~Condition + Condition:IRFinder     # Build a formula of GLM. Read below for more details.
dds = DESeq(dds)                                  # Estimate parameters and fit to model

print("Check actual variable names assigned by DeSeq2")
resultsNames(dds)                                 # Check the actual variable name assigned by DESeq2
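# Note (assumption): the coefficient names used below ("ConditionWT.IRFinderIR", "ConditionKO.IRFinderIR")
# follow the IRFinder tutorial, where the two condition levels are named "WT" and "KO".
# With other level names the coefficients are expected to follow the same
# "Condition<level>.IRFinderIR" pattern reported by resultsNames(dds) above,
# and the names below would need to be adapted accordingly.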


res.WT = results(dds, name = "ConditionWT.IRFinderIR")
# This tests whether the number of IR reads is significantly different from the number of normally spliced reads in the WT samples.
# We might only be interested in the "log2FoldChange" column, rather than the significance.
# This is because "log2FoldChange" represents log2(number of intronic reads/number of normally spliced reads).
# So we obtain the value of (intronic reads/normal spliced reads) by

WT.IR_vs_Splice=2^res.WT$log2FoldChange

# As IR ratio is calculated as (intronic reads/(intronic reads+normal spliced reads))
# We can easily convert the above value to IR ratio by

IRratio.WT = WT.IR_vs_Splice/(1+WT.IR_vs_Splice)

# Similarly, we can get IR ratio in the KO samples
res.KO = results(dds, name = "ConditionKO.IRFinderIR")
KO.IR_vs_Splice=2^res.KO$log2FoldChange
IRratio.KO = KO.IR_vs_Splice/(1+KO.IR_vs_Splice)

# Finally we can test the difference of (intronic reads/normal spliced reads) ratio between WT and KO
res.diff = results(dds, contrast=list("ConditionKO.IRFinderIR","ConditionWT.IRFinderIR"))
write.csv(as.data.frame(res.diff), file=snakemake@output[["output_results_csv_file"]], row.names=TRUE)   # Save the contrast test results

# We can plot the changes of IR ratio with p values
# In this example we define significant IR changes as
# 1) IR changes of at least 10% (in either direction) and
# 2) adjusted p-values less than 0.05

IR.change = IRratio.KO - IRratio.WT
# Create plot and save it as JPEG
output_plot_file = snakemake@output[["output_plot_file"]]
jpeg(file=output_plot_file)
print(plot(IR.change,col=ifelse(res.diff$padj < 0.05 & abs(IR.change)>=0.1, "red", "black")))
dev.off()
import pandas as pd


# filters input_df according to given p-val threshold
# -> Returns filtered dataframe
def filter_dfs(input_df, p_val_threshold=0.01, columns=None):
    """
    Filters input_df according to given p-val threshold
    Sort & extract top 1000 rows
    :param input_df:
    :param p_val_threshold:
    :param columns:
    :return:
    """
    if columns is None or len(columns) == 0:
        columns = input_df.columns

    sub_df = input_df[columns]

    # filtering: only keep rows with p-val < p_val_threshold
    # .any(axis=1) -> keep rows with at least one True value
    # .copy() so the column assignment below does not trigger a SettingWithCopyWarning
    output_df = sub_df[(sub_df <= p_val_threshold).any(axis=1)].copy()

    # Sort by minimal p-value
    output_df['min_pvalue'] = output_df.min(axis=1)
    output_df = output_df.sort_values(by=['min_pvalue'])

    # Extract only top 1000 results
    output_df = output_df.head(n=1000)

    return output_df


if __name__ == '__main__':
    # Load files
    all_outlier_introns_pVals_file = snakemake.input["all_outlier_introns_pVals_file"]
    all_outlier_clusters_pVals_file = snakemake.input["all_outlier_clusters_pVals_file"]
    all_outlier_effSize_file = snakemake.input["all_outlier_effSize_file"]

    # Output files
    all_filtered_introns_file = snakemake.output["all_filtered_introns_file"]
    condition_filtered_introns_file = snakemake.output["condition_filtered_introns_file"]
    all_filtered_clusters_file = snakemake.output["all_filtered_clusters_file"]
    condition_filtered_clusters_file = snakemake.output["condition_filtered_clusters_file"]

    # Load sample names for affected samples
    sample_ids = snakemake.params.patient_sample_ids
    pvalue_threshold = snakemake.params.pvalue_threshold

    # Load dataframes of LeafcutterMD results
    introns_df = pd.read_csv(all_outlier_introns_pVals_file, sep='\t')
    clusters_df = pd.read_csv(all_outlier_clusters_pVals_file, sep='\t')

    # Intron assessment results
    filter_dfs(introns_df, p_val_threshold=pvalue_threshold).to_csv(all_filtered_introns_file, sep='\t')
    filter_dfs(introns_df, p_val_threshold=pvalue_threshold, columns=sample_ids).to_csv(condition_filtered_introns_file, sep='\t')

    # Cluster assessment results
    filter_dfs(clusters_df, p_val_threshold=pvalue_threshold).to_csv(all_filtered_clusters_file, sep='\t')
    filter_dfs(clusters_df, p_val_threshold=pvalue_threshold, columns=sample_ids).to_csv(condition_filtered_clusters_file, sep='\t')
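
For orientation, a minimal usage sketch of `filter_dfs` on a toy p-value table (run below the definitions above; the sample names and values are made up for illustration):

import pandas as pd

toy = pd.DataFrame({"S1": [0.001, 0.5, 0.2], "S2": [0.8, 0.005, 0.9]},
                   index=["intron_a", "intron_b", "intron_c"])
print(filter_dfs(toy, p_val_threshold=0.01))                    # keeps intron_a and intron_b, adds 'min_pvalue'
print(filter_dfs(toy, p_val_threshold=0.01, columns=["S1"]))    # only sample S1 is checked -> keeps intron_a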
import os


def create_leafcutter_group_file(output_dir, condition, control_sample_ids, patient_sample_ids):
    """
    Creates a group file for leafcutter analysis
    :param output_dir:              Output directory
    :param condition:               Current condition's name
    :param control_sample_ids:      List of control sample ids
    :param patient_sample_ids:
    :return:
    """
    # Create group file
    group_file = os.path.join(output_dir, f"{condition}_group_file.txt")

    group_file_text = ""
    for sample_id in control_sample_ids:
        group_file_text += f"{sample_id}\tcontrol\n"
    for sample_id in patient_sample_ids:
        group_file_text += f"{sample_id}\tpatient\n"

    with open(group_file, "w") as f:
        f.write(group_file_text)


if __name__ == '__main__':
    # Get snakemake variables
    output_dir = snakemake.params.output_dir
    control_samples_ids = snakemake.params.control_samples["sample_name"].tolist()
    condition_samples_array = snakemake.params.condition_samples_array

    for condition_samples in condition_samples_array:
        condition_samples_ids = condition_samples["sample_name"].tolist()
        current_condition = condition_samples["condition"].iloc[0]   # positional access; the subset's index may not start at 0
        print("condition: ", current_condition)
        print("control samples: ", control_samples_ids)
        print("condition samples: ", condition_samples)
        print("condition samples IDs: ", condition_samples_ids)
        create_leafcutter_group_file(output_dir, current_condition, control_samples_ids, condition_samples_ids)
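
A minimal sketch of the group file this script produces (the directory, condition name and sample IDs below are made up):

import os

os.makedirs("toy_output", exist_ok=True)
create_leafcutter_group_file("toy_output", "toy_condition",
                             control_sample_ids=["ctrl_01", "ctrl_02"],
                             patient_sample_ids=["pat_01", "pat_02"])
# toy_output/toy_condition_group_file.txt then contains (tab-separated):
#   ctrl_01   control
#   ctrl_02   control
#   pat_01    patient
#   pat_02    patient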
import pandas as pd


def extract_actual_junctions_from_regtools_file(input_jct_file, output_jct_file, sample_id):
    """
    Extracts the actual junctions from the regtools file
    Meaning:
    ChromStart includes maximum overhang for the junction on the left side -> Add blockSizes[0] to get the actual start
    ChromEnd includes maximum overhang for the junction on the right side -> Subtract blockSizes[1] to get the actual end
    See docs here: https://regtools.readthedocs.io/en/latest/commands/junctions-extract/

    :param input_jct_file:      Input file: Contains all junctions for given sample
    :param output_jct_file:     Output file
    :params sample_id:          ID of current sample

    :return:
    """

    # Column names: each line is an exon-exon junction
    column_names = ["chrom", "chromStart", "chromEnd", "name", "score", "strand",
                    "thickStart", "thickEnd", "itemRgb", "blockCount", "blockSizes", "blockStarts"]

    # Read the file
    junction_df = pd.read_csv(input_jct_file, names=column_names, sep="\t")

    # Extract the actual junctions
    junction_df["max_overhang_before_start"] = junction_df["blockSizes"].str.split(",").str[0].astype(int)
    junction_df["max_overhang_after_end"] = junction_df["blockSizes"].str.split(",").str[-1].astype(int)
    junction_df["exact_jct_start"] = junction_df["chromStart"].astype(int) + junction_df["max_overhang_before_start"]
    junction_df["exact_jct_end"] = junction_df["chromEnd"].astype(int) - junction_df["max_overhang_after_end"]

    # Save the results to a file, but without header...
    junction_df["sample_name"] = sample_id
    reduced_df = junction_df[["chrom", "exact_jct_start", "exact_jct_end", "sample_name", "score", "strand"]]
    reduced_df.to_csv(output_jct_file, index=False, header=False, sep="\t")


if __name__ == "__main__":
    # Input files
    snakemake_input_file = snakemake.input.regtools_junc_files
    # Output file
    snakemake_output_file = snakemake.output.output_file

    sample_id = snakemake.wildcards.sample_id

    # Extract the actual junctions
    extract_actual_junctions_from_regtools_file(snakemake_input_file, snakemake_output_file, sample_id)
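
A toy illustration of the start/end adjustment applied above (the values are made up, not real regtools output):

chrom_start, chrom_end = 100, 250
block_sizes = "20,30"                          # left/right maximum overhangs from the BED12 blockSizes field
left_overhang, right_overhang = (int(x) for x in block_sizes.split(","))
exact_start = chrom_start + left_overhang      # 100 + 20 = 120
exact_end = chrom_end - right_overhang         # 250 - 30 = 220
print(exact_start, exact_end)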
import pandas as pd


def filter_junctions(junction_collection_file, control_samples, condition_samples,
                     only_ctr_junc_file, only_cond_junc_file,
                     max_contrast=0.0,
                     in_all_samples=True):
    """
    Merges the junctions from the different samples into one file
    :param junction_collection_file:    File containing the junctions from all samples
    :param control_samples:             List of samples from condition 1
    :param condition_samples:           List of samples from condition 2
    :param only_ctr_junc_file:          Output file for junctions only in condition 1
    :param only_cond_junc_file:         Output file for junctions only in condition 2
    :param max_contrast:                Part of total nr of reads of all samples
                                        over all conditions, that is allowed to appear.
                                        E.g. 0.2, where total number of reads is 100 -> then max a total of 20 (-> 20%)
                                         reads are allowed to be in the contra-condition
    :param in_all_samples:              If True, junctions have to be in all samples
    :return:
    """

    # Each line is an exon-exon junction
    total_junction_df = pd.read_csv(junction_collection_file, low_memory=False, sep="\t")

    # Select first 5 info columns and respective sample columns
    print("Control samples: ", control_samples)
    print("Condition samples: ", condition_samples)
    selected_columns = total_junction_df.columns[:5].tolist() + control_samples + condition_samples
    # .copy() so the column assignments below do not trigger a SettingWithCopyWarning
    total_junction_df_selected = total_junction_df[selected_columns].copy()

    # Collect total sum & compute maximum contrast
    total_junction_df_selected["total_sum"] = (total_junction_df_selected[control_samples].sum(axis=1)
                                               + total_junction_df_selected[condition_samples].sum(axis=1))
    total_junction_df_selected["max_contrast"] = total_junction_df_selected["total_sum"]*max_contrast

    # ------------- 1. Filter only control junctions -------------
    only_control_junctions_df = apply_filtering(total_junction_df_selected, control_samples, condition_samples, in_all_samples)
    only_control_junctions_df = only_control_junctions_df.sort_values(by="total_sum", ascending=False)

    # ------------- 2. Filter only condition junctions -------------
    only_condition_junctions_df = apply_filtering(total_junction_df_selected, condition_samples, control_samples, in_all_samples)
    only_condition_junctions_df = only_condition_junctions_df.sort_values(by="total_sum", ascending=False)

    # Save the results in output dir
    only_control_junctions_df.to_csv(only_ctr_junc_file, index=False, sep="\t")
    only_condition_junctions_df.to_csv(only_cond_junc_file, index=False, sep="\t")


def apply_filtering(input_df, samples_1, samples_2, in_all_samples_bool):
    """
    Filters the junctions in the input_df
    1. Keep only rows with at least one read across all samples_1 columns
    2. If in_all_samples_bool is True, keep only junctions that are present in every samples_1 sample
    3. Keep only junctions where the summed reads of samples_1 exceed max_contrast while the summed reads of samples_2 stay at or below it

    :param input_df:            Input dataframe
    :param samples_1:           List of samples from condition 1
    :param samples_2:           List of samples from condition 2
    :param in_all_samples_bool:     If True, junctions have to be in all samples of condition 1
    :return:
    """

    # 1. Select only rows where at least one sample has a value > 0
    df_with_s1_jcts = input_df[input_df[samples_1].sum(axis=1) > 0]

    # 2. Every junction has to be in all s1 samples
    if in_all_samples_bool:
        for sample in samples_1:
            df_with_s1_jcts = df_with_s1_jcts[df_with_s1_jcts[sample] > 0]

    # 3. Filtering depending on the max_contrast column
    df_with_s1_jcts = df_with_s1_jcts[(df_with_s1_jcts[samples_1].sum(axis=1) > df_with_s1_jcts["max_contrast"])
                                      & (df_with_s1_jcts[samples_2].sum(axis=1) <= df_with_s1_jcts["max_contrast"])]

    return df_with_s1_jcts


if __name__ == "__main__":
    # Input files
    snakemake_junction_collection_file = snakemake.input.junction_collection_file
    # params
    snakemake_control_samples = snakemake.params.control_samples
    snakemake_condition_samples = snakemake.params.condition_samples
    # Output file
    snakemake_only_control_junctions_file = snakemake.output.only_control_junctions_file
    snakemake_only_condition_junctions_file = snakemake.output.only_condition_junctions_file

    # params
    snakemake_max_contrast = snakemake.params.max_contrast

    # Filter junctions
    filter_junctions(snakemake_junction_collection_file, snakemake_control_samples, snakemake_condition_samples,
                     snakemake_only_control_junctions_file, snakemake_only_condition_junctions_file,
                     snakemake_max_contrast)
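
A toy sketch of the `max_contrast` logic in `apply_filtering` (run below the definitions above; sample names and counts are made up):

import pandas as pd

toy = pd.DataFrame({"c1": [5, 0], "c2": [7, 0], "p1": [0, 10], "p2": [1, 12]})
toy["total_sum"] = toy[["c1", "c2", "p1", "p2"]].sum(axis=1)
toy["max_contrast"] = toy["total_sum"] * 0.1      # tolerate at most 10% of all reads in the contra-condition

# Junctions supported (almost) exclusively by the control samples c1/c2 -> only the first row survives
print(apply_filtering(toy, ["c1", "c2"], ["p1", "p2"], in_all_samples_bool=True))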
library("AnnotationDbi")
library("org.Hs.eg.db")


extract_gene_id_from_info_col <- function(data_frame_obj, info_col, gene_id_col="gene_ensembl_id") {
	"
	Extracts gene ID from info column.
	"
	# Extract gene-IDs from info_col
	# Each entry in info_col looks like this:
	# gene_id "ENSG00000186092"; transcript_id "ENST00000335137"; exon_number "1"; gene_name "OR4F5"; gene_biotype "protein_coding"; transcript_name "OR4F5-201"; exon_id "ENSE00002234944";
	# Extract the first part of the string, i.e. the gene_id
	gene_ids <- lapply(data_frame_obj[info_col], FUN=function(x) {
		gene_id <- gsub(pattern=".*gene_id \"", replacement="", x=x)
		gene_id <- gsub(pattern="\";.*", replacement="", x=gene_id)
		return(gene_id)
		}
	)
	data_frame_obj[gene_id_col] <- gene_ids

	return(data_frame_obj)
}


add_gene_symbol_and_entrez_id_to_results <- function(data_frame_obj,
														gene_ensembl_id_col="gene_ensembl_id",
														gene_name_col="gene_name") {
	"
	Adds gene symbols and entrez-IDs to results object.
	"
	gene_ids_vector <- as.vector(t(data_frame_obj[gene_ensembl_id_col]))

	# If gene_ids_vector is empty, add an empty gene_name column
	if (length(gene_ids_vector) == 0) {
		data_frame_obj[gene_name_col] <- character(0)
	}

	else {
		# Add gene symbols
		# Something breaks here when setting a new column name
		data_frame_obj[gene_name_col] <- AnnotationDbi::mapIds(org.Hs.eg.db::org.Hs.eg.db,
															  keys=gene_ids_vector,
															  column="SYMBOL",
															  keytype="ENSEMBL",
															  multiVals="first")
	}
	return(data_frame_obj)
}




# Main function
main <- function() {
	# Input
	input_table_files <- snakemake@input
	# Output
	output_files <- snakemake@output

	# info_col
	info_col_name <- snakemake@params[["info_col_name"]]
	gene_ensembl_id_col_name <- snakemake@params[["gene_ensembl_id_col_name"]]
	gene_name_col_name <- snakemake@params[["gene_name_col_name"]]


	# Loop over input files
	for (i in seq_along(input_table_files)) {
		# Read input table
		df <- read.table(toString(input_table_files[i]), sep="\t", header=TRUE, stringsAsFactors=FALSE)

		# Extract gene ID from info column
		df <- extract_gene_id_from_info_col(df, info_col=info_col_name, gene_id_col=gene_ensembl_id_col_name)

		# Add gene symbols and entrez-IDs
		df <- add_gene_symbol_and_entrez_id_to_results(df,
			gene_ensembl_id_col=gene_ensembl_id_col_name, gene_name_col=gene_name_col_name)


		# Put gene_ensembl_id_col and gene_name_col to the front
		input_table <- df[, c(gene_ensembl_id_col_name, gene_name_col_name,
			setdiff(colnames(df), c(gene_ensembl_id_col_name, gene_name_col_name)))]

		# Write output table
		write.table(input_table, file=toString(output_files[i]), sep="\t", quote=FALSE, row.names=FALSE)
	}
}


# Run main function
main()
import pandas as pd


def merge_junctions_naivly(input_file_list, input_sample_list, output_file):
    """
    Merges the junctions from the different samples into one file.
    Naive merging: if a junction is present in at least one sample, it is present in the merged file.

    :param input_file_list:     List of files containing the junctions from all samples
    :param input_sample_list:   List of sample names
    :param output_file:         Output file for merged junctions
    :return:
    """

    # Column names: each line is an exon-exon junction
    common_cols = ["chrom", "exact_jct_start", "exact_jct_end", "strand", "add_info"]       # Shared column names
    column_types = {"chrom": "category", "exact_jct_start": "uint32", "exact_jct_end": "uint32", "strand": "category"}
    summary_df = pd.DataFrame(columns=common_cols)
    summary_df = summary_df.astype(column_types)    # astype returns a new dataframe, so re-assign

    # Iterate over all samples and merge the junctions into one file
    for counter, input_file in enumerate(input_file_list):
        sample_name = input_sample_list[counter]

        current_df = pd.read_csv(input_file, low_memory=False, sep="\t")
        current_df.fillna(0, inplace=True)
        reduced_df = current_df.iloc[:, 0:6].copy()     # .copy() to avoid modifying a view of current_df
        reduced_df.columns = ["chrom", "exact_jct_start", "exact_jct_end", "sample_name", "score", "strand"]
        reduced_df["add_info"] = current_df.iloc[:, -2]
        reduced_df.rename(columns={"score": sample_name}, inplace=True)     # Rename the score column to the sample name
        reduced_df = reduced_df[["chrom", "exact_jct_start", "exact_jct_end", "strand", "add_info", sample_name]]
        # Set datatypes to save memory (astype returns a new dataframe, so re-assign)
        reduced_df = reduced_df.astype(column_types)
        reduced_df = reduced_df.astype({sample_name: "uint16"})

        # Merge the junctions
        summary_df = summary_df.merge(reduced_df, on=common_cols, how="outer")

    # Save the results to a file
    summary_df.fillna(0).to_csv(output_file, index=False, sep="\t")


if __name__ == "__main__":
    # Input files
    snakemake_input_files = snakemake.input.all_junc_files
    snakemake_input_samples = snakemake.params.sample_names
    # Output file
    snakemake_output_file = snakemake.output.output_file

    # Merge the junctions
    merge_junctions_naivly(snakemake_input_files, snakemake_input_samples, snakemake_output_file)
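
The outer-merge principle used by `merge_junctions_naivly` illustrated on two in-memory toy tables (the column layout follows the shared columns above; sample names and values are made up):

import pandas as pd

s1 = pd.DataFrame({"chrom": ["chr1"], "exact_jct_start": [120], "exact_jct_end": [220],
                   "strand": ["+"], "add_info": ["."], "sampleA": [15]})
s2 = pd.DataFrame({"chrom": ["chr1", "chr2"], "exact_jct_start": [120, 500], "exact_jct_end": [220, 900],
                   "strand": ["+", "-"], "add_info": [".", "."], "sampleB": [3, 8]})

# A junction present in any sample ends up in the result; missing counts are filled with 0
merged = s1.merge(s2, on=["chrom", "exact_jct_start", "exact_jct_end", "strand", "add_info"], how="outer").fillna(0)
print(merged)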
import pandas as pd


def load_simple_result_file(input_file_path, input_gene_name_col, input_adjusted_pval_col, tool_name):
    """
    Simply loads the given file (automatic separator detection) and
    returns a dataframe with the following columns:
    - gene_name
    - <tool_name> adjusted-p-value
    - <tool_name> ranking
    """

    df = None

    if input_file_path:
        # New column names
        ranking_col = tool_name + ": ranking"
        adjusted_pval_col = tool_name + ": adjusted-p-value"

        # Load file -> automatically detect separator (requires the python parser engine)
        df = pd.read_csv(str(input_file_path), sep=None, engine="python")
        df[ranking_col] = df.index + 1   # +1 because index starts at 0

        # Rename gene name column
        df = df.rename(columns={input_gene_name_col: "gene_name"})

        # Rename adjusted p-value column
        if input_adjusted_pval_col:
            df = df.rename(columns={input_adjusted_pval_col: adjusted_pval_col})
            df = df[["gene_name", adjusted_pval_col, ranking_col]]
        else:   # no adjusted p-value column given -> So do not use it
            df = df[["gene_name", ranking_col]]

    else:
        print("No input file for " + tool_name + " provided. Skipping...")

    return df


def load_result_files():
    """
    Loads result files from all different tools of the given workflow.
    Returns an array of the resulting dataframes.

    Attention: Makes use of snakemake.input and snakemake.params.
    :return:
    """
    output_result_dfs = []

    # 1. Load pjd results
    try:
        # PJD condition
        if snakemake.input.pjd_condition:
            pjd_condition_df = load_simple_result_file(snakemake.input.pjd_condition,
                                                       snakemake.params.pjd_gene_col_name,
                                                       None, "PJD-condition")
            output_result_dfs.append(pjd_condition_df)
        # PJD control
        if snakemake.input.pjd_control:
            pjd_control_df = load_simple_result_file(snakemake.input.pjd_control, snakemake.params.pjd_gene_col_name,
                                                     None, "PJD-control")
            output_result_dfs.append(pjd_control_df)

    except AttributeError as e:
        print("1. No input file for PJD provided. Skipping...")
        print(e)

    # 2. Load Leafcutter results
    try:
        if snakemake.input.leafcutter_results:
            leafcutter_df = load_simple_result_file(snakemake.input.leafcutter_results, snakemake.params.leafcutter_gene_col_name,
                                                    snakemake.params.leafcutter_adjusted_pval_col_name, "Leafcutter")
            output_result_dfs.append(leafcutter_df)
    except AttributeError as e:
        print("2. No input file for Leafcutter provided. Skipping...")
        print(e)

    # 3. Load fraser results
    try:
        if snakemake.input.fraser_results:
            fraser_df = pd.read_csv(str(snakemake.input.fraser_results), sep=",")

            # select only condition samples -> params.samples_with_condition
            fraser_df = fraser_df[fraser_df["sampleID"].isin(snakemake.params.samples_with_condition)]

            fraser_df["FRASER: ranking"] = fraser_df.index
            fraser_df = fraser_df.rename(columns={snakemake.params.fraser_gene_col_name: "gene_name"})
            fraser_df = fraser_df.rename(columns=
                                         {snakemake.params.fraser_adjusted_pval_col_name: "FRASER: adjusted-p-value"})
            fraser_df = fraser_df[["gene_name", "FRASER: adjusted-p-value", "FRASER: ranking"]]
            output_result_dfs.append(fraser_df)
    except AttributeError as e:
        print("3. No input file for FRASER provided. Skipping...")
        print(e)

    # 4. Load dexseq results
    try:
        if snakemake.input.dexseq_results:
            dexseq_df = load_simple_result_file(snakemake.input.dexseq_results,
                                                snakemake.params.dexseq_gene_col_name,
                                                snakemake.params.dexseq_adjusted_pval_col_name, "DEXSeq")
            output_result_dfs.append(dexseq_df)
    except AttributeError as e:
        print("2. No input file for DEXSeq provided. Skipping...")
        print(e)

    # 5. Load rMATS results
    try:
        # 1. Load rMATS results for A3SS
        if snakemake.input.rmats_results_a3ss_jcec:
            rmats_a3ss_jcec_df = load_simple_result_file(snakemake.input.rmats_results_a3ss_jcec,
                                                         snakemake.params.rmats_gene_col_name,
                                                         snakemake.params.rmats_adjusted_pval_col_name, "rMATS-A3SS")
            output_result_dfs.append(rmats_a3ss_jcec_df)

        # 2. Load rMATS results for A5SS
        if snakemake.input.rmats_results_a5ss_jcec:
            rmats_a5ss_jcec_df = load_simple_result_file(snakemake.input.rmats_results_a5ss_jcec,
                                                         snakemake.params.rmats_gene_col_name,
                                                         snakemake.params.rmats_adjusted_pval_col_name, "rMATS-A5SS")
            output_result_dfs.append(rmats_a5ss_jcec_df)

        # 3. Load rMATS results for MXE
        if snakemake.input.rmats_results_mxe_jcec:
            rmats_mxe_jcec_df = load_simple_result_file(snakemake.input.rmats_results_mxe_jcec,
                                                        snakemake.params.rmats_gene_col_name,
                                                        snakemake.params.rmats_adjusted_pval_col_name, "rMATS-MXE")
            output_result_dfs.append(rmats_mxe_jcec_df)

        # 4. Load rMATS results for RI
        if snakemake.input.rmats_results_ri_jcec:
            rmats_ri_jcec_df = load_simple_result_file(snakemake.input.rmats_results_ri_jcec,
                                                       snakemake.params.rmats_gene_col_name,
                                                       snakemake.params.rmats_adjusted_pval_col_name, "rMATS-RI")
            output_result_dfs.append(rmats_ri_jcec_df)

        # 5. Load rMATS results for SE
        if snakemake.input.rmats_results_se_jcec:
            rmats_se_jcec_df = load_simple_result_file(snakemake.input.rmats_results_se_jcec,
                                                       snakemake.params.rmats_gene_col_name,
                                                       snakemake.params.rmats_adjusted_pval_col_name, "rMATS-SE")
            output_result_dfs.append(rmats_se_jcec_df)
    except AttributeError as e:
        print("5. No input file for rMATS provided. Skipping...")
        print(e)

    # Finally return the list of dataframes
    return output_result_dfs


def merge_results(output_result_dfs):
    """"
    Load result files from different tools and merge them into one dataframe.
    ATTENTION: Remove empty gene_name rows, since otherwise a memory overload occurs during merging.
    """
    merged_df = output_result_dfs[0]
    for df in output_result_dfs[1:]:
        merged_df = merged_df.merge(df, on="gene_name", how="outer")

    # count non-empty cells per row for chosen columns
    # Select columns that contain "ranking" in the column name
    ranking_cols_without_rmats = [col for col in merged_df.columns if "ranking" in col and "rMATS" not in col]
    ranking_cols_with_rmats = [col for col in merged_df.columns if "ranking" in col]
    # A: Agreement score without rMATS
    if len(ranking_cols_without_rmats) == 0:
        merged_df["Agreement Sum without rMATS"] = 0
    else:
        merged_df["Agreement Sum without rMATS"] = merged_df[ranking_cols_without_rmats].notnull().sum(axis=1)
    # B: Agreement score with rMATS
    merged_df["Agreement Sum with rMATS"] = merged_df[ranking_cols_with_rmats].notnull().sum(axis=1)

    # sort by detection sum
    merged_df = merged_df.sort_values(by=["Agreement Sum without rMATS", "Agreement Sum with rMATS"],
                                      ascending=[False, False])

    # replace NaN with empty string
    merged_df = merged_df.fillna("None")
    # Place col "gene_name", "Agreement Sum without rMATS", "Agreement Sum with rMATS" at the beginning
    cols = ["gene_name", "Agreement Sum without rMATS", "Agreement Sum with rMATS"] + \
           list([col for col in merged_df.columns if col not in ["gene_name", "Agreement Sum without rMATS",
                                                                 "Agreement Sum with rMATS"]
           ])
    merged_df = merged_df[cols]

    return merged_df


if __name__ == "__main__":
    # input
    input = snakemake.input
    # output
    output = snakemake.output
    # params
    params = snakemake.params

    # load results
    print("Loading result files", flush=True)
    result_dfs = load_result_files()    # Makes use of snakemake.input and snakemake.params
    # assert that list is not empty
    assert result_dfs, "No results loaded. Check input files."

    # clean results_dfs
    reduced_result_dfs = []
    print("Cleaning result dataframes", flush=True)
    for df in result_dfs:
        # 1. Remove rows where gene_name is "None", or ".", or "NA"
        df = df[~df["gene_name"].isin(["None", ".", "NA"])]
        # 2. Remove rows where gene_name is NaN
        df = df.dropna(subset=["gene_name"])
        # 3. Remove duplicate entries where gene_name is not unique
        df = df.drop_duplicates(subset="gene_name", keep="first")

        reduced_result_dfs.append(df)

    # merge results
    print("Now merging results", flush=True)
    merged_df = merge_results(reduced_result_dfs)
    print("Merging done", flush=True)

    # write output
    merged_df.to_csv(output[0], sep="\t", index=False)
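
A toy sketch of what `merge_results` computes from two per-tool result tables (run below the definitions above; gene names and p-values are made up):

import pandas as pd

leafcutter = pd.DataFrame({"gene_name": ["GENE1", "GENE2"],
                           "Leafcutter: adjusted-p-value": [0.001, 0.04],
                           "Leafcutter: ranking": [1, 2]})
rmats_se = pd.DataFrame({"gene_name": ["GENE1"],
                         "rMATS-SE: adjusted-p-value": [0.02],
                         "rMATS-SE: ranking": [1]})

toy_merged = merge_results([leafcutter, rmats_se])
# GENE1 is reported by both tools (agreement with rMATS = 2), GENE2 only by Leafcutter
print(toy_merged[["gene_name", "Agreement Sum without rMATS", "Agreement Sum with rMATS"]])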
library("ReportingTools")	# For creating HTML reports
library("knitr")		# For creating HTML reports

library("lattice") # For plotting


# ================ Hard coded HTML code changes =================
add_index_col_fct <- "
  [...document.querySelectorAll('#id2 tr')].forEach((row, i) => {
    var cell = document.createElement(i<2 ? 'th' : 'td');

    if (i <2) {
        row.insertCell(0);
    } else {
        var cell = row.insertCell(0);
        cell.classList.add('my_index_col');
        cell.innerHTML = (i-1);
    }
});"
add_index_col_update_fct <- "
  t.on('draw.dt', function(){
    console.log('Update index');
    let n = 0;
    $('.my_index_col').each(function () {
        $(this).html(++n);
    })
})"


# 1. Insert index column -> Must be inserted before DataTable is initialized
original_table_init_fct_head <- "function configureTable(i, el) {"
substitute_table_init_fct_head <- paste(original_table_init_fct_head, add_index_col_fct)

# 2. Remove pre-ordering of table
remove_to_disable_preordering <- "\"aaSorting\":[[0,'asc']],"

# ------------- The following hack code is not needed anymore... -----------------
# 3. Create local variable "t" that references the datatable
original_js_fct_head <- "$(this).dataTable({"
substitute_js_fct_head <- paste("var t = ", original_js_fct_head)

# 4. Use this as anchor to add more JS code
original_js_fct_tail <- '}).columnFilter({sPlaceHolder: "head:before",
                                aoColumns : filterClasses
                                });'



create_html_table <- function(input_file_path, sep="\t", title="Report Title", info_text="Info Text",
                              base_name="my_report", output_dir=".") {
  # Load table from data file
  # as.data.frame(resOrdered)
  input_table <- as.data.frame(read.csv(input_file_path, header=TRUE, sep=sep))
  if (nrow(input_table) == 0) {
    input_table[nrow(input_table)+1,] <- "No data"
  }
  # Remove column with no header (R names them "X")
  remove.cols <- names(input_table) %in% c("", "X")
  input_table <- input_table[! remove.cols]

  # Use ReportingTools to automatically generate dynamic HTML documents
  html_report <- ReportingTools::HTMLReport(shortName=base_name, title=title,
                                            reportDirectory=output_dir)

  # 1. Add a table to the report
  ReportingTools::publish(input_table, html_report)
  # 2. Add info text to the report
  ReportingTools::publish(info_text, html_report)

  # Also graphs can be added to the report
  # # Randomly
  # y <- rnorm(500)
  # plot<-lattice::histogram(y, main="Sample of 500 observations from a Normal (0,1)")
  # # 3. Add plot to the report
  # ReportingTools::publish(plot, html_report)

  # Finally, create the report
  ReportingTools::finish(html_report)
}

replace_external_scripts_and_styles <- function(input_html_file) {
  # Replace external scripts and styles with internal copies

  # Read input file
  html_file_content <- readLines(input_html_file, warn=FALSE)

  external_js_scripts <- c('<script language="JavaScript" src="jslib/jquery-1.8.0.min.js"></script>',
    '<script language="JavaScript" src="jslib/jquery.dataTables-1.9.3.js"></script>',
    '<script language="JavaScript" src="jslib/bootstrap.js"></script>',
    '<script language="JavaScript" src="jslib/jquery.dataTables.columnFilter.js"></script>',
    '<script language="JavaScript" src="jslib/jquery.dataTables.plugins.js"></script>',
    '<script language="JavaScript" src="jslib/jquery.dataTables.reprise.js"></script>',
    '<script language="JavaScript" src="jslib/bootstrap.js"></script>')

  external_css_styles <- c('<link rel="stylesheet" type="text/css" href="csslib/bootstrap.css" />',
    '<link rel="stylesheet" type="text/css" href="csslib/reprise.table.bootstrap.css" />')

  # Replace external scripts with CDN versions
  jquery_cdn <- '<script src="https://code.jquery.com/jquery-3.6.3.min.js" integrity="sha256-pvPw+upLPUjgMXY0G+8O0xUf+/Im1MZjXxxgOcBQBXU=" crossorigin="anonymous"></script>'
  jquery_datatable_cdn <- '<script src="https://cdn.datatables.net/1.13.1/js/jquery.dataTables.min.js"></script>'
  bootstrap_js_cdn <- '<script src="https://cdn.jsdelivr.net/npm/[email protected]/dist/js/bootstrap.min.js"></script>'
  html_file_content <- sub(external_js_scripts[1], jquery_cdn, html_file_content)
  html_file_content <- sub(external_js_scripts[2], jquery_datatable_cdn, html_file_content)
  html_file_content <- sub(external_js_scripts[3], bootstrap_js_cdn, html_file_content)

  # Replace external styles with CDN versions
  bootstrap_css_cdn <- '<link rel="stylesheet" href="https://cdn.jsdelivr.net/npm/[email protected]/dist/css/bootstrap.min.css" />'
  html_file_content <- sub(external_css_styles[1], bootstrap_css_cdn, html_file_content)

  # Replace external scripts with local copies
  for (js_import in external_js_scripts[4:length(external_js_scripts)]) {
    js_source_file <- sub('<script language="JavaScript" src="', '', js_import)
    js_source_file <- sub('"></script>', '', js_source_file)
    path_to_js_file <- file.path(dirname(input_html_file), js_source_file)
    js_code <- paste(readLines(path_to_js_file, warn=FALSE), collapse="\n")

    # Replace external script with internal script
    html_file_content <- sub(js_import, paste('<script language="JavaScript">', js_code, '</script>', sep="\n"), html_file_content)
  }

  # Replace external styles with local copies
  for (css_import in external_css_styles[2:length(external_css_styles)]) {
    css_source_file <- sub('<link rel="stylesheet" type="text/css" href="', '', css_import)
    css_source_file <- sub('" />', '', css_source_file)
    path_to_css_file <- file.path(dirname(input_html_file), css_source_file)
    css_code <- paste(readLines(path_to_css_file, warn=FALSE), collapse="\n")

    # Replace external style with internal style
    html_file_content <- sub(css_import, paste('<style>', css_code, '</style>', sep="\n"), html_file_content)
  }

  return(html_file_content)
  # Write HTML file
}

add_index_column_functionality <- function(input_html_content) {
  "
  Add index column functionality to the HTML table
  Also disable pre-sorting by the first column
  "
  # 1. Insert index column -> Must be inserted before DataTable is initialized
  input_html_content <- sub(original_table_init_fct_head, substitute_table_init_fct_head, input_html_content, fixed=TRUE)
  # 2. Remove pre-ordering of table
  input_html_content <- sub(remove_to_disable_preordering, "", input_html_content, fixed=TRUE)

  return(input_html_content)
}

add_csv_download_button <- function(input_html_content) {
  "
  Add CSV download button to the HTML table.
  Button has class 'buttons-csv'.
  "
  # DataTables: Select only the CSV button in the initialization
  original_initialization <- "$(this).dataTable({"
  new_initialization <- "$(this).dataTable({\n\"buttons\": [\"csvHtml5\"],"

  # DataTables: Integration of buttons into DOM
  original_dom_declaration <- "\"sDom\": \"<'row'<'span6'l><'span6'f>r>t<'row'<'span6'i><'span6'p>>\","
  new_dom_declaration <- "\"sDom\": \"<'row'<'span6'lB><'span6'f>r>t<'row'<'span6'i><'span6'p>>\","

  # JS libraries
  additional_js_lib_1 <- '<script src="https://cdn.datatables.net/buttons/2.3.6/js/dataTables.buttons.min.js"></script>'
  additional_js_lib_2 <- '<script src="https://cdn.datatables.net/buttons/2.3.6/js/buttons.html5.min.js"></script>'

  # CSS changes
  # Make position relative and float right
  additional_css_changes <- "<style> .buttons-csv { position: relative; float: right; } </style>"

  # Button classes
  # -> Add btn-primary class to CSV button (which has class 'buttons-csv')
  add_class_script <- "<script>$(document).ready(function(){$('button.buttons-csv').addClass('btn btn-sm btn-primary mb-2');} );</script>"

  # Introduce changes
  # 0. DataTables initialization
  input_html_content <- sub(original_initialization, new_initialization, input_html_content, fixed=TRUE)
  # 1. DOM declaration
  input_html_content <- sub(original_dom_declaration, new_dom_declaration, input_html_content, fixed=TRUE)
  # 2. JS libraries
  input_html_content <- sub("</head>", paste(additional_js_lib_1, additional_js_lib_2, "</head>", sep="\n"), input_html_content, fixed=TRUE)
  # 3. CSS changes
  input_html_content <- sub("</head>", paste(additional_css_changes, "</head>", sep="\n"), input_html_content, fixed=TRUE)
  # 4. Button classes
  input_html_content <- sub("</body>", paste(add_class_script, "</body>", sep="\n"), input_html_content, fixed=TRUE)

  return(input_html_content)
}

fix_table_width <- function(input_html_content) {
  "
    Fix table width to 100% -> Make it scrollable
  "
  # Insert wrapper at initialization to manage scrolling (scrollX has issue with alignment of headers...)
  original_initialization <- "$(this).dataTable({"
  new_initialization <- paste(original_initialization, '"initComplete": function (settings, json) {
      $(this).wrap("<div style=\'overflow:auto; width:100%; position:relative;\'></div>");
    },', sep="\n")

  # Ellipsis style for long text
  additional_css_changes <- "<style> table.dataTable td  {
        max-width: 250px;
        white-space: nowrap;
        text-overflow: ellipsis;
        overflow: hidden;
      }
    </style>"

  # 1. DataTables initialization
  input_html_content <- sub(original_initialization, new_initialization, input_html_content, fixed=TRUE)
  # 2. CSS changes
  input_html_content <- sub("</head>", paste(additional_css_changes, "</head>", sep="\n"), input_html_content, fixed=TRUE)

  return(input_html_content)
}

convert_numeric_entries <- function(input_html_content) {
  "
  Convert numeric entries to be displayed properly:
    - Integer values are displayed without decimal places
    - Float values are displayed with 2 decimal places
    - Values < 0.01 are displayed in scientific notation
  "
  # Convert numeric entries to numeric values
  original_table_init <- '"aoColumnDefs": ['
  render_fct_entry <- "{
                            targets: '_all',
                            render: function (data, type, full, meta) {
                                let float_data = parseFloat(data);
                                if (Number.isInteger(float_data)) {
                                    return float_data.toLocaleString('en-US', { maximumFractionDigits: 0, minimumFractionDigits: 0 });
                                } else if (isNaN(float_data)) {
                                    return data;
                                } else {
                                    if (float_data < 0.01) {
                                        return float_data.toExponential(2);
                                    } else {
                                        return float_data.toLocaleString('en-US', { maximumFractionDigits: 3, minimumFractionDigits: 2 });
                                    }
                                }
                            }
                        },"
  input_html_content <- sub(original_table_init, paste(original_table_init, render_fct_entry, sep="\n"), input_html_content, fixed=TRUE)
  return(input_html_content)
}


# Main function
main <- function() {
  # Import snakemake arguments
  input_files <- snakemake@params[["input_files"]]
  input_separators <- snakemake@params[["data_separators"]]
  input_titles <- snakemake@params[["data_titles"]]
  input_info_texts <- snakemake@params[["info_texts"]]
  output_dir <- snakemake@params[["html_output_dir"]]
  output_file_basenames <- snakemake@params[["html_output_file_basenames"]]

  # Iterate over all inputs and create HTML-reports
  for (i in 1:length(input_files)) {
    output_file_basename <- output_file_basenames[i]
    output_html_file <- file.path(output_dir, output_file_basename)

    print(paste("Creating report for", input_files[i]))
    print(paste("Output file basename:", output_file_basename))
    create_html_table(input_files[i], sep=input_separators[i], title=input_titles[i],
                      info_text=input_info_texts[i], base_name=output_file_basename,
                      output_dir=output_dir)

    # Replace external scripts and styles with internal copies
    updated_html_file <- replace_external_scripts_and_styles(output_html_file)
    # Add index column functionality
    updated_html_file <- add_index_column_functionality(updated_html_file)
    # Add CSV download button
    updated_html_file <- add_csv_download_button(updated_html_file)
    # Fix table width
    updated_html_file <- fix_table_width(updated_html_file)
    # Convert numeric entries
    updated_html_file <- convert_numeric_entries(updated_html_file)

    # Write updated HTML file
    writeLines(updated_html_file, output_html_file)
  }
}

main()
run:
    pep.sample_table.to_csv(output[0], index=False)
__author__ = "Julian de Ruiter"
__copyright__ = "Copyright 2017, Julian de Ruiter"
__email__ = "[email protected]"
__license__ = "MIT"


from os import path
import re
from tempfile import TemporaryDirectory

from snakemake.shell import shell

log = snakemake.log_fmt_shell(stdout=True, stderr=True)


def basename_without_ext(file_path):
    """Returns basename of file path, without the file extension."""

    base = path.basename(file_path)
    # Remove file extension(s) (similar to the internal fastqc approach)
    base = re.sub("\\.gz$", "", base)
    base = re.sub("\\.bz2$", "", base)
    base = re.sub("\\.txt$", "", base)
    base = re.sub("\\.fastq$", "", base)
    base = re.sub("\\.fq$", "", base)
    base = re.sub("\\.sam$", "", base)
    base = re.sub("\\.bam$", "", base)

    return base


# Run fastqc. Since there can be race conditions if multiple jobs
# use the same fastqc dir, we create a temp dir.
with TemporaryDirectory() as tempdir:
    shell(
        "fastqc {snakemake.params} -t {snakemake.threads} "
        "--outdir {tempdir:q} {snakemake.input[0]:q}"
        " {log}"
    )

    # Move outputs into proper position.
    output_base = basename_without_ext(snakemake.input[0])
    html_path = path.join(tempdir, output_base + "_fastqc.html")
    zip_path = path.join(tempdir, output_base + "_fastqc.zip")

    if snakemake.output.html != html_path:
        shell("mv {html_path:q} {snakemake.output.html:q}")

    if snakemake.output.zip != zip_path:
        shell("mv {zip_path:q} {snakemake.output.zip:q}")
__author__ = "Thibault Dayris"
__copyright__ = "Copyright 2019, Dayris Thibault"
__email__ = "[email protected]"
__license__ = "MIT"

from snakemake.shell import shell
from snakemake.utils import makedirs

log = snakemake.log_fmt_shell(stdout=True, stderr=True)

extra = snakemake.params.get("extra", "")
sjdb_overhang = snakemake.params.get("sjdbOverhang", "100")

gtf = snakemake.input.get("gtf")
if gtf is not None:
    gtf = "--sjdbGTFfile " + gtf
    sjdb_overhang = "--sjdbOverhang " + sjdb_overhang
else:
    gtf = sjdb_overhang = ""

makedirs(snakemake.output)

shell(
    "STAR "  # Tool
    "--runMode genomeGenerate "  # Indexation mode
    "{extra} "  # Optional parameters
    "--runThreadN {snakemake.threads} "  # Number of threads
    "--genomeDir {snakemake.output} "  # Path to output
    "--genomeFastaFiles {snakemake.input.fasta} "  # Path to fasta files
    "{sjdb_overhang} "  # Read-len - 1
    "{gtf} "  # Highly recommended GTF
    "{log}"  # Logging
)
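
The STAR indexing wrapper above can be driven by a rule along these lines; reference paths, thread count and wrapper version are assumptions:

rule star_index:
    input:
        fasta="references/genome.fasta",  # assumed reference paths
        gtf="references/annotation.gtf"
    output:
        directory("references/star_index")
    params:
        sjdbOverhang="100",
        extra=""
    threads: 8
    log:
        "logs/star_index.log"
    wrapper:
        "v1.21.4/bio/star/index"  # wrapper version assumed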
__author__ = "Julian de Ruiter"
__copyright__ = "Copyright 2017, Julian de Ruiter"
__email__ = "[email protected]"
__license__ = "MIT"


from os import path

from snakemake.shell import shell


input_dirs = set(path.dirname(fp) for fp in snakemake.input)
output_dir = path.dirname(snakemake.output[0])
output_name = path.basename(snakemake.output[0])
log = snakemake.log_fmt_shell(stdout=True, stderr=True)

shell(
    "multiqc"
    " {snakemake.params}"
    " --force"
    " -o {output_dir}"
    " -n {output_name}"
    " {input_dirs}"
    " {log}"
)
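
A possible rule invoking this MultiQC wrapper, with assumed QC inputs and report path:

rule multiqc:
    input:
        expand("output/fastqc/{sample}_R1_fastqc.zip", sample=["sampleA", "sampleB"])  # assumed QC results
    output:
        "output/multiqc/multiqc_report.html"
    params:
        ""  # additional MultiQC arguments, if any
    log:
        "logs/multiqc.log"
    wrapper:
        "v1.21.4/bio/multiqc"  # wrapper version assumed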
__author__ = "Jan Forster"
__copyright__ = "Copyright 2019, Jan Forster"
__email__ = "[email protected]"
__license__ = "MIT"


import os
from snakemake.shell import shell

extra = snakemake.params.get("extra", "")
log = snakemake.log_fmt_shell(stdout=True, stderr=True)

discarded_fusions = snakemake.output.get("discarded", "")
if discarded_fusions:
    discarded_cmd = "-O " + discarded_fusions
else:
    discarded_cmd = ""

blacklist = snakemake.params.get("blacklist")
if blacklist:
    blacklist_cmd = "-b " + blacklist
else:
    blacklist_cmd = ""

known_fusions = snakemake.params.get("known_fusions")
if known_fusions:
    known_cmd = "-k " + known_fusions
else:
    known_cmd = ""

sv_file = snakemake.params.get("sv_file")
if sv_file:
    sv_cmd = "-d " + sv_file
else:
    sv_cmd = ""

shell(
    "arriba "
    "-x {snakemake.input.bam} "
    "-a {snakemake.input.genome} "
    "-g {snakemake.input.annotation} "
    "{blacklist_cmd} "
    "{known_cmd} "
    "{sv_cmd} "
    "-o {snakemake.output.fusions} "
    "{discarded_cmd} "
    "{extra} "
    "{log}"
)
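
How a rule using this Arriba wrapper might be laid out; the BAM, reference and blacklist paths are assumptions:

rule arriba:
    input:
        bam="output/star/{sample}/Aligned.out.bam",  # assumed STAR alignment
        genome="references/genome.fasta",
        annotation="references/annotation.gtf"
    output:
        fusions="output/arriba/{sample}.fusions.tsv",
        discarded="output/arriba/{sample}.fusions.discarded.tsv"
    params:
        blacklist="references/arriba_blacklist.tsv.gz",  # assumed blacklist file
        extra=""
    log:
        "logs/arriba/{sample}.log"
    wrapper:
        "v1.21.4/bio/arriba"  # wrapper version assumed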
__author__ = "Johannes Köster, Jorge Langa"
__copyright__ = "Copyright 2016, Johannes Köster"
__email__ = "[email protected]"
__license__ = "MIT"


from snakemake.shell import shell
from snakemake_wrapper_utils.java import get_java_opts

# Distribute available threads between trimmomatic itself and any potential pigz instances
def distribute_threads(input_files, output_files, available_threads):
    gzipped_input_files = sum(1 for file in input_files if file.endswith(".gz"))
    gzipped_output_files = sum(1 for file in output_files if file.endswith(".gz"))
    potential_threads_per_process = available_threads // (
        1 + gzipped_input_files + gzipped_output_files
    )
    if potential_threads_per_process > 0:
        # decompressing pigz creates at most 4 threads
        pigz_input_threads = (
            min(4, potential_threads_per_process) if gzipped_input_files != 0 else 0
        )
        pigz_output_threads = (
            (available_threads - pigz_input_threads * gzipped_input_files)
            // (1 + gzipped_output_files)
            if gzipped_output_files != 0
            else 0
        )
        trimmomatic_threads = (
            available_threads
            - pigz_input_threads * gzipped_input_files
            - pigz_output_threads * gzipped_output_files
        )
    else:
        # not enough threads for pigz
        pigz_input_threads = 0
        pigz_output_threads = 0
        trimmomatic_threads = available_threads
    return trimmomatic_threads, pigz_input_threads, pigz_output_threads


def compose_input_gz(filename, threads):
    if filename.endswith(".gz") and threads > 0:
        return "<(pigz -p {threads} --decompress --stdout {filename})".format(
            threads=threads, filename=filename
        )
    return filename


def compose_output_gz(filename, threads, compression_level):
    if filename.endswith(".gz") and threads > 0:
        return ">(pigz -p {threads} {compression_level} > {filename})".format(
            threads=threads, compression_level=compression_level, filename=filename
        )
    return filename


extra = snakemake.params.get("extra", "")
java_opts = get_java_opts(snakemake)
log = snakemake.log_fmt_shell(stdout=True, stderr=True)
compression_level = snakemake.params.get("compression_level", "-5")
trimmer = " ".join(snakemake.params.trimmer)

# Distribute threads
input_files = [snakemake.input.r1, snakemake.input.r2]
output_files = [
    snakemake.output.r1,
    snakemake.output.r1_unpaired,
    snakemake.output.r2,
    snakemake.output.r2_unpaired,
]

trimmomatic_threads, input_threads, output_threads = distribute_threads(
    input_files, output_files, snakemake.threads
)

input_r1, input_r2 = [
    compose_input_gz(filename, input_threads) for filename in input_files
]

output_r1, output_r1_unp, output_r2, output_r2_unp = [
    compose_output_gz(filename, output_threads, compression_level)
    for filename in output_files
]

shell(
    "trimmomatic PE -threads {trimmomatic_threads} {java_opts} {extra} "
    "{input_r1} {input_r2} "
    "{output_r1} {output_r1_unp} "
    "{output_r2} {output_r2_unp} "
    "{trimmer} "
    "{log}"
)
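
A sketch of a paired-end trimming rule built on this wrapper; read paths, adapter file and trimming steps are assumptions:

rule trimmomatic_pe:
    input:
        r1="input_data/{sample}_R1.fastq.gz",  # assumed raw reads
        r2="input_data/{sample}_R2.fastq.gz"
    output:
        r1="output/trimmed/{sample}_R1.fastq.gz",
        r1_unpaired="output/trimmed/{sample}_R1.unpaired.fastq.gz",
        r2="output/trimmed/{sample}_R2.fastq.gz",
        r2_unpaired="output/trimmed/{sample}_R2.unpaired.fastq.gz"
    params:
        trimmer=["ILLUMINACLIP:adapters.fa:2:30:10", "TRAILING:3"],  # assumed trimming steps
        extra="",
        compression_level="-9"
    threads: 8
    log:
        "logs/trimmomatic/{sample}.log"
    wrapper:
        "v1.21.4/bio/trimmomatic/pe"  # wrapper version assumed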
__author__ = "Johannes Köster"
__copyright__ = "Copyright 2016, Johannes Köster"
__email__ = "[email protected]"
__license__ = "MIT"


from snakemake.shell import shell

extra = snakemake.params.get("extra", "")
log = snakemake.log_fmt_shell(stdout=True, stderr=True)

# Samtools takes additional threads through its option -@
# One thread for samtools index itself
# Other threads are *additional* threads passed to the '-@' argument
threads = "" if snakemake.threads <= 1 else " -@ {} ".format(snakemake.threads - 1)

shell(
    "samtools index {threads} {extra} {snakemake.input[0]} {snakemake.output[0]} {log}"
)
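
An example rule for this samtools index wrapper (BAM path and wrapper version assumed):

rule samtools_index:
    input:
        "output/star/{sample}/Aligned.sortedByCoord.out.bam"  # assumed sorted BAM
    output:
        "output/star/{sample}/Aligned.sortedByCoord.out.bam.bai"
    params:
        extra=""
    threads: 4
    log:
        "logs/samtools_index/{sample}.log"
    wrapper:
        "v1.21.4/bio/samtools/index"  # wrapper version assumed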
__author__ = "Johannes Köster"
__copyright__ = "Copyright 2016, Johannes Köster"
__email__ = "[email protected]"
__license__ = "MIT"


import tempfile
from pathlib import Path
from snakemake.shell import shell
from snakemake_wrapper_utils.samtools import get_samtools_opts


samtools_opts = get_samtools_opts(snakemake)
extra = snakemake.params.get("extra", "")
log = snakemake.log_fmt_shell(stdout=True, stderr=True)


with tempfile.TemporaryDirectory() as tmpdir:
    tmp_prefix = Path(tmpdir) / "samtools_fastq.sort_"

    shell(
        "samtools sort {samtools_opts} {extra} -T {tmp_prefix} {snakemake.input[0]} {log}"
    )
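
A corresponding sorting rule could look as follows; input/output paths are assumptions, and output handling is delegated to get_samtools_opts:

rule samtools_sort:
    input:
        "output/alignment/{sample}.unsorted.bam"  # assumed unsorted BAM
    output:
        "output/alignment/{sample}.sorted.bam"
    params:
        extra=""
    threads: 4
    log:
        "logs/samtools_sort/{sample}.log"
    wrapper:
        "v1.21.4/bio/samtools/sort"  # wrapper version assumed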
__author__ = "Johannes Köster"
__copyright__ = "Copyright 2016, Johannes Köster"
__email__ = "[email protected]"
__license__ = "MIT"


from snakemake.shell import shell
from snakemake_wrapper_utils.samtools import get_samtools_opts

samtools_opts = get_samtools_opts(snakemake)
extra = snakemake.params.get("extra", "")
log = snakemake.log_fmt_shell(stdout=True, stderr=True, append=True)


shell("samtools view {samtools_opts} {extra} {snakemake.input[0]} {log}")
__author__ = "Johannes Köster"
__copyright__ = "Copyright 2016, Johannes Köster"
__email__ = "[email protected]"
__license__ = "MIT"


import tempfile
from pathlib import Path
from snakemake.shell import shell
from snakemake_wrapper_utils.samtools import get_samtools_opts


samtools_opts = get_samtools_opts(snakemake)
extra = snakemake.params.get("extra", "")
log = snakemake.log_fmt_shell(stdout=True, stderr=True)


with tempfile.TemporaryDirectory() as tmpdir:
    tmp_prefix = Path(tmpdir) / "samtools_fastq.sort_"

    shell(
        "samtools sort {samtools_opts} {extra} -T {tmp_prefix} {snakemake.input[0]} {log}"
    )
__author__ = "Joël Simoneau"
__copyright__ = "Copyright 2019, Joël Simoneau"
__email__ = "[email protected]"
__license__ = "MIT"

from snakemake.shell import shell

# Creating log
log = snakemake.log_fmt_shell(stdout=True, stderr=True)

# Placeholder for optional parameters
extra = snakemake.params.get("extra", "")

# Allowing for multiple FASTA files
fasta = snakemake.input.get("fasta")
assert fasta is not None, "input-> a FASTA-file is required"
fasta = " ".join(fasta) if isinstance(fasta, list) else fasta

shell(
    "kallisto index "  # Tool
    "{extra} "  # Optional parameters
    "--index={snakemake.output.index} "  # Output file
    "{fasta} "  # Input FASTA files
    "{log}"  # Logging
)
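
A rule driving this kallisto index wrapper might be declared like this (transcriptome path and wrapper version assumed):

rule kallisto_index:
    input:
        fasta="references/transcriptome.fasta"  # assumed transcriptome FASTA
    output:
        index="output/kallisto/transcriptome.idx"
    params:
        extra=""
    log:
        "logs/kallisto_index.log"
    wrapper:
        "v1.21.4/bio/kallisto/index"  # wrapper version assumed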
__author__ = "Joël Simoneau"
__copyright__ = "Copyright 2019, Joël Simoneau"
__email__ = "[email protected]"
__license__ = "MIT"

from snakemake.shell import shell

# Creating log
log = snakemake.log_fmt_shell(stdout=True, stderr=True)

# Placeholder for optional parameters
extra = snakemake.params.get("extra", "")

# Allowing for multiple FASTQ files
fastq = snakemake.input.get("fastq")
assert fastq is not None, "input-> a FASTQ-file is required"
fastq = " ".join(fastq) if isinstance(fastq, list) else fastq

shell(
    "kallisto quant "  # Tool
    "{extra} "  # Optional parameters
    "--threads={snakemake.threads} "  # Number of threads
    "--index={snakemake.input.index} "  # Input file
    "--output-dir={snakemake.output} "  # Output directory
    "{fastq} "  # Input FASTQ files
    "{log}"  # Logging
)
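
A matching quantification rule, with assumed trimmed-read paths and output directory:

rule kallisto_quant:
    input:
        fastq=["output/trimmed/{sample}_R1.fastq.gz",
               "output/trimmed/{sample}_R2.fastq.gz"],  # assumed trimmed reads
        index="output/kallisto/transcriptome.idx"
    output:
        directory("output/kallisto/{sample}")
    params:
        extra=""
    threads: 4
    log:
        "logs/kallisto_quant/{sample}.log"
    wrapper:
        "v1.21.4/bio/kallisto/quant"  # wrapper version assumed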
__author__ = "Thibault Dayris"
__copyright__ = "Copyright 2022, Thibault Dayris"
__email__ = "[email protected]"
__license__ = "MIT"


from snakemake.shell import shell

log = snakemake.log_fmt_shell(stdout=False, stderr=True, append=True)
required_thread_nb = 1

genome = snakemake.input["genome"]
if genome.endswith(".gz"):
    genome = f"<( gzip --stdout --decompress {genome} )"
    required_thread_nb += 1  # Add a thread for gzip uncompression
elif genome.endswith(".bz2"):
    genome = f"<( bzip2 --stdout --decompress {genome} )"
    required_thread_nb += 1  # Add a thread for bzip2 uncompression

if snakemake.threads < required_thread_nb:
    raise ValueError(
        f"Salmon decoy wrapper requires exactly {required_thread_nb} threads, "
        f"but only {snakemake.threads} were provided"
    )

sequences = [
    snakemake.input["transcriptome"],
    snakemake.input["genome"],
    snakemake.output["gentrome"],
]
if all(fasta.endswith(".gz") for fasta in sequences):
    # Then all input sequences are gzipped. The output will also be gzipped.
    pass
elif all(fasta.endswith(".bz2") for fasta in sequences):
    # Then all input sequences are bgzipped. The output will also be bgzipped.
    pass
elif all(fasta.endswith((".fa", ".fna", ".fasta")) for fasta in sequences):
    # Then all input sequences are raw fasta. The output will also be raw fasta.
    pass
else:
    raise ValueError(
        "Mixed compression status: Either all fasta sequences are compressed "
        "with the *same* compression algorithm, or none of them are compressed."
    )

# Gathering decoy sequences names
# The sed command works as follows:
# -n       = do not print lines by default
# s/ .*//g = remove anything after the first space (i.e. strip comments)
# s/>//p   = remove the '>' character at the beginning of sequence names and print them
shell("( sed -n 's/ .*//g;s/>//p' {genome} ) > {snakemake.output.decoys} {log}")

# Building big gentrome file
shell(
    "cat {snakemake.input.transcriptome} {snakemake.input.genome} "
    "> {snakemake.output.gentrome} {log}"
)
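
An example rule for this decoy-preparation wrapper; note that transcriptome, genome and gentrome must share the same compression, and one extra thread is reserved for on-the-fly gzip decompression of the genome (paths and wrapper version assumed):

rule salmon_decoys:
    input:
        transcriptome="references/transcriptome.fasta.gz",  # assumed; must share compression with genome
        genome="references/genome.fasta.gz"
    output:
        gentrome="output/salmon/gentrome.fasta.gz",
        decoys="output/salmon/decoys.txt"
    threads: 2  # one extra thread for gzip decompression
    log:
        "logs/salmon_decoys.log"
    wrapper:
        "v1.21.4/bio/salmon/decoys"  # wrapper version assumed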
__author__ = "Tessa Pierce"
__copyright__ = "Copyright 2018, Tessa Pierce"
__email__ = "[email protected]"
__license__ = "MIT"

from os.path import dirname
from snakemake.shell import shell
from tempfile import TemporaryDirectory

log = snakemake.log_fmt_shell(stdout=True, stderr=True)
extra = snakemake.params.get("extra", "")

decoys = snakemake.input.get("decoys", "")
if decoys:
    decoys = f"--decoys {decoys}"

output = snakemake.output
if len(output) > 1:
    output = dirname(snakemake.output[0])

with TemporaryDirectory() as tempdir:
    shell(
        "salmon index "
        "--transcripts {snakemake.input.sequences} "
        "--index {output} "
        "--threads {snakemake.threads} "
        "--tmpdir {tempdir} "
        "{decoys} "
        "{extra} "
        "{log}"
    )
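
A decoy-aware indexing rule built on this wrapper could read (paths and wrapper version assumed):

rule salmon_index:
    input:
        sequences="output/salmon/gentrome.fasta.gz",
        decoys="output/salmon/decoys.txt"
    output:
        directory("output/salmon/transcriptome_index")
    params:
        extra=""
    threads: 8
    log:
        "logs/salmon_index.log"
    wrapper:
        "v1.21.4/bio/salmon/index"  # wrapper version assumed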
__author__ = "Tessa Pierce"
__copyright__ = "Copyright 2018, Tessa Pierce"
__email__ = "[email protected]"
__license__ = "MIT"


from os.path import dirname
from snakemake.shell import shell


class MixedPairedUnpairedInput(Exception):
    def __init__(self):
        super().__init__(
            "Salmon cannot quantify mixed paired/unpaired input files. "
            "Please input either `r1`, `r2` (paired) or `r` (unpaired)"
        )


class MissingMateError(Exception):
    def __init__(self):
        super().__init__(
            "Salmon requires an equal number of paired reads in `r1` and `r2`,"
            " or a list of unpaired reads `r`"
        )


def uncompress_bz2(snake_io, salmon_threads):
    """
    Provide bzip2 on-the-fly decompression

    One thread is used for each on-the-fly decompression. The number of threads
    passed to Salmon is therefore reduced accordingly, so the job does not
    oversubscribe its allocation on a cluster.
    """

    # Asking forgiveness instead of permission
    try:
        # If no error is raised, we were given a single file path (a string).
        if snake_io.endswith("bz2"):
            return [f"<( bzip2 --decompress --stdout {snake_io} )"], salmon_threads - 1
        return [snake_io], salmon_threads
    except AttributeError:
        # An AttributeError means we were given a list of fastq files.
        fq_files = []
        for fastq in snake_io:
            if fastq.endswith("bz2"):
                fq_files.append(f"<( bzip2 --decompress --stdout {fastq} )")
                salmon_threads -= 1
            else:
                fq_files.append(fastq)
        return fq_files, salmon_threads


log = snakemake.log_fmt_shell(stdout=True, stderr=True)
libtype = snakemake.params.get("libtype", "A")
max_threads = snakemake.threads

extra = snakemake.params.get("extra", "")
if "--validateMappings" in extra:
    raise DeprecationWarning("`--validateMappings` is deprecated and has no effect")

r1 = snakemake.input.get("r1")
r2 = snakemake.input.get("r2")
r = snakemake.input.get("r")


if all(mate is not None for mate in [r1, r2]):
    r1, max_threads = uncompress_bz2(r1, max_threads)
    r2, max_threads = uncompress_bz2(r2, max_threads)

    if len(r1) != len(r2):
        raise MissingMateError()
    if r is not None:
        raise MixedPairedUnpairedInput()

    r1_cmd = " --mates1 {}".format(" ".join(r1))
    r2_cmd = " --mates2 {}".format(" ".join(r2))
    read_cmd = " ".join([r1_cmd, r2_cmd])

elif r is not None:
    if any(mate is not None for mate in [r1, r2]):
        raise MixedPairedUnpairedInput()

    r, max_threads = uncompress_bz2(r, max_threads)
    read_cmd = " --unmatedReads {}".format(" ".join(r))

else:
    raise MissingMateError()

gene_map = snakemake.input.get("gtf", "")
if gene_map:
    gene_map = f"--geneMap {gene_map}"

bam = snakemake.output.get("bam", "")
if bam:
    bam = f"--writeMappings {bam}"

outdir = dirname(snakemake.output.get("quant"))
index = snakemake.input["index"]
if isinstance(index, list):
    index = dirname(index[0])

if max_threads < 1:
    raise ValueError(
        "On-the-fly b-unzipping have raised the required number of threads. "
        f"Please request at least {1 - max_threads} more threads."
    )

shell(
    "salmon quant --index {index} "
    " --libType {libtype} {read_cmd} --output {outdir} {gene_map} "
    " --threads {max_threads} {extra} {bam} {log}"
)
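
A quantification rule using this wrapper might be set up as follows; the wrapper derives the output directory from the quant file, and the GTF input is optional (paths and wrapper version assumed):

rule salmon_quant:
    input:
        r1="output/trimmed/{sample}_R1.fastq.gz",  # assumed trimmed reads
        r2="output/trimmed/{sample}_R2.fastq.gz",
        index="output/salmon/transcriptome_index",
        gtf="references/annotation.gtf"  # optional; enables --geneMap
    output:
        quant="output/salmon/{sample}/quant.sf"
    params:
        libtype="A",
        extra=""
    threads: 8
    log:
        "logs/salmon_quant/{sample}.log"
    wrapper:
        "v1.21.4/bio/salmon/quant"  # wrapper version assumed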
__author__ = "Johannes Köster"
__copyright__ = "Copyright 2016, Johannes Köster"
__email__ = "[email protected]"
__license__ = "MIT"


import tempfile
from pathlib import Path
from snakemake.shell import shell
from snakemake_wrapper_utils.samtools import get_samtools_opts


samtools_opts = get_samtools_opts(snakemake)
extra = snakemake.params.get("extra", "")
log = snakemake.log_fmt_shell(stdout=True, stderr=True)


with tempfile.TemporaryDirectory() as tmpdir:
    tmp_prefix = Path(tmpdir) / "samtools_fastq.sort_"

    shell(
        "samtools sort {samtools_opts} {extra} -T {tmp_prefix} {snakemake.input[0]} {log}"
    )
__author__ = "Johannes Köster"
__copyright__ = "Copyright 2016, Johannes Köster"
__email__ = "[email protected]"
__license__ = "MIT"


import os
import tempfile
from snakemake.shell import shell


extra = snakemake.params.get("extra", "")
log = snakemake.log_fmt_shell(stdout=False, stderr=True)


fq1 = snakemake.input.get("fq1")
assert fq1 is not None, "input-> fq1 is a required input parameter"
fq1 = (
    [snakemake.input.fq1]
    if isinstance(snakemake.input.fq1, str)
    else snakemake.input.fq1
)
fq2 = snakemake.input.get("fq2")
if fq2:
    fq2 = (
        [snakemake.input.fq2]
        if isinstance(snakemake.input.fq2, str)
        else snakemake.input.fq2
    )
    assert len(fq1) == len(
        fq2
    ), "input-> equal number of files required for fq1 and fq2"
input_str_fq1 = ",".join(fq1)
input_str_fq2 = ",".join(fq2) if fq2 is not None else ""
input_str = " ".join([input_str_fq1, input_str_fq2])


if fq1[0].endswith(".gz"):
    readcmd = "--readFilesCommand gunzip -c"
elif fq1[0].endswith(".bz2"):
    readcmd = "--readFilesCommand bunzip2 -c"
else:
    readcmd = ""


index = snakemake.input.get("idx")
if not index:
    index = snakemake.params.get("idx", "")


if "--outSAMtype BAM SortedByCoordinate" in extra:
    stdout = "BAM_SortedByCoordinate"
elif "BAM Unsorted" in extra:
    stdout = "BAM_Unsorted"
else:
    stdout = "SAM"


with tempfile.TemporaryDirectory() as tmpdir:
    shell(
        "STAR "
        " --runThreadN {snakemake.threads}"
        " --genomeDir {index}"
        " --readFilesIn {input_str}"
        " {readcmd}"
        " {extra}"
        " --outTmpDir {tmpdir}/STARtmp"
        " --outFileNamePrefix {tmpdir}/"
        " --outStd {stdout}"
        " > {snakemake.output.aln}"
        " {log}"
    )

    if snakemake.output.get("reads_per_gene"):
        shell("cat {tmpdir}/ReadsPerGene.out.tab > {snakemake.output.reads_per_gene:q}")
    if snakemake.output.get("chim_junc"):
        shell("cat {tmpdir}/Chimeric.out.junction > {snakemake.output.chim_junc:q}")
    if snakemake.output.get("sj"):
        shell("cat {tmpdir}/SJ.out.tab > {snakemake.output.sj:q}")
    if snakemake.output.get("log"):
        shell("cat {tmpdir}/Log.out > {snakemake.output.log:q}")
    if snakemake.output.get("log_progress"):
        shell("cat {tmpdir}/Log.progress.out > {snakemake.output.log_progress:q}")
    if snakemake.output.get("log_final"):
        shell("cat {tmpdir}/Log.final.out > {snakemake.output.log_final:q}")
__author__ = "Antonie Vietor"
__copyright__ = "Copyright 2020, Antonie Vietor"
__email__ = "[email protected]"
__license__ = "MIT"

from snakemake.shell import shell

log = snakemake.log_fmt_shell(stdout=False, stderr=True)

shell(
    "(bamtools stats {snakemake.params} -in {snakemake.input[0]} > {snakemake.output[0]}) {log}"
)
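
Finally, a rule calling this bamtools stats wrapper could be declared like this (paths and wrapper version assumed):

rule bamtools_stats:
    input:
        "output/star/{sample}/Aligned.sortedByCoord.out.bam"  # assumed BAM
    output:
        "output/stats/{sample}.bamstats.txt"
    params:
        ""  # additional bamtools stats arguments, if any
    log:
        "logs/bamtools_stats/{sample}.log"
    wrapper:
        "v1.21.4/bio/bamtools/stats"  # wrapper version assumed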