Bioinformatics pipeline for the analysis of amplicon sequencing data of eDNA samples from the PacMAN project


Bioinformatics pipeline for the PacMAN project UNDER DEVELOPMENT

This is the bioinformatics pipeline developed for the PacMAN project (Pacific Islands Marine Bioinvasions Alert Network). The pipeline cleans and classifies sequences from eDNA samples. The PacMAN pipeline is still under development, with a first production release planned in 2023. Its steps are compiled from publicly available bioinformatic pipelines such as ANACAPA, tourmaline, tagseq-qiime2-snakemake, pema, CASCABEL and MBARI-BOG, and the workflow is built on the snakemake workflow management system. Development initially targets CO1 data only, but we want to expand the process to other barcodes as well, so that in the future it can be used broadly for OBIS datasets.

The initial pipeline has the following steps:

  1. Trimmomatic

    • Quality trimming and removal of sequencing adapters
  2. Cutadapt

    • Removal of primers
  3. dada2

    • ASV inference
  4. Bowtie2

    • Sequence alignment against a reference database
  5. BLCA

    • Bayesian-based lowest common ancestor (LCA) inference
  6. BLAST

    • BLAST search of remaining unknown sequences against the NCBI nt database
  7. Data formatting

    • Export to DwC-A compatible tables

Steps that still need to be added to the pipeline:

  1. Data quality checkpoints for the scripts

  2. Automatic reverse complement of primer sequences.

  3. Simplify use of default parameters for dada2?

  4. Either make downstream formatting from taxonomic assignment more broad, or make separate downstream rules for other taxonomic classification methods.

Preparation for the run:

Install conda and snakemake.

At the moment, the pipeline is designed to be run with the --use-conda flag, so that each rule has an isolated environment that is installed in the working directory using conda.
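For reference, a typical way to set this up (assuming conda is already installed; the environment name and channel configuration below are only examples) is:

conda create -n pacman-pipeline -c conda-forge -c bioconda snakemake
conda activate pacman-pipeline

The per-rule environments are then created automatically the first time the pipeline is run with --use-conda.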

Note: A future possibility for OBIS would be to build pipelines based on snakemake modules, to allow for more flexible development, similar to what is being done here.

Before running the pipeline, the user must modify the files found in the config folder.

What is needed:

  1. The information linking the provided sequence files to the sample names

    • manifest_pe.csv contains the columns sample-id, file-path and direction (forward or reverse); see the example below.
  2. The information on the samples and linked metadata.

    • Fill in sample_data_template.csv: it can contain all DwC fields that should be added to the occurrence and DNA-derived data tables.

    • Note! Control samples can be marked by setting occurrenceStatus to absent.
      --> The ASVs found in these samples will be removed from all samples before the occurrence table is made.

  3. Make sure you have the reference database of choice

    • The fasta file with all sequences,

    • And the taxa file where the fasta-ids are linked to the taxonomic information

  4. Change the config.yaml file for the specific run.

    • PROJECT name: Usually a specific sample set

    • RUN name: the run with a specific combination of samples and/or parameters for the analysis

    • SAMPLE_SET : manifest file path

    • sample-data-file : sample data file path

    • reference database: name of the database, fasta file and taxa file.

    • Primers used in both forward and reverse configuration

    • Chosen parameters for each step (the template file is configured for CO1 data using the Leray-Geller primer set).

The config file is then given to the pipeline during initiation (can be located anywhere).
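For illustration, a minimal manifest and config excerpt could look like the following. All sample names, paths and values are placeholders, and the exact keys should follow the templates provided in the config folder.

manifest_pe.csv (example):

sample-id,file-path,direction
sample1,data/sample1_R1.fastq.gz,forward
sample1,data/sample1_R2.fastq.gz,reverse

config.yaml (excerpt, illustrative values only):

PROJECT: my_project
RUN: run_01
SAMPLE_SET: config/manifest_pe.csv
sample-data-file: config/sample_data.csv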

Once this information is added and the config file is filled in, a dry run of the pipeline can be performed for testing with:

snakemake --use-conda --configfile ./config/config.yaml --rerun-incomplete --printshellcmds --cores 1 -np

Removing the -np flag will initiate the run.

Note: the pipeline is still under development and testing.

Run using Docker

The repository includes a Dockerfile to run the entire pipeline in a Docker container. To do so, add your data files to the data directory and run the following commands to build the container and run the pipeline:

docker build -t pipeline .
docker run -v $(pwd):/src pipeline /bin/bash \
 -c "snakemake --use-conda -p --cores all"

Example when using external data and results folders:

docker build -t pipeline .
docker run \
 -v /home/ubuntu/data/dev/PacMAN-pipeline:/src \
 -v /home/ubuntu/data:/src/data \
 -v /home/ubuntu/data/results:/src/results \
 --rm \
 pipeline \
 /bin/bash -c "snakemake --rerun-incomplete --use-conda -p --cores all --configfile data/config/config_rey_noblast_2samples.yaml"

Steps

The pipeline will run the following steps (also see the diagram):

1. Initiate file structure

The run will first initiate a folder structure in the results folder as follows:

PROJECT
├── samples
│   ├── sample_1
│   │   ├── forward (link to sample file)
│   │   └── reverse (link to sample file)
│   ├── sample_2
│   ├── sample_3
│   ├── ...
│   ├── sample_n
│   └── multiqc_RUN.html
└── runs
    └── RUN
        ├── 01-trimmed
        ├── 03-dada2
        ├── 04-taxonomy
        ├── 05-dwca
        └── 06-report

Samples will be linked into this file structure and their quality will be analysed with fastqc. All quality reports of the raw sequence files are summarized with multiqc and can be found in /PROJECT/samples/multiqc_RUN.html.
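In essence this corresponds to commands along the lines of the following; the pipeline runs them through its own rules, and the paths shown are only illustrative:

fastqc results/PROJECT/samples/sample_1/*.fastq.gz -o results/PROJECT/samples/sample_1/
multiqc results/PROJECT/samples/ -n multiqc_RUN.html -o results/PROJECT/samples/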

1. Trimming and 2. removing primers

The sequences are trimmed and primers are removed using trimmomatic and cutadapt. Different Illumina adapter files are available for trimmomatic in the resources folder (custom adapters can also be added). The primers must be added to the config file in both forward and reverse (reverse complement) orientation.
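For paired-end reads, the primer removal is conceptually similar to the cutadapt command below; the primer sequences and file names are placeholders, and the actual options are taken from the config file:

cutadapt -g FORWARD_PRIMER -G REVERSE_PRIMER \
 -o sample_1_R1.trimmed.fastq.gz -p sample_1_R2.trimmed.fastq.gz \
 sample_1_R1.fastq.gz sample_1_R2.fastq.gz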

3. dada2

ASVs are inferred with dada2, which is run in two steps. First, reads are filtered based on user-defined parameters. The quality of the sequences before and after this filtering is shown in aggregate in two plots (06-report/dada2), and can be found separately for each sample in the 03-dada2/quality folder.

dada2 returns the ASV table (03-dada2/seqtab-nochim.txt) as well as the sequence of each ASV (03-dada2/rep-seqs.fna). In addition, the number of reads filtered at each step and remaining after sample processing is written to 06-report/dada2/dada2_stats.txt and will be added to the report.
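The dada2 step is controlled through the DADA2 section of the config file. As a rough sketch, the filtering block might look like the excerpt below; the parameter names follow the structure read by the dada2 scripts, while the values are only illustrative and should be tuned to the data:

DADA2:
  filterAndTrim:
    Trunc_len_f: 200
    Trunc_len_r: 180
    TruncQ: 2
    MaxEE: 2
    Rm.phix: TRUE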

4. taxonomy

Because the taxonomic classification uses bowtie2 alignment, the reference database must first be indexed with bowtie2-build (if an index is not already available). This can take a while, but the index will then be available for all future runs with the same reference database. The database files are added to the resources folder of the PacMAN pipeline.

Taxonomy assignment proceeds as in the ANACAPA pipeline. The sequences are first aligned to the reference database with bowtie2, and the best 100 alignments are kept. From these alignments the taxonomy is classified with the bowtie2-BLCA algorithm. Each assigned taxonomic level receives a confidence score between 0 and 100. In the next step the user can decide which cutoff is used for the final taxonomic assignments.

The taxonomy table returned by BLCA is then filtered based on this cutoff and written to the 04-taxonomy/identity_filtered/ folder.
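In outline, building the index and aligning the ASV sequences corresponds to commands of roughly the following form; the database name and output path are placeholders, and the exact options are defined in the pipeline rules:

bowtie2-build reference_sequences.fasta reference_db
bowtie2 -x reference_db -f -U 03-dada2/rep-seqs.fna -k 100 --no-unal -S asv_alignments.sam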

5. Blast and lca (optional)

There is an option in the pipeline to further classify sequences that remained unclassified using BLASTn against the full NCBI nt database. However, substantial resources are required for this to work. We recommend having a local copy of the full NCBI nt database available to run this step with the pipeline. If you have in total <10 kbp of data (roughly 50 unknown sequences of 200 bp), you may also run the query in remote mode. We may include a loop to do this with more data at a later stage, but running the analysis remotely for more sequences will take a long time.

The user will also need access to the NCBI nt-to-TaxonID mapping files to get the scientific names of the hits. The pipeline uses BASTA to filter and classify the BLAST results based on an LCA analysis. If a taxonomy database is not provided in the config file, the pipeline will prompt BASTA to download the tax_db (gb) to the resources folder. This will also take a long time.
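As a rough sketch, the local BLAST query amounts to something like the following; paths are placeholders, and for very small inputs the -remote flag can be used instead of a local copy of the nt database:

blastn -query unclassified_asvs.fasta -db nt -outfmt 6 -max_target_seqs 100 -out blast_results.tsv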

5. dwca

In the final steps of the pipeline, LSIDs are resolved for the assigned taxonomic names, and the occurrence table and DNA-derived data extension table are built for submission to OBIS.

This step also returns a table, 05-dwca/Taxa_not_in_worms.csv, containing the taxonomic names and linked ASVs that were not given an LSID. This table will require manual inspection, and possibly contacting the WoRMS team.

In this step the unknown sequences are given the ID for 'Biota'. Non-marine species (most taxa with no LSID) and ASVs found in the control sample(s) are not added to the final DwC-A tables. All of these can still be found in the table 05-dwca/Full_tax_table_with_lsids as well as in 05-dwca/phyloseq_object.rds, which can be read with the phyloseq R package for further analysis and visualization.

Note! With this strategy, sequences that are known but not marine are not included in the occurrence tables, while sequences that are not known are always included (as 'Biota').

Fields that also still need to be added/modified based on the genetic data guidelines are:

  • identificationRemarks : The report of the analysis run

  • identificationReferences : Website of this pipeline

6. Reporting

An HTML report is generated in the final step with the statistics of the full run, to give an overview of what was done during the analysis and its effect on the results. More analyses will still be added to this report.

Code Snippets

from __future__ import print_function, division
import sys
import os
import signal

if sys.version_info < (3, 0):
    from StringIO import StringIO
else:
    from io import StringIO

try:
    from Bio import AlignIO, SeqIO
except ImportError:
    sys.stderr.write("Error! BioPython is not detected!\n")
    sys.exit(1)

import random
import subprocess
import re
from collections import namedtuple, defaultdict
import argparse
import json

'''
BLCA Core annotation tool
'''


class SamEntry(object):
    def __init__(self, raw_row):
        self.qname = raw_row[0]
        self.flag = raw_row[1]
        self.rname = raw_row[2]
        self.pos = raw_row[3]
        self.mapq = raw_row[4]
        self.cigar = raw_row[5]
        self.rnext = raw_row[6]
        self.pnext = raw_row[7]
        self.tlen = raw_row[8]
        self.seq = raw_row[9]
        self.qual = raw_row[10]
        self.alignment_scores = [int(score.split(':')[-1]) for score in raw_row[11:13]]
        # sometimes it is a column early  because there is no second best match
        if raw_row[17][:2] == 'MD':
            self.md_z_flags = raw_row[17].split(':')[-1]
        else:
            self.md_z_flags = raw_row[18].split(':')[-1]

        total_match_count, total_count = self.calculate_match_count(self.md_z_flags)
        self.identity_ratio_old = total_match_count / total_count
        self.identity_ratio = total_match_count / float(len(self.seq))

        self.soft_clipped_ratio = self.cigar_total_soft_clipping() / total_count

        self.total_match = total_match_count

    def calculate_match_count(self, md_z_flags):
        tokenizer = re.compile(r'(\d+)|(\^[A-Z])|([A-Z])')
        total_match_count = 0.0
        total_mismatch_count = 0.0
        for item in tokenizer.finditer(md_z_flags):
            match_count, deletion, mismatch = item.groups()
            if match_count:
                total_match_count += int(match_count)
            if deletion:
                total_mismatch_count += 1
            if mismatch:
                total_mismatch_count += 1

        return total_match_count, (total_match_count + total_mismatch_count)

    # I am not completely sure if this is the logic that you want to use for checking unmapped at the ends
    # This function returns the max S at either end of the CIGAR score, 0  if there is no S at both ends
    def cigar_max_s(self):
        beginning_s, ending_s = self.get_soft_clipping()
        return max(beginning_s, ending_s)

    def cigar_total_soft_clipping(self):
        beginning_s, ending_s = self.get_soft_clipping()
        return beginning_s + ending_s

    def get_soft_clipping(self):
        tokenizer = re.compile(r'(?:\d+)|(?:[A-Z=])')
        cigar_elements = tokenizer.findall(self.cigar)

        if cigar_elements[1] == 'S':
            beginning_s = int(cigar_elements[0])
        else:
            beginning_s = 0

        if cigar_elements[-1] == 'S':
            ending_s = int(cigar_elements[-2])
        else:
            ending_s = 0

        return (beginning_s, ending_s)


class NotAvailableHandler(object):
    def __init__(self):
        self.count = 0
        self.na_forms = {'na', 'NA', 'Not Available', 'not available', 'Not available', 'nan'}
        self.encoded_na_form = 'NA;;'

    def encode_if_na(self, taxon):
        if taxon not in self.na_forms:
            return taxon
        self.count += 1
        return 'NA;;{}'.format(self.count)

    def decode_if_na(self, taxon):
        if not taxon.startswith(self.encoded_na_form):
            return taxon

        return 'NA'


parser = argparse.ArgumentParser(description='Bayesian-based LCA taxonomic classification method')

##### Required arguments #####
required = parser.add_argument_group('required arguments')
required.add_argument("-i", "--sam", help="Input SAM file", type=str, required=True)
required.add_argument('-q', '--reference', help="Reference fasta file", type=str, required=True)
required.add_argument("-r", "--tax", help="reference taxonomy file for the Database", type=str, required=True)

##### Taxonomy filtering arguments #####
taxoptions = parser.add_argument_group('taxonomy profiling options [filtering of hits]')
taxoptions.add_argument("-n", "--nper", help="number of times to bootstrap. Default: 100", type=int, default=100)
taxoptions.add_argument("-b", "--iset", help="minimum identity score to include", type=float, default=0.8)
taxoptions.add_argument('-l', '--length', help="minimum length of hit to include relative to query", type=float,
                        default=0.5)
taxoptions.add_argument('-s', '--softclipping', help='maximum soft clipped ratio to include', type=float, default=0.2)
##### Alignment control arguments #####
alignoptions = parser.add_argument_group('alignment control arguments')
alignoptions.add_argument("-m", "--match", default=1.0, help="alignment match score. Default: 1", type=float)
alignoptions.add_argument("-f", "--mismatch", default=-2.5, help="alignment mismatch penalty. Default: -2.5",
                          type=float)
alignoptions.add_argument("-g", "--ngap", default=-2.0, help="alignment gap penalty. Default: -2", type=float)
##### Other arguments #####
optional = parser.add_argument_group('other arguments')
optional.add_argument('-p', '--muscle', help='Path to call muscle default: muscle', default='muscle')

optional.add_argument("-o","--outfile",help="output file name. Default: <fasta>.blca.out",type=str)
optional.add_argument("-v","--votesfile",help="votes file name. Default: <fasta>.blca.votes.jsonlines",type=str)
optional.add_argument('--continue_mode', help="continue from a previous run by appending to the same output file", action='store_true')
optional.add_argument("--muscle_use_diags", help="pass the diag argument to muscle", action='store_true')
optional.add_argument("--muscle_max_iterations", help="set the max number of iterations for muscle", type=int, default=16)

##### parse arguments #####
args = parser.parse_args()

### bootstrap times ###
nper = args.nper  # number of bootstrap to permute
### Filter hits per query ###
iset = args.iset  # identify threshold
### Alignment options ###
ngap = args.ngap  # gap penalty
match = args.match  # match score
mismatch = args.mismatch  # mismatch penalty
min_length = args.length
sam_file_name = args.sam
outfile_name = args.outfile or (sam_file_name + '.blca.out')
votesfile_name = args.votesfile or (sam_file_name + '.blca.votes.jsonlines')
reference_fasta = args.reference
tax = args.tax
muscle_path = args.muscle
max_soft_clipping_allowed = args.softclipping
continue_mode = args.continue_mode
muscle_use_diags = args.muscle_use_diags
muscle_max_iterations = args.muscle_max_iterations


levels = ["superkingdom", "phylum", "class", "order", "family", "genus", "species"]


def check_program(prgname):
    '''Check whether a program has been installed and put in the PATH'''
    path = os.popen("which " + prgname).read().rstrip()
    if len(path) > 0 and os.path.exists(path):
        print(prgname + " is located in your PATH!")
    else:
        print("ERROR: " + prgname + " is NOT in your PATH, please set up " + prgname + "!")
        sys.exit(1)


def get_dic_from_aln(aln):
    '''Read in alignment and convert it into a dictionary'''
    alignment = AlignIO.read(aln, "clustal")
    alndic = {}
    for r in alignment:
        alndic[r.id] = list(r.seq)
    return alndic


def pairwise_score(alndic, query, match, mismatch, ngap):
    '''Calculate pairwise alignment score given a query'''
    nt = ["A", "C", "T", "G", "g", "a", "c", "t"]
    hitscore = {}
    for k, v in alndic.items():
        if k != query:
            hitscore[k] = 0
            for i in range(len(v)):
                if (alndic[query][i] in nt) and (v[i] in nt) and (alndic[query][i] == v[i]):
                    hitscore[k] += float(match)
                elif (alndic[query][i] not in nt) and (v[i] not in nt) and (alndic[query][i] == v[i]):
                    hitscore[k] += float(0)
                elif ((alndic[query][i] not in nt) or (v[i] not in nt)) and (alndic[query][i] != v[i]):
                    hitscore[k] += float(mismatch)
                elif (alndic[query][i] in nt) and (v[i] in nt) and (alndic[query][i] != v[i]):
                    hitscore[k] += float(ngap)
    total = float(sum(hitscore.values()))
    if total <= 0:
        total = 1
    for k, v in hitscore.items():
        hitscore[k] = v / total
    return hitscore


def random_aln_score(alndic, query, match, mismatch, ngap):
    '''Randomize the alignment, and calculate the score'''
    nt = ["A", "C", "T", "G", "g", "a", "c", "t"]
    idx = []
    for i in range(len(list(alndic.values())[0])):
        idx.append(random.choice(range(len(list(alndic.values())[0]))))

    hitscore = {}
    for k, v in alndic.items():
        if k != query:
            hitscore[k] = 0
            for i in idx:
                if (alndic[query][i] in nt) and (v[i] in nt) and (alndic[query][i] == v[i]):
                    hitscore[k] += float(match)
                elif (alndic[query][i] not in nt) and (v[i] not in nt) and (alndic[query][i] == v[i]):
                    hitscore[k] += float(0)
                elif ((alndic[query][i] not in nt) or (v[i] not in nt)) and (alndic[query][i] != v[i]):
                    hitscore[k] += float(mismatch)
                elif (alndic[query][i] in nt) and (v[i] in nt) and (alndic[query][i] != v[i]):
                    hitscore[k] += float(ngap)
    return hitscore


def get_gap_pos(query, alndic):
    '''Get the gap position in the alignment'''
    for i in range(len(alndic[query])):
        if alndic[query][i] != "-":
            start = i
            break
    for i in range(len(alndic[query]) - 1, 0, -1):
        if alndic[query][i] != "-":
            end = i
            break
    return start, end


def cut_gap(alndic, start, end):
    '''Given a start and end gap position, truncate the alignment'''
    trunc_alndic = {}
    for k_truc, v_truc in alndic.items():
        trunc_alndic[k_truc] = v_truc[start:end]
    return trunc_alndic


def read_tax_acc(taxfile, not_available_handler):
    tx = open(taxfile)
    acctax = {}
    for l in tx:
        lne = l.rstrip().strip(";").split("\t")
        if len(lne) != 2:
            continue
        if (levels[0] + ':') not in l:
            taxons = [not_available_handler.encode_if_na(taxon) for taxon in lne[1].split(';')]
            acctax[lne[0].split('.')[0]] = dict(zip(levels, taxons))
        else:
            pairs = [x.split(":", 1) for x in lne[1].split(";")]
            encoded = [(level, not_available_handler.encode_if_na(taxon)) for level, taxon in pairs]
            acctax[lne[0].split(".")[0]] = dict(encoded)
    tx.close()
    return acctax


################################################################
##
## 	Running Script Start
##
################################################################

## check whether muscle is located in the path
# check_program("muscle")

### read in pre-formatted lineage information ###
na_handler = NotAvailableHandler()
acc2tax = read_tax_acc(tax, na_handler)
print("> 1 > Read in taxonomy information!")

reference_sequences = {}
with open(reference_fasta) as f:
    for r in SeqIO.parse(f, "fasta"):
        reference_sequences[r.id] = str(r.seq)

print("> 2 > Read in reference db")

SequenceInfo = namedtuple('SequenceInfo', ['seq', 'hits'])
### read in input fasta file ###
input_sequences = {}
possible_rejects = set()
with open(sam_file_name) as sam_file:
    for line in sam_file:
        pieces = line.strip().split('\t')
        entry = SamEntry(pieces)

        if entry.identity_ratio_old < iset:
            possible_rejects.add(entry.qname)
        if entry.soft_clipped_ratio > max_soft_clipping_allowed:
            possible_rejects.add(entry.qname)
        elif entry.rname not in reference_sequences:
            possible_rejects.add(entry.qname)
        elif len(reference_sequences[entry.rname]) / float(len(entry.seq)) < min_length:
            possible_rejects.add(entry.qname)
        elif entry.qname not in input_sequences:
            input_sequences[entry.qname] = SequenceInfo(seq=entry.seq, hits=[entry.rname])
        else:
            input_sequences[entry.qname].hits.append(entry.rname)

rejects = possible_rejects.difference(set(input_sequences))

print("> 3 > Read in bowtie2 output!")

already_assigned = set()
if continue_mode:
    for line in open(outfile_name):
        already_assigned.add(line.split('\t')[0])

    # append to the existing output files when continuing a previous run
    outfile = open(outfile_name, 'a')
    votesfile = open(votesfile_name, 'a')
else:
    outfile = open(outfile_name, 'w')
    votesfile = open(votesfile_name, 'w')

count = 0

for seqn, info in input_sequences.items():
    # skip sequences that were already classified in a previous run
    if seqn in already_assigned:
        continue

    count += 1

    if seqn in acc2tax:
        print("[WARNING] Your sequence " + seqn + " has the same ID as the reference database! Please correct it!")
        print("...Skipping sequence " + seqn + " ......")
        outfile.write(seqn + "\tSkipped\n")
        continue

    ### Get all the hits list belong to the same query ###
    ### Add query fasta sequence to extracted hit fasta ###
    fifsa = []
    for hit in info.hits:
        if hit not in reference_sequences:
            print("Missing reference sequence for " + hit)
            continue
        fifsa.append(">{}\n{}\n".format(hit, reference_sequences[hit]))
    fifsa.append(">" + seqn + "\n" + info.seq)
    fifsa = "\n".join(fifsa)
    # os.system("rm " + seqn + ".dblist")

    ### Run muscle ###
    muscle_call = [muscle_path, '-quiet', '-clw', '-maxiters',  str(muscle_max_iterations)]
    if muscle_use_diags:
        muscle_call.append('-diags')
    proc = subprocess.Popen(muscle_call, stdout=subprocess.PIPE, stdin=subprocess.PIPE, stderr=subprocess.PIPE)
    outs, errs = proc.communicate(fifsa.encode('utf-8'))

    if proc.returncode == -signal.SIGSEGV:
        print("Error: segmentation fault")

    alndic = get_dic_from_aln(StringIO(outs.decode('utf-8')))
    # os.system("rm " + seqn + ".hits.fsa")
    # os.system("rm " + seqn + ".muscle")
    #    	print "Processing:",k1

    ### get gap position and truncate the alignment###
    ## SAARA: Was getting here a keyerror, but wanted to see if otherwise working so added the exception
    # try:
    #     start, end = get_gap_pos(seqn, alndic)
    # except KeyError:
    #     continue
    start, end = get_gap_pos(seqn, alndic)
    trunc_alndic = cut_gap(alndic, start, end)
    orgscore = pairwise_score(trunc_alndic, seqn, match, mismatch, ngap)
    ### start bootstrap ###
    perdict = {}  # record alignment score for each iteration
    pervote = {}  # record vote after nper bootstrap

    for j in range(nper):
        random_scores = random_aln_score(trunc_alndic, seqn, match, mismatch, ngap)
        perdict[j] = random_scores
        max_score = max(random_scores.values())
        hits_with_max_score = [k3 for k3, v3 in random_scores.items() if v3 == max_score]
        vote_share = 1.0 / len(hits_with_max_score)
        for hit in hits_with_max_score:
            if hit in pervote:
                pervote[hit] += vote_share
            else:
                pervote[hit] = vote_share

    ### normalize vote by total votes ###
    ttlvote = sum(pervote.values())
    for k4, v4 in pervote.items():
        pervote[k4] = v4 / ttlvote * 100
    ###

    votes_by_level = {}
    for level in levels:
        votes_by_level[level] = defaultdict(int)

    for hit in orgscore.keys():
        short_hit_name = hit.split(".")[0]
        if short_hit_name not in acc2tax:
            print("Missing taxonomy info for ", short_hit_name)
            continue
        hit_taxonomy = acc2tax[short_hit_name]
        for level in levels:
            # deal with missing values in the taxonomy
            if level not in hit_taxonomy:
                hit_taxonomy[level] = na_handler.encode_if_na("NA")

            if hit in pervote:
                votes_by_level[level][hit_taxonomy[level]] += pervote[hit]
            else:
                votes_by_level[level][hit_taxonomy[level]] += 0

    votesfile.write(json.dumps({"seqn": seqn, "votes": votes_by_level}) + "\n")

    try:
        outfile.write(seqn + "\t")
        for level in levels:
            levels_votes = votes_by_level[level]
            outfile.write(level + ":" + na_handler.decode_if_na(max(levels_votes, key=levels_votes.get)) + ";")
        outfile.write("\t")
        for level in levels:
            levels_votes = votes_by_level[level]
            outfile.write(level + ":" + str(max(levels_votes.values())) + ";")
        outfile.write("\t" + ";".join(info.hits))
        outfile.write("\n")
    except ValueError as e:
        print(f"ValueError for {seqn}, possibly due to missing votes")
        outfile.write(seqn + "\tUnclassified\n")

for seqn in rejects:
    outfile.write(seqn + "\tUnclassified\n")

outfile.close()
votesfile.close()
library(plyr)
library(dplyr)
library(yaml)
library(dada2)
library(Biostrings)
library(ggplot2)
library(tidyr)
library(tibble)

args <- commandArgs(trailingOnly = T)
config <- read_yaml(args[2])

# Set the different paths for all the supplied libraries
# NOTICE: only forward files given as input
filtFs <- args[3:length(args)]
filtFs_single <- gsub("_1P", "_1U", filtFs)

filtRs <- gsub("_1P", "_2P", filtFs)
filtRs_single <- gsub("_2P", "_2U", filtRs)

outpath <- args[1]

# The parameters nti/ntj, need the chosen nucleotides in a vector format:
nti <- strsplit(config$DADA2$plotERRORS$nti, "")[[1]]
if (length(nti) == 0) {
  message("Default value given to nti, all nucleotide transitions will be shown in the error plot")
  nti <- c("A", "C", "G", "T")
}

ntj <- strsplit(config$DADA2$plotERRORS$ntj, "")[[1]]
if (length(ntj) == 0) {
  message("Default value given to ntj, all nucleotide transitions will be shown in the error plot")
  ntj <- c("A", "C", "G", "T")
}

#For the analysis of Novaseq data, use a different error model, because of binned quality values
#https://github.com/ErnakovichLab/dada2_ernakovichlab/tree/split_for_premise
#https://github.com/benjjneb/dada2/issues/1307 and issue 791

loessErrfun_mod4 <- function(trans) {
  qq <- as.numeric(colnames(trans))
  est <- matrix(0, nrow=0, ncol=length(qq))
  for(nti in c("A","C","G","T")) {
    for(ntj in c("A","C","G","T")) {
      if(nti != ntj) {
        errs <- trans[paste0(nti,"2",ntj),]
        tot <- colSums(trans[paste0(nti,"2",c("A","C","G","T")),])
        rlogp <- log10((errs+1)/tot)  # 1 pseudocount for each err, but if tot=0 will give NA
        rlogp[is.infinite(rlogp)] <- NA
        df <- data.frame(q=qq, errs=errs, tot=tot, rlogp=rlogp)

        # original
        # ###! mod.lo <- loess(rlogp ~ q, df, weights=errs) ###!
        # mod.lo <- loess(rlogp ~ q, df, weights=tot) ###!
        # #        mod.lo <- loess(rlogp ~ q, df)

        # jonalim's solution
        # https://github.com/benjjneb/dada2/issues/938
        mod.lo <- loess(rlogp ~ q, df, weights = log10(tot),degree = 1, span = 0.95)

        pred <- predict(mod.lo, qq)
        maxrli <- max(which(!is.na(pred)))
        minrli <- min(which(!is.na(pred)))
        pred[seq_along(pred)>maxrli] <- pred[[maxrli]]
        pred[seq_along(pred)<minrli] <- pred[[minrli]]
        est <- rbind(est, 10^pred)
      } # if(nti != ntj)
    } # for(ntj in c("A","C","G","T"))
  } # for(nti in c("A","C","G","T"))

  # HACKY
  MAX_ERROR_RATE <- 0.25
  MIN_ERROR_RATE <- 1e-7
  est[est>MAX_ERROR_RATE] <- MAX_ERROR_RATE
  est[est<MIN_ERROR_RATE] <- MIN_ERROR_RATE

  # enforce monotonicity
  # https://github.com/benjjneb/dada2/issues/791
  estorig <- est
  est <- est %>%
    data.frame() %>%
    mutate_all(funs(case_when(. < X40 ~ X40,
                              . >= X40 ~ .))) %>% as.matrix()
  rownames(est) <- rownames(estorig)
  colnames(est) <- colnames(estorig)

  # Expand the err matrix with the self-transition probs
  err <- rbind(1-colSums(est[1:3,]), est[1:3,],
               est[4,], 1-colSums(est[4:6,]), est[5:6,],
               est[7:8,], 1-colSums(est[7:9,]), est[9,],
               est[10:12,], 1-colSums(est[10:12,]))
  rownames(err) <- paste0(rep(c("A","C","G","T"), each=4), "2", c("A","C","G","T"))
  colnames(err) <- colnames(trans)
  # Return
  return(err)
}

# Get sample names
sample.names <- gsub("_1P.fastq.gz", "", basename(filtFs))
message(paste0("Sample ", sample.names, " will be analyzed", collapse = "\n"))

# Assign names to files
names(filtFs) <- sample.names
names(filtRs) <- sample.names
names(filtFs_single) <- sample.names
names(filtRs_single) <- sample.names

allfiles <- list(filtFs, filtRs, filtFs_single, filtRs_single)
names(allfiles) <- c("filtFs", "filtRs", "filtFs_single", "filtRs_single")
errs <- list()
dereps <- list()
dadas <- list()
seqtab <- list()
files_exist <- list()


for (i in 1:4) {

  message("Check that files are not empty")

  files_loop <- c()

  for (j in 1:length(allfiles[[i]])) {
    info <- file.info(allfiles[[i]][j])
    if (!is.na(info$size) & info$size > 20) { # 20 is the size of an empty .gz file
      files_loop <- c(files_loop, allfiles[[i]][j])
    }
  }

  if (is.null(files_loop)) {
    files_exist[i] <- list(NULL)
  } else {
    files_exist[[i]] <- files_loop
  }
}

# Update sample names with existing paired files
sample.names<-names(files_exist[[1]])

# Loop through all file types (forward, reverse, unpaired forward, unpaired reverse) for learning errors and dereplicating
for (i in 1:4) {

  if (length(files_exist[[i]])!=0) {        #any(file.exists(files_exist[[i]]))

    message(paste("learning error rates of files:", i, ": (1) forward paired (2) reverse paired (3) forward single and (4) reverse single " , sep=" "))

    #Add here a more generic character matching
    if (config$meta$sequencing$seq_meth=="NovaSeq6000"){ 
    message("learning error rates using a modified error model for NovaSeq data")

    errs[[i]] <- learnErrors(files_exist[[i]], #allfiles[[i]][file.exists(allfiles[[i]])]
                            multithread = config$DADA2$learnERRORS$multithread,
                            nbases = as.numeric(config$DADA2$learnERRORS$nbases),
                            randomize = config$DADA2$learnERRORS$randomize,
                            MAX_CONSIST = as.numeric(config$DADA2$learnERRORS$MAX_CONSIST),
                            OMEGA_C = as.numeric(config$DADA2$learnERRORS$OMEGA_C),
                            verbose = config$DADA2$learnERRORS$verbose,
                            errorEstimationFunction = loessErrfun_mod4,
                            )


    } else {


    errs[[i]] <- learnErrors(files_exist[[i]], #allfiles[[i]][file.exists(allfiles[[i]])]
                            multithread = config$DADA2$learnERRORS$multithread,
                            nbases = as.numeric(config$DADA2$learnERRORS$nbases),
                            randomize = config$DADA2$learnERRORS$randomize,
                            MAX_CONSIST = as.numeric(config$DADA2$learnERRORS$MAX_CONSIST),
                            OMEGA_C = as.numeric(config$DADA2$learnERRORS$OMEGA_C),
                            verbose = config$DADA2$learnERRORS$verbose)

    }

    message("Making error estimation plots of reads")
    png(filename = paste0(outpath, "06-report/dada2/error_profile_", names(allfiles)[i], ".png"))
      p_ERR <- plotErrors(
        errs[[i]],
        nti = nti,
        ntj = ntj,
        obs = config$DADA2$plotERRORS$obs,
        err_out = config$DADA2$plotERRORS$err_out,
        err_in = config$DADA2$plotERRORS$err_in,
        nominalQ = config$DADA2$plotERRORS$nominalQ)
      print(p_ERR)
    dev.off()

    message("Running dereplication of reads")
    #print(paste("files that exist:", files_exist[[i]]))
    dereps[[i]] <- derepFastq(files_exist[[i]], #allfiles[[i]][file.exists(allfiles[[i]])]
                              n = as.numeric(config$DADA2$derepFastq$num),
                              config$DADA2$learnERRORS$verbose)

    message("Running dada on reads")
    dadas[[i]] <- dada(dereps[[i]],
                      errs[[i]],
                      selfConsist = config$DADA2$dada$selfConsist,
                      pool = config$DADA2$dada$pool,
                      priors = config$DADA2$dada$priors,
                      multithread = config$DADA2$learnERRORS$multithread,
                      verbose = config$DADA2$learnERRORS$verbose)

    # Convert to list in case there's only one file
    if (is(dadas[[i]], "dada")) {
      message("Convert to list in case there's only one file")
      dadas[[i]] <- list(dadas[[i]])
      names(dadas[[i]]) <- names(files_exist[[i]])#allfiles[[i]][file.exists(allfiles[[i]])])
    }
    if (is(dereps[[i]], "derep")) {
      dereps[[i]] <- list(dereps[[i]])
      names(dereps[[i]]) <- names(files_exist[[i]])#allfiles[[i]][file.exists(allfiles[[i]])])
    }

  # If no files found for the paired reads (should not be the case!)
  } else if (!grepl("single", names(allfiles)[i])) {

    message("Error: no paired reads to process")
    stop()

  # If no files found for the unpaired reads, continue with the workflow
  # It has to be made sure in the snakefile that this step is run despite not requiring output files
  } else {

    message("No further unpaired reads to process")

  }
}

# Merge forward and reverse paired reads
message("Attempting merge")

# Define function for merging and formatting seqtabs: next steps require an integer matrix
merge_format_seqtab <- function(seqtab1, seqtab2) {
  df1 <- reshape2::melt(seqtab1, varnames = c("sample", "sequence"))
  df2 <- reshape2::melt(seqtab2, varnames = c("sample", "sequence"))
  df <- bind_rows(df1, df2) %>%
    group_by(sample, sequence) %>%
    summarize(value = sum(value)) %>%
    ungroup()
  m <- reshape2::acast(df, sample ~ sequence, value.var = "value")
  mode(m) <- "integer"
  return(m)
}

if (config$DADA2$mergePairs$include) {
  message("merging pairs")
  mergers <- mergePairs(dadas[[1]],
                        dereps[[1]],
                        dadas[[2]],
                        dereps[[2]],
                        minOverlap = as.numeric(config$DADA2$mergePairs$minOverlap),
                        maxMismatch = as.numeric(config$DADA2$mergePairs$maxMismatch),
                        returnRejects = config$DADA2$mergePairs$returnRejects,
                        propagateCol = config$DADA2$mergePairs$propagateCol,
                        justConcatenate = config$DADA2$mergePairs$justConcatenate,
                        trimOverhang = config$DADA2$mergePairs$trimOverhang,
                        verbose = config$DADA2$learnERRORS$verbose)
  if (is.data.frame(mergers)) {
    mergers <- list(mergers) %>% setNames(sample.names)
  }

  # Create read/ASV mapping
  mergers_all <- mergePairs(dadas[[1]], dereps[[1]], dadas[[2]], dereps[[2]],
    minOverlap = as.numeric(config$DADA2$mergePairs$minOverlap), maxMismatch = as.numeric(config$DADA2$mergePairs$maxMismatch),
    returnRejects = TRUE, propagateCol = config$DADA2$mergePairs$propagateCol,
    justConcatenate = config$DADA2$mergePairs$justConcatenate, trimOverhang = config$DADA2$mergePairs$trimOverhang,
    verbose = config$DADA2$learnERRORS$verbose)
  if (is.data.frame(mergers_all)) {
    mergers_all <- list(mergers_all) %>% setNames(sample.names)
  }

  mapping <- lapply(names(mergers_all), function(name) {
    merger_to_dada <- lapply(mergers_all[[name]]$forward, function(x) {
      which(dadas[[1]][[name]]$map == x)
    })
    merger_to_read <- lapply(merger_to_dada, function(x) {
      which(dereps[[1]][[name]]$map %in% x)
    })
    names(merger_to_read) <- mergers_all[[name]]$sequence
    merger_to_read
  })
  names(mapping) <- names(mergers_all)
  for (name in names(mapping)) {
    fq <- microseq::readFastq(filtFs[[name]])
    headers <- sub(" .*", "", fq$Header)
    mapping[[name]] <- mapping[[name]] %>%
      enframe(name = "sequence", value = "read") %>%
      unnest() %>%
      filter(sequence != "") %>%
      mutate(read = headers[read]) %>%
      arrange(sequence, read)
  }

  seqtab <- makeSequenceTable(mergers)

  # When merging is done with returnRejects=TRUE, the abundance of the rejected merges is returned, but not the sequence
  # We want to collect also these single sequences and add them to the seqtab (to avoid losing ANY data)
  if (config$DADA2$mergePairs$returnRejects == TRUE) {

    unmerged_f <- list()
    unmerged_r <- list()
    concatenated <- list()

    for (i in 1:length(sample.names)) {
      unmerged_f[[i]] <- dadas[[1]][[sample.names[i]]]$sequence[mergers[[sample.names[i]]]$forward[!mergers[[sample.names[i]]]$accept]]
      unmerged_r[[i]] <- dadas[[2]][[sample.names[i]]]$sequence[mergers[[sample.names[i]]]$reverse[!mergers[[sample.names[i]]]$accept]]
      # Here for the rejected reads (!merger$sample$accept) the indices are collected (merger$sample$forward, merger$sample$reverse)
      # The sequences are sourced from the original dada-file (dadaF$sample$denoised, dadaR$sample$denoised)
      # It seems that concatenating these reads and keeping them for further analyses can result in better taxonomic coverage (Dacey et al. 2021 https://doi.org/10.1186/s12859-021-04410-2)
      # Abundances for these reads is taken from the merged abundances.
      # reverse complement reverse reads so that the following taxonomic assignment will work optimally.
      unmerged_r[[i]] <- sapply(sapply(sapply(unmerged_r[[i]], DNAString), Biostrings::reverseComplement), toString)
      sequence <- paste0(unmerged_f[[i]], unmerged_r[[i]])
      abundance <- mergers[[sample.names[i]]]$abundance[!mergers[[sample.names[i]]]$accept]
      concatenated[[i]] <- tibble(sequence, abundance)
    }
    names(concatenated) <- sample.names
    #names(unmerged_r)=sample.names

    # Make sequence table
    seqtab_unmerged <- makeSequenceTable(concatenated)

    # The merged returnrejects = T seqtab also contains a column with an empty header,
    # This is all rejected (non-merged) abundances combined.
    # This column messes with future steps, so we want to remove it.
    # We have instead collected the abundances of the unmerged reads to add to the table with sequences.
    #seqtab <- seqtab[,-which(colnames(seqtab) == "")]

    # Merge with paired reads and format for the next steps (integer matrix)
    seqtab <- merge_format_seqtab(seqtab, seqtab_unmerged)

  }

} else {

  message("no merging of paired reads")
  # Combine forward and reverse sequences to one table
  seqtab1 <- makeSequenceTable(dadas[[1]])
  seqtab2 <- makeSequenceTable(dadas[[2]])
  # Reverse complement reverse reads so that the following taxonomic assignment will work optimally.
  colnames(seqtab2) <- sapply(sapply(sapply(colnames(seqtab2), DNAString), Biostrings::reverseComplement), toString)
  seqtab <- cbind(seqtab1, seqtab2)

}

# Add ASVs from single reads to full table, and format table to the right format to continue with the pipeline
# It has to be an integer matrix with samples as rownames and sequences as column names
if (length(files_exist[[3]])!=0) {
  message("Adding ASVs from unpaired forward reads to ASV-table")
  seqtab3 <- makeSequenceTable(dadas[[3]])
  seqtab <- merge_format_seqtab(seqtab, seqtab3)
}

if (length(files_exist[[4]])!=0) {
  message("Adding ASVs from unpaired reverse reads to ASV-table")
  seqtab4 <- makeSequenceTable(dadas[[4]])
  # Here also the sequences from the reverse reads are reverse complemented before they are added to the sequence table
  colnames(seqtab4) <- sapply(sapply(sapply(colnames(seqtab4), DNAString), reverseComplement), toString)
  seqtab <- merge_format_seqtab(seqtab, seqtab4)
}

# Chimeras removed from the full combined table as per: https://github.com/benjjneb/dada2/issues/1235:
# We think the best way (in most cases, using current common techs -- such is the challenge of recommendations) is to combine the tables from multiple runs and then remove chimeras on that combined table.
# Look into details of how chimera removal is done to understand if this is smart
# In their case they are looking at equal length reads, possibly the single reads and paired reads should not be combined for chimera removal?

message("removing chimeras")
seqtab.nochim <- removeBimeraDenovo(seqtab,
                                    method = config$DADA2$removeBimeraDenovo$method,
                                    multithread = config$DADA2$learnERRORS$multithread,
                                    verbose = config$DADA2$learnERRORS$verbose)

print(dim(seqtab.nochim))
# ASVs denominated by the actual sequence, we want to simplify the names.
new.names <- c(paste("asv.", 1:length(colnames(seqtab.nochim)), sep = ""))
message(head(new.names))

# Save fasta, before changing names in the seqtab table
message("Making fasta table")
uniquesToFasta(seqtab.nochim, fout = paste0(outpath, "03-dada2/rep-seqs.fna"), ids = new.names)

# Export read/asv mapping
if (exists("mapping")) {
  message("Exporting read/asv mapping")
  asv_sequences <- data.frame(asv = new.names, sequence = colnames(seqtab.nochim))
  for (name in names(mapping)) {
    mapping[[name]] <- mapping[[name]] %>%
      left_join(asv_sequences, by = "sequence") %>%
      select(read, asv) %>%
      filter(!is.na(asv))
    dir.create(file.path(paste0(outpath, "03-dada2/mapping/")))
    dir.create(file.path(paste0(outpath, "03-dada2/mapping/", name)))
    write.table(mapping[[name]], paste0(outpath, "03-dada2/mapping/", name, "/", name, "_mapping.txt"), sep = "\t", col.names = TRUE, row.names = FALSE, quote = FALSE)
  }
}

# Finally, change names in otu table
message("Changing sequence names")
colnames(seqtab.nochim) <- new.names
#Replace any NAs that are possibly in the seqtab with 0s
seqtab.nochim[is.na(seqtab.nochim)] <- 0


# This show sequence length distributions (see if you should include this)
#seq_hist <- table(nchar(getSequences(seqtab)))
#fname_seqh <- paste(args[6],"seq_hist.txt",sep="")
#write.table(seq_hist, file = fname_seqh  , sep = "\t", quote=FALSE, col.names = FALSE)

# Collect results of how many reads are available at each step in a table:
getN <- function(x) sum(getUniques(x))

# Make a table with all information on the reads retained from the run, if paired reads were merged:
message("Making summary table")
if (config$DADA2$mergePairs$include) {
  track <- cbind(sapply(dadas[[1]], getN), sapply(dadas[[2]], getN), sapply(mergers, getN), rowSums(seqtab.nochim), rowSums(seqtab.nochim != 0))
  colnames(track) <- c("denoisedF", "denoisedR", "merged", "nonchim", "ASVs")
# If paired reads were not merged:
} else {
  track <- cbind(sapply(dadas[[1]], getN), sapply(dadas[[2]], getN), rowSums(seqtab.nochim), rowSums(seqtab.nochim != 0))
  colnames(track) <- c("denoisedF", "denoisedR", "nonchim", "ASVs")
}

# Add also information on the reads that came from possibly evaluated single reads
if (length(files_exist[[3]])!=0) {
  message("Adding info from unpaired forward reads to the summary table")
  denoisedF_single <- sapply(dadas[[3]], getN)
  track <- merge(track, data.frame(denoisedF_single), by = 0, all = TRUE)
  rownames(track) <- track$Row.names
  track <- subset(track, select = -Row.names)
  # Reorder columns:
  track <- track %>% relocate(denoisedF_single, .before = nonchim)
}

if (length(files_exist[[4]])!=0) {
  message("Adding info from unpaired reverse reads to the summary table")
  denoisedR_single <- sapply(dadas[[4]], getN)
  track <- merge(track, data.frame(denoisedR_single), by = 0, all = TRUE)
  rownames(track) <- track$Row.names
  track <- subset(track, select = -Row.names)
  track <- track %>% relocate(denoisedR_single, .before = nonchim)
}

rownames(track)
#rownames(track) <- sample.names
#message(track)

# Read results of filtering step and append the results of ASV step:
out <- read.table(paste0(outpath, "06-report/dada2/dada2_filtering_stats.txt"), header = TRUE)
#track <- cbind(out, track)
track <- merge(out, track, by = 0, all = TRUE)
rownames(track) <- track$Row.names
track <- subset(track, select = -Row.names)


# Write output tables of reads and the otu_table:
write.table(track, paste0(outpath, "06-report/dada2/dada2_stats.txt"), row.names = TRUE, col.names = TRUE, quote = FALSE)
write.table(t(seqtab.nochim), paste0(outpath,"03-dada2/seqtab-nochim.txt"), sep = "\t", row.names = TRUE, col.names = NA, quote = FALSE)

# Fix for https://github.com/tidyverse/ggplot2/issues/2787
if (file.exists("Rplots.pdf")) {
  file.remove("Rplots.pdf")
}
library(yaml)
library(dada2)
#library(Biostrings)
library(ggplot2)

args <- commandArgs(trailingOnly = TRUE)
# Add all arguments for dada2 parameters from configfile!
config <- read_yaml(args[2])

# Set the different paths for all the supplied libraries
paths <- args[3:length(args)]
#print(paths)

outpath <- args[1]
#print(outpath)

# Set the different paths for all the supplied libraries
# NOTICE: only forward files given as input
filesForw <- paths
filesRev <- gsub("_1P", "_2P", filesForw)

filesForw_single <- gsub("_1P", "_1U", filesForw)
filesRev_single <- gsub("_1P", "_2U", filesForw)

# Get sample names
#sample.names <- gsub("_1P.fastq.gz", "", basename(filesForw))
#message(paste("Sample", sample.names, "will be analysed", collapse = "\n"))
sample.names <- list()

# Make folder for quality plots, and report
dir.create(paste0(outpath, "03-dada2/quality"), recursive = TRUE)
dir.create(paste0(outpath, "06-report/dada2/"))

allfiles <- list(filesForw, filesRev, filesForw_single, filesRev_single)
names(allfiles) <- c("filesForw", "filesRev", "filesForw_single", "filesRev_single")
files_exist <- list()
filts <- list()
quals <- list()

# loop through 1. forward paired, 2. reverse paired 3. forward single 4. reverse single reads
# First check that files are not empty (cutadapt returns empty files for those that don't pass any filters.)
# Filter and trim does not make empty files that don't pass filters, so make these separately.
message("Analyse different files group separately")

for (i in 1:4) {

  message("Check that files are not empty")

  files_loop <- c()

  for (j in 1:length(allfiles[[i]])) {
    info <- file.info(allfiles[[i]][j])
    if (!is.na(info$size) & info$size > 20) { # 20 is the size of an empty .gz file
      files_loop <- c(files_loop, allfiles[[i]][j])
    }
  }

  if (is.null(files_loop)) {
    files_exist[i] <- list(NULL)
  } else {
    files_exist[[i]] <- files_loop
  }

  if (length(files_exist[[i]]) != 0) {
    message(paste("making quality plots of raw reads:", i, ": (1) forward paired (2) reverse paired (3) forward single and (4) reverse single " , sep = " "))

    sample.names[[i]] <- gsub("_1P.fastq.gz", "", basename(files_exist[[i]]))
    sample.names[[i]] <- gsub("_2P.fastq.gz", "", sample.names[[i]])
    sample.names[[i]] <- gsub("_1U.fastq.gz", "", sample.names[[i]])
    sample.names[[i]] <- gsub("_2U.fastq.gz", "", sample.names[[i]])

    message(paste(names(allfiles)[i], "reads of sample", sample.names[[i]], "will be analysed", collapse = "\n"))

    quals[[i]] <- gsub("02-cutadapt/", "03-dada2/quality/", files_exist[[i]])
    quals[[i]] <- gsub(".fastq.gz", ".png", quals[[i]])

    plot_list <- list()
    message("Making quality plots")
    for (j in 1:length(quals[[i]])) {
      dir.create(dirname(quals[[i]][j]), showWarnings = FALSE)
      p <- plotQualityProfile(files_exist[[i]][j])
      ggsave(quals[[i]][j], plot = p, dpi = 150, width = 10, height = 10, units = "cm")
    }

    #Make aggregate plot of all reads
    print(plotQualityProfile(files_exist[[i]], aggregate = T))
    ggsave(paste0(outpath,"06-report/dada2/aggregate_quality_profiles_", names(allfiles)[i], ".png"), dpi = 300, width = 10, height = 10, units = "cm")

    #Create path and file names for filtered samples"
    filts[[i]] <- gsub("02-cutadapt/", "03-dada2/filtered/", files_exist[[i]])

    #assign names to files
    names(filts[[i]]) <- sample.names[[i]]

  } else {

    message(paste("No reads to process for", names(allfiles)[i], sep=" "))

  }
}

# Run filtering on reads:
# Paired reads together and single reads separately
if (config$meta$sequencing$lib_layout=="Paired") {
  message("Filtering and Trimming paired reads based on parameter set in the config file")

  out <- filterAndTrim(files_exist[[1]], filts[[1]], files_exist[[2]], filts[[2]],
    truncLen = c(config$DADA2$filterAndTrim$Trunc_len_f,config$DADA2$filterAndTrim$Trunc_len_r),
    truncQ = config$DADA2$filterAndTrim$TruncQ,
    trimRight = config$DADA2$filterAndTrim$Trim_right,
    trimLeft = config$DADA2$filterAndTrim$Trim_left,
    maxLen = config$DADA2$filterAndTrim$maxLen,
    minLen = config$DADA2$filterAndTrim$minLen,
    maxN = config$DADA2$filterAndTrim$maxN,
    minQ = config$DADA2$filterAndTrim$minQ,
    maxEE = config$DADA2$filterAndTrim$MaxEE,
    rm.phix = config$DADA2$filterAndTrim$Rm.phix,
    orient.fwd = config$DADA2$filterAndTrim$orient.fwd,
    matchIDs = config$DADA2$filterAndTrim$matchIDs,
    id.sep = config$DADA2$filterAndTrim$id.sep,
    id.field = config$DADA2$filterAndTrim$id.field,
    compress = config$DADA2$filterAndTrim$compress,
    multithread = config$DADA2$filterAndTrim$multithread,
    n = config$DADA2$filterAndTrim$num,
    OMP = config$DADA2$filterAndTrim$OMP,
    verbose = config$DADA2$filterAndTrim$verbose
  )

  # Write out to save the effect of filtering on the reads:
  rownames(out) <- sample.names[[1]]
  out <- as.data.frame(out)
  colnames(out)[2] <- "reads.out.paired"
  stats_reads <- out
  #write.table(out, paste0(outpath, "06-report/dada2/dada2_filtering_stats_paired_reads.txt"), row.names = TRUE, col.names = TRUE, quote = FALSE)

  qualfiltsFs <- gsub(".png", "_filtered.png", quals[[1]])
  qualfiltsRs <- gsub(".png", "_filtered.png", quals[[2]])

  # Make quality profile plots.
  message("Making quality plots of the filtered reads")
  filts_passedF <- c()
  filts_passedR <- c()
  plot_list <- list()

  for (j in 1:length(qualfiltsFs)) {
    if (file.exists(filts[[1]][j])) {
      filts_passedF <- c(filts_passedF, filts[[1]][j])
      dir.create(dirname(qualfiltsFs[j]), showWarnings = FALSE)
      p <- plotQualityProfile(filts[[1]][j])
      ggsave(qualfiltsFs[j], plot = p, dpi = 150, width = 10, height = 10, units = "cm")
    }
    if (file.exists(filts[[2]][j])) {
      filts_passedR <- c(filts_passedR, filts[[2]][j])
      dir.create(dirname(qualfiltsRs[j]), showWarnings = FALSE)
      q <- plotQualityProfile(filts[[2]][j])
      ggsave(qualfiltsRs[j], plot = q, dpi = 150, width = 10, height = 10, units = "cm")
    }
  }

  print(plotQualityProfile(filts_passedF, aggregate = T))
  ggsave(paste0(outpath, "06-report/dada2/aggregate_quality_profiles_paired_filtered_forward.png"), dpi = 300, width = 10, height = 10, units = "cm")
  print(plotQualityProfile(filts_passedR, aggregate = T))
  ggsave(paste0(outpath, "06-report/dada2/aggregate_quality_profiles_paired_filtered_reverse.png"), dpi = 300, width = 10, height = 10, units = "cm")

} else {

  message("Paired read files will be analysed in single-end mode")

  # Append the paired files (1P and 2P) to the single-read lists so that they are analysed in the same workflow
  # Forward reads:
  if (length(files_exist[[3]]) != 0) {
    files_exist[[3]] <- c(files_exist[[1]], files_exist[[3]])
    filts[[3]] <- c(filts[[1]], filts[[3]])
    quals[[3]] <- c(quals[[1]], quals[[3]])
    sample.names[[3]] <- c(sample.names[[1]], sample.names[[3]])
  } else {
    files_exist[[3]] <- files_exist[[1]]
    filts[[3]] <- filts[[1]]
    quals[[3]] <- quals[[1]]
    sample.names[[3]] <- sample.names[[1]]
  }

  # Reverse reads:
  if (length(files_exist[[4]]) != 0) {
    files_exist[[4]] <- c(files_exist[[2]], files_exist[[4]])
    filts[[4]] <- c(filts[[2]], filts[[4]])
    quals[[4]] <- c(quals[[2]], quals[[4]])
    sample.names[[4]] <- c(sample.names[[2]], sample.names[[4]])
  } else {
    files_exist[[4]] <- files_exist[[2]]
    filts[[4]] <- filts[[2]]
    quals[[4]] <- quals[[2]]
    sample.names[[4]] <- sample.names[[2]]
  }

}

# Same for single reads:

message("Filtering and Trimming unpaired forward reads based on parameter set in the config file")

if (length(files_exist[[3]]) != 0) {

  #filts[[3]] <- filts[[3]][file.exists(filts[[3]])]
  out <- filterAndTrim(files_exist[[3]], filts[[3]],
    truncLen = config$DADA2$filterAndTrim$Trunc_len_f,
    truncQ = config$DADA2$filterAndTrim$TruncQ,
    trimRight = config$DADA2$filterAndTrim$Trim_right,
    trimLeft = config$DADA2$filterAndTrim$Trim_left,
    maxLen = config$DADA2$filterAndTrim$maxLen,
    minLen = config$DADA2$filterAndTrim$minLen,
    maxN = config$DADA2$filterAndTrim$maxN,
    minQ = config$DADA2$filterAndTrim$minQ,
    maxEE = config$DADA2$filterAndTrim$MaxEE,
    rm.phix = config$DADA2$filterAndTrim$Rm.phix,
    orient.fwd = config$DADA2$filterAndTrim$orient.fwd,
    matchIDs = config$DADA2$filterAndTrim$matchIDs,
    id.sep = config$DADA2$filterAndTrim$id.sep,
    id.field = config$DADA2$filterAndTrim$id.field,
    compress = config$DADA2$filterAndTrim$compress,
    multithread = config$DADA2$filterAndTrim$multithread,
    n = config$DADA2$filterAndTrim$num,
    OMP = config$DADA2$filterAndTrim$OMP,
    verbose = config$DADA2$filterAndTrim$verbose
  )

  # Write out to save the effect of filtering on the reads:
  rownames(out) <- sample.names[[3]]
  out <- as.data.frame(out)
  colnames(out) <- c("reads.in.forward.single", "reads.out.forward.single")

  if (config$meta$sequencing$lib_layout == "Paired") {
    stats_reads <- cbind(stats_reads, out[match(rownames(stats_reads), rownames(out)),])
    #stats_reads$reads.out.forward.single = out$reads.out.forward.single[match(rownames(stats_reads), rownames(out))]
    #write.table(out, paste0(outpath, "06-report/dada2/dada2_filtering_stats_unpaired_forward_reads.txt"), row.names = TRUE, col.names = TRUE, quote = FALSE)
  } else {
    stats_reads <- out
  }


  qualfiltsFs_single <- gsub(".png", "_filtered.png", quals[[3]])

  # Make quality profile plots.
  message("Making quality plots of the filtered reads")
  filts_passedF <- c()
  plot_list <- list()

  for (j in 1:length(qualfiltsFs_single)) {
    if (file.exists(filts[[3]][j])) {
      filts_passedF <- c(filts_passedF, filts[[3]][j])
      dir.create(dirname(qualfiltsFs_single[j]), showWarnings = FALSE)
      p <- plotQualityProfile(filts[[3]][j])
      ggsave(qualfiltsFs_single[j], plot = p, dpi = 150, width = 10, height = 10, units = "cm")
    }
  }

  if (length(filts_passedF) != 0) {
    print(plotQualityProfile(filts_passedF, aggregate = T))
    ggsave(paste0(outpath, "06-report/dada2/aggregate_quality_profiles_filtered_unpaired_forward.png"), dpi = 300, width = 10, height = 10, units = "cm")
  }

}

message("Filtering and Trimming unpaired reverse reads based on parameter set in the config file")

if (length(files_exist[[4]]) != 0) {

  out <- filterAndTrim(files_exist[[4]], filts[[4]],
    truncLen = config$DADA2$filterAndTrim$Trunc_len_r,
    truncQ = config$DADA2$filterAndTrim$TruncQ,
    trimRight = config$DADA2$filterAndTrim$Trim_right,
    trimLeft = config$DADA2$filterAndTrim$Trim_left,
    maxLen = config$DADA2$filterAndTrim$maxLen,
    minLen = config$DADA2$filterAndTrim$minLen,
    maxN = config$DADA2$filterAndTrim$maxN,
    minQ = config$DADA2$filterAndTrim$minQ,
    maxEE = config$DADA2$filterAndTrim$MaxEE,
    rm.phix = config$DADA2$filterAndTrim$Rm.phix,
    orient.fwd = config$DADA2$filterAndTrim$orient.fwd,
    matchIDs = config$DADA2$filterAndTrim$matchIDs,
    id.sep = config$DADA2$filterAndTrim$id.sep,
    id.field = config$DADA2$filterAndTrim$id.field,
    compress = config$DADA2$filterAndTrim$compress,
    multithread = config$DADA2$filterAndTrim$multithread,
    n = config$DADA2$filterAndTrim$num,
    OMP = config$DADA2$filterAndTrim$OMP,
    verbose = config$DADA2$filterAndTrim$verbose
  )

  # Write out to save the effect of filtering on the reads:
  rownames(out) <- sample.names[[4]]
  out <- as.data.frame(out)
  colnames(out) <- c("reads.in.reverse.single", "reads.out.reverse.single")
  stats_reads <- cbind(stats_reads, out[match(rownames(stats_reads), rownames(out)),])
  #stats_reads$reads.out.reverse.single = out$reads.out.reverse.single[match(rownames(stats_reads), rownames(out))]
  #write.table(out, paste0(outpath, "06-report/dada2/dada2_filtering_stats_unpaired_reverse_reads.txt"), row.names = TRUE, col.names = TRUE, quote = FALSE)

  qualfiltsRs_single <- gsub(".png", "_filtered.png", quals[[4]])

  # Make quality profile plots.
  message("Making quality plots of the filtered reads")
  filts_passedR <- c()
  plot_list <- list()


  for (j in 1:length(qualfiltsRs_single)) {
    if (file.exists(filts[[4]][j])){
      filts_passedR <- c(filts_passedR, filts[[4]][j])
      dir.create(dirname(qualfiltsRs_single[j]), showWarnings = FALSE)
      p <- plotQualityProfile(filts[[4]][j])
      ggsave(qualfiltsRs_single[j], plot = p, dpi = 150, width = 10, height = 10, units = "cm")
    }
  }

  if (length(filts_passedR) != 0) {
    print(plotQualityProfile(filts_passedR, aggregate = T))
    ggsave(paste0(outpath, "06-report/dada2/aggregate_quality_profiles_filtered_unpaired_reverse.png"), dpi = 300, width = 10, height = 10, units = "cm")
  }
}

write.table(stats_reads, paste0(outpath, "06-report/dada2/dada2_filtering_stats.txt"), row.names = TRUE, col.names = TRUE, quote = FALSE)

# Fix for https://github.com/tidyverse/ggplot2/issues/2787
if (file.exists("Rplots.pdf")) {
  file.remove("Rplots.pdf")
}

# Make empty files for those samples that did not pass filtering (no files are created for them)

## filterAndTrim does not raise an error if all reads of a sample have been
## eliminated, it simply writes no output file in this case. Check for the
## output files, and if they don't exist, create empty ones.

for (i in 1:length(filts)) {
  for (fn in filts[[i]]) {
    if (!file.exists(fn)) {
      cat(gettextf('creating empty file %s\n', fn))
      gzf <- gzfile(fn)
      cat('', file = gzf, fill = FALSE)
      close(gzf)
    }
  }
}
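
# The R script below (presumably format_for_dwc_new.R, invoked by a shell command
# near the end of this page) merges the ASV table, taxonomy table, representative
# sequences and sample metadata into a phyloseq object and exports DwC-A compatible
# Occurrence and DNA-derived-data extension tables.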
library("stringr")
library("phyloseq")
library("dplyr")
library("xml2")
library("yaml")

# Parse arguments
args <- commandArgs(trailingOnly = T)
message("args <- ", capture.output(dput(args))) # output for debugging

outpath <- args[1]
tax_file_path <- args[3]
otu_file_path <- args[2]
rep_seqs_path <- args[4]
sample_file_path <- args[5]
config_path <- args[6]

# LOAD data (this will be done from input later on)

tax_file <- read.csv(tax_file_path, sep = "\t", header = T, row.names = 1)
otu_file <- read.csv(otu_file_path, sep = "\t", header = T, row.names = 1, check.names = F)
Rep_seqs <- Biostrings::readDNAStringSet(rep_seqs_path)
sample_file <- read.csv(sample_file_path, sep = ";", header = T, row.names = 1)
config <- read_yaml(config_path)

# TODO: Add checks!
# 1. Make sure that the values are given
# 2. Make sure that the sample names in the otu-file (the original sample table), and the sample_data match!
# 3. Make sure the required fields are not empty
#       basisOfRecord, eventDate, occurrenceStatus (what about latitude and longitude?)

########################### 2. Add user provided fields to sample data ############################################################################################

# First change empty values to NA
#args[args[5:15] == "None"] <- NA

sample_file$target_gene <- config$meta$sequencing$target_gene
sample_file$subfragment <- config$meta$sequencing$subfragment 
sample_file$pcr_primer_forward <- config$meta$sequencing$pcr_primer_forward 
sample_file$pcr_primer_reverse <- config$meta$sequencing$pcr_primer_reverse 
sample_file$pcr_primer_name_forward <- config$meta$sequencing$pcr_primer_name_forw 
sample_file$pcr_primer_name_reverse <- config$meta$sequencing$pcr_primer_name_reverse 
sample_file$pcr_primer_reference <- config$meta$sequencing$pcr_primer_reference 
sample_file$lib_layout <- config$meta$sequencing$lib_layout
sample_file$seq_meth <- config$meta$sequencing$seq_meth 
sample_file$sop <- config$meta$sequencing$sop 
sample_file$votu_db <- config$DATABASE$name 

# Addition of possible extra fields:
extra_fields <- config$meta$sequencing$extra_fields
args_name_value <- data.frame(command = 1, value = 1)

if (!is.null(extra_fields) && extra_fields != "None") {
  extra_args <- str_split(extra_fields, ",", simplify = T)
  for (i in 1:length(extra_args)) {
    args_name_value[i,] <- str_split(extra_args[i], ":", simplify = T)
  }
  for(i in 1:nrow(args_name_value)) {
    sample_file[,args_name_value[i, 1]] <- args_name_value[i, 2]
  }
}

########################### 3. Collect all values to phyloseq object ######################################

tax_table <- phyloseq::tax_table(as(tax_file, "matrix"))
otu_table <- phyloseq::otu_table(otu_file, taxa_are_rows = T)
sample_data <- phyloseq::sample_data(sample_file)

# Here I make a phyloseq object with the three files
phydata <- phyloseq::phyloseq(otu_table, tax_table, sample_data)
phydata <- phyloseq::merge_phyloseq(phydata, Rep_seqs)
# Print the amount of information stored:
phydata

# Save the phyloseq rdata object for easier access in the future
print("Saving the phyloseq table to an Rdata object, for ease of access for data analysis later")
print("The object can be loaded with readRDS, while the phyloseq package and library is loaded")
print(paste("The saved object can be found here: ", outpath, "phyloseq_object.rds", sep = ""))
saveRDS(phydata, paste0(outpath, "phyloseq_object.rds"))

# Remove the OTUs that are found in the control samples (occurrenceStatus==absent)
if ("absent" %in% sample_data$occurrenceStatus) {
  control_taxa <- taxa_names(filter_taxa(subset_samples(phydata, occurrenceStatus == "absent"), function(x) sum(x) > 0, TRUE))
  good_taxa <- taxa_names(phydata)[!(taxa_names(phydata) %in% control_taxa)]
  phydata_no_control <- prune_taxa(good_taxa, phydata)
}

# Here add the total read counts in each sample to the sample_data table:
# Here we should add a check that the samples are in the right order:
if ("absent" %in% sample_data$occurrenceStatus) {
  sample_data(phydata_no_control)$sampleSizeValue <- sample_sums(phydata)
  sample_data(phydata_no_control)$organismQuantityType <- "DNA Sequence reads"
  sample_data(phydata_no_control)$sampleSizeUnit <- "DNA Sequence reads"
  phydf <- psmelt(phydata_no_control)
} else {
  sample_data(phydata)$sampleSizeValue <- sample_sums(phydata)
  sample_data(phydata)$organismQuantityType <- "DNA Sequence reads"
  sample_data(phydata)$sampleSizeUnit <- "DNA Sequence reads"
  phydf <- psmelt(phydata)
}

phydf$occurrenceID <- paste(phydf$OTU, phydf$Sample, sep = "_")
phydf$materialSampleID <- phydf$Sample

# Change names where necessary
#names(phydf[names(phydf)=="lastvalue"])="ScientificName"
phydf <- phydf %>%
  rename(organismQuantity = Abundance)
  # %>%
  # add_column(organismQuantityType = "DNA Sequence reads") %>%
  # add_column(sampleSizeUnit = "DNA Sequence reads")

# Remove 0 Abundance data (not valuable for us)
phydf_present <- phydf[phydf$organismQuantity > 0,]

# Write tables with all the fields found in the current tables:

get_dwc_fields <- function(spec_url) {
  doc <- read_xml(spec_url)
  doc %>%
    xml_ns_strip() %>%
    xml_find_all("//property") %>%
    xml_attr(attr = "name")
}

spec_occurrence <- "https://rs.gbif.org/core/dwc_occurrence_2022-02-02.xml"
occurrence_table_fields <- get_dwc_fields(spec_occurrence)

spec_dna <- "https://rs.gbif.org/extension/gbif/1.0/dna_derived_data_2021-07-05.xml"
DNA_extension_fields <- get_dwc_fields(spec_dna)

occurrence_table <- phydf_present[,colnames(phydf_present) %in% occurrence_table_fields]
DNA_derived_data_extension <- phydf_present[,colnames(phydf_present) %in% c("occurrenceID", DNA_extension_fields)]

write.table(occurrence_table, paste0(outpath, "Occurrence_table.tsv"), sep = "\t", row.names = FALSE, col.names = TRUE, quote = FALSE, na = "")
write.table(DNA_derived_data_extension, paste0(outpath, "DNA_extension_table.tsv"), sep = "\t", row.names = FALSE, col.names = TRUE, quote = FALSE, na = "")
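
# The R script below (presumably get_lsids.R, invoked by a shell command near the
# end of this page) cleans the BLCA/BASTA taxonomy strings and matches the taxon
# names against WoRMS using the worrms package, to obtain scientificName and
# scientificNameID (LSID) values for the DwC-A output.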
library(worrms)
library(stringr)
library(Biostrings)
library(dplyr)
library(tidyr)

RANKS <- c("kingdom", "phylum", "class", "order", "family", "genus", "species")
PIPELINE <- "bowtie2;2.4.4;ANACAPA-blca;2021"
BLAST_VERSION <- "blastn;2.12.0"
BLAST_DB <- "NCBI-nt;"
REF_DB_PATTERN <- "identity_filtered/\\s*(.*?)\\s*_blca_tax_table"

# Parse arguments
args <- commandArgs(trailingOnly = T)
message("args <- ", capture.output(dput(args))) # output for debugging

outpath <- args[1]
tax_file_path <- args[2]
rep_seqs_path <- args[3]
if (length(args) == 5) {
    basta_file_path <- args[4]
    blast_date <- args[5]
} else {
    basta_file_path <- NULL
}

########################### 1. Read input files  ################################################################################################################

tax_file <- read.csv(tax_file_path, sep = "\t", header = T) %>%
  rename("verbatimIdentification" = "taxonomy_confidence")
# Add annotation pipeline and reference database
tax_file$otu_seq_comp_appr <- PIPELINE
result <- regmatches(tax_file_path, regexec(REF_DB_PATTERN, tax_file_path))
otu_db <- result[[1]][2]
tax_file$otu_db <- otu_db

rep_seqs <- Biostrings::readDNAStringSet(rep_seqs_path)

# If blast was performed on the unknown sequences:
if (!is.null(basta_file_path)) {
  if (file.size(basta_file_path) > 0) {
    message("0. Results of Blast annotation read")
    basta_file <- read.csv(basta_file_path, sep = "\t", header = F)
    colnames(basta_file) <- c("rowname", "sum.taxonomy")
    basta_file$verbatimIdentification <- basta_file$sum.taxonomy
    # Add annotation pipeline and reference database
    basta_file$otu_seq_comp_appr <- BLAST_VERSION
    basta_file$otu_db <- paste0(BLAST_DB, blast_date)
  }
}

########################### 2. Modify tax table for taxonomic ranks, and find all possible worms ids (using worrms package) #####################################

message("1. Modify tax table for taxonomic ranks, and find all possible worms ids (using worrms package)")

# Clean set of taxon names into taxonomy as a named list
clean_taxonomy <- function(taxa) {
  if (all(str_detect(taxa, "([a-z]+)__(.*)_[0-9]"))) {
    taxa <- taxa[taxa != "" & taxa != "NA" & taxa != "nan"]
    if (length(taxa) == 0) {
      return(list(kingdom = NA))
    }
    parts <- str_match(taxa, "([a-z]+)__(.*)_[0-9]")
    ranks <- recode(parts[,2], "k" = "kingdom", "p" = "phylum", "c" = "class", "o" = "order", "f" = "family", "g" = "genus", "s" = "species")
    taxon_names <- as.list(parts[,3])
    names(taxon_names) <- ranks
    return(taxon_names)
  } else {
    if (length(taxa) == 0) {
      return(list(kingdom = NA))
    }
    taxa[taxa %in% c("", "NA", "nan", "unknown", "Unknown")] <- NA
    taxon_names <- setNames(as.list(taxa), RANKS[1:length(taxa)])
    return(taxon_names)
  }
}

if (exists("basta_file")) {
  # Remove ASVs present in basta file from tax file
  tax_file <- tax_file %>% filter(!rowname %in% basta_file$rowname)
  # Merge
  tax_file <- bind_rows(tax_file, basta_file)
}

taxonomies <- str_split(str_replace(tax_file$sum.taxonomy, ";+$", ""), ";")
cleaned <- lapply(taxonomies, clean_taxonomy)

taxmat <- cleaned %>%
  bind_rows() %>%
  as.data.frame() %>%
  select(!!!RANKS) %>%
  mutate(verbatimIdentification = tax_file$verbatimIdentification)%>%
  mutate(otu_seq_comp_appr = tax_file$otu_seq_comp_appr)%>%
  mutate(otu_db = tax_file$otu_db)

row.names(taxmat) <- tax_file$rowname

# Add possible remaining unknowns to the taxmat based on asvs in the rep_seqs (keep all ASVs in the final dataset)
rep_seqs_unknown <- names(rep_seqs[!names(rep_seqs)%in%row.names(taxmat),])
taxmat_unknown <- data.frame(
  otu_seq_comp_appr = rep(PIPELINE, length(rep_seqs_unknown)),
  otu_db = rep(otu_db, length(rep_seqs_unknown))
)
row.names(taxmat_unknown) <- rep_seqs_unknown
taxmat <- bind_rows(taxmat, taxmat_unknown)

# TODO: failing with rate limit, submit in batches with at least version 0.4.3 of worrms (https://anaconda.org/conda-forge/r-worrms)
match_name <- function(name) {
  lsid <- tryCatch({
    res <- wm_records_names(name, marine_only = FALSE)
    # TODO: fix
    Sys.sleep(1)
    matches <- res[[1]] %>%
      filter(match_type == "exact" | match_type == "exact_genus" | match_type == "exact_subgenus")
    if (nrow(matches) > 1) {
      message(paste0("Multiple matches for ", name))
    }
    return(matches[1,])
  }, error = function(cond) {
    message(cond)
    return(NULL)
  })
}

# Taxon names across all ranks
tax_names <- taxmat %>% select(!!!RANKS) %>% unlist() %>% na.omit() %>% unique() %>% sort()
matches <- sapply(tax_names, match_name)
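# A possible way to avoid the rate limit is to query WoRMS in batches rather than
# one request per name (untested sketch; assumes worrms >= 0.4.3 and that
# wm_records_names() returns one list element per queried name). The per-name
# filtering for exact matches done in match_name() would still be needed afterwards.
# name_batches <- split(tax_names, ceiling(seq_along(tax_names) / 50))
# records_list <- lapply(name_batches, function(b) wm_records_names(b, marine_only = FALSE))
# records <- setNames(unlist(records_list, recursive = FALSE), tax_names)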

taxmat$scientificName <- NA
taxmat$scientificNameID <- NA

for (i in 1:nrow(taxmat)) {

  lsids <- taxmat[i, RANKS] %>%
    as.character() %>%
    sapply(function(x) { matches[[x]]$lsid }) %>%
    sapply(function(x) { ifelse(is.null(x), NA, x) }) %>%
    unlist()
  if (all(is.na(lsids))) next

  most_specific_name <- taxmat[i, max(which(!is.na(lsids)))]
  scientificnameid <- matches[[most_specific_name]]$lsid

  taxmat$scientificName[i] <- matches[[most_specific_name]]$scientificname
  taxmat$scientificNameID[i] <- scientificnameid
  taxmat$taxonRank[i] <- tolower(matches[[most_specific_name]]$rank)
  taxmat$kingdom[i] <- matches[[most_specific_name]]$kingdom
  taxmat$phylum[i] <- matches[[most_specific_name]]$phylum
  taxmat$class[i] <- matches[[most_specific_name]]$class
  taxmat$order[i] <- matches[[most_specific_name]]$order
  taxmat$family[i] <- matches[[most_specific_name]]$family
  taxmat$genus[i] <- matches[[most_specific_name]]$genus
}

# Add Incertae sedis LSID in case there is no last value
taxmat$scientificName[is.na(taxmat$scientificName)] <- "Incertae sedis"
taxmat$scientificNameID[is.na(taxmat$scientificNameID)] <- "urn:lsid:marinespecies.org:taxname:12"

# Names not in WoRMS
names_not_in_worms <- names(matches)[sapply(matches, is.null)]
message("Number of species names not recognized in WORMS: ", length(names_not_in_worms))

# Add sequence to the tax_table slot (linked to each asv)
taxmat$DNA_sequence <- as.character(rep_seqs[row.names(taxmat)])

# Write table of unknown names to make manual inspection easier:
write.table(names_not_in_worms, paste0(outpath, "Taxa_not_in_worms.tsv"), sep = "\t", row.names = TRUE, col.names = TRUE, quote = FALSE, na = "")

# Write tax table
write.table(taxmat, paste0(outpath, "Full_tax_table_with_lsids.tsv"), sep = "\t", row.names = TRUE, col.names = TRUE, quote = FALSE, na = "")
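
# The Python script below (presumably workflow/scripts/init_sample_from_manifest_by_sample.py,
# called by the first shell command further down) creates the per-sample folder
# structure and symlinks the raw fastq files listed in the manifest.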
import sys
from pathlib import Path
import csv
import os

if len(sys.argv) != 4:
    print("This program needs 3 arguments:")
    print("- Project name\n- Manifest file with three columns: sample-id, file-path, direction\n- Sample name")
    sys.exit()

script_path, project_name, manifest_path, sample_name = sys.argv

folders = [
    "results/" + project_name + "/samples/" + sample_name + "/rawdata/forward_reads",
    "results/" + project_name + "/samples/" + sample_name + "/rawdata/reverse_reads"
]

for folder in folders:
    Path(folder).mkdir(parents=True, exist_ok=True)
    print("Created folder %s" % (folder))

with open(manifest_path) as csv_file:
    reader = csv.reader(csv_file)
    next(reader)
    for row in reader:
        sample_id, file_path, direction = row

        if sample_id == sample_name:
            target_file = "fw.fastq.gz" if direction == "forward" else "rv.fastq.gz"
            target = os.path.abspath("results/" + project_name + "/samples/" + sample_id + "/rawdata/" + direction + "_reads/" + target_file)
            source = os.path.abspath(file_path)
            if Path(target).exists() or Path(target).is_symlink():
                Path(target).unlink()
                print("Removed existing symlink %s" % (target))
            os.symlink(source, target)
            print("Created symlink %s -> %s" % (target, source))
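
# The Python script below (presumably workflow/scripts/reformat_summary_for_r.py)
# truncates the BLCA taxonomy at the configured confidence cutoff and reformats the
# summary table so that it can be read by the downstream R scripts.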
import argparse
import shutil


def summarize_taxonomy(full_taxonomy, confidences):
    full_taxonomy = full_taxonomy.rstrip(';')
    confidences = confidences.rstrip(';')
    taxonomy_components = [f.split(":") for f in full_taxonomy.split(";")]
    confidence_components = [c.split(":") for c in confidences.split(";")]
    summary = ";".join([":".join((t[0], t[1], confidence_components[i][1])) for i, t in enumerate(taxonomy_components)])
    return summary


def truncate_taxonomy(full_taxonomy, confidences, cutoff):
    # the taxonomy and confidences may have an extra semicolon at the end
    full_taxonomy = full_taxonomy.rstrip(';')
    confidences = confidences.rstrip(';')
    taxonomy = dict([level.split(':', 1) for level in full_taxonomy.split(';')])
    truncated_taxonomy = {}
    for level_info in confidences.split(';'):
        level_name, confidence_value = level_info.split(':')
        if float(confidence_value) >= cutoff:
            truncated_taxonomy[level_name] = taxonomy[level_name]
    return truncated_taxonomy


def reformat_summary(summary_file_name, output_file_name, cutoff):
    output_levels = ["superkingdom","phylum", "class", "order", "family", "genus", "species"]

    summary = open(summary_file_name).readlines()
    previous_header = summary[0].strip().split('\t')
    taxonomy_index = previous_header.index('taxonomy')
    confidence_index = previous_header.index('taxonomy_confidence')
    header = previous_header[:taxonomy_index] + ['sum.taxonomy', 'taxonomy_confidence']
    output = open(output_file_name + '.tmp', 'w')
    output.write('\t'.join(header) + '\n')

    for line in summary[1:]:
        fields = line.strip('\n').split('\t')
        # a colon in the taxonomy means that something was found
        if ':' in fields[taxonomy_index]:
            taxonomy = truncate_taxonomy(fields[taxonomy_index], fields[confidence_index], cutoff)
            output_taxonomy = [taxonomy.get(level, '') for level in output_levels]
            taxonomy_summary = summarize_taxonomy(fields[taxonomy_index], fields[confidence_index])
        else:
            output_taxonomy = ''
            taxonomy_summary = ''
        fields_to_write = fields[:taxonomy_index] + [';'.join(output_taxonomy)] + [taxonomy_summary]
        output.write('\t'.join(fields_to_write) + '\n')

    output.close()
    shutil.move(output_file_name + '.tmp', output_file_name)


parser = argparse.ArgumentParser(description='Reformats a summary table for use in R code')
parser.add_argument('summary_file', type=str,  help='Summary file')
parser.add_argument('output_file', type=str, help='File where output will be written')
parser.add_argument('cutoff', type=float, help='Confidence percent cutoff to include [0-100]')

if __name__ == '__main__':
    args = parser.parse_args()
    summary_file = args.summary_file
    cutoff = args.cutoff
    output_file = args.output_file
    reformat_summary(summary_file, output_file, cutoff)
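
# Shell commands of the individual Snakemake rules follow; the rule definitions themselves are collapsed on this page.
# Initialize the per-sample folder structure and symlink the raw reads listed in the manifest (runs the script above).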
shell:
    "python workflow/scripts/init_sample_from_manifest_by_sample.py " + \
      config["PROJECT"]+" {input} {wildcards.samples} "
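
# FastQC quality control of the raw forward and reverse reads.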
shell:
    "fastqc {input.r1} {input.r2} -o results/{wildcards.PROJECT}/samples/{wildcards.samples}/qc/"
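
# MultiQC report aggregating the per-sample FastQC results.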
shell:
  "multiqc -dd 2 -n {output.raw_multi_html} {input.raw_qc_fw} {input.raw_qc_rv}"
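
# Trimmomatic: adapter removal and quality trimming of the paired reads.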
shell:
  "trimmomatic PE \
  {input}  \
  {output} \
  ILLUMINACLIP:{config[trimmomatic][ILLUMINACLIP]} \
  MAXINFO:{config[trimmomatic][MAXINFO]} \
  LEADING:{config[trimmomatic][LEADING]} \
  TRAILING:{config[trimmomatic][TRAILING]} \
  {config[trimmomatic][extra_params]} \
  2> {log.f1}"
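
# Cutadapt: primer removal from the paired, quality-trimmed reads.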
shell:
  "cutadapt  \
  -g {config[cutadapt][forward_primer]} \
  -G {config[cutadapt][reverse_primer]} \
  -A {config[cutadapt][rc_forward_primer]} \
  -a {config[cutadapt][rc_reverse_primer]} \
  -o {output.o1} -p {output.o2} \
  {input.p1} {input.p2} \
  --minimum-length 1 \
  {config[cutadapt][extra_params]} \
  1> {log.f1}"
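
# Cutadapt: primer removal from the paired trimmed reads, run separately per direction
# in single-end mode (presumably used when paired reads are analysed unpaired).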
shell:
   "cutadapt \
  -g {config[cutadapt][forward_primer]} \
  -a {config[cutadapt][rc_reverse_primer]} \
  -o {output.o1} \
  {input.p1} \
  --minimum-length 1 \
  {config[cutadapt][se_extra_params]} \
  1> {log.f1}; \
  cutadapt \
  -g {config[cutadapt][reverse_primer]} \
  -a {config[cutadapt][rc_forward_primer]} \
  -o {output.o2} \
  {input.p2} \
  --minimum-length 1 \
  {config[cutadapt][se_extra_params]} \
  1> {log.f2}"
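
# Cutadapt: primer removal from the unpaired reads left by Trimmomatic, run separately per direction.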
shell:
    "cutadapt \
      -g {config[cutadapt][forward_primer]} \
      -a {config[cutadapt][rc_reverse_primer]} \
      -o {output.o1} \
      {input.u1} \
      --minimum-length 1 \
      {config[cutadapt][se_extra_params]} \
      1> {log.f1}; \
      cutadapt \
      -g {config[cutadapt][reverse_primer]} \
      -a {config[cutadapt][rc_forward_primer]} \
      -o {output.o2} \
      {input.u2} \
      --minimum-length 1 \
      {config[cutadapt][se_extra_params]} \
      1> {log.f2}"
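
# DADA2 filtering and trimming (Dada2_FilterAndTrim_combined.R; presumably the script whose
# filtering section is shown at the top of this page).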
shell:
    "Rscript ./workflow/scripts/Dada2_FilterAndTrim_combined.R \
    results/{config[PROJECT]}/runs/{config[RUN]}/ \
    "+config_path+" \
    {input.f1}"
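
# DADA2 ASV inference in single-end mode (Dada2_ASVInference_Single.R).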
shell:
    "Rscript ./workflow/scripts/Dada2_ASVInference_Single.R \
    results/{config[PROJECT]}/runs/{config[RUN]}/ \
    "+config_path+" \
    {input.FiltF}"
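
# bowtie2-build: index the reference database fasta (presumably used when no prebuilt
# index location is given in the config).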
        shell:
          "bowtie2-build {input.fasta} resources/bowtie2_dbs/" + \
            config["DATABASE"]["name"]+"/"+config["DATABASE"]["name"]


# The input for bowtie2 depends on whether a prebuilt index location was given in the config;
# currently handled with an if statement, but this could be refactored.

if config["DATABASE"]["location_bowtie2"] is not None:
    rule bowtie2:
      input:
        rep_seqs = expand(
          "results/{PROJECT}/runs/{RUN}/03-dada2/rep-seqs.fna", PROJECT=PROJECT, RUN=RUN),
      output:
        o1 = "results/{PROJECT}/runs/{RUN}/04-taxonomy/bowtie2/{DATABASE}_bowtie2_local.sam",
        o2 = "results/{PROJECT}/runs/{RUN}/04-taxonomy/bowtie2/{DATABASE}_bowtie2_rejects.fasta",
      log:
        f1 = "results/{PROJECT}/runs/{RUN}/06-report/bowtie2/bowtie2_{DATABASE}_log.txt"
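
# bowtie2: align the representative ASV sequences against the prebuilt reference index
# given in the config (location_bowtie2).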
shell:
  "bowtie2 -x {config[DATABASE][location_bowtie2]} \
  -f -U {input.rep_seqs} \
  -S {output.o1} \
  --no-hd \
  --no-sq \
  --very-sensitive \
  --local \
  --no-unal \
  -p {config[TAXONOMY][threads]} \
  -k {config[TAXONOMY][distinct_alignments]} \
  --un {output.o2} \
  2> {log.f1}"
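
# bowtie2: the same alignment, but against the index built by the pipeline (params.ref_idx_base).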
shell:
  "bowtie2 -x {params.ref_idx_base} \
  -f -U {input.rep_seqs} \
  -S {output.o1} \
  --no-hd \
  --no-sq \
  --very-sensitive \
  --local \
  --no-unal \
  -p {config[TAXONOMY][threads]} \
  -k {config[TAXONOMY][distinct_alignments]} \
  --un {output.o2} \
  2> {log.f1}"
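
# BLCA classification of the bowtie2 alignments (blca_from_bowtie.py).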
shell:
  "python ./workflow/scripts/blca_from_bowtie.py \
  -i {input.f1} \
  -r {config[DATABASE][taxa]} \
  -q {config[DATABASE][fasta]} \
  -b {config[TAXONOMY][min_identity]} \
  -l {config[TAXONOMY][min_length]} \
  -n {config[TAXONOMY][bootstrap_no]} \
  -m {config[TAXONOMY][match_score]} \
  -f {config[TAXONOMY][mismatch_penalty]} \
  -g {config[TAXONOMY][gap_penalty]} \
  -o {output.o1} \
  -v {output.o2} "
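
# Add a header to the BLCA output and reformat it for R (reformat_summary_for_r.py, shown above).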
shell:
  "sed -e '1s/^/rowname\ttaxonomy\ttaxonomy_confidence\taccessions\\n/' {input.f1} > {output.o1}; \
  python ./workflow/scripts/reformat_summary_for_r.py \
  {output.o1} \
  {output.o2} \
  {config[TAXONOMY][blca_confidence_cutoff]}"
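
# Collect the unclassified ASVs, extract their sequences with seqtk, and run blastn against
# the configured BLAST database.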
shell:
    "awk '/Unclassified/ {{print $1}}' {input.taxa} > {output.o1}; \
    awk '/;;;;;;/ {{print $1}}' {input.filtered} | cat >> {output.o1}; \
    sort -t . -k 2n -o {output.o1} {output.o1}; \
    seqtk subseq {input.fasta} {output.o1}  > {output.o2}; \
    cat {input.rejects} >> {output.o2}; \
    blastn \
    -query {output.o2} \
    -out {output.o3}  \
    -outfmt 6 \
    -perc_identity {config[BLAST][perc_identity]} \
    -num_threads {threads} \
    {config[BLAST][database]}" 
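
# BASTA: last common ancestor estimation from the BLAST hits, using an existing GenBank taxonomy database.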
shell:
    "basta sequence \
    -p {config[BLAST][portion_of_hits]} \
    -i {config[BLAST][percent_identity]} \
    -l {config[BLAST][alignment_length]} \
    -e {config[BLAST][e_value]} \
    -n {config[BLAST][max_hits]} \
    -m {config[BLAST][min_hits]} \
    -d {config[BLAST][tax_db]} \
    {input.blast_results} \
    {output.basta_results} \
    gb"
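
# The same BASTA step, but downloading the GenBank taxonomy database first.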
shell:
    "basta download gb -d ./resources/tax_db/; \
    basta sequence \
    -p {config[BLAST][portion_of_hits]} \
    -i {config[BLAST][percent_identity]} \
    -l {config[BLAST][alignment_length]} \
    -e {config[BLAST][e-value]} \
    -n {config[BLAST][max_hits]} \
    -m {config[BLAST][min_hits]} \
    -d ./resources/tax_db/ \
    {input.blast_results} \
    {output.basta_results} \
    gb"
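
# Match the classified taxa against WoRMS (get_lsids.R, shown above).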
shell:
  "Rscript ./workflow/scripts/get_lsids.R \
  results/{wildcards.PROJECT}/runs/{wildcards.RUN}/05-dwca/ \
  {input.f1} \
  {input.f2} \
  {input.f3} \
  {config[BLAST][database_date]}"
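
# Format the results into DwC-A compatible tables (format_for_dwc_new.R, shown above).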
shell:
  "Rscript ./workflow/scripts/format_for_dwc_new.R \
  results/{wildcards.PROJECT}/runs/{wildcards.RUN}/05-dwca/ \
  {input.f1} \
  {input.f2} \
  {input.f3} \
  {config[meta][sampling][sample_data_file]}\
  "+config_path+" "
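
# Render the final HTML report of the run with R Markdown.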
shell:
  "Rscript -e \"rmarkdown::render('workflow/scripts/Report_PacMAN_Pipeline.Rmd', output_file = '../../results/{PROJECT}/runs/{RUN}/06-report/report.html', params=list(config='"+config_path+"'))\""




Created: 1yr ago
Updated: 1yr ago
Maintainers: public
URL: https://github.com/iobis/PacMAN-pipeline
Name: pacman-pipeline
Version: 1


Copyright: Public Domain
License: MIT License
