I used the steps below to quality-filter and dereplicate reads generated by amplicon sequencing of a single target gene.
a) Sequencing success and read quality checked with FastQC v0.11.8
b) Merging of forward and reverse reads for each sample
c) Trimming of forward and reverse primers, retaining reads of at least 108 bp from each sample
d) Error filtering in USEARCH (expected error rate 0.5, length filter 100-300 bp, minimum size 8)
e) Dereplication (finding unique sequences)
f) Denoising with the UNOISE3 command in USEARCH (alpha = 5) to generate ZOTUs
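For context, here is roughly how I am running steps b) through f) for a single sample. This is a sketch only: file names and primer sequences (FWD, REV_RC) are placeholders, cutadapt is just the primer trimmer I happen to use, and the exact length-filter options may differ between USEARCH versions.

```shell
# b) merge forward and reverse reads (USEARCH v11 syntax)
usearch -fastq_mergepairs sample_R1.fastq -reverse sample_R2.fastq \
        -fastqout merged.fastq

# c) trim primers and keep reads >= 108 bp
#    FWD = forward primer, REV_RC = reverse complement of the reverse primer
cutadapt -g FWD -a REV_RC -m 108 -o trimmed.fastq merged.fastq

# d) error filtering (expected-error threshold 0.5, minimum length 100 bp;
#    check your USEARCH version for the upper length bound option)
usearch -fastq_filter trimmed.fastq -fastq_maxee 0.5 -fastq_minlen 100 \
        -fastaout filtered.fa

# e) dereplicate, keeping per-sequence abundance annotations
usearch -fastx_uniques filtered.fa -fastaout uniques.fa -sizeout

# f) denoise to ZOTUs with UNOISE3 (alpha = 5, minsize = 8)
usearch -unoise3 uniques.fa -zotus zotus.fa -minsize 8 -unoise_alpha 5
```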
My questions are:
- Do we have to denoise each sample individually, or should we pool the quality-filtered reads from all samples and denoise them together?
- What are the exact tools and commands needed for each approach (per-sample denoising vs. denoising of pooled reads)?
- If we denoise each sample individually, how do we obtain sample-specific abundances for each ESV/ASV/ZOTU?
- I have nearly 100 samples. How can I run the same steps on different samples in parallel?
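For the pooling question, my current (possibly wrong) understanding of the approach suggested in the USEARCH documentation is: relabel each sample's filtered reads with a sample prefix, pool them, dereplicate and denoise once, then recover per-sample abundances by mapping all reads back to the ZOTUs. A sketch with placeholder file names, assuming USEARCH v11 — please correct me if this is off:

```shell
# relabel each sample's filtered reads so the sample name is carried
# in the read labels (e.g. "SampleA.1", "SampleA.2", ...)
for f in filtered/*.fastq; do
    s=$(basename "$f" .fastq)
    usearch -fastx_relabel "$f" -prefix "${s}." -fastqout "relabeled/${s}.fastq"
done
cat relabeled/*.fastq > pooled.fastq

# dereplicate and denoise the pooled reads once
usearch -fastx_uniques pooled.fastq -fastaout uniques.fa -sizeout
usearch -unoise3 uniques.fa -zotus zotus.fa -minsize 8 -unoise_alpha 5

# per-sample ZOTU abundance table; the sample is taken from the
# read-label prefix added above
usearch -otutab pooled.fastq -zotus zotus.fa -otutabout zotu_table.txt
```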
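For the ~100-sample parallelization question, is something like the following reasonable? `process_sample.sh` is a hypothetical wrapper around the per-sample steps (merge, trim, filter); only those steps would be parallelized, with dereplication and denoising run once afterwards on the pooled output.

```shell
# run the per-sample steps on up to 8 samples concurrently with GNU parallel;
# {} is each R1 file, and the wrapper derives the matching R2 name itself
ls raw/*_R1.fastq | parallel -j 8 ./process_sample.sh {}

# plain-bash alternative using background jobs
for r1 in raw/*_R1.fastq; do
    ./process_sample.sh "$r1" &
done
wait
```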
Can someone please share a standard workflow for taking the quality-filtered reads (after step f) through to the taxonomy assignment step?