Hi there,
I am using QIIME to process my dataset of 2 million joined MiSeq PE-reads (~500 bp) from a 16S rDNA community survey.
I do have the output of the sequencing vendor's proprietary pipeline, but my PI asked me to replicate every step myself, just to know what's going on.
I was able to mimic the joining and the demultiplexing with QIIME, along with some quality checks.
My last step was a standalone chimera check with UCHIME.
Now, on to the OTU picking.
My input file contains about 2 million reads across 38 samples, and the vendor's proprietary pipeline yielded about 14,000 OTUs (biologically, this makes sense, as the 38 samples come from different sites and different treatments).
However, when I ran QIIME with de novo clustering at 97% similarity, using the following methods...
...uclust with the default command,
I got 320,000 clusters!
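For reference, this is roughly the uclust invocation I ran (file and directory names are just illustrative; all other parameters were left at their QIIME defaults):

```shell
# De novo OTU picking with uclust at 97% identity (QIIME defaults)
pick_otus.py -i in.fasta -m uclust -s 0.97 -o uclust_otus/
```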
...usearch with an additional chimera-slaying step and the removal of singletons and doubletons:
pick_otus.py -i in.fasta -m usearch -o out -s 0.97 --db_filepath path_to_reference_chimera_db --minsize 3
I got 800,000 clusters!
With cd-hit I failed, because my FASTA input no longer contains the amplification primers (see my other thread).
The files the sequencing vendor provided suggest that usearch was indeed used (there is a sortlen step in the output, and a *.uc file as well).
Questions:
Where did I go wrong?
How can I make the clustering less stringent?
How do you cluster your 16S Illumina data?
Is there a denoising step in QIIME for Illumina data, similar to the 454 denoising?
Thank you very much!
I am stuck, and it's rather urgent.
PS: I cannot use the QIIME Illumina tutorial because I cannot run IPython on my machine.