There have been a few old posts but nothing recent.
We are interested to know how even a read distribution other groups are managing to achieve when samples are pooled at equimolar amounts in a run.
Our team performs library preparation with the standard Illumina DNA Prep kit and the supplied 384 UDI indexes.
Quantification is via fluorometric assay, plus a gel image of each library to get the average size and convert to an accurate nM concentration for each library.
We have tried qPCR quantification also.
We monitor for small fragments and primer dimers etc.
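For anyone following along, the size-adjusted molarity conversion described above (fluorometric concentration in ng/µL plus average fragment size from the gel) can be sketched as below. This is a generic formula using the standard ~660 g/mol per bp approximation for dsDNA; the function name is ours, not from any kit documentation.

```python
# Convert a fluorometric concentration (ng/uL) and average library
# size (bp) to molarity (nM), using ~660 g/mol per bp for dsDNA.
def library_nM(conc_ng_per_ul: float, avg_size_bp: float) -> float:
    return conc_ng_per_ul * 1_000_000 / (660.0 * avg_size_bp)

# Example: a 2 ng/uL library averaging 400 bp is ~7.6 nM.
print(round(library_nM(2.0, 400), 2))
```

Since 1 nM is 1 fmol/µL, equimolar pooling then amounts to taking the same number of fmol from each library: volume (µL) = target fmol / library_nM(...).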
Our question is: how much variance in reads between samples is generally accepted? Could the specific indexes used be a factor? There seems to be some supporting literature on the Illumina website suggesting index sequence can affect representation.
Are other groups seeing something like +/- 3 million reads when attempting to pool evenly at 10 million reads per sample, for example?
We would like to refine this further but are unsure how tightly the metric can realistically be controlled.
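To put numbers on the example above: +/- 3 million reads around a 10 million target is a +/- 30% worst-case deviation. One simple way to track pooling evenness across runs is the coefficient of variation (CV) of per-sample read counts. A minimal sketch, with made-up read counts for illustration:

```python
import statistics

# Hypothetical demultiplexed read counts per sample (target: 10 M each).
reads = [10_000_000, 7_000_000, 13_000_000, 9_000_000, 11_000_000]

mean = statistics.mean(reads)
cv_pct = statistics.stdev(reads) / mean * 100           # sample CV, %
worst = max(abs(r - mean) / mean for r in reads) * 100  # worst-case deviation, %

print(f"mean={mean:,.0f}  CV={cv_pct:.1f}%  worst deviation={worst:.1f}%")
```

Reporting a CV per run (rather than a raw read spread) makes it easier to compare evenness between runs with different total output.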
Thanks for anyone else's experience, insight, tips or tricks!