We have completed our first v.3 run and the data, while more voluminous (330M beads at 50 bp mate-pair vs. 180M beads at 25 bp mate-pair), appear to be of worse quality: (a) we are not achieving as good coverage of the reference and (b) the number of reads with errors versus the reference is much higher.
We were fortunate in that the customer who just completed the v.2 mate-pair run wanted to redo it using v.3, so I have a good baseline for how our non-model-organism eukaryotic DNA should match up to the reference. For v.2 the coverage ranged from 69% to 87% (depending on the chromosome), while the v.3 data achieved only 56% to 73%. With about 4x the data (~2x the beads and 2x the read length) we expected better coverage with v.3, or at least equal coverage, but certainly not worse!
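As a back-of-the-envelope check on the "about 4x the data" figure, here is a quick sketch using only the bead counts and tag lengths quoted above (it ignores mappability and quality filtering, which is presumably where the coverage is being lost):

```python
# Raw throughput comparison between the two runs, using the numbers
# quoted in the post (beads x read length per tag).
v2_beads, v2_len = 180e6, 25   # v.2: 180M beads, 25 bp mate-pair tags
v3_beads, v3_len = 330e6, 50   # v.3: 330M beads, 50 bp mate-pair tags

v2_bp = v2_beads * v2_len      # 4.5 Gbp of raw sequence
v3_bp = v3_beads * v3_len      # 16.5 Gbp of raw sequence
ratio = v3_bp / v2_bp
print(round(ratio, 2))         # ~3.67x raw throughput
```

So the raw yield really is close to 4x, which makes the drop in reference coverage all the more surprising.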
Also the number of beads with errors is much higher than with v.2.
This is our very first v.3 run and, of course, we expect teething problems; I am ready to write this off as a startup bug. Still, if anyone has insight into what might be causing the problem, please get hold of me ([email protected]). Thanks.