A first look at Illumina’s new NextSeq 500


  • Brian Bushnell
    replied
    Originally posted by AllSeq View Post
    Why would the NextSeq and HiSeq 3000/4000 be similar? They use different chemistries and different flow cells. Wouldn't the 3000/4000 be most similar to the HiSeq X? (Or did you just mean they're similar in that they're both bad platforms, but for different reasons?)
    They use 2-color chemistry (IIRC). I don't know if the problem is the chemistry, the optics, or the software; but if it's the software, I'd expect the 3000/4000 to be more similar to the NextSeq than the 2500. Also, I've only looked at a single sample of HiSeq 4000 data, but the quality was low; similar to the NextSeq. Since I've seen both good and bad data from the same NextSeq machine, it's obviously possible to produce good data with 2-color chemistry and NextSeq optics. It would be nice if this was all a software issue.
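    For context on why 2-color chemistry matters here: on the NextSeq, bases are called from two dye channels, and the no-signal state is indistinguishable from a real G. A minimal sketch of that encoding (the function name is illustrative, not Illumina's API):

    ```python
    # Sketch of Illumina 2-channel (NextSeq-style) base encoding:
    # a base is called from the (red, green) signal pair, and the
    # no-signal state is read as G.

    def call_base(red: bool, green: bool) -> str:
        """Call a base from simplified boolean 2-channel intensities."""
        if red and green:
            return "A"   # signal in both channels
        if red:
            return "C"   # red channel only
        if green:
            return "T"   # green channel only
        return "G"       # no signal in either channel -> called as G

    # A dead or dark cluster therefore shows up as a run of G calls:
    print("".join(call_base(False, False) for _ in range(5)))  # GGGGG
    ```

    This is one reason signal-dropout artifacts on 2-channel platforms tend to appear as poly-G rather than as low-quality Ns.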



  • GenoMax
    replied
    Originally posted by AllSeq View Post
    Why would the NextSeq and HiSeq 3000/4000 be similar? They use different chemistries and different flow cells. Wouldn't the 3000/4000 be most similar to the HiSeq X? (Or did you just mean they're similar in that they're both bad platforms, but for different reasons?)
    I think it comes down to the bcl2fastq version used for data processing (binned q-scores) for NextSeq and HiSeq 4000.

    Hopefully @Brian will have some clarification once he has chased down that information.



  • AllSeq
    replied
    Originally posted by Brian Bushnell View Post
    But certainly, I would avoid the NextSeq (and HiSeq 3000/4000 which I suspect are similar) when possible, if you have access to Illumina's high quality platforms (HiSeq 2000/2500 or MiSeq).
    Why would the NextSeq and HiSeq 3000/4000 be similar? They use different chemistries and different flow cells. Wouldn't the 3000/4000 be most similar to the HiSeq X? (Or did you just mean they're similar in that they're both bad platforms, but for different reasons?)



  • GenoMax
    replied
    Originally posted by Brian Bushnell View Post
    I'm not really sure. The HiSeq quality scores are not binned, though.
    That probably means they are using the older bcl2fastq (or CASAVA) v.1.8.4.

    I'm going to talk to the person who manages the Illumina software versions after gathering some more evidence, because we probably will want to roll back to an earlier version, once it's clear which earlier version was better.

    Also, does anyone have experience with 3rd-party Illumina base-callers?
    That is NOT an option for the NextSeq and HiSeq 3000/4000, which require bcl2fastq v.2.1x for conversion. Perhaps you can ask the person in charge to reprocess HiSeq 2500 data using bcl2fastq v.2.18.

    I don't know if there are any 3rd party callers for new data.



  • Brian Bushnell
    replied
    I'm not really sure. The HiSeq quality scores are not binned, though. I'm going to talk to the person who manages the Illumina software versions after gathering some more evidence, because we probably will want to roll back to an earlier version, once it's clear which earlier version was better.

    Also, does anyone have experience with 3rd-party Illumina base-callers?

    Edit: We are using 2.16 for NextSeq and 1.8.4 for everything else.
    Last edited by Brian Bushnell; 11-17-2016, 01:20 PM.



  • GenoMax
    replied
    Perhaps what you are observing is a difference between bcl2fastq v.1.8.4 and 2.18.x?

    bcl2fastq v.2.x is required for processing data from the NextSeq and HiSeq 3000/4000. It can be used to process data from all current Illumina sequencers. It produces binned quality scores for reads, as I recall.

    Is your data processed with the same version of bcl2fastq in all cases, or was the 2500 data processed using bcl2fastq v.1.8.4?
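    One quick way to check for binned quality scores is to count the distinct Phred values in a FASTQ file: binned output uses only a handful of values, while unbinned output spans the full range. A sketch, assuming Phred+33 encoding; the bin-count threshold of 8 is an arbitrary heuristic:

    ```python
    def distinct_quals(qual_lines, offset=33):
        """Return the sorted set of Phred scores seen in FASTQ quality strings."""
        seen = set()
        for line in qual_lines:
            for ch in line:
                seen.add(ord(ch) - offset)
        return sorted(seen)

    def looks_binned(qual_lines, max_bins=8):
        """Heuristic: few distinct Phred values across many reads suggests binning."""
        return len(distinct_quals(qual_lines)) <= max_bins

    # Binned-style data (few distinct symbols) vs. full-range data:
    binned = ["AAFFFFKK", "FFAAKKFF"]
    full   = ["!#%')+-/13579;=?ACEGIK"]
    print(looks_binned(binned), looks_binned(full))  # True False
    ```

    Running this over a few thousand reads from each run would make it obvious which files came through a binning pipeline.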



  • Brian Bushnell
    replied
    I'm not sure; I think it's probably fine for quantification unless there's some bias issue, which I have not looked into. I wouldn't want to use it for variant-calling, particularly because a lot of the errors seem like systematic errors that cannot be overcome simply by sequencing deeper. We do use it for multiplexed single cells, because the NextSeq platform has shown lower rates of cross-talk than HiSeq or MiSeq and single-cell sequencing is greatly affected by even low levels of cross-talk. Also, I understand NextSeq is cheaper per base. But certainly, I would avoid the NextSeq (and HiSeq 3000/4000 which I suspect are similar) when possible, if you have access to Illumina's high quality platforms (HiSeq 2000/2500 or MiSeq).



  • GenoMax
    replied
    HiSeq 1T = HiSeq 2500 HO mode?

    Bottom line: if one has a different sequencer accessible, walk away from the NextSeq?

    Are Q-scores still important (other than for de novo or diagnostic analyses)?

    Do you know what version of bcl2fastq is being used for your data?
    Last edited by GenoMax; 11-17-2016, 11:07 AM.



  • Brian Bushnell
    replied
    Unfortunately, Illumina's taken a turn for the worse again. I just analyzed some recent data from the NextSeq, HiSeq2500, and HiSeq 1T platforms for the same library. The NextSeq data is dramatically worse than the last time I looked at it: error rates are several times higher, there's a major A/T base frequency divergence in read 2, and the quality scores are inflated again, at ~6 points higher than the actual quality.

    More disturbingly, the HiSeq quality scores are completely inaccurate now as well, though the actual measured quality is still very high: average Q33 for read 1 and Q29 for read 2 on the HiSeq2500, versus Q24 for read 1 and Q18 for read 2 on the NextSeq. (Those numbers are measured by counting the match/mismatch rates from mapping, so essentially the NextSeq has roughly 10X the error rate of the HiSeq.)

    But the discrepancies between claimed and measured quality scores for the HiSeq2500 and HiSeq 1T are BOTH worse than for the NextSeq, despite the NextSeq having binned quality scores, and as you can see there are large regions of quality scores simply missing from the HiSeq2500, such as Q3-Q11, Q17-Q21, and Q29. There are clearly major problems with Illumina's current base-calling software, as quality score assignment has drastically regressed since the last time I measured it.

    You can see the graphs in this Excel sheet that I've linked. "Raw" is the raw data, "Recal" is after recalibration (which changes the quality scores but nothing else). "NS" is NextSeq, "2500" is HiSeq2500, and "1T" is HiSeq 1T which unfortunately was only run at 2x101bp instead of 2x151bp on the other 2 platforms.

    https://drive.google.com/file/d/0B3l...ew?usp=sharing
    Last edited by Brian Bushnell; 11-17-2016, 10:49 AM.
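    As a sanity check on the numbers in that post: Phred quality relates to error probability by Q = -10·log10(p), so the measured averages translate to per-base error rates like this (a worked example, not part of the original analysis):

    ```python
    import math

    def phred_to_error(q: float) -> float:
        """Convert a Phred quality score to its implied per-base error probability."""
        return 10 ** (-q / 10)

    # Measured averages quoted above: HiSeq2500 Q33/Q29 vs NextSeq Q24/Q18.
    for label, hiseq_q, nextseq_q in [("read 1", 33, 24), ("read 2", 29, 18)]:
        e_hi = phred_to_error(hiseq_q)
        e_nx = phred_to_error(nextseq_q)
        print(f"{label}: HiSeq {e_hi:.2%} vs NextSeq {e_nx:.2%} "
              f"(~{e_nx / e_hi:.0f}x more errors)")
    ```

    That works out to roughly 8x more errors in read 1 and 13x in read 2, consistent with the "roughly 10X" figure quoted above.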



  • cement_head
    replied
    I know this is an older thread, but now that more and more users of the NextSeq are out there - what is the consensus on the NextSeq data? Is it still problematic relative to MiSeq data?

    Thanks



  • AlexT
    replied
    I am not sure how much we can use cluster densities as a measure of run quality. We have had good runs with both low and high cluster densities, as well as a very poor run with normal densities.
    Compared to the MiSeq, our NextSeq is very fragile and the cluster densities go up and down without an obvious pattern. On the MiSeq we see very stable densities (but in that case the libraries are usually prepared with Nextera).

    Actually our highest clustered run performed very well and had the following specs:
    clusters: 287-301k/mm^2
    PF: 83.0-84.3%
    Q30: 87.9-90.1%
    also at 75bp



  • TonyBrooks
    replied
    Originally posted by cmbetts View Post
    Does anyone have any feedback on what density would be considered overclustered on a NextSeq using v2 chemistry? We just got data back from a collaborator with terrible error rates in read 2 with lots of random stretches of variable length polyGs. Comparing to the SAV files from another successful run with an identically constructed library by the same facility, the only obvious run metric that jumps out at me (besides the terrible read quality) is that the failed run had ~20% higher cluster density (240k/mm^2 70%PF vs 200k/mm^2 80%PF). I'm mostly used to looking at HiSeq and MiSeq data, so I'm not sure whether this is significant or not.
    We've run exomes that clustered at 259k/mm2. The data still looked fine to us (92% >Q30, >90% alignment rates). The quality does begin to tail off when over-clustered, though. 75bp reads are generally fine at that density, but >100bp reads begin to look really poor. We also use short paired reads for RNA-Seq (43bp paired-end), and this tolerates over-clustering much better.

    On another note, we regularly see poly-G reads, (fastqc shows around 2-3% of over-represented sequences) but curiously this tends to happen on read 2 only (failed resynthesis?)
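    Since a dark cluster on 2-channel chemistry is read as G, poly-G reads can be screened for directly, similar to what FastQC flags as over-represented sequences. A minimal sketch; the run-length threshold of 15 is an arbitrary choice:

    ```python
    import re

    POLY_G = re.compile(r"G{15,}")  # arbitrary threshold: 15+ consecutive Gs

    def polyg_fraction(reads) -> float:
        """Fraction of reads containing a long poly-G run (likely signal dropout)."""
        if not reads:
            return 0.0
        hits = sum(1 for seq in reads if POLY_G.search(seq))
        return hits / len(reads)

    reads = [
        "ACGTACGTACGTACGTACGT",
        "ACGT" + "G" * 20,        # trailing poly-G, typical of read-2 dropout
        "G" * 30,                 # fully dark cluster
    ]
    print(f"{polyg_fraction(reads):.1%}")  # 66.7%
    ```

    Comparing this fraction between read 1 and read 2 files would quantify the read-2-only pattern described above.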



  • cmbetts
    replied
    Originally posted by williamhorne View Post
    Using High Output we are actually getting over 500 million reads per run. Unlike our GAII and HiSeq, we actually have to pay very close attention to cluster density. The target loading concentration for high quality samples is 1.75pM-2pM. Anything above or below will result in over- or under-clustering, so your sample concentrations need to be very exact.

    These are solely made to be streamlined with BaseSpace. Right now it only works with BaseSpace onsite, not in the cloud, as they are having some major broker issues that still are not resolved. Make sure you do your research regarding the output files and data in BaseSpace, because it is not a visualization machine: it gives you the output files and you must use 3rd-party software on a different computer to view the results. Very annoying.

    Overall very impressed with the NextSeqs, not so much BaseSpace.
    Does anyone have any feedback on what density would be considered overclustered on a NextSeq using v2 chemistry? We just got data back from a collaborator with terrible error rates in read 2 with lots of random stretches of variable length polyGs. Comparing to the SAV files from another successful run with an identically constructed library by the same facility, the only obvious run metric that jumps out at me (besides the terrible read quality) is that the failed run had ~20% higher cluster density (240k/mm^2 70%PF vs 200k/mm^2 80%PF). I'm mostly used to looking at HiSeq and MiSeq data, so I'm not sure whether this is significant or not.
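    One way to compare those two runs is by passing-filter clusters per mm^2 rather than raw density; a quick check of the numbers quoted above:

    ```python
    def pf_density(raw_k_per_mm2: float, pf_frac: float) -> float:
        """Passing-filter cluster density in k/mm^2 from raw density and %PF."""
        return raw_k_per_mm2 * pf_frac

    # Failed run: 240k/mm^2 at 70% PF; good run: 200k/mm^2 at 80% PF.
    failed = pf_density(240, 0.70)
    good = pf_density(200, 0.80)
    print(round(failed, 1), round(good, 1))  # 168.0 160.0
    ```

    So the failed run delivered only slightly more usable clusters despite the much higher raw density, with the extra clusters mostly lost to filtering; that pattern is consistent with over-clustering.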



  • rogerzzw
    replied
    BTW, Kentawan

    Another lab using the same kit and protocol does not have this kind of issue at all, which makes me very confused.



  • rogerzzw
    replied
    @Kentawan

    Hi, Kentawan

    We used the Kapa quantification kit, too, and we applied very strict quantification. We quantified each individual library's concentration first and pooled them together based on the measured concentrations. We did another quantification when we diluted the pool to 20pM to make sure it was indeed 20pM. I believe our quantification is good enough.
    As for a cloning check, I do not know if it is appropriate because we want to do whole-genome sequencing. But I do agree that P5 and P7 might not be good enough, and it is possibly due to index annealing. I just do not know how to confirm it. Do you have any ideas?

    Thank you very much

    Roger

