  • DNATECH
    replied
    The latest HiSeq3000 run (we did receive a few flowcells) averaged 378 million clusters passing filter per lane. All libraries were size-selected.

    One obvious part of the exclusion amplification as implemented is the very viscous enzyme mix. The diffusion of the library fragments towards the flowcell is probably slowed down considerably (also requiring higher library concentrations?), giving the molecule that arrives first the chance to be amplified and fill an entire nanowell before a second one arrives (http://www.google.com/patents/WO2013188582A1?cl=en). Could the viscosity-enhanced "drag" also explain the stronger bias towards smaller insert sizes?
    The high-viscosity buffer, together with high library concentrations and "RPA" amplification ("Recombinase Polymerase Amplification", http://www.twistdx.co.uk/our_technology/ ) for the clustering process, might be sufficient for Kinetic Exclusion Amplification on the nanowell flowcells? It seems to me that the other methods described in the patent might not be compatible with the old cBots (which can be used for HiSeq3000/4000 clustering after a software upgrade)?
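    A rough Stokes-Einstein sketch of the viscosity argument above (every number below is an illustrative assumption, not a measured property of the ExAmp mix or of any real library):

    import math

    # Stokes-Einstein estimate of fragment diffusion in buffers of different viscosity.
    # All parameter values are illustrative assumptions.
    k_B = 1.380649e-23      # Boltzmann constant, J/K
    T = 298.0               # assumed room temperature, K
    r_hydro = 20e-9         # assumed hydrodynamic radius of a library fragment, m
    eta_water = 1.0e-3      # viscosity of water, Pa*s
    eta_mix = 10.0e-3       # assumed 10x more viscous enzyme master mix, Pa*s

    def diffusion_coefficient(eta):
        """D = kT / (6 * pi * eta * r) for a sphere of radius r_hydro."""
        return k_B * T / (6 * math.pi * eta * r_hydro)

    D_water = diffusion_coefficient(eta_water)
    D_mix = diffusion_coefficient(eta_mix)
    print(f"D in water-like buffer: {D_water:.2e} m^2/s")
    print(f"D in viscous mix:       {D_mix:.2e} m^2/s ({D_water / D_mix:.0f}x slower)")

    Diffusion slows in direct proportion to viscosity, which fits the idea that the first fragment to reach a well has time to be amplified before a competitor arrives.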
    Last edited by DNATECH; 07-26-2015, 04:26 PM.



  • pmiguel
    replied
    Originally posted by Brian Bushnell View Post
    Impressive; I was under the impression that inserts much over 800bp simply would not bridge-amplify. Maybe we should try that approach! Anyway, rather than shorter molecules vastly out-competing longer molecules at all lengths, it could be more of a case where the rates are fairly similar up to a point (1 kbp?) after which longer molecules start failing to form clusters at all (even if there were no short molecules present). I'm just guessing, though.
    We did cluster at 1/2 the normal density, so that may have allowed the longer amplicons to form clusters where normally they would not have. Again, my natural inclination is to regard this as some sort of competition. Looking at plots of insert sizes and comparing them to the sizes of the input library, it has always looked to me as if all the amplicons queued up by length and then the shortest ones clustered first. Okay, an exaggeration, but more or less fitting what one sees.

    --
    Phillip



  • SNPsaurus
    replied
    Brian, when we were developing local assembly of paired-end RAD, we were surprised to see contigs of 1200 bp being assembled (see http://journals.plos.org/plosone/art...l.pone.0018561 figure 4), meaning that there must have been fragments of 1200 bp undergoing bridge amplification. We had to use a "triangle cut" in the gel size selection to over-represent the larger fragments, but they did bridge.

    I think the size preference in the patterned flow cells could be because a small fragment can enter a well after a larger fragment but then outcompete it to fill the well. Or could the explanation lie in the diffusion kinetics?



  • Brian Bushnell
    replied
    Originally posted by pmiguel View Post
    In the 4th post of the thread, I converted the mass-based/log-linear results from the Agilent Bioanalyzer chip to a linear, molecule-based plot. That way it can be directly compared to the insert sizes found by mapping the read pairs back to the genome from which they came.

    The result showed that the shorter amplicons must have clustered preferentially. Really preferentially.

    To me this has always suggested there must be some sort of competition for clustering that favors shorter amplicons.
    Impressive; I was under the impression that inserts much over 800bp simply would not bridge-amplify. Maybe we should try that approach! Anyway, rather than shorter molecules vastly out-competing longer molecules at all lengths, it could be more of a case where the rates are fairly similar up to a point (1 kbp?) after which longer molecules start failing to form clusters at all (even if there were no short molecules present). I'm just guessing, though.



  • pmiguel
    replied
    Originally posted by Brian Bushnell View Post

    The insert size distribution is fairly interesting for a couple reasons. It looks like the platform can probably handle inserts over 450bp fairly well; there were some short inserts, but they did not overwhelmingly out-compete the long ones. But the flat distribution of the short-insert tail is odd.
    About the size distribution of the library vs. size distribution of the amplicons that actually cluster. I created a thread some years ago about a somewhat extreme sample clustered on the MiSeq:

    Bridged amplification & clustering followed by sequencing by synthesis. (Genome Analyzer / HiSeq / MiSeq)


    In the 4th post of the thread, I converted the mass-based/log-linear results from the Agilent Bioanalyzer chip to a linear, molecule-based plot. That way it can be directly compared to the insert sizes found by mapping the read pairs back to the genome from which they came.
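    A minimal sketch of that mass-to-molar conversion (the size bins and masses below are made-up placeholders, not the actual Bioanalyzer trace; the key point is that the molar amount of each bin scales as mass divided by fragment length):

    # Convert a mass-based size distribution to a molecule-based one.
    # Placeholder values only; a real trace would have many more size bins.
    sizes_bp = [200, 400, 600, 800, 1000]       # fragment length of each bin, bp
    mass_per_bin = [1.0, 3.0, 5.0, 4.0, 2.0]    # relative mass per bin (arbitrary units)

    moles_per_bin = [mass / bp for mass, bp in zip(mass_per_bin, sizes_bp)]
    total_moles = sum(moles_per_bin)
    for bp, moles in zip(sizes_bp, moles_per_bin):
        print(f"{bp:5d} bp   molar fraction = {moles / total_moles:.2f}")

    On a molar basis the short bins gain weight relative to the mass-based trace, which is why the comparison against mapped insert sizes has to be done this way.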

    The result showed that the shorter amplicons must have clustered preferentially. Really preferentially.

    To me this has always suggested there must be some sort of competition for clustering that favors shorter amplicons.

    At the much higher clustering concentrations used for the 3000/4000, this process may be exacerbated.

    --
    Phillip



  • DNATECH
    replied
    Thanks a lot for the detailed analysis Brian.
    Lutz



  • pmiguel
    replied
    Originally posted by DNATECH View Post
    Hi Pmiguel,

    the basic procedure looks like:
    - 5 ul of library (2 nM to 3 nM including PhiX)
    - add 5 ul 0.1 N NaOH
    - add 5 ul Tris (200 mM)
    - add 35 ul Enzyme Master Mix
    - load all 50 ul onto cBot
    Ah, that's very interesting. They were finally forced to kick that ridiculous 50X dilution/neutralization step to the curb.

    So you cluster at 200-300 pM. About 10-15x what we use on our HiSeq2500.
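    A quick check of that arithmetic, using the 5 ul library input and 50 ul final volume from the quoted protocol:

    # Final loading concentration = library concentration x (library volume / total volume)
    library_volume_ul = 5.0
    total_volume_ul = 50.0      # 5 ul library + 5 ul NaOH + 5 ul Tris + 35 ul enzyme mix

    for library_nM in (2.0, 3.0):
        final_pM = library_nM * 1000.0 * library_volume_ul / total_volume_ul
        print(f"{library_nM:.0f} nM library -> {final_pM:.0f} pM loaded on the flowcell")
    # 2 nM -> 200 pM and 3 nM -> 300 pM, i.e. roughly 10-15x a low-20s pM HiSeq2500 loading.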

    --
    Phillip



  • Brian Bushnell
    replied
    Thanks for the clarification, and thanks for sharing your data!

    I did some mapping of the first 16 million reads and generated the following graphs:



    The "Other" category refers to soft-clipped bases, which is very high in this case because PhiX is small so many of the reads went off the end (*Considering these reads have been adapter-trimmed, I have no idea what is being sequenced past the ends of the PhiX genome; it might be interesting to investigate). Overall the average error rate is below 1% but above 0.1% across the read. Read 2 has a higher-than-expected insertion rate in the first half of the read. Oddly, R2 has some Ns only in the first half, and R1 has some Ns only in the second half. Unlike other platforms, the error rate for R2 seems fairly flat across the read.


    This is a different way of looking at the same data.


    The quality accuracy graph indicates that again the Q-scores are binned, and like NextSeq V1, they are highly inflated. Over 70% of the bases were assigned Q41, but the average observed quality for Q41 bases was actually Q31.
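    The observed quality for a Q bin is just -10*log10 of the measured error rate among bases assigned that score. A tiny sketch (the counts are hypothetical, chosen only to reproduce the Q41 -> ~Q31 figure above):

    import math

    # assigned Q score -> (bases observed, bases mismatching the reference)
    # Hypothetical tallies; real numbers would come from the mapping, as above.
    bins = {
        41: (70_000_000, 55_000),
        25: (15_000_000, 60_000),
    }

    for assigned_q, (total, errors) in sorted(bins.items()):
        error_rate = errors / total
        observed_q = -10.0 * math.log10(error_rate)
        print(f"assigned Q{assigned_q}: error rate {error_rate:.2e} -> observed Q{observed_q:.1f}")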


    The insert size distribution is fairly interesting for a couple reasons. It looks like the platform can probably handle inserts over 450bp fairly well; there were some short inserts, but they did not overwhelmingly out-compete the long ones. But the flat distribution of the short-insert tail is odd.
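    The insert-size distribution itself can be pulled straight from the TLEN field of properly paired alignments; a minimal sketch (again with "phix_mapped.bam" as a placeholder name):

    from collections import Counter
    import pysam

    insert_sizes = Counter()
    with pysam.AlignmentFile("phix_mapped.bam", "rb") as bam:
        for read in bam:
            # count each pair once, via read 1 of properly paired primary alignments
            if read.is_proper_pair and read.is_read1 and not read.is_secondary:
                insert_sizes[abs(read.template_length)] += 1

    for size, count in sorted(insert_sizes.items()):
        print(size, count)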

    Lastly, it's worth noting that around 83% of the reads mapped to the reference with no mismatches or indels.

    For comparison, I've attached the mhist of a 2x150bp HS2500 run (not on PhiX), below. To me the HS2500 looks better, but not drastically better, in terms of error rates.

    Attached Files
    Last edited by Brian Bushnell; 05-08-2015, 07:02 PM.



  • DNATECH
    replied
    Hi Brian,

    Thanks for looking at the data. The files that I uploaded have 482,680,800 reads. The sequencer generates "reads" for every single nanowell, whether it is loaded or not. Thus, the figure of 30% or higher "failing" reads is expected. The SAV viewer indicates a total of 482.68 million nanowells. According to Illumina, 60% to 70% of clusters passing filter is considered very good, because the figure is calculated with respect to the total number of nanowells. I intentionally uploaded files including all non-passing reads (though the majority of the "not passing filter" data are likely simply empty nanowells).
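    A quick consistency check, extrapolating Brian's 29.36% chastity-failure figure (measured on the first 4 million reads) to all 482.68 million wells:

    total_wells = 482_680_800
    fail_fraction = 0.2936          # chastity failures in the first 4M reads

    pf_reads = total_wells * (1.0 - fail_fraction)
    print(f"~{pf_reads / 1e6:.0f} million reads passing filter "
          f"({1.0 - fail_fraction:.1%} of all nanowells)")
    # ~341 million PF reads, a PF rate just above Illumina's quoted 60-70% "very good" range.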

    Lutz


    Originally posted by Brian Bushnell View Post
    I finally finished downloading these, and I'll take a look at the quality from mapping. But before I do that, I always trim adapters... but I was never sure what kind of adapters PhiX reads had. They don't exactly match any adapters in my list, so I'll call them "PhiX adapters". Here they are, for reference:

    >Read1_adapter
    AGATCGGAAGAGCGGTTCAGCAGGAATGCCGAGACCGATCTCGTATGCCGTCTTCTGCTTGAAA
    >Read2_adapter
    AGATCGGAAGAGCGTCGTGTAGGGAAAGAGTGTAGATCTCGGTGGTCGCCGTATCATTAAAAAA

    Also, at least for the first 4 million reads, 29.36% failed the chastity filter.
    Last edited by DNATECH; 05-08-2015, 03:38 PM.



  • Brian Bushnell
    replied
    I finally finished downloading these, and I'll take a look at the quality from mapping. But before I do that, I always trim adapters... but I was never sure what kind of adapters PhiX reads had. They don't exactly match any adapters in my list, so I'll call them "PhiX adapters". Here they are, for reference:

    >Read1_adapter
    AGATCGGAAGAGCGGTTCAGCAGGAATGCCGAGACCGATCTCGTATGCCGTCTTCTGCTTGAAA
    >Read2_adapter
    AGATCGGAAGAGCGTCGTGTAGGGAAAGAGTGTAGATCTCGGTGGTCGCCGTATCATTAAAAAA

    Also, at least for the first 4 million reads, 29.36% failed the chastity filter.
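    For reference, a minimal trimming sketch against those sequences (exact 12-base seed matching only; a real trimmer such as BBDuk or cutadapt tolerates mismatches and partial adapters at the read end):

    # Trim a read at the first exact match to the start of the read-1 adapter above.
    READ1_ADAPTER = "AGATCGGAAGAGCGGTTCAGCAGGAATGCCGAGACCGATCTCGTATGCCGTCTTCTGCTTGAAA"

    def trim_read1(seq, adapter=READ1_ADAPTER, seed=12):
        pos = seq.find(adapter[:seed])
        return seq[:pos] if pos != -1 else seq

    # Example: a 40-base insert that reads through into the adapter
    read = "ACGT" * 10 + READ1_ADAPTER[:35]
    print(trim_read1(read))    # prints only the 40-base insert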



  • DNATECH
    replied
    Hi Pmiguel,

    the basic procedure looks like:
    - 5 ul of library (2 nM to 3 nM including PhiX)
    - add 5 ul 0.1 N NaOH
    - add 5 ul Tris (200 mM)
    - add 35 ul Enzyme Master Mix
    - load all 50 ul onto cBot

    Originally posted by pmiguel View Post
    Wow, 2000 pM? I think the highest we ever went on the HiSeq2500 was 23 pM.

    --
    Phillip



  • GenoMax
    replied
    To get 2-2.5x more clusters (compared to a 2500) load 100x more? DNA binding in nanowells must not be very efficient.
    Last edited by GenoMax; 05-08-2015, 11:35 AM.



  • pmiguel
    replied
    Originally posted by DNATECH View Post
    Hi Miguel,

    the input was 5ul of PhiX at 2 nM. So far we have used 2 nM concentrations for all our libraries/lanes. Illumina recommends up to 3 nM.
    From what our FAS told us, I got the impression under-loading could be more detrimental than over-loading.
    Wow, 2000 pM? I think the highest we ever went on the HiSeq2500 was 23 pM.

    --
    Phillip



  • DNATECH
    replied
    Hi GenoMax,

    Perhaps we are just being careful at the moment, since Illumina seems to be very careful and there is very little information so far. The customer samples (n=11) have been looking great so far except one; this sample had a larger low-complexity component to it (which we were not aware of). For this sample, the Q30 rates dropped from 95% to 70% after the first 60 to 70 low-complexity bases.

    Originally posted by GenoMax View Post
    @DNATECH: Based on this (and your other post) it sounds like you need "near perfect libraries" to get good data from patterned flowcells. This could be a problem for core facilities, where "variable" quality libraries come in from customers.

    It would be interesting to hear about your experiences as real world customer libraries start flowing through.



  • DNATECH
    replied
    Hi Miguel,

    the input was 5ul of PhiX at 2 nM. So far we have used 2 nM concentrations for all our libraries/lanes. Illumina recommends up to 3 nM.
    From what our FAS told us, I got the impression under-loading could be more detrimental than over-loading.

    Originally posted by pmiguel View Post
    What final concentration was the phiX library that you clustered? I mean after neutralization?

    I mean is there no danger of overclustering anymore? That was what I was hoping for when I heard about the patterned flowcells...

    --
    Phillip

