  • Are Illumina missing a trick over cluster density?

    Don't need any help, just curious. As I understand it, Illumina make their flowcells with an excess of primers bound to the slide. It's then up to whoever loads the cBot/cluster station to work out how much library to add. This is quite a delicate business, as everyone wants the highest read numbers possible, but if you overload then the clusters overlap and you lose a lot of the data. Would it be easier for Illumina to make the flowcells with the primers already attached at the optimum density? Then the operator could add an excess of library, happy in the knowledge that optimal cluster density would be achieved. If they were feeling really flash, they could somehow print the primers on the slide in an ordered pattern so that the computer knows where to look for clusters, and thus pack them in even tighter.
    I'm probably missing something really obvious. Maybe someone can enlighten me.
    Cheers.

  • #2
    Hi Henry,

    It is not the oligos that determine cluster density but the concentration of library loaded by the user. There is little Illumina can do to help individuals quantitate and dilute accordingly, although there are qPCR protocols to help with this.

    James.



    • #3
      Originally posted by james hadfield View Post
      It is not the oligos that determine cluster density but the concentration of library loaded by the user
      I know this is the case at the moment, but I was just idly wondering if it might be easier the other way round. At the moment the flowcell is set up so that, if the user loads the library properly, the clusters will be dotted around at the correct density with unused oligos all over the place. As you say, it's the library concentration that's the limiting step. If the flowcells were made differently, with the oligos already at the correct density and the goal of getting a library fragment onto every oligo, then it would be easier for the user, since they could simply wash over an excess of library. I don't mind being wrong about this; we were just having a conversation over coffee and it seemed like a good idea.



      • #4
        The way I picture this (and please do correct me if I'm off base) is that your DNA fragment falls to the flowcell surface & then amplifies out a certain radius from the initial site. That amplification relies on the surface-bound primers. So the library concentration ensures scatter around the flowcell, but you need a certain density of primers to ensure amplification.

        But this assumes all the primers are the same. If you had two flavors of primer ("docking" and "amplification"), then the docking primers could potentially be less dense (and perhaps patterned to avoid cluster collisions) while retaining a forest of amplification primers.
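The benefit of patterning the docking sites can be sketched with a quick Monte Carlo simulation. This is purely illustrative: the positions, density, and collision distance below are made-up parameters, not real flowcell geometry, and it simply treats random seeding as a Poisson-like process where any two clusters closer than some minimum distance are lost to the image analysis.

```python
import random
import math

def usable_fraction(n_seeds, area_side, min_dist, trials=5):
    """Estimate the fraction of randomly seeded clusters that have no
    neighbor within min_dist (i.e. survive as resolvable clusters)."""
    total = 0.0
    for _ in range(trials):
        pts = [(random.uniform(0, area_side), random.uniform(0, area_side))
               for _ in range(n_seeds)]
        usable = sum(
            1 for i, (x1, y1) in enumerate(pts)
            if all(math.hypot(x1 - x2, y1 - y2) >= min_dist
                   for j, (x2, y2) in enumerate(pts) if i != j)
        )
        total += usable / n_seeds
    return total / trials

# Random seeding: at moderate density a noticeable fraction of clusters
# collide and are wasted (roughly half, with these toy parameters).
print(usable_fraction(n_seeds=500, area_side=100.0, min_dist=2.0))

# An ordered array with a 2-unit pitch would place the same 500 docking
# sites collision-free by construction (usable fraction = 1.0).
```

The gap between the random and ordered cases is exactly the capacity the "print the primers in a pattern" idea is chasing.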



        • #5
          That's pretty much what I imagined. I had kind of envisaged the 'docking' primers being there at the outset, and then washing the flowcell with 'amplification' primers during the various cycles. I could see a possible problem if some people wanted 200bp fragments and others wanted 500bp, so the bridges would be a different size.
          It's all a bit of coffee time musing really. Probably the good folks at Illumina have brighter sparks than me working to increase output. I'll kick myself if someone runs with it and makes millions.



          • #6
            I had a similar idea:

            If one printed the capture/amplification oligos onto the flowcell, then it might be possible to perform a sort of sequence capture/enrichment in situ on the flowcell before cluster amplification -- each address (feature) would simply contain sequence-specific oligos targeting a region of interest. Of course, one would require a lot of redundancy amongst features, and very high loading concentrations of fragmented sample DNA, to ensure that each feature had a high probability of capturing its cognate target. But kind of intriguing nonetheless.

            One might also imagine a printed flowcell with capture/amplification oligos for genomic targets of interest, in a device that allows one to flow a constant stream of sample across the flowcell. One could then carry out an extended capture/hybridisation process by flowing a large volume of (indexed and pooled?) sample material through the flowcell, possibly circulating the sample repeatedly through the flowcell.



            • #7
              Just guessing, but I presume the oligos are either synthesized on the surface of the flowcell or bound to it later. So to implement the docking/amplification scheme, you would need most of the oligos to be blocked from binding, likely by making them double-stranded. Plus you would need to be able to place the docking oligos at known intervals.

              Seems like a fairly simple idea, so presumably Illumina has looked at it. Must be too expensive to fabricate such a flowcell.

              Also you would need to look at what incentive Illumina would have to implement this method. Currently, if your loading is too high or too low, the effect is that you need to run another lane (or set of lanes). That doesn't hurt Illumina. Especially since skilled operators are said to be capable of accurately delivering the right density of library molecules to get good results. (I think this has reached near "Urban Legend" status.)

              Generally Illumina will report results of good runs for their specs, so they remain competitive against their bead-based competitors. (Who have a version of exactly the same problem--during emulsion PCR.)

              Were the docking/amplification method deployed, there would still likely be manufacturing issues that would create bad batches. But Illumina would be on the hook for these, whereas their customers currently accept the risk of over or under-estimating the concentration of their sequencing amplicons.

              So my prediction would be that some competitor would need to pull closer to them in market share before this methodology would be seriously considered by Illumina.

              --
              Phillip



              • #8
                In addition to the fabrication challenge, the kinetics of reassociation might prove problematic. Annealing rates depend on both library and primer concentrations. Currently, 10^8-10^9 library molecules (100 ul/lane at low-pM concentration) are annealed to a vast excess (several orders of magnitude higher) of tethered primers with ~10% yield (10^7-10^8 clusters). If the docking density for clusters were increased 10X (to 10^9 docking primers), the library concentration would have to approach the original concentration of tethered primers (micromolar range?) to anneal with the same kinetics. And at higher library concentrations there's the confounding effect of library molecules reannealing to each other.
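The molecule counts above follow directly from concentration and volume. A back-of-envelope check (the specific 2 pM and 20 pM values are illustrative picks to bracket the quoted range, not loading recommendations):

```python
AVOGADRO = 6.022e23  # molecules per mole

def molecules(conc_molar, volume_litres):
    """Number of molecules in a given volume at a given molar concentration."""
    return conc_molar * volume_litres * AVOGADRO

vol = 100e-6  # 100 ul lane volume, in litres
for conc_pm in (2, 20):
    n = molecules(conc_pm * 1e-12, vol)
    print(f"{conc_pm} pM in 100 ul -> {n:.1e} molecules")
# 2 pM  -> ~1.2e8 molecules
# 20 pM -> ~1.2e9 molecules
```

So low-picomolar loading in 100 ul does indeed give the 10^8-10^9 library molecules per lane quoted above, and pushing the annealing target up a few orders of magnitude would demand correspondingly higher library concentrations.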



                • #9
                  A year or two ago Illumina were talking about producing an ordered flowcell so that you could pack more clusters in using technology borrowed from some of their other products.

                  I imagine Illumina would like to get to the stage where one or two lanes could generate enough to cover a whole human genome. I can't imagine they'd give up on research which would keep up the pressure on Life Tech.

                  But as pmiguel says - who knows when they might choose to release it if they do perfect it?



                  • #10
                    As konrad98 has said, "semi-ordered arrays" did show up in some of Illumina's future higher-throughput roadmaps, but back then (perhaps two-ish years ago) the timeline indicated they would potentially be available by now. They have done a good job scaling throughput over the last couple of years, so maybe they decided to hold onto it as a future improvement for when they have exhausted the image-processing avenues...



                    • #11
                      My impression from the semi-ordered arrays was that the advances Illumina have made in image analysis greatly reduced the benefits from ordering the clusters.

                      I can't remember who I was talking to (I think it was an Illumina person), but I heard that the clusters on a flowcell never actually overlap - they naturally stop expanding when they hit an adjacent cluster, so theoretically you could end up with a solid lawn of clusters which you later let your image analysis sort out. I can't imagine that ordered arrays would offer much of a capacity increase over this.



                      • #12
                        Yeah, they had data on this at last year's AGBT. It was a really interesting demonstration: the clusters they showed were superimposed but not commingled. This led them to the improved cluster-calling algorithm that was being released for the GAIIx at that time.

                        If you could get a really well-defined pixel map of the flowcell, I think you could probably deal with a solid lawn of clusters.



                        • #13
                          Originally posted by konrad98 View Post
                          I imagine Illumina would like to get to the stage where one or two lanes could generate enough to cover a whole human genome. I can't imagine they'd give up on research which would keep up the pressure on Life Tech.
                          A recent report on GenomeWeb from Illumina CEO Jay Flatley describes an example in which company researchers generated 1.13 terabases of data in a 14-day run on the HiSeq 2000 with 2 x 150 base reads, about 80 gigabases per day.

                          That's a human genome in less than a lane!

                          I am sure I posted something working out how it might be possible to get to 2 TB on HiSeq with some simple improvements in read length and cluster density. It seems they have cracked it, and we are bound to hear a lot more at AGBT.
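The per-lane claim is easy to sanity-check. Assuming the HiSeq 2000's 2 flowcells x 8 lanes = 16 lanes and a ~3.1 Gb human genome (neither figure is stated in the thread):

```python
total_bases = 1.13e12  # 1.13 Tb over the whole run
days = 14
lanes = 16             # assumed: 2 flowcells x 8 lanes
genome_size = 3.1e9    # assumed human genome size, bases

per_day = total_bases / days
per_lane = total_bases / lanes
coverage_per_lane = per_lane / genome_size

print(f"{per_day / 1e9:.0f} Gb/day")    # ~81 Gb/day
print(f"{per_lane / 1e9:.0f} Gb/lane")  # ~71 Gb/lane
print(f"~{coverage_per_lane:.0f}x human coverage per lane")  # ~23x
```

So one lane comfortably exceeds 1x genome size, though whether ~23x counts as "a human genome" depends on the coverage you want.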

                          I wonder what Life Tech et al are going to show?



                          • #14
                            Originally posted by james hadfield View Post
                            A recent report on GenomeWeb from Illumina CEO Jay Flatley describes an example in which company researchers generated 1.13 terabases of data in a 14-day run on the HiSeq 2000 with 2 x 150 base reads, about 80 gigabases per day.
                            Oh great - even more data.

                            Won't someone please think of the bioinformaticians?

