16S Miseq run with 96 indexed samples


  • 16S Miseq run with 96 indexed samples

    Hi everyone,
    First post here, sorry for the bad writing.
    My colleagues and I are planning a MiSeq run to examine the 16S rRNA of several bacterial samples.
    We would like to sequence as many samples as possible in one run, and we are thinking of using the 96-index Nextera kit coupled with the v2 chemistry to sequence the V3-V4 region.
    Our concerns come down to four points:

    1. Does combining 2x250 PE reads with the V3-V4 amplicon (460 bp) work all right?
    2. How much PhiX should we use to avoid overclustering? We were told that 30% should be enough.
    3. As the number of samples is quite large, we are not sure we'll be able to quantify every sample by qPCR. Would a Qubit-based normalization of the individual amplicons plus a qPCR-based quantification of the pooled amplicons work? How badly could this bias our run?
    4. For those who run this routinely, how many reads per sample can we expect?

    Any help would be appreciated.
    Cheers

  • #2
    1. A 460 bp amplicon with a 500-cycle kit is probably not enough overlap. I know Illumina claims a 50-bp overlap is enough, but all my users like a much greater overlap.
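    The arithmetic behind that overlap concern is easy to sanity-check (a quick sketch; the read and amplicon lengths are the figures from the question):

    ```python
    # Expected overlap between paired reads spanning a single amplicon:
    # overlap = 2 * read_length - amplicon_length
    def pair_overlap(read_length, amplicon_length):
        return 2 * read_length - amplicon_length

    # 2x250 on a ~460 bp V3-V4 amplicon
    print(pair_overlap(250, 460))  # -> 40, i.e. below even the 50 bp Illumina claims is enough
    ```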

    2. I use a 10% PhiX spike for amplicon runs and aim for a cluster density of ~800 K/mm². This seems to generate good data.

    3. If you are using the 2-step PCR approach to generate your amplicons (or any approach where PCR, not ligation, is your last step), you should be able to quantify and pool based on Qubit readings and then just quantify your pooled library with qPCR. This is what I do and it works well - not perfectly, but you are quantifying and pooling small volumes of sample (even with qPCR), so there's a fair amount of pipetting error introduced. Make sure you convert the ng/ul reading spit out by the Qubit to nM so that you are pooling equimolar amounts of DNA.
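    That ng/ul-to-nM conversion is just the standard average-mass formula for double-stranded DNA (~660 g/mol per bp); a minimal sketch, with the example concentration and fragment length chosen purely for illustration:

    ```python
    def ng_ul_to_nM(conc_ng_ul, fragment_bp):
        # nM = (ng/ul) / (660 g/mol per bp * fragment length in bp) * 1e6
        return conc_ng_ul / (660 * fragment_bp) * 1e6

    # e.g. a 10 ng/ul Qubit reading on a ~600 bp indexed amplicon
    print(round(ng_ul_to_nM(10, 600), 1))  # -> 25.3 (nM)
    ```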

    4. If you do run a V2 kit, you should end up with ~100K reads per sample. You'd get more reads with a V3 kit.
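    That per-sample figure follows from the kit's total output; a rough back-of-the-envelope sketch, assuming ~12M passing-filter reads from a v2 kit (Illumina's spec is roughly 12-15M) and the 10% PhiX spike mentioned above:

    ```python
    def reads_per_sample(total_pf_reads, phix_fraction, n_samples):
        # reads left for the indexed samples after subtracting the PhiX spike
        return total_pf_reads * (1 - phix_fraction) / n_samples

    print(int(reads_per_sample(12e6, 0.10, 96)))  # -> 112500, i.e. ~100K per sample
    ```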



    • #3
      Originally posted by marcpavi View Post
      3-As the number of samples is quite large, we are not sure if we’ll be able to quantify all samples through qPCR. Does a QUBIT based normalization of individual amplicons and a qPCR based normalization of pooled amplicons work? How badly could this bias our run?
      At first we tried to quantify and normalize every PCR sample by hand; we quickly realized this was too much work. Now, after PCR (either one-step for 16S or two-step for everything else), all reactions are normalized using Invitrogen (now Life Tech) SequalPrep normalization plates. Very easy and fast. Is it perfect? No, but it is definitely good enough to get decent representation of all samples in the pool.



      • #4
        1. We do 2x250 for the V4 region and routinely get 90+% of read pairs that can be joined.

        2. 30% is overkill, 10% is probably fine like microgirl123 said. We honestly do even less (5% or none) because the MiSeq chemistry is pretty good with low diversity samples.

        3/4. You get some variation in #reads/sample. Fortunately, 16S tends to saturate very quickly (think 10-20k reads/sample), so even 100K with a broad distribution should cover most of your samples.



        • #5
          Recently I had a bad run with overclustering (density over 1100 K/mm² for v2), but we saw a very low passing-filter rate and very low PhiX. For troubleshooting we checked everything: no problem with the libraries, no problem with the prep steps. I just recalled that I left the cartridge on the bench for a while after it thawed. Could that be the problem? No amplification issue, yet over 1100 cluster density? Has anybody left a cartridge on the bench and still gotten a good run, and for how long?

          Help needed, any hint? Thanks.
          Last edited by GA-J; 09-25-2015, 04:55 PM.



          • #6
            From your description it looks like overclustering and consequent low PF. Could you post a screenshot of the flow cell chart (from the run's SAV) by selecting Cluster from the top tab?



            • #7
              @nucacidhunter: Thank you for your opinion. I am not able to post a screenshot now; I could try next Monday. But there is no reason for it to be overclustered. Let's try to dig deeper.


              Originally posted by nucacidhunter View Post
              With your description it looks like overclustering and consequent low PF. Could you post a screenshot of flow cell chart (from run SAV) by selecting cluster from the top tab.



              • #8
                Two main reasons for overclustering:
                1. Underestimating the library concentration
                2. Pipetting errors

                Other causes of low PF:
                1. Using non-optimal custom primers, either by design or due to synthesis inefficiencies
                2. A low-diversity library

