
Low data output from questionable sequencing facility


  • #1

    Hi all,

    I have been getting some questionable runs from a sequencing facility and was wondering if there could be any clues as to whether it is their fault or my own fault in lab prep.

    I am sequencing fish gut microbiome libraries on a MiSeq 2x300, and the first run I sent off was great (that was a different facility); the lowest sample had 30k reads, and the average was about 100k reads per sample.

    Then we switched facilities, and since then I have never achieved anything close to my first run. These are all very similar libraries, with near-identical protocols, except that we now use MagBind bead cleans instead of AMPure (a very similar product). Now all my runs average about 30k reads per sample, many samples get fewer than 5k, and my latest MiSeq run only generated 7 million reads total. I've attached a picture of the QC reports for the good run and an example of a current bad one.

    Is it that easy for a facility to mess up so many MiSeq runs? They have been loading at 12-13 pM with 20-25% PhiX. My last library measured 26 nM by Qubit.
    Another piece of info: the core recently dropped their prices in half. I don't want to give specifics, but it is less than $800, so VERY cheap IMO. Not sure if this could influence what goes on at a facility.

    From the reports, the %PF is around 93-95% and the %Q30 is around 84-89% for these runs.

    Let me know if I should provide any other information.
    Any feedback is much appreciated!
    Thanks,
    Sam
    Last edited by samd; 11-06-2019, 06:46 PM.

  • #2
    More info on the following would be useful:
    1- Are the libraries 16S V regions, and what is the overall prep workflow?
    2- Cluster density
    3- How many libraries are multiplexed
    4- Read output



    • #3
      This is using custom sequencing primers?

      The heating and cooling elements are not calibrated exactly the same across all MiSeqs (by Illumina). Some risky custom sequencing primer designs work better on some MiSeqs than on others.



      • #4
        The $800 is very, very low. Even twice that would be on the low end at many service providers for a 2x300 v3 MiSeq run.

        Have you compared Qubit numbers to qPCR to see if there is a mismatch in those approaches to quantifying your library?
        Providing nextRAD genotyping and PacBio sequencing services. http://snpsaurus.com
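        A quick sanity check before setting up qPCR is to recompute the molarity implied by the Qubit mass reading yourself. A minimal sketch (the 10 ng/uL and 400 bp values are made-up examples; use your own Qubit reading and your average amplicon size from a trace):

```python
def qubit_to_nM(ng_per_ul, fragment_bp):
    """Convert a Qubit dsDNA reading (ng/uL) to molarity (nM),
    assuming an average fragment length in bp (~660 g/mol per bp of dsDNA)."""
    return ng_per_ul * 1e6 / (fragment_bp * 660)

# e.g. a hypothetical 10 ng/uL library averaging 400 bp:
print(round(qubit_to_nM(10, 400), 1))  # prints 37.9
```

        If qPCR comes out far below the Qubit-derived number, a chunk of the library is likely non-clusterable, and the facility is effectively loading less than they think.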



        • #5
          Hi all,
          appreciate the responses

          @nucacidhunter
          1. Primers are the EMP 515-806 V4 region
          2. Cluster density was about 477 if I remember correctly
          3. Just 1 library. Usually I was doing 160 samples per run and this time I reduced the run to about 100 samples and still got crummy results
          4. Read output was actually 25M total and 23M PF, which I guess is great actually, but then 40% were undetermined, and when I import the fastq files into QIIME2 I only get about 7 million reads (this is before DADA2, so still the entire files).
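          One way to pin down where the reads go missing is to count records directly in the fastq.gz files before QIIME2 touches them. A minimal sketch (the demux/ directory and filename pattern are hypothetical; point it at wherever your demultiplexed files live):

```python
import glob
import gzip

def count_reads(fastq_gz):
    # each FASTQ record is exactly 4 lines
    with gzip.open(fastq_gz, "rt") as fh:
        return sum(1 for _ in fh) // 4

# sum R1 reads across all samples; compare against the run's PF total
total = sum(count_reads(f) for f in glob.glob("demux/*_R1_001.fastq.gz"))
print(total)
```

          Comparing that total (plus the Undetermined file) against the run's PF read count shows whether the loss happens at demultiplexing or at import.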



          • #6
            2. Cluster density was about 477 if I remember correctly
            That does not jibe with the read numbers. Are you referring to 23M total reads (R1+R2) or passing clusters?

            That cluster density is not high enough to get to 20+M clusters.



            • #7
              @luc

              I used the Illumina unique dual indexes, so that should not be an issue, right?

              @SNPsaurus

              I have not done that. I know qPCR is much more accurate; we just don't have the capability in my lab. However, I could use another lab's qPCR machine; I would just have to learn the protocol and all. Would this likely help me get much better runs?
              Sam



              • #8
                Hi GenoMax,

                Under the Indexing QC tab it says Total Reads: 25,058,484 and PF Reads: 23,419,084, and the density is 477. I believe this would be total reads, i.e. R1 + R2?
                However, when I go to the Lane Metrics I see 11M reads PF, with density at 472. There is also Phasing/Prephasing of 0.128% / 0.215%. Let me know if there is anything else I can look for that might help clarify.
                Thanks
                Last edited by samd; 11-06-2019, 11:07 AM.



                • #9
                  Originally posted by samd View Post
                  @SNPsaurus

                  I have not done that. I know qPCR is much more accurate; we just don't have the capability in my lab. However, I could use another lab's qPCR machine; I would just have to learn the protocol and all. Would this likely help me get much better runs?
                  Sam
                  After seeing the updates, it probably wouldn't help much. Some library preps can have a high percentage of non-functional DNA fragments, but a PCR amplicon should be pretty reliable. And if you are getting 25 million raw reads but only 11 million demultiplexed, the issue is perhaps somewhere in the demultiplexing? Have you looked at the Undetermined fastq file for index sequences, to see whether they contain Ns or are simply not present in your index sequence list?
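                  To make that check concrete: the index read is stored in each FASTQ header, so tallying the most common index strings in the Undetermined file usually shows right away whether the culprit is Ns, PhiX, or a real index missing from the sample sheet. A minimal sketch (the filename in the comment is the standard bcl2fastq default; adjust as needed):

```python
import gzip
from collections import Counter

def top_undetermined_indexes(fastq_gz, n=20):
    """Tally the index field of Illumina FASTQ headers, i.e. the last
    colon-separated field (e.g. '...:N:0:ACGTACGT+TGCATGCA')."""
    counts = Counter()
    with gzip.open(fastq_gz, "rt") as fh:
        for i, line in enumerate(fh):
            if i % 4 == 0:  # header line of each 4-line record
                counts[line.strip().split(":")[-1]] += 1
    return counts.most_common(n)

# e.g.: top_undetermined_indexes("Undetermined_S0_L001_R1_001.fastq.gz")
```

                  Indexes full of Ns point at a chemistry/basecalling problem; valid-looking indexes that aren't on the sample sheet point at a demultiplexing/sample-sheet problem.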



                  • #10
                    Cluster density is kind of low for a v3 kit, even for low-diversity libraries. We usually sequence the V3V4 region on a v3 600-cycle kit and get around 50M PE reads at ~1000 k/mm2 with around 17 pM loading concentration.



                    • #11
                      Hi SNPsaurus,

                      Ok, so after discussing with them, the 11M reads simply referred to the forward reads; the 23M PF refers to forward and reverse together. I see a 40% undetermined reads metric, which is a bit high compared to 25% on my last run, so I wonder if this is an indexing issue vs a sequencing issue?

                      @itstrieu: Damn, I wish I could get those numbers. Would you recommend upping my concentration to 17 pM instead of 13? Or is that dependent on other factors?



                      • #12
                        Originally posted by samd View Post
                        Hi all,
                        appreciate the responses

                        @nucacidhunter
                        1. Primers are the EMP 515-806 V4 region
                        2. Cluster density was about 477 if I remember correctly
                        3. Just 1 library. Usually I was doing 160 samples per run and this time I reduced the run to about 100 samples and still got crummy results
                        4. Read output was actually 25M total and 23M PF, which I guess is great actually, but then 40% were undetermined, and when I import the fastq files into QIIME2 I only get about 7 million reads (this is before DADA2, so still the entire files).
                        Run stats seem within spec for the library type, although they seem to be on the cautious side. To increase output safely, the following can be done:
                        1- Increasing cluster density to around 800 k/mm2
                        2- Tuning the PhiX% down to 20 if, after mapping the undetermined reads, the majority of them originate from PhiX. Undetermined reads could also be reads whose index was not assigned as a result of mismatches in the custom index primer or the PCR primers themselves.



                        • #13
                          Originally posted by samd View Post
                          Hi SNPsaurus,

                          Ok, so after discussing with them, the 11M reads simply referred to the forward reads; the 23M PF refers to forward and reverse together. I see a 40% undetermined reads metric, which is a bit high compared to 25% on my last run, so I wonder if this is an indexing issue vs a sequencing issue?

                          @itstrieu: Damn, I wish I could get those numbers. Would you recommend upping my concentration to 17 pM instead of 13? Or is that dependent on other factors?
                          I would say it is library dependent, and it depends on which metrics you are aiming for. For V3V4 sequencing, we use spacer primers to add diversity to the run.

                          For V4 sequencing, we use the 515F (Parada)-806R (Apprill) primers from EMP with a v2 500-cycle kit, and we usually get about 35M PE reads passing filter. For this library and kit we load at around 8.75 pM, but we keep a log so we have a floating average to determine the loading concentration for the next run.

                          Also, it might depend on the MiSeq, because we have two MiSeqs and one performs slightly better. I usually vary the concentration in increments of 0.25 to 0.50 pM to be careful not to overcluster.
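                          The floating-average idea can be sketched in a few lines. (All the numbers below are made-up illustrations, and the linear density-vs-concentration assumption is only a rough first-order guide; real runs drift, which is exactly why keeping the log matters):

```python
# hypothetical run log: (loading concentration in pM, achieved cluster density in k/mm2)
runs = [(12.0, 477), (13.0, 520), (14.0, 610)]

target = 800  # desired cluster density, k/mm2

# assume density scales roughly linearly with loading concentration
density_per_pM = sum(d / c for c, d in runs) / len(runs)
next_pM = target / density_per_pM
print(round(next_pM, 2))  # prints 19.46
```

                          In practice you would cap the step at the small pM increments mentioned above rather than jumping straight to the computed value, since overclustering costs far more than underclustering.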
                          Last edited by itstrieu; 11-06-2019, 02:42 PM.



                          • #14
                            Hi nucacidhunter,

                            Sorry, do you mean manually changing the cluster density to 800 k/mm2? Is this something I could ask the facility to do?
                            And I have been using Illumina UD indexes, so I am guessing there shouldn't be any issues on that end.
                            Thanks,
                            Sam



                            • #15
                              Hi SNPsaurus,

                              Interesting. I have heard of the spacer primers but I think I am past the point in my PhD of redoing everything. But it would be nice.

                              This might be a dumb question, but I thought a MiSeq spits out 25M reads, yet I am seeing you and others say 35M or 50M? How is this possible?

