
  • ddabiri
    replied
    Originally posted by SNPsaurus View Post
    The $800 is very very low. Even twice that would be on the low end at many service providers for 2x300 v3 MiSeq.

    Have you compared Qubit numbers to qPCR to see if there is a mismatch in those approaches to quantifying your library?

    In relation to this observation, I recently started a run that ended with a Q-score of 78% and a Read PF of 96%, with a total of 4.9M reads. It is underclustered as a result of using the Qubit value for loading rather than the qPCR value, and we also multiplexed 350 samples (amplicon). However, one surprising result is that half of the samples got little or no reads, while the other indexed samples account for the 4.9M reads generated. Any explanation for this?

    Thanks
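
    On the Qubit vs qPCR point: a common reason the two disagree is that Qubit measures total dsDNA mass while qPCR measures only amplifiable (adapter-bearing) molecules. As a rough sanity check, the standard mass-to-molarity conversion can be sketched as below (a minimal sketch; the 4 ng/µL and 600 bp figures are made-up example values, not numbers from this thread):

    ```python
    def library_molarity_nM(conc_ng_per_ul, avg_fragment_bp):
        """Convert a Qubit mass reading to an estimated molarity.

        Uses 660 g/mol as the average mass of one double-stranded base
        pair. qPCR reports molarity of amplifiable molecules directly,
        so a large gap between this estimate and the qPCR value hints
        at a high fraction of non-amplifiable fragments.
        """
        return conc_ng_per_ul * 1e6 / (660 * avg_fragment_bp)

    print(round(library_molarity_nM(4.0, 600), 1))  # → 10.1 (nM)
    ```

    If the qPCR molarity comes out well below this Qubit-based estimate, loading from the Qubit number will undercluster the run, as described above.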



  • samd
    replied
    Hi itstrieu,

    I see. Well, I guess I am getting very low outputs then; I will run that suggestion by them. It is just strange, because my first run at Berkeley, which I consider "good", was done at 12 pM, and I even tried upping it to 13 pM here at UCLA and ended up getting fewer reads.



  • itstrieu
    replied
    Originally posted by samd View Post
    Ok, so I am guessing the "25M total reads" on BaseSpace actually means 50M since I did PE. Thanks for the suggestion; I will look into that.

    One thing I just remembered is that the QC results were quite different between the first "good" run and the subsequent "bad" runs. The good run has a nice skinny peak and the bad runs have lumpy peaks, which I guess would be attributed to non-specific binding of primers? I've attached them to the post in case anyone is interested or has any insight into that.
    Again, thanks for all the feedback!
    Sam
    If the total reads is 25M under the Indexing QC tab in BaseSpace, that is actually the total PE reads; under the Metrics tab, Reads PF will be half of that. I would ask if they could rerun the library at a higher concentration to target a cluster density of around 900 K/mm² for more reads.



  • samd
    replied
    Ok, so I am guessing the "25M total reads" on BaseSpace actually means 50M since I did PE. Thanks for the suggestion; I will look into that.

    One thing I just remembered is that the QC results were quite different between the first "good" run and the subsequent "bad" runs. The good run has a nice skinny peak and the bad runs have lumpy peaks, which I guess would be attributed to non-specific binding of primers? I've attached them to the post in case anyone is interested or has any insight into that.
    Again, thanks for all the feedback!
    Sam



  • itstrieu
    replied
    You can increase cluster density by increasing the loading concentration.

    In my post those are paired-end read counts, so a single-end run would give half of that. Here is a link showing how many reads you should expect from the various MiSeq kits, although it is dependent on cluster density: https://www.illumina.com/systems/seq...fications.html

    Edit: Sometimes you hear the words "cluster" and "read" used interchangeably. I believe that a v3 kit can generate around 25M unique clusters, and each cluster yields two reads for paired-end, so it would output 50M reads.
    Last edited by itstrieu; 11-06-2019, 03:06 PM.
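
    To make the cluster/read distinction concrete, the arithmetic in the post above can be sketched in two lines (numbers are the approximate v3 figures quoted in this thread):

    ```python
    clusters_pf = 25_000_000   # approx. unique clusters passing filter on a v3 kit
    reads_per_cluster = 2      # paired-end: read 1 + read 2 from the same cluster
    print(clusters_pf * reads_per_cluster)  # → 50000000
    ```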



  • samd
    replied
    Hi SNPsaurus,

    Interesting. I have heard of the spacer primers but I think I am past the point in my PhD of redoing everything. But it would be nice.

    This might be a dumb question, but I thought a MiSeq spits out 25M reads, yet I am seeing you and others say 35M or 50M. How is this possible?



  • samd
    replied
    Hi nucacidhunter,

    Sorry, do you mean manually changing the cluster density to 800 K/mm²? Is this something I could tell the facility to do?
    And I have been using Illumina UD indexes, so I am guessing there shouldn't be any issues on that end.
    Thanks,
    Sam



  • itstrieu
    replied
    Originally posted by samd View Post
    Hi SNPsaurus,

    Ok, so after discussing with them, the 11M reads simply referred to the forward reads; the 23M PF refers to the forward and reverse. I see a 40% undetermined reads metric, which is a bit high compared to 25% on my last run, so I wonder whether this is an indexing issue or a sequencing issue?

    @itstrieu: Damn I wish I could get those numbers. Would you recommend upping my concentration to 17pM instead of 13? Or is that dependent on other factors.
    I would say it is library dependent and depends on what metrics you are aiming for. For V3–V4 sequencing, we use spacer primers to add diversity to the run.

    For V4 sequencing, we use the 515F (Parada)–806R (Apprill) primers from EMP with a v2 500-cycle kit, and we usually get about 35M PE reads that pass filter. For this library and kit, we load at around 8.75 pM, but we keep a log and use a floating average to determine the loading concentration for the next run.

    Also, it might depend on the MiSeq, because we have two MiSeqs and one performs slightly better. I usually vary the concentration in increments of 0.25–0.50 pM to be careful not to overcluster.
    Last edited by itstrieu; 11-06-2019, 02:42 PM.
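
    The "log with a floating average" idea can be approximated by assuming cluster density scales roughly linearly with loading concentration. This is a hypothetical helper, not itstrieu's actual procedure; the 0.5 pM cap mirrors the small increments mentioned above:

    ```python
    def next_loading_pM(last_pM, observed_density, target_density, max_step=0.5):
        """Suggest the next loading concentration from the last run's result.

        First-order assumption: cluster density (K/mm²) scales linearly
        with loading concentration, which is only roughly true in
        practice, hence the conservative per-run step cap.
        """
        proposed = last_pM * target_density / observed_density
        step = max(-max_step, min(max_step, proposed - last_pM))
        return round(last_pM + step, 2)

    print(next_loading_pM(8.75, 700, 900))  # → 9.25 (capped at +0.5 pM)
    ```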



  • nucacidhunter
    replied
    Originally posted by samd View Post
    Hi all,
    appreciate the responses

    @nucacidhunter
    1. Primers are the EMP 515-806 V4 region
    2. Cluster density was about 477 if I remember correctly
    3. Just 1 library. Usually I was doing 160 samples per run and this time I reduced the run to about 100 samples and still got crummy results
    4. Read output was actually 25M, and then 23M PF, which I guess is great, but then 40% were undetermined, and when I import the fastq files into QIIME2 I only get about 7 million reads (this is before dada2, so still the entire files).
    Run stats seem within spec for the library type, although they seem to be more cautious. To increase output safely, the following can be done:
    1. Increasing cluster density to around 800 K/mm²
    2. Tuning the PhiX% toward 20 if, after mapping the undetermined reads, the majority of them originate from PhiX. Undetermined reads could also be reads whose index was not assigned as a result of mismatches in the custom index primer or in the PCR primers themselves.
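
    One quick way to see what is sitting in the undetermined fraction is to tally the index sequences from the Undetermined fastq headers. A minimal sketch assuming bcl2fastq-style headers, where the index read is the last colon-separated field of each @ line (the function name and file path are made up):

    ```python
    import gzip
    from collections import Counter

    def top_undetermined_indexes(fastq_gz_path, n=10):
        """Count index sequences seen in an Undetermined fastq.gz.

        In bcl2fastq output the index read appears as the last
        colon-separated field of each header line, e.g.
        "@M00001:... 1:N:0:ACGTACGT" (dual indexes joined by '+').
        """
        counts = Counter()
        with gzip.open(fastq_gz_path, "rt") as fh:
            for i, line in enumerate(fh):
                if i % 4 == 0:  # every 4th line is a header
                    counts[line.strip().split(":")[-1]] += 1
        return counts.most_common(n)
    ```

    If one or two sequences dominate the tally, that points at a specific index (or its reverse complement) being mishandled rather than random sequencing error.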



  • samd
    replied
    Hi SNPsaurus,

    Ok, so after discussing with them, the 11M reads simply referred to the forward reads; the 23M PF refers to the forward and reverse. I see a 40% undetermined reads metric, which is a bit high compared to 25% on my last run, so I wonder whether this is an indexing issue or a sequencing issue?

    @itstrieu: Damn I wish I could get those numbers. Would you recommend upping my concentration to 17pM instead of 13? Or is that dependent on other factors.



  • nucacidhunter
    replied
    Originally posted by samd View Post
    Hi all,
    appreciate the responses

    @nucacidhunter
    1. Primers are the EMP 515-806 V4 region
    2. Cluster density was about 477 if I remember correctly
    3. Just 1 library. Usually I was doing 160 samples per run and this time I reduced the run to about 100 samples and still got crummy results
    4. Read output was actually 25M, and then 23M PF, which I guess is great, but then 40% were undetermined, and when I import the fastq files into QIIME2 I only get about 7 million reads (this is before dada2, so still the entire files).
    From that information, the run stats look good for the library type. They can increase cluster density, but that would affect quality. Undetermined reads would be PhiX plus reads whose index has not been assigned. You can map the undetermined reads to PhiX to see how many of them are from PhiX; if it is more than the intended 20-25%, it can be adjusted. With custom sequencing primers and the EMP library type, it would be usual to expect unassigned reads due to mismatches in the custom indexing primers or in the primers themselves.
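
    Mapping the undetermined reads to PhiX is normally done with an aligner or a tool like bbduk; as a rough stdlib-only sketch, you can call a read PhiX if it shares an exact k-mer with the reference (no reverse complement or mismatch tolerance here, so this undercounts; all names are hypothetical):

    ```python
    def kmers(seq, k=31):
        """All exact k-mers of a sequence, as a set."""
        return {seq[i:i + k] for i in range(len(seq) - k + 1)}

    def phix_fraction(reads, phix_seq, k=31):
        """Fraction of reads sharing at least one exact k-mer with PhiX.

        Crude by design: a real check would also scan the reverse
        complement and tolerate a mismatch or two.
        """
        ref = kmers(phix_seq, k)
        if not reads:
            return 0.0
        hits = sum(1 for r in reads if kmers(r, k) & ref)
        return hits / len(reads)
    ```

    Comparing the resulting fraction of the undetermined reads against the intended 20-25% spike-in tells you whether PhiX alone explains the undetermined pile.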



  • itstrieu
    replied
    Cluster density is kind of low for a v3 kit, even for low-diversity libraries. We usually sequence the V3–V4 region on a v3 600-cycle kit and get around 50M PE reads at ~1000 K/mm² with around a 17 pM loading concentration.



  • SNPsaurus
    replied
    Originally posted by samd View Post
    @SNPsaurus

    I have not done that. I know that is much more accurate we just don't exactly have the capabilities in my lab. However, I can use another lab's qPCR machine I would just have to learn the protocol and all. Would this likely help get much better runs?
    Sam
    After seeing the updates, it probably wouldn't help much. Some library preps can have a high percentage of non-functional DNA fragments, but a PCR amplicon should be pretty reliable. And if you are getting 25 million raw reads and then just 11 million, perhaps the issue is somewhere in the demultiplexing? Have you looked at the index sequences in the undetermined fastq file to see if they have Ns or are not present in the index sequence list?
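
    Following up on that check, each undetermined index falls into one of three buckets; a small sketch (hypothetical helper, with the sample-sheet indexes passed in as a set):

    ```python
    def classify_index(index, expected):
        """Bucket an index sequence pulled from the Undetermined fastq.

        expected: the set of index sequences from the sample sheet.
        """
        if "N" in index:
            return "has_N"        # basecall failure in the index read
        if index in expected:
            # Should have demultiplexed: suspect sample-sheet orientation
            # (e.g. i5 reverse complement) or the mismatch setting.
            return "expected"
        return "unexpected"       # PhiX, index hops, or sequencing errors
    ```

    A large "expected" bucket points at a sample-sheet problem rather than a sequencing one, which matters here because that is fixable by re-demultiplexing without rerunning the flow cell.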



  • samd
    replied
    Hi GenoMax,

    Under the Indexing QC tab it says Total Reads: 25,058,484 and PF Reads: 23,419,084, and the density is at 477. I believe this total would be R1 + R2?
    However, when I go to the Lane Metrics I see 11M reads PF, with density at 472. There is also 0.128 / 0.215 Phasing/Prephasing%. Let me know if there is anything else I can look for that might help clarify.
    Thanks
    Last edited by samd; 11-06-2019, 11:07 AM.



  • samd
    replied
    @luc

    I used the Illumina unique dual indexes, so that should not be an issue, right?

    @SNPsaurus

    I have not done that. I know that is much more accurate we just don't exactly have the capabilities in my lab. However, I can use another lab's qPCR machine I would just have to learn the protocol and all. Would this likely help get much better runs?
    Sam

