  • gringer
    replied
    Originally posted by jkbonfield View Post
    Either way, I'm curious to know what the per-sequence-base (not per consensus base) accuracy is like with a 6mer model vs a 5mer model.
    But presumably not curious enough to download Nick Loman's publicly-available data and find out for yourself.

    That's perfectly understandable. I'm still trying to find time to do proper signal-level correlations from our mtDNA data from over a year ago.



  • jkbonfield
    replied
    Originally posted by gringer View Post
    This doesn't work so well in the way that ONT is modelling for base-calling, because the event length actually depends quite strongly on the bases that are found about 20bp upstream from the signal site (where the DNA is being unwound and split). In other words, it's not particularly random.
    In that case, that makes it possible to be more accurate than just guessing based on the random distribution, although it is perhaps also (too?) complex to tease out all the correlations.

    Either way, I'm curious to know what the per-sequence-base (not per consensus base) accuracy is like with a 6mer model vs a 5mer model. We have some data of our own, but fundamentally the variability from run to run is high enough that I think it would need a large project to average out that variability to get robust numbers.



  • gringer
    replied
    Originally posted by jkbonfield View Post
    However an alternative (and obvious) strategy is to observe the event length distributions for non-homopolymers. They're largely random, centred around a particular value; they'll be either Gaussian or Poisson, I'd guess. Given a homopolymer signal of length L we can hypothesise lengths of 5, 6, 7, 8, ... and derive the probability of the homopolymer being more than 5. The error rate would likely be horrendous, but in theory it ought to be better than just saying "5, never more".
    This doesn't work so well in the way that ONT is modelling for base-calling, because the event length actually depends quite strongly on the bases that are found about 20bp upstream from the signal site (where the DNA is being unwound and split). In other words, it's not particularly random.



  • jkbonfield
    replied
    Consensus improvement isn't the same as per-base improvement. I wonder what the difference is there.

    Also, did Jared say whether it was mainly down to improvements in homopolymers, or whether it is simply due to the frequency distribution of different homopolymer lengths? I wonder if this means they've changed their strategy.

    Obviously any homopolymer longer than 5 (now 6) looks like a single event. E.g. TAAAAAAAAG yields the 5-mers TAAAA AAAAA AAAAA AAAAA AAAAG. Those AAAAA events all get joined together into a single longer event, which meant the longest homopolymer previously reported was 5. I assume it's now 6.

    However an alternative (and obvious) strategy is to observe the event length distributions for non-homopolymers. They're largely random, centred around a particular value; they'll be either Gaussian or Poisson, I'd guess. Given a homopolymer signal of length L we can hypothesise lengths of 5, 6, 7, 8, ... and derive the probability of the homopolymer being more than 5. The error rate would likely be horrendous, but in theory it ought to be better than just saying "5, never more".

    No idea if the signal is strong enough (i.e. the distribution tight enough) to extract sufficient accuracy to make it anything better than a wild guess.
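    A rough sketch of that idea in Python, assuming (purely for illustration) that per-base dwell times are independent and roughly Gaussian with a known mean and standard deviation, so a homopolymer of n bases produces one merged event whose total duration is approximately Gaussian with mean n x mean and variance n x sd^2; comparing likelihoods across hypothesised lengths then gives a relative probability for each:

    import math

    def length_posterior(observed_duration, mean_per_base=1.0, sd_per_base=0.4,
                         candidate_lengths=range(5, 13)):
        """Relative probability of each hypothesised homopolymer length, given
        the duration of the merged event (illustrative toy model, flat prior)."""
        likelihoods = {}
        for n in candidate_lengths:
            mu = n * mean_per_base              # expected total duration
            sigma = sd_per_base * math.sqrt(n)  # variances add across bases
            z = (observed_duration - mu) / sigma
            likelihoods[n] = math.exp(-0.5 * z * z) / (sigma * math.sqrt(2 * math.pi))
        total = sum(likelihoods.values())
        return {n: lk / total for n, lk in likelihoods.items()}

    # An event lasting ~7 "base units" spreads its probability across lengths 6-8,
    # i.e. the "horrendous but better than nothing" behaviour described above.
    for n, p in length_posterior(7.0).items():
        print(n, round(p, 3))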



  • gringer
    replied
    Originally posted by ymc View Post
    So now it is a hexamer model? How much base-calling accuracy does it gain over the previous pentamer model?
    See Jared Simpson's post here. There's no raw sequence accuracy mentioned there, but in consensus it was a 0.4% improvement.

    I don't have results from any comparative sequencing that we've done, because getting *any* sequence is a bit of a challenge for us.



  • ymc
    replied
    Originally posted by gringer View Post
    When was the sequencing done? The f1000 paper has the most recent error analysis that I'm aware of (total error 10-15%), and that is from sequencing done a couple of months ago, prior to changing to a hexamer model for base calling:

    http://f1000research.com/articles/4-1075/v1
    So now it is a hexamer model? How much base-calling accuracy does it gain over the previous pentamer model?



  • gringer
    replied
    How high is the rate of "fantasy sequences" that have no resemblance to the reference, with the latest versions?
    Other people call this the "mismatch rate", or alternatively refer to the "mapping rate" (a rough way to compute the mapping rate from a BAM is sketched after this post). From the 2D reads, the mapping rate was pretty close to 100% for all except two runs (see Figure 6, or the section with the heading "Proportion of target and control sample"). I'm a bit hesitant to trust this fully, because there's a high chance of false-positive matches when you have a high error rate.

    The authors often write about "a run". Are they writing about an average run or the cherry-picked best run they ever encountered?
    There were 20 runs, each of which tried to stick to the same specific sample preparation and sequencing protocol. You can call those cherry-picked runs if you like, but these were the only runs that the groups did as part of their Phase 1 experiments. Only six runs were able to keep to this without variation; the deviations from the standard protocol are specified in Table S5.
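    As referenced above, a rough way to compute the mapping rate itself from a BAM, sketched with pysam (the file name is a placeholder, and the BAM is assumed to include unmapped reads):

    import pysam

    def mapping_rate(bam_path):
        """Fraction of primary reads in the BAM that mapped."""
        mapped = total = 0
        with pysam.AlignmentFile(bam_path, "rb") as bam:
            for read in bam:
                if read.is_secondary or read.is_supplementary:
                    continue  # count each read once
                total += 1
                if not read.is_unmapped:
                    mapped += 1
        return mapped / total if total else 0.0

    print(mapping_rate("2d_reads_vs_ref.bam"))  # placeholder file name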



  • luc
    replied
    How high is the rate of "fantasy sequences" that have no resemblance to the reference, with the latest versions? I actually can't find much data of interest to me in the f1000 article. The authors often write about "a run". Are they writing about an average run or the cherry-picked best run they ever encountered?



  • gringer
    replied
    According to my research, the overall error rate of the MinION sequencer is about 25%.
    When was the sequencing done? The f1000 paper has the most recent error analysis that I'm aware of (total error 10-15%), and that is from sequencing done a couple of months ago, prior to changing to a hexamer model for base calling:

    http://f1000research.com/articles/4-1075/v1 (MinION Analysis and Reference Consortium: Phase 1 data release and analysis)



  • mido1951
    replied
    I want to do a local alignment of MinION reads, but I don't know how to set the parameters for insertions, deletions and match/mismatch.
    I want to use affine gap penalties, but that method only penalizes insertions and deletions; I also need a scoring function that penalizes substitutions.
    Is this a good idea, friends?
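    One way to set this up, sketched below with Biopython's PairwiseAligner (the scores are illustrative only, not recommended values): the affine gap model is expressed through separate gap-open and gap-extend scores, while substitutions are penalised via the mismatch score.

    from Bio import Align

    # Local (Smith-Waterman-style) alignment with affine gap penalties.
    aligner = Align.PairwiseAligner()
    aligner.mode = "local"
    aligner.match_score = 2        # reward for a match
    aligner.mismatch_score = -3    # penalty for a substitution
    aligner.open_gap_score = -5    # affine gaps: cost to open an insertion/deletion
    aligner.extend_gap_score = -2  # affine gaps: cost to extend it

    # Toy sequences for illustration; real MinION reads would come from FASTQ.
    read = "ACGTTAAAGGTC"
    ref = "ACGTTAAAAGGTTC"
    alignment = aligner.align(read, ref)[0]
    print(alignment)
    print("score:", alignment.score)

    Since the thread above suggests insertions and deletions make up roughly two-thirds of MinION errors, overly harsh gap penalties will tend to fragment or truncate local alignments, so it is worth experimenting with milder open/extend scores.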



  • GenoMax
    replied
    Homopolymers are long stretches of identical nucleotides (e.g. AAAAAAAAAAAAAAA).

    This paper claims an error rate of 38%, but it is early days. This rate may be dependent on the sample and its composition too.
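    For a concrete picture of what such runs look like, here is a small illustrative Python snippet (the sequence and minimum run length are made up) that scans a sequence for homopolymer runs:

    import re

    def homopolymer_runs(seq, min_len=5):
        """Yield (base, start, length) for runs of identical bases >= min_len."""
        for m in re.finditer(r"(A+|C+|G+|T+)", seq.upper()):
            if len(m.group()) >= min_len:
                yield m.group()[0], m.start(), len(m.group())

    # Toy example: reports the T run of 7 and the A run of 5.
    for base, start, length in homopolymer_runs("ACGTTTTTTTACGAAAAAGC"):
        print(base, start, length)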



  • mido1951
    replied
    According to my research, the overall error rate of the MinION sequencer is about 25%.
    What are homopolymer regions?
    Thank you



  • gringer
    replied
    If there's no reference, a reasonable ballpark (especially for passed reads) is about 10% total error, distributed about 1/3 insertions, 1/3 deletions, and 1/3 SNPs.

    It's possible to remap reads against other reads, but you need to be very careful with that, particularly across homopolymer regions. There's a lot of systematic error in the base calling (not the signal) which causes problems in consensus sequences.
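    When a reference (or a read-vs-read mapping) is available, one rough way to get per-read numbers like these is to tally CIGAR operations and the NM tag from a BAM; a sketch assuming pysam, an aligner that writes NM tags, and a hypothetical alignment file name:

    import pysam

    def per_read_error(bam_path):
        """Rough per-read insertion/deletion/substitution rates from CIGAR + NM.
        Substitutions are approximated as NM minus inserted and deleted bases."""
        with pysam.AlignmentFile(bam_path, "rb") as bam:
            for read in bam:
                if read.is_unmapped or read.is_secondary or read.is_supplementary:
                    continue
                ops = {}
                for op, length in read.cigartuples:
                    ops[op] = ops.get(op, 0) + length
                matches = ops.get(0, 0) + ops.get(7, 0) + ops.get(8, 0)  # M, =, X
                ins, dels = ops.get(1, 0), ops.get(2, 0)                 # I, D
                aligned = matches + ins + dels
                if aligned == 0:
                    continue
                subs = read.get_tag("NM") - ins - dels  # assumes NM tag is present
                yield read.query_name, ins / aligned, dels / aligned, subs / aligned

    # "minion_vs_ref.bam" is a placeholder for your own alignment.
    for name, i, d, s in per_read_error("minion_vs_ref.bam"):
        print("%s\tins=%.3f\tdel=%.3f\tsub=%.3f" % (name, i, d, s))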



  • mido1951
    replied
    For now I am trying to find the reference.
    If there are other ways to determine the error rate, please let me know.



  • GenoMax
    replied
    You won't unless there is a reference available to compare the reads to.

