  • Heisman
    replied
Looking at those ROC curves, it appears to me that Novoalign is the best mapper in this simulation with respect to both sensitivity and specificity. Is that a correct interpretation?



  • sparks
    replied
    Originally posted by rskr View Post
I disagree. If you look at hash-based aligners, there are certain patterns of indels, mismatches and errors where they won't find the right result even if it is unique. For example, if the word size is 15 and there are two mismatches 10 bases apart in a 50mer, the hash won't return the region at all. Likewise, for longer reads the number of mismatches is likely to be higher, and the suffix-array search will terminate before finding the ideal match.
That's a ridiculous statement! Most hashed aligners using a 15-mer hash would need 3 equally spaced mismatches in a 50mer to miss an alignment. And there are some hash-based aligners that can find a 50-mer alignment with 5 or 6 mismatches even when they are equally spaced. Novoalign can do this.
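Whether an exact hash seed survives a given mismatch pattern is easy to check mechanically. Here is a small sketch of the 15-mer/50-mer case the two posters are arguing about (the mismatch positions are made up for illustration):

```python
def clean_seed_exists(read_len, k, mismatch_positions):
    """Return True if at least one k-mer window in the read contains
    no mismatch, i.e. an exact hash seed would still hit the region."""
    mm = set(mismatch_positions)
    for start in range(read_len - k + 1):
        if not any(p in mm for p in range(start, start + k)):
            return True
    return False

# Two mismatches 10 bases apart in a 50mer, 15-mer words:
# the window covering bases 0..14 is clean, so a seed still exists.
print(clean_seed_exists(50, 15, [20, 30]))      # True

# It takes three roughly equally spaced mismatches to kill
# every 15-mer seed in a 50mer.
print(clean_seed_exists(50, 15, [12, 25, 38]))  # False
```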



  • nilshomer
    replied
    Originally posted by cjp View Post
    @nickloman

The output of the wgsim_eval.pl program looks a bit like the data below. Bowtie 1 always gives a mapping score of 255 (column 1). I'm guessing that bowtie 2 has many FPs at a mapping score of 1 (column 3 where column 1 == 1), but cumulatively finds more TPs across all mapping scores (column 2 where column 1 == 1). I was also wondering about the exact meaning of the wgsim_eval.pl output.

    % tail *.roc
    ==> bowtie2.roc <==
    14 172922 11
    13 172925 12
    12 177943 27
    11 177945 28
    10 179990 37
    9 179995 40
    4 180250 40
    3 187273 578
    2 187324 580
    1 199331 5877

    ==> bowtie.roc <==
    255 86206 1740

    ==> bwa.roc <==
    10 192354 72
    9 192560 107
    8 192595 107
    7 192628 110
    6 192652 115
    5 192669 116
    4 192681 117
    3 192731 117
    2 192741 118
    1 192762 119

    Chris
    I forked Heng's code a while back into the dwgsim project (links below). I also added user documentation:

DNAA is the DNA analysis package, for analyzing next-generation post-alignment whole-genome resequencing data. Specifically, DNAA is able to find structural variation and SNP and indel variants, as well as evaluate mapping and data quality.



  • lh3
    replied
Each line consists of a mapping-quality threshold, the number of reads mapped with mapQ no less than that threshold, and the number of those reads that are mismapped. It does not show reads with mapQ=0. If we include mapQ=0 mappings, the sensitivity of bwa is also good on simulated data, but on single-end real data the low-quality tail of the reads makes bwa much worse. This is what Steven and Ben have observed, and it is also why enabling trimming is recommended when using bwa.

    BWA always gives mapQ 0 to repetitive hits, but other mappers (gsnap, bowtie2 and novoalign) may give mapQ<=3 to repetitive hits. This is theoretically correct. I may further set a mapQ threshold 1-4 when plotting.
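For concreteness, here is a small sketch that turns a few rows of the bwa.roc output posted in this thread into the two quantities described above. TOTAL_READS is an assumed figure, since the thread doesn't state how many reads were simulated; wgsim_eval.pl knows the real total because it generated the reads.

```python
# Assumed total number of simulated reads (for illustration only).
TOTAL_READS = 200_000

# Cumulative rows from bwa.roc: mapQ threshold,
# #reads with mapQ >= threshold, #mismapped among them.
roc_rows = """\
10 192354 72
5 192669 116
1 192762 119"""

points = []
for line in roc_rows.splitlines():
    thresh, mapped, wrong = map(int, line.split())
    sensitivity = mapped / TOTAL_READS  # fraction of all simulated reads kept
    mismap_rate = wrong / mapped        # fraction of kept reads placed wrongly
    points.append((thresh, sensitivity, mismap_rate))

for thresh, sens, err in points:
    print(f"mapQ>={thresh}: sensitivity={sens:.4f}, mismap rate={err:.2e}")
```

Plotting mismap rate against sensitivity, one point per threshold, gives the ROC curve.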



  • cjp
    replied
    @nickloman

The output of the wgsim_eval.pl program looks a bit like the data below. Bowtie 1 always gives a mapping score of 255 (column 1). I'm guessing that bowtie 2 has many FPs at a mapping score of 1 (column 3 where column 1 == 1), but cumulatively finds more TPs across all mapping scores (column 2 where column 1 == 1). I was also wondering about the exact meaning of the wgsim_eval.pl output.

    % tail *.roc
    ==> bowtie2.roc <==
    14 172922 11
    13 172925 12
    12 177943 27
    11 177945 28
    10 179990 37
    9 179995 40
    4 180250 40
    3 187273 578
    2 187324 580
    1 199331 5877

    ==> bowtie.roc <==
    255 86206 1740

    ==> bwa.roc <==
    10 192354 72
    9 192560 107
    8 192595 107
    7 192628 110
    6 192652 115
    5 192669 116
    4 192681 117
    3 192731 117
    2 192741 118
    1 192762 119

    Chris



  • nickloman
    replied
    Hi Brent - that would make sense - varying minimum mapping quality thresholds and seeing the result. It would be nice if those values were also plotted on the graph somehow.



  • brentp
    replied
nickloman, I believe the thing that's changing in the figures for the other mappers is the mapping-quality threshold. GSNAP, bowtie, and (apparently) soap2 do not calculate a mapping quality, so there is nothing to vary to get a line.
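To make that concrete, here is a toy sketch (the function and the numbers are mine, not from wgsim_eval.pl) of how per-read (mapQ, correctly-mapped?) records collapse into cumulative threshold rows like those in the .roc files, and why a mapper that emits a single fixed score yields only one point:

```python
def roc_points(records):
    """records: one (mapQ, correctly_mapped) pair per read, as a
    truth-aware evaluator would produce from a simulation. Returns
    cumulative (threshold, n_mapped, n_wrong) rows, highest mapQ first."""
    rows, mapped, wrong = [], 0, 0
    for q, ok in sorted(records, key=lambda r: -r[0]):
        mapped += 1
        wrong += (not ok)
        rows.append((q, mapped, wrong))
    # Keep only the final (cumulative) row per distinct threshold.
    last = {}
    for q, m, w in rows:
        last[q] = (q, m, w)
    return sorted(last.values(), reverse=True)

# A mapper emitting one fixed score (like bowtie1's 255) gives one point...
single = roc_points([(255, True)] * 90 + [(255, False)] * 10)
# ...while varied scores trace out a curve, one point per threshold.
varied = roc_points([(30, True)] * 50 + [(10, True)] * 40 + [(10, False)] * 10)
print(single)  # [(255, 100, 10)]
print(varied)  # [(30, 50, 0), (10, 100, 10)]
```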



  • nickloman
    replied
    Originally posted by lh3 View Post
    Knowing the average FNR/FPR is not enough. This is where the ROC curve shows its power. It gives the full spectrum of the accuracy.
Heng - I like the look of the ROC curve, but I cannot work out from reading your web page exactly how it is derived. For example, I don't understand why some mappers have many data points but Bowtie, Soap2 and Gsnap have only one. Could you give a brief explanation of how you get from the (single-file?) SAM output of a specific aligner to the plot?

    Sorry if this is a dumb question!



  • lh3
    replied
    Originally posted by jkbonfield View Post
I don't particularly wish to get drawn into a mapper war, and I'll say here that I haven't benchmarked these tools to compare them. However, thinking more downstream, I think averaged sensitivity and specificity metrics aren't sufficient to tell the whole story.
    Knowing the average FNR/FPR is not enough. This is where the ROC curve shows its power. It gives the full spectrum of the accuracy.

    Originally posted by jkbonfield View Post
So say we have 100mers of a simulated genome with X% of SNPs. We can algorithmically produce 100x depth by starting a new 100mer at every position in the genome, and then give them realistic-looking quality profiles with error rates from real data, etc. (So as real as can be, but with a perfectly uniform distribution and known mapping locations.)

Then we can plot the depth distribution. How many sites are there where a particular combination of SNPs or errors has caused a dip in coverage? Given we're almost always looking at very specific locations, often around discrepancies, this is perhaps a key metric in analysis.
But you are right that even the ROC for mappers is not informative enough for real applications; Gerton shares your view, too. What is more informative is to know how the mapper reacts to variants, especially clustered variants or variants in semi-repetitive regions. A ROC for variant calls should be more indicative. It is just more difficult to do such an analysis, because we have to simulate and map many more reads to get a good picture. Most, if not all, read simulators do not get the SNP distribution right, either.



  • rskr
    replied
    Originally posted by jkbonfield View Post
    I think most would feel happier with the 90% sensitivity aligner.
Sensitivity in this context is a liability: a high-sensitivity aligner is likely to produce many erroneous alignments and base calls, on the order of thousands or millions, for which there is no subsequent higher-cost procedure to resolve them, and manual curation is prohibitive. Furthermore, such aligners are likely to be both precise and biased, so given several identical reads they will make the same errors in the same way. Using a sensitive aligner for scaffolding, for example, would be a very large problem.



  • jkbonfield
    replied
I don't particularly wish to get drawn into a mapper war, and I'll say here that I haven't benchmarked these tools to compare them. However, thinking more downstream, I think averaged sensitivity and specificity metrics aren't sufficient to tell the whole story.

I agree with Heng that the quality of the mapping score is very important for some forms of analysis. Furthermore, I'd go so far as to say that the variance of depth is important too. E.g. imagine we have two aligners, one that can map 95% of the data and one that can map 90%. The one mapping 95% maps well to 95% of the genome and atrociously to 5% of it, while the one mapping 90% maps across the entire genome in a relatively uniform manner. I think most would feel happier with the 90%-sensitivity aligner.

So say we have 100mers of a simulated genome with X% of SNPs. We can algorithmically produce 100x depth by starting a new 100mer at every position in the genome, and then give them realistic-looking quality profiles with error rates from real data, etc. (So as real as can be, but with a perfectly uniform distribution and known mapping locations.)

Then we can plot the depth distribution. How many sites are there where a particular combination of SNPs or errors has caused a dip in coverage? Given we're almost always looking at very specific locations, often around discrepancies, this is perhaps a key metric in analysis.
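The exhaustive-tiling step described above can be sketched in a few lines (the toy genome and read length are illustrative, and the injection of realistic quality profiles and errors is omitted):

```python
def tile_reads(genome, read_len=100):
    """Yield (true_start, read) with a read starting at every position,
    giving perfectly uniform read_len-fold coverage of the genome's
    interior, with every read's correct mapping location known."""
    for start in range(len(genome) - read_len + 1):
        yield start, genome[start:start + read_len]

toy = "ACGTACGTACGTACGT"  # 16 bp toy genome
reads = list(tile_reads(toy, read_len=4))
print(len(reads))   # 13 reads: 16 - 4 + 1
print(reads[0])     # (0, 'ACGT')
```

Mapping these reads back and comparing each reported position to its known `true_start` gives per-site coverage, from which the depth distribution and its dips can be plotted.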



  • lh3
    replied
    For a single mapper, it is true that the more it maps, the higher FPR it has. But when you compare two mappers, it is possible for one mapper to both map more reads and have lower FPR. Then that is the better one.



  • adaptivegenome
    replied
    This is not at all what Heng, Steve, or I suggested.



  • rskr
    replied
So an algorithm that has high sensitivity is likely to have low specificity? I don't think these terms mean much outside of a hospital-type test. What we want is accuracy.



  • lh3
    replied
Yes, it is my fault for using the wrong term; sorry for the confusion. To clarify, I mean we want to achieve a low false positive rate (this should be the right term).

Bowtie2 is definitely a substantial improvement over bowtie1 in almost every aspect, and I can really see encouraging improvement in FPR between beta2 and beta3, all in the right direction. If you also focus your development on a low FPR, you will probably gain further improvements. That will be good for everyone.

